The below post is a guest post written by Samay Lakhani and Sujay Sundar, two budding data scientists. Samay currently interns with a Silicon Valley tech company; Sujay currently does research under the mentorship of a CUNY Staten Island neuroscience professor. Both are sophomores at Jericho High School, showing that they're a lot smarter than the average Roboflow employee was in high school. (The post was edited and compiled by Roboflow.)


Art has the unmatched ability to inspire. That is why we set out to create it.

Inspired by ThisPersonDoesNotExist.com - a website that generates photorealistic faces of people who do not exist - our team built a Generative Adversarial Network (GAN for short). Our goal was to extract and reproduce the most important features of Renaissance paintings, then generate abstract versions of Renaissance art.

A child smiling.
This face is generated by a GAN and does not exist. (Image from ThisPersonDoesNotExist.com.)

We wanted to show that anyone can build a working computer vision model quickly. During a 24-hour hackathon, we gathered 300 images from around the Internet representing the best-known paintings of the Renaissance: works by Michelangelo and da Vinci, portraits of nobles, and more. Because we wanted to focus on paintings, we discarded images of sculptures and other three-dimensional art.

The Mona Lisa. A woman staring, with her right arm on top of her left.
The Mona Lisa. (Source.)

We pre-processed our data using Roboflow by scaling the images down to 32 by 32 pixels. We opted to fit a DCGAN (Deep Convolutional Generative Adversarial Network). A GAN typically takes in one category of input images and pits two networks against each other, minimizing a loss for each:

  • The generator transforms random noise into candidate images.
  • The discriminator decides whether a given image is a real member of the category or a generator fake.

As the two networks train against each other, the generator should converge toward producing realistic images.
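The two pieces above can be sketched in code. Below is a minimal DCGAN-style generator and discriminator for 32 by 32 RGB images, written in PyTorch. The layer sizes and latent dimension are our illustrative assumptions, not the exact architecture we trained at the hackathon.

```python
# A minimal sketch of the two DCGAN networks described above (PyTorch).
# Layer sizes and LATENT_DIM are illustrative assumptions for 32x32 RGB
# images, not the exact hackathon architecture.
import torch
import torch.nn as nn

LATENT_DIM = 100  # length of the random noise vector fed to the generator

class Generator(nn.Module):
    """Upsamples random noise into a 3x32x32 candidate image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0),  # -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),         # -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1),            # -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image: values near 1 mean 'looks like a real painting'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1),                     # -> 16x16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1),                   # -> 8x8
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1),                  # -> 4x4
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 1, 4, 1, 0),                    # -> 1x1 score
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

# One forward pass: noise in, candidate "painting" out, score from the critic.
noise = torch.randn(1, LATENT_DIM, 1, 1)
fake = Generator()(noise)      # shape (1, 3, 32, 32)
score = Discriminator()(fake)  # shape (1,), value in (0, 1)
```

During training, the discriminator's loss rewards it for separating real paintings from fakes, while the generator's loss rewards it for fooling the discriminator.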

Initially, we did not augment our images. In this first trial, the model failed and produced what looked like a lot of noise.

16 squares of colored pixels that include cyan and magenta.
The abstractions our model generated initially didn't look like Renaissance-style paintings.

To attempt to improve model performance, we wanted to augment our dataset with reflections and rotations. Because this was a 24-hour hackathon, we didn't have time to build an easily modifiable augmentation pipeline ourselves. With a few clicks in Roboflow, we more than tripled our dataset size via data augmentation, saving hours we would otherwise have spent writing and testing code. This helped us achieve our goal: we won the hackathon in the social impact category!
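For readers curious what that augmentation step does under the hood, here is a hand-rolled sketch using Pillow. It resizes each painting and writes a mirrored and a rotated copy alongside the original, turning N images into 3N. The directory layout, file glob, and rotation angle are illustrative assumptions.

```python
# A hand-rolled sketch of the augmentation Roboflow did for us with a few
# clicks: each source painting yields itself, a horizontal mirror, and a
# rotated copy, tripling the dataset. Paths and angle are assumptions.
from pathlib import Path
from PIL import Image

def augment(src_dir: str, out_dir: str, angle: int = 15) -> int:
    """Write original + mirrored + rotated 32x32 copies; return count written."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB").resize((32, 32))
        variants = {
            "orig": img,
            "flip": img.transpose(Image.FLIP_LEFT_RIGHT),
            "rot": img.rotate(angle),
        }
        for tag, variant in variants.items():
            variant.save(out / f"{path.stem}_{tag}.jpg")
            count += 1
    return count
```

A real pipeline would also vary the rotation angle and add other transforms, which is exactly the fiddly code we were glad to skip during the hackathon.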

Below is some of what we generated after training our model on the original and augmented images. The abstractions are less noisy and the colors are much closer to what you might expect from Renaissance paintings.

16 squares of colored pixels that contain brown, white and colors commonly found in Renaissance paintings.
The abstract output our model generated after using Roboflow. The output had similar colors and shapes to what we see in Renaissance paintings.
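A grid like the one above is simple to assemble once the generator produces samples. Below is a sketch that tiles 16 images into a 4 by 4 grid with NumPy; the `samples` array here is random data standing in for generator output, since the trained model itself isn't shown.

```python
# Tiling 16 generated 32x32 RGB images into a 4x4 grid. `samples` is
# random placeholder data standing in for real generator output.
import numpy as np

samples = np.random.rand(16, 32, 32, 3)  # 16 fake "paintings"
rows = [np.concatenate(samples[i * 4:(i + 1) * 4], axis=1) for i in range(4)]
grid = np.concatenate(rows, axis=0)      # shape (128, 128, 3)
```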

This isn't our first time using Roboflow. We've used Roboflow's model library for proprietary object detection work and achieved over 96% mean average precision.

To get similar results without Roboflow, we would have to spend hours writing code, then rewriting it for every new application. Roboflow regularly saves us tremendous amounts of time!