For my event, I plan to train a neural net to ‘undecay’ images. I will use a Generative Adversarial Network. The dataset consists of image pairs taken from YouTube time-lapse videos of rotting food. I will train a discriminator to recognize fresh food, while the generator is fed images of rotten food and its output is judged by that fresh-food-recognizing discriminator. After sufficient training, we can feed any image into the generator for an ‘undecayed’ output.
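To make the setup concrete, here is a minimal sketch of one training step of that paired generator/discriminator loop. This assumes PyTorch; the tiny network sizes, the 64×64 resolution, the Adam hyperparameters, and the extra L1 term tying the generator's output to the paired fresh image are all placeholder choices for illustration, not the actual design.

```python
# Toy sketch of a paired 'undecay' GAN training step (assumes PyTorch).
# Architectures, resolution, and loss weighting are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a rotten-food image to a 'fresh' reconstruction (toy conv net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how 'fresh' an image looks (toy conv net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, rotten, fresh):
    bce = nn.BCEWithLogitsLoss()
    n = fresh.size(0)
    # 1) Discriminator: real fresh images -> 1, generated 'undecayed' -> 0.
    d_opt.zero_grad()
    fake = gen(rotten).detach()
    d_loss = (bce(disc(fresh), torch.ones(n, 1)) +
              bce(disc(fake), torch.zeros(n, 1)))
    d_loss.backward()
    d_opt.step()
    # 2) Generator: fool the discriminator; the L1 term uses the paired
    #    fresh image so the output stays close to the ground truth.
    g_opt.zero_grad()
    fake = gen(rotten)
    g_loss = (bce(disc(fake), torch.ones(n, 1)) +
              nn.functional.l1_loss(fake, fresh))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
rotten = torch.rand(4, 3, 64, 64)  # stand-in batch of rotten-food crops
fresh = torch.rand(4, 3, 64, 64)   # the paired fresh-food crops
d_loss, g_loss = train_step(gen, disc, g_opt, d_opt, rotten, fresh)
```

Because the frames come in pairs, this is closer to a conditional (pix2pix-style) GAN than a plain one; the adversarial loss alone would let the generator ignore its input, so some reconstruction term like the L1 above is usually needed.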
While I’ve started compiling my dataset, I only have around 15 image pairs, and will need at least 20 times that to get any sort of interesting generator output. Also, to generate high-resolution images I will need either a gigantic network or some form of invertible feature extractor, neither of which I have experience with.