Quan-PortraitPlan

I’m working with hizlik on this portrait project. Since both of us are photographers, we decided to share a single process that records how our photography styles change over time, split into two different visualizations: lighting preferences and subject preferences. Photographers evolve their style over time, and we wanted to see how ours have.

For the lighting portrait, we grabbed the EXIF data for every photo we’ve taken, computed a single value from the combination of ISO, shutter speed, and aperture, and plotted those values against their timestamps on a chart over time. This is an example:
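A minimal sketch of that data step is below. The post doesn’t specify the exact aggregate formula, so as an illustration this assumes the standard exposure value (EV) normalized to ISO 100; the `photos` folder name is hypothetical, and the `exifread` package is one common way to pull these EXIF fields.

```python
# Sketch: read EXIF from each photo and reduce ISO, shutter speed, and aperture
# to one number per shot, paired with the capture timestamp.
# Assumption: the aggregate is exposure value at ISO 100 (EV100); the real
# project may use a different formula. Requires: pip install exifread
import math
import os
from fractions import Fraction

import exifread


def photo_exposure_value(path):
    """Return (timestamp_string, EV100) for one photo, or None if EXIF is missing."""
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)

    needed = ("EXIF FNumber", "EXIF ExposureTime",
              "EXIF ISOSpeedRatings", "EXIF DateTimeOriginal")
    if not all(k in tags for k in needed):
        return None

    f_number = float(Fraction(str(tags["EXIF FNumber"])))      # e.g. "28/10" -> 2.8
    shutter = float(Fraction(str(tags["EXIF ExposureTime"])))   # e.g. "1/250"
    iso = float(str(tags["EXIF ISOSpeedRatings"]))
    timestamp = str(tags["EXIF DateTimeOriginal"])               # "YYYY:MM:DD HH:MM:SS"

    # EV at ISO 100: higher values mean less light reached the sensor.
    ev100 = math.log2(f_number ** 2 / shutter) - math.log2(iso / 100.0)
    return timestamp, ev100


if __name__ == "__main__":
    folder = "photos"  # hypothetical folder of JPEGs
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg")):
            result = photo_exposure_value(os.path.join(folder, name))
            if result:
                print(result[0], round(result[1], 2))
```

Each (timestamp, value) pair from a script like this is one point on the lighting chart.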

The categorical portrait will require us to run all of our photographs through Google Vision, a computer vision API that produces descriptive keywords for each photo. We will use these keywords to figure out, in a general sense, what we tend to take pictures of.
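Here is a rough sketch of what that keyword step could look like with the Google Cloud Vision client library’s label detection. The folder name and the 0.7 score cutoff are assumptions for illustration, not part of the project spec, and authentication (e.g. GOOGLE_APPLICATION_CREDENTIALS) is assumed to be set up already.

```python
# Sketch: ask Google Cloud Vision for labels ("keywords") per photo and
# tally them, as a first pass at subject preferences.
# Requires: pip install google-cloud-vision
import os

from google.cloud import vision


def labels_for_photo(client, path, min_score=0.7):
    """Return the label keywords Vision assigns to one photo."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description
            for label in response.label_annotations
            if label.score >= min_score]  # assumed confidence cutoff


if __name__ == "__main__":
    client = vision.ImageAnnotatorClient()
    folder = "photos"  # hypothetical folder of JPEGs
    counts = {}
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg")):
            for keyword in labels_for_photo(client, os.path.join(folder, name)):
                counts[keyword] = counts.get(keyword, 0) + 1
    # Crude first look at subject preferences: the most common keywords.
    for keyword, n in sorted(counts.items(), key=lambda kv: -kv[1])[:20]:
        print(keyword, n)
```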

Author: Quan

Origin of Quan: https://youtu.be/X0fizqifumk?t=30s

I am a second year in the School of Design, with a concentration in Environments. I have done photography for many years, and have seen how both the camera and the photos it produces can be tools used to communicate truth, by highlighting and hiding specific elements. I am taking this class because I one day hope to be a designer who is able to develop and leverage unprecedented methods of communication, or as Bret Victor likes to put it, Seeing Tools.