Portrait Plan:

I will collect all the things my subject throws away over a period of time, along with voice recordings of the subject sharing whatever thoughts are in his mind at the moment he decides to throw each item away.

I will then do a photogrammetric scan of all the trash and place the models in a virtual 3D world where the user can wander around, pick items up, and listen to the corresponding voice recording. (diagram below)

The voice recording can be as simple as “This is a half-eaten apple. I’m throwing it away because it tastes awful.” or “It’s Tuesday. I’m so happy,” or just any random thought that jumped into the subject’s mind at that very moment.

I’m thinking of the trash as pieces of the subject’s life he left behind, and the voice as a frozen fragment of the subject’s ideas and values. Together they become a trail of clues that we can follow to catch a glimpse of the subject as a being.

I chose photogrammetry to record the trash because I feel that photogrammetry models have an intrinsically crappy, trash-like quality to them, which will probably be a bonus.

I’ve been thinking about ways to make the virtual world an immersive experience. The trash could be scattered across a vast piece of land, or could all be floating in an endless river down which the user is boating. I will probably build it in Unity.

I’m also thinking about a method to systematically process all the trash and recordings, so that everything can be done efficiently in an assembly-line manner and new trash and recordings can be easily added to the collection.
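One way to sketch that assembly line, assuming each scan and its recording share a file-naming convention (an assumption; names like `0042_apple.obj` / `0042_apple.wav` are placeholders), is a small manifest builder that pairs models with audio by a shared ID:

```python
from pathlib import Path

def build_manifest(model_dir, audio_dir):
    """Pair each scanned trash model with its voice recording by shared ID.

    Assumes files are named like '0042_apple.obj' and '0042_apple.wav',
    where the portion before the first underscore is the item ID. Items
    missing either half are skipped, so new trash can be dropped into the
    folders at any time and picked up on the next run.
    """
    models = {p.stem.split("_")[0]: p for p in sorted(Path(model_dir).glob("*.obj"))}
    audio = {p.stem.split("_")[0]: p for p in sorted(Path(audio_dir).glob("*.wav"))}
    manifest = []
    for item_id in sorted(models.keys() & audio.keys()):
        manifest.append({"id": item_id,
                         "model": str(models[item_id]),
                         "audio": str(audio[item_id])})
    return manifest
```

The manifest could then be read by the Unity scene to spawn each model with its audio clip attached.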


For my portrait assignment I was hoping to use the slit-scanning technique to imitate/recreate the flow of water digitally. My assigned partner mentioned she was a rescue diver and described her interaction with bioluminescent plankton, which reacted to her touch.

The idea was to create a video narrating the event, starting from the point my partner enters the water and the transition between earth and ocean. I was planning on using UV light or projections (currently undecided) to communicate the change of states, plus blue glow sticks to recreate the bioluminescence of the plankton. I envision a series of shots that concentrate specifically on the somatosensory system. Finally, I was planning on creating various slit scans to recreate different directional and qualitative flows.



After a few hours talking with iciaiot, we started discussing quirks we noticed in each other. While I was spinning my pen around my fingers, iciaiot was playing a lot with her rings. Later on, iciaiot explained that she usually touches her rings when she feels nervous or tired. I then started thinking about a way to represent nervousness, and to use the fidgetiness of the ring manipulation as a way to calm stress. The experience I imagine would consist of someone using iciaiot’s quirk to quell a stressful situation.

I really wanted to try something with an Arduino and a VR headset so I figured I could use such a setting for the purpose of this assignment.

What I envision is an Arduino-based input device that captures the position of the ring on a finger. This would be done with a photocell placed at the base of the finger: the ring blocks it when normally positioned. The value recorded by the light sensor would notify the system when the user manipulates the ring; that recorded value would then act as a sort of “nervousness” value.
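A minimal sketch of how the raw photocell stream could become that nervousness value (all constants here are placeholder assumptions to be tuned against the real sensor; on the Arduino side these would be `analogRead` values sent over serial):

```python
def nervousness(readings, baseline=200, threshold=80, decay=0.95):
    """Turn raw photocell readings into a smoothed 'nervousness' score.

    When the ring sits in its normal position it blocks the photocell,
    so readings stay near the dark baseline. Lifting or twisting the
    ring lets light in; each such spike bumps the score, which otherwise
    decays back toward zero over time.
    """
    score = 0.0
    for r in readings:
        score *= decay                    # relax gradually between fidgets
        if r > baseline + threshold:      # ring moved off the sensor
            score += 1.0                  # register one fidget event
    return score
```

In the VR scene this score would drive the distortion, while a stretch of calm readings lets it decay back to zero.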

Regarding output, I imagine an evolving VR environment. We saw in class how 360° videos and pictures have become a new way of capturing the world, but I was surprised to learn that this footage is textured onto a sphere inside the game engine. What if we could play with this sphere shape and distort it? I looked up different shaders and shape-distortion algorithms. What I want to do is link the distortion of the textured sphere to the stress of the situation. I would trigger stress with music and the distortion of the sphere, while the movement of the ring would bring back the usual 360° picture, in the same way iciaiot calms herself when getting nervous.

Arduino input device sketch
Example of a shader I want to try to distort the scene
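As a back-of-the-envelope sketch of the distortion idea (in Python rather than shader code; the frequency and amplitude constants are assumptions), each unit-sphere vertex can be pushed along its normal by a stress-scaled sine wave:

```python
import math

def distort_vertex(x, y, z, stress, t=0.0, freq=4.0, amp=0.3):
    """Displace a unit-sphere vertex along its normal by a sine wave.

    For a unit sphere the outward normal equals the position itself, so
    scaling the vertex by (1 + offset) pushes it in or out. 'stress' in
    [0, 1] blends between the undistorted 360 sphere (0) and full
    distortion (1); 't' animates the wave over time. In a real shader
    this would run per-vertex on the GPU.
    """
    offset = amp * stress * math.sin(freq * (x + y + z) + t)
    s = 1.0 + offset
    return (x * s, y * s, z * s)
```

Because stress = 0 leaves every vertex untouched, easing the nervousness value down smoothly restores the ordinary 360° view.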


Idea 1: A slit-scanning machine with physical (capturer-available) algorithmic control

I’ve been thinking deeply about the notion that artists can use their own rule-based approaches (sometimes computer-based) to create art. These computational rule-based methods obviously afford results that often would have been impossible otherwise. That said, I believe the creation of such algorithms places an undue burden on artists to think in ways that are antithetical to traditional creative and designerly processes — processes that are often iterative. When programming, iteration is often much harder and less natural than when drawing on paper or using artboard-based applications. With this as context, I’ve been working to create a method that would allow an artist (in this case me) to tune and generate the algorithm physically, with controls, while capturing my subject, instead of before the fact in an act of algorithmic planning and forethought.

ref: An Informal Catalogue of Slit-Scan Video Artworks and Research


Idea 2: A Device to Capture The Hands

Freshman year in Placing (51-171), Cameron Tonkinwise talked about the concept of the Human-Thing, where you are what you find yourself in contact with — you are extended, in a way, by the contacted thing. I’ve been throwing around the idea of attaching a device to my subject’s arm that would take pictures of everything they come into contact with. But I don’t just want to take a video of their entire day (as that wouldn’t be very attuned), so I’ve been thinking about ways to modulate the capturing using sensed motion or capacitive activation.

ref: On the Subject of Objects: Four Views on Object Perception and Tool Use


[more ideas coming]

Mikob – PortraitPlan

I have two ideas in mind for the portrait. One is to manipulate my subject’s handwriting and create different versions of it based on handwriting analyses on personality traits.

Another is to create a grayscale chart of my subject’s clothes or other artifacts. This was inspired from my subject’s account of the color gray: “I happen to have a lot of gray clothes and they are all different. There’s warm gray, cold gray… and they go along with everything.”


Kyin and Weija – PortraitPlan


So kyin and I (weija) were paired up, and we decided to collaborate on our project. As we interviewed each other, we felt that a really interesting way to portray someone was through their YouTube suggestions and other YouTube-history-related content. The motivation behind this is that someone’s YouTube video history is quite personal, since a lot of the time we watch YouTube videos in a private setting. We both agreed it felt rather personal, and sometimes even embarrassing, when we revealed our video suggestions to each other.

Our idea is to create a universal portrait machine that can be applied to anyone. The user inputs their YouTube username and has a photo taken of them. The program will parse all YouTube video content related to that username, such as view history, subscriptions, suggestions, etc. Our goal is to compile on the order of 1,000 video thumbnails, which we can then use to “paint” the portrait of the person. (The generated pixelated art should look similar to the photo; our goal is to gather enough thumbnails, with a large enough range of colors, that each thumbnail can cover an accurate color of the person’s face.)
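The core matching step of such a photomosaic can be sketched as follows (a minimal sketch, assuming thumbnails and photo cells are both reduced to average RGB colors first; the helper names are hypothetical):

```python
def average_color(pixels):
    """Mean RGB of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def best_thumbnail(cell_color, thumb_colors):
    """Index of the thumbnail whose average color is nearest to the cell.

    Uses squared Euclidean distance in RGB space; with ~1,000 thumbnails
    a brute-force scan per cell is plenty fast.
    """
    def dist2(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    return min(range(len(thumb_colors)), key=lambda i: dist2(cell_color, thumb_colors[i]))
```

Running `best_thumbnail` for every cell of the downscaled portrait photo yields the grid of thumbnails to paste.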

Ideally, this is the kind of image that we are hoping to portray.

Our main inspirations for this came from guest speaker Kyle McDonald’s piece where he documents his keyboard activity through a Twitter bot, as well as (not related to Kyle) the person who took screenshots of all the website links they clicked on their laptop (sorry, completely blanking on the name of this person).


-weija and kyin


My portrait project is a collage drawing attention to the expressions we make in between expressions: the strange, fake, unnatural, and uncanny expressions that exist when we are not making a single expression but mixing between them. Photographs capture these all the time (think of unnatural smiles), but in real life we tend to filter them out.

By capturing a portrait with a high speed camera at 700 frames per second, I don’t merely snapshot these strange expressions, but bring full attention to them. More than just slowing down the face, I collage various expressions on top of each other in order to 1) make the high speed video less boring by providing the eye more points of interest and 2) create more mis-matched expressions to amplify and draw attention to the phenomenon.

I did test shoots on Friday and Saturday. Friday I made myself familiar with the camera, light, and workflow. On Saturday I shot what could be a version of my project, shown below.

This was so I could have a clear head about how I would handle masking the various layers, and make sure I had a competent strategy for managing my files and organizing my After Effects composition. When using up my partner’s time, I wanted to make sure everything would go without a hitch.

Plus, the friend above knows After Effects better than I do, and provided guidance. Shout out to her for the help. It’s nice to be able to basically complete the project once, let oneself mess up and chase down rabbit holes, and see what pitfalls exist.

On Sunday I met with my subject and we completed principal filming. It was a long, slow, and uncomfortable process, which is just fine for capturing uncomfortable and unnatural facial expressions.

In the image above you can see we are filming in a doorway, limited by the length of the Ethernet cable to my workstation. Hitting focus was the second-largest challenge, and plenty of the footage was out of focus. With so much time needed for the camera to process and save the high-speed footage, the subject inevitably relaxed and moved between each and every shot. I couldn’t monitor the feed closely, preferring to stay close to the subject and use the trigger at the camera.

Now I have to filter, organize, compose, edit, and color-grade the video, which will take me some time. If my initial tests taught me anything, it’s that the After Effects work is going to be slow going.


My portrait project is an interactive virtual reality experience primarily developed with the Vive in mind. However, I am also making a non-VR version that will be playable on Mac and Windows operating systems.
Conceptually, the portrait addresses achievement, and the flow of achievement’s value. My portrait’s subject draws strength from past achievements and events in time. The power of past events can exist abstractly within thought and memory, and concretely within mementos.
Technically, I am using this assignment to experiment with different forms of immersion in VR. The subject moves through an interactive ‘hub world’ space which presents doorways into more cinematic environments. Each of these environments experiments with the capabilities of VR in its own way (one is largely based upon Mark Leckey’s GreenScreenRefrigeratorAction). Below are some images of my current progress:

fatik – PortraitPlan

For this project I really liked the fact that I had to capture a quiddity of my partner, so my first instinctive step was to really get to know her. After a first couple of quick talks and meetups, I was reminded how much actual time and effort it takes to really get to know someone. I honestly couldn’t even think of a medium because it felt like I didn’t know this person well enough to make anything. So I decided to at least document my process of getting to know her.

I’ve been playing around with some of the cameras and planning to use them to document our meetings and hangouts. I’ve also been kind of stalking her, trying to document her in moments of real time and capture her persona.

We’ve been chilling quite a bit, and I think what I’ll end up with is a bunch of clips of us doing random things together. I’m still uncertain exactly what I’m going to do, but I’ve thought of making a short film with these clips, really thinking about the Kuleshov effect and how Lev Kuleshov was able to convey different feelings through the assembly of parts. I am also very inspired by Koyaanisqatsi and how its many different clips achieve so many odd emotions, situations, and insights.


I’m working with hizlik on this portrait project. Since both of us are photographers, we decided to share a single process that records our photography style over time, split into two different visualizations—lighting preferences and subject preferences. Photographers evolve their style over time, and we wanted to see how ours did.

For the lighting portrait, we grabbed the EXIF data for every photo we’ve taken, created a value from the aggregate of ISO, shutter speed, and aperture, and plotted those values against their timestamps on a chart over time. This is an example:
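The post doesn’t specify which aggregate was used, but one standard candidate for collapsing ISO, shutter speed, and aperture into a single number is the ISO-adjusted exposure value (treat this choice as an assumption, not the authors’ actual formula):

```python
import math

def exposure_value(aperture, shutter_s, iso):
    """ISO-adjusted exposure value: EV100 = log2(N^2 / t) - log2(ISO / 100).

    'aperture' is the f-number N, 'shutter_s' the exposure time in
    seconds. Lower EV generally means the photographer was working in
    dimmer light (or chose a brighter exposure), so plotting EV against
    the photo timestamps would trace lighting preferences over time.
    """
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)
```

For example, f/2.8 at 1/100 s and ISO 100 gives about EV 9.6, a typical indoor-ish level, while doubling ISO at the same settings lowers the ISO-adjusted EV by exactly one stop.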

The categorical portrait will require us to run all of our photographs through Google Vision, a computer-vision API that produces keywords from photos. We will use these keywords to figure out, in a general sense, what we took pictures of.
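Once the label keywords come back per photo, aggregating them into a subject profile could look like this (a sketch only: the Vision API call itself is omitted, and `subject_profile` is a hypothetical helper operating on whatever label descriptions the API returns):

```python
from collections import Counter

def subject_profile(labels_per_photo, top_n=5):
    """Aggregate per-photo keyword lists into the most common subjects.

    'labels_per_photo' is a list of keyword lists, e.g. the label
    descriptions returned by Google Vision's label detection for each
    photograph. Keywords are lowercased so 'Dog' and 'dog' count as
    one subject.
    """
    counts = Counter(kw.lower() for labels in labels_per_photo for kw in labels)
    return counts.most_common(top_n)
```

Plotting these counts per month (rather than over the whole archive at once) would show how subject preferences shifted over time.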