Arousal vs. Time

Arousal vs. Time from Miles Peyton on Vimeo.

Arousal vs. Time: a seismometer for arousal, as measured by facial expressions.

Overview

One way to infer inner emotional states without access to a person’s thoughts is to observe their facial expressions. As the name suggests, Arousal vs. Time is a visualization of excitement levels over time. The more you deviate from your resting expression, the more excited you are presumed to be. An interesting context for this tool is everyday social interaction. Watching the seismometer while talking to a friend can generate insights into the nature of that relationship. It might reveal which person tends to lead the conversation, or who is the more introverted of the two. Watching a conversation unfold in this visual manner is both soothing and unsettling.
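
To make this concrete, a minimal openFrameworks sketch along these lines – not the project’s actual code – could capture a set of resting face-mesh points with ofxFaceTracker and report arousal as the mean deviation from that baseline. The ‘r’ key, the variable names, and the reliance on getObjectPoints() are my assumptions for illustration:

    // Hedged sketch: arousal = mean distance of the current (pose-normalized)
    // face mesh from a stored resting mesh. Assumes ofxFaceTracker exposes
    // getObjectPoints(); names like restingPoints are illustrative only.
    #include "ofMain.h"
    #include "ofxCv.h"
    #include "ofxFaceTracker.h"

    using namespace ofxCv;

    class ofApp : public ofBaseApp {
    public:
        ofVideoGrabber cam;
        ofxFaceTracker tracker;
        vector<ofVec3f> restingPoints; // captured while the face is at rest
        float arousal;

        void setup() {
            arousal = 0;
            cam.initGrabber(640, 480);
            tracker.setup();
        }

        void update() {
            cam.update();
            if (!cam.isFrameNew()) return;
            tracker.update(toCv(cam));
            if (tracker.getFound() && !restingPoints.empty()) {
                vector<ofVec3f> current = tracker.getObjectPoints();
                if (current.size() == restingPoints.size()) {
                    float sum = 0;
                    for (size_t i = 0; i < current.size(); i++) {
                        sum += current[i].distance(restingPoints[i]);
                    }
                    arousal = sum / current.size(); // the "seismometer" signal
                }
            }
        }

        void keyPressed(int key) {
            // Press 'r' while holding a neutral expression to set the baseline.
            if (key == 'r' && tracker.getFound()) {
                restingPoints = tracker.getObjectPoints();
            }
        }

        void draw() {
            ofDrawBitmapString("arousal: " + ofToString(arousal, 3), 20, 20);
        }
    };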

Inspiration

Arousal vs. Time is the latest iteration in a series of studies. After receiving useful feedback on my last foray into face tracking, I decided to rework the piece to include sound, two styrofoam heads, and text for clarity. Daito Manabe’s and Kyle McDonald’s face-related projects – “Face Instrument”, “Happy Things” – informed the sensibility of this work.

“Face Instrument” – Daito Manabe


“Happy Things” – Kyle McDonald

Implementation 

A casual conversation between a friend and me was recorded on video and in XML files. I wrote the two software components of this artwork – the seismometer and the playback mechanism – in openFrameworks 0.8. I used the following three addons:

  1. ofxXmlSettings – for recording and playing back face data (see the sketch after this list)
  2. ofxMtlMapping2D – for projection mapping
  3. ofxFaceTracker – for tracking facial expressions
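
For instance, the recording component might append one <frame> tag per update, with a timestamp and the arousal value, and write the file out at the end of the session. This is a hedged sketch of that idea using ofxXmlSettings; the tag names and file name are illustrative guesses, not the project’s actual schema:

    // Hypothetical recorder: one <frame> entry per tracked frame, holding
    // a <time> and an <arousal> value, saved with ofxXmlSettings.
    #include "ofMain.h"
    #include "ofxXmlSettings.h"

    class FaceTraceRecorder {
    public:
        ofxXmlSettings xml;
        int frameIndex;

        FaceTraceRecorder() : frameIndex(0) {}

        void addFrame(float timeSeconds, float arousal) {
            xml.addTag("frame");
            xml.pushTag("frame", frameIndex);
            xml.addValue("time", timeSeconds);
            xml.addValue("arousal", arousal);
            xml.popTag();
            frameIndex++;
        }

        void save(const string& path) {
            xml.saveFile(path); // e.g. "face_a.xml" (placeholder name)
        }
    };

In the seismometer app, addFrame() would be called once per tracked frame, e.g. with ofGetElapsedTimef() and the current arousal value, and save() once when recording stops.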


The set

The projection mapping on the styrofoam heads was carried out on two laptops with two pico projectors. I stored facial data in XML files, and recorded video and audio with an HD video camera and an audio recorder.

The audio file was manipulated in Ableton Live to obscure the content of the conversation. I used chroma keying in Adobe Premiere to remove the background of the video, such that the graphs would seem to emerge from behind the heads, and not from some unseen bounding box. Finally, the materials – a video file, two XML files, and an audio file – were brought together in a second “player” application, also built in openFrameworks.
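
A rough sketch of how such a player might be organized, assuming the arousal traces were saved as per-frame <frame> tags as above (the file names and the drawGraph() helper are placeholders, not the project’s actual assets):

    // Hedged sketch of the playback app: a keyed video, the processed audio,
    // and two XML arousal traces drawn as graphs, indexed by the video's
    // current frame so everything stays roughly in sync.
    #include "ofMain.h"
    #include "ofxXmlSettings.h"

    class PlayerApp : public ofBaseApp {
    public:
        ofVideoPlayer video;   // chroma-keyed conversation video
        ofSoundPlayer audio;   // audio obscured in Ableton Live
        vector<float> traceA, traceB;

        void setup() {
            video.loadMovie("conversation.mov");   // placeholder file names
            audio.loadSound("conversation.mp3");
            loadTrace("face_a.xml", traceA);
            loadTrace("face_b.xml", traceB);
            video.play();
            audio.play();
        }

        void loadTrace(const string& path, vector<float>& trace) {
            ofxXmlSettings xml;
            xml.loadFile(path);
            int n = xml.getNumTags("frame");
            for (int i = 0; i < n; i++) {
                trace.push_back(xml.getValue("frame:arousal", 0.0, i));
            }
        }

        void update() {
            video.update();
        }

        void draw() {
            video.draw(0, 0);
            int upTo = video.getCurrentFrame();
            drawGraph(traceA, upTo, 150);   // one graph per head
            drawGraph(traceB, upTo, 350);
        }

        void drawGraph(const vector<float>& trace, int upTo, float baselineY) {
            ofPolyline line;
            for (int i = 0; i <= upTo && i < (int)trace.size(); i++) {
                // Arbitrary horizontal and vertical scaling for display.
                line.addVertex(i * 0.5f, baselineY - trace[i] * 100);
            }
            line.draw();
        }
    };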

Reflection

Regarding a conceptual impetus for this project, I keep thinking back to a point Professor Ali Momeni made when I showed an earlier version of this project during critique. He questioned not my craft, but my language: the fact that I used the word “disingenuous” to describe my project. I still don’t have a satisfying response to this, just more speculation.

Am I trying to critique self-quantification by proposing an alienating use of face tracking? Or am I making a sincere attempt to learn something about social interaction through technology? The ambivalence I feel toward the idea of self-quantification leads me to believe that it is worthwhile territory for me to continue to explore.
