03 Apr 2013

For my final project I want to create a gesture recognition driven photo booth animation tool. So here is my research for that:


The Robot Readable World by Timo Arnall

This piece is a non-expert reflection on the nature of the computer's gaze into the world. It is composed of appropriated footage from computer vision research.

This piece is one of the most beautiful pieces I have ever seen that engages the aesthetics of technology. It expresses the insect-like movement of the computational gaze, which is sometimes terribly precise and sometimes wandering and almost whimsical. It also presents a pleasing juxtaposition between the mundane footage being analyzed and the powerful math being pointed at it.

My final project will use gesture and facial recognition to trigger photo-taking, and I hope it will carry the same hint of a wandering computational gaze.
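As a first sketch of that plan (my own hypothetical names, with the per-frame detection flag standing in for whatever a real detector like FaceOSC or OpenCV would report), the booth could watch for a face appearing and snap a photo on that rising edge, with a cooldown so one visitor doesn't fire a burst of shots:

```python
class BoothTrigger:
    """Fire a capture when a face first appears, then wait out a cooldown.

    The per-frame `face_present` flag would come from a real detector
    (e.g. FaceOSC or an OpenCV cascade); here it is just a boolean.
    """

    def __init__(self, cooldown_frames=30):
        self.cooldown_frames = cooldown_frames
        self.was_present = False
        self.cooldown = 0

    def update(self, face_present):
        """Return True on frames where a photo should be taken."""
        if self.cooldown > 0:
            # Still cooling down from the last shot: never fire.
            self.cooldown -= 1
            self.was_present = face_present
            return False
        fire = face_present and not self.was_present  # rising edge
        self.was_present = face_present
        if fire:
            self.cooldown = self.cooldown_frames
        return fire


trigger = BoothTrigger(cooldown_frames=2)
frames = [False, True, True, False, True]
print([trigger.update(f) for f in frames])  # → [False, True, False, False, True]
```

The cooldown length is a guess; in practice it would be tuned to the camera's frame rate and how long the booth's animation takes to play out.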

Content Retargeting Using Parameter-Parallel Facial Layers by Prof. Yaser

Professor Yaser has led an amazing series of projects that deal with reconstructing human motion from images of all fidelities. In this project, he uses face tracking to puppeteer the movements of various avatars.


The tracking data is stored and expressed in three different layers: emotion, mouth movement, and blinking. They use a series of stored extreme images in each category as references. To get the face-tracking data, they used motion capture to create a mesh, and then paid attention to the mouth and eyes only when changes occurred.
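A toy version of that layered idea (my own drastic simplification, not the paper's actual method) might store a neutral and an extreme parameter vector per layer, blend each layer independently by a weight between 0 and 1, and concatenate the results:

```python
def blend_layer(neutral, extreme, weight):
    """Linearly interpolate one facial layer between its neutral and
    extreme reference poses (weight 0.0 = neutral, 1.0 = extreme)."""
    return [n + weight * (e - n) for n, e in zip(neutral, extreme)]


def retarget(layers, weights):
    """Blend each independent layer (e.g. emotion, mouth, blink) by its
    own weight and concatenate into one avatar parameter vector."""
    out = []
    for name, (neutral, extreme) in layers.items():
        out.extend(blend_layer(neutral, extreme, weights[name]))
    return out


# Each layer here is a single made-up parameter for illustration.
layers = {"mouth": ([0.0], [1.0]), "blink": ([0.0], [1.0])}
print(retarget(layers, {"mouth": 0.5, "blink": 1.0}))  # → [0.5, 1.0]
```

The appeal of the layered split, as I read it, is that the mouth and eyes can be driven separately without re-solving the whole face.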

Although this project is well beyond my technical scope, it's a good reference for what has been done before.

Venus Webcam by Addie Wagenknecht & Pablo Garcia

This project asks internet users to pose in positions depicted in famous paintings. These images are then sent off to China to be painted.

This project describes itself as hacking a community of people rather than a code environment. It creates an unexpected link between internet culture and high culture. I read it as a commentary on how we assign value to images, which is especially evident in the decision to have the results fabricated in paint.

One thought on “Caroline Record – Looking Outward 7: Project Research”

  1. Dev


    I think there is a lot of diversity in your looking outwards here, and I am curious to know what approach you plan on taking with this. Gesture and face recognition to take photos can be done with relative ease using FaceOSC. I am curious to know whether your goal is to capture certain gestures in photo format, or to simply use gesture to capture photos.

    Something you mentioned in your post is how quirky detection can sometimes be. If this is meant to be a tool to capture pictures, the gestures you provide would have to be adequately unique and detectable so that the user doesn’t accidentally trigger the shot.
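One way to act on Dev's warning (a hand-rolled debounce of my own devising, with the detection flags again standing in for a real detector's output) is to fire the shutter only after the gesture has been held for a run of consecutive frames:

```python
class HoldToCapture:
    """Fire only once a gesture has been held for `hold` consecutive frames,
    so a momentary false detection never triggers the shutter."""

    def __init__(self, hold=15):
        self.hold = hold
        self.streak = 0

    def update(self, gesture_detected):
        """Return True on the single frame the hold requirement is first met."""
        if gesture_detected:
            self.streak += 1
        else:
            self.streak = 0  # any dropped frame resets the hold
        return self.streak == self.hold


capture = HoldToCapture(hold=3)
flags = [True, True, False, True, True, True, True]
print([capture.update(f) for f in flags])
# → [False, False, False, False, False, True, False]
```

Holding the pose also doubles as a natural photo-booth interaction: the countdown is the hold itself.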

