Akiva Krauthamer

28 Apr 2016

Project
This project is inspired by two previous works: Improvisation Technologies by William Forsythe and the “Put That There” project by the MIT Media Lab. I wanted to update the technology behind both of these projects. Since my major is in the School of Drama, I planned to create something performance based. I started this project for the “drawing machine” assignment. Because I enjoyed working on it and didn’t feel like it had reached its potential, I chose to continue working on it for the final assignment.



Forsythe
William Forsythe is a choreographer best known for his work in ballet. He created a set of videos demonstrating the underlying geometry of humans in motion. The videos were shot on film and then hand-rotoscoped.



Put That There

In 1979 the MIT Media Lab Speech Interface group created a project called Put That There. The project combined a laser pointer device with a voice recognition system. This was an early experiment in human-computer interaction using voice recognition.


Phase One

Using a Kinect, Chrome voice recognition, and a custom Processing app, I created the following tech demo. Each joint on a single person is tracked. When the user says a sentence that follows a set structure, the program creates a line connecting any two joints on the body.

Lines can be placed in space or continuously attached to the body. They can stop at the two joints or extend past them off screen.
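
To make that distinction concrete, here is a minimal Processing-style sketch of how the two behaviors could be stored and drawn. This is a sketch of the idea rather than the project’s actual code: the jointPosition() helper is a stand-in for whatever Kinect skeleton lookup the real app performs.

    // Every spoken command becomes one of these. A placed line freezes
    // its endpoints at creation time; an attached line re-reads the joint
    // positions every frame so it follows the performer.
    class BodyLine {
      String jointA, jointB;  // joint names, e.g. "left hand"
      boolean attached;       // true: follows the body; false: fixed in space
      boolean ray;            // true: extend past both joints off screen
      PVector a, b;           // cached endpoints for placed lines

      BodyLine(String jointA, String jointB, boolean attached, boolean ray) {
        this.jointA = jointA;
        this.jointB = jointB;
        this.attached = attached;
        this.ray = ray;
        a = jointPosition(jointA).copy();  // snapshot for placed lines
        b = jointPosition(jointB).copy();
      }

      void render() {
        PVector p = attached ? jointPosition(jointA) : a;
        PVector q = attached ? jointPosition(jointB) : b;
        if (ray) {
          // Stretch the segment far past both endpoints so it leaves the frame.
          PVector dir = PVector.sub(q, p);
          dir.mult(1000);
          line(p.x - dir.x, p.y - dir.y, q.x + dir.x, q.y + dir.y);
        } else {
          line(p.x, p.y, q.x, q.y);
        }
      }
    }

    ArrayList<BodyLine> lines = new ArrayList<BodyLine>();

    void draw() {
      background(0);
      stroke(255);
      for (BodyLine bl : lines) bl.render();
    }

    // Stand-in for the Kinect lookup; the real app would return the named
    // joint's current screen position here.
    PVector jointPosition(String jointName) {
      return new PVector(width / 2, height / 2);
    }

The spoken commands follow this template: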

(Draw / Place) a (Line / Ray) from my (Joint name) to my (Joint name)
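
As a rough illustration of how a sentence matching that template could become a drawing command, the snippet below checks a recognizer transcript against the structure with a regular expression. The mapping of “draw” to body-attached and “place” to fixed-in-space is my reading of the grammar above, and I am assuming the recognizer hands the sketch a plain text transcript.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Matches e.g. "draw a line from my left hand to my right knee".
    Pattern COMMAND = Pattern.compile(
      "(draw|place) a (line|ray) from my ([a-z ]+) to my ([a-z ]+)");

    void handleTranscript(String transcript) {
      Matcher m = COMMAND.matcher(transcript.toLowerCase().trim());
      if (!m.matches()) return;  // ignore speech that doesn't fit the grammar

      boolean attached = m.group(1).equals("draw");  // "place" pins it in space
      boolean ray = m.group(2).equals("ray");        // rays run off screen
      lines.add(new BodyLine(m.group(3), m.group(4), attached, ray));
    }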


Phase Two

As I continued this project I wanted to bring it from a tech demo to something closer to a finished piece. In the first iteration the program was not very stable, which made it very hard to record a real performance. Another goal for the second phase was to add two-person support.
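
One plausible shape for that support, reusing the BodyLine class sketched above, is to give each tracked performer their own list of lines keyed by the Kinect’s user id. How a spoken command gets matched to a performer is left open here; this assumes the caller already knows who spoke, for example via each actor’s individual lavalier microphone.

    // Each performer keeps a separate list of lines, keyed by the Kinect
    // user id, so a command only ever edits the speaker's drawing.
    HashMap<Integer, ArrayList<BodyLine>> linesByUser =
      new HashMap<Integer, ArrayList<BodyLine>>();

    void addLineFor(int userId, BodyLine bl) {
      if (!linesByUser.containsKey(userId)) {
        linesByUser.put(userId, new ArrayList<BodyLine>());
      }
      linesByUser.get(userId).add(bl);
    }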



Challenges

One of the hardest things about this project was the set of requirements for testing and recording. I needed a Kinect, a large quiet room with controlled lighting and background, a Windows computer, a computer with a Blackmagic card to record the output, lavalier microphones for each actor, and one or two actors. Gathering all these elements takes a fair amount of work, so I was only able to do three recording sessions in total. Unfortunately, due to an underpowered computer, the second recording session didn’t produce any usable footage of the two actors doing small scenes.



Final Results

Ultimately I created four GIFs, a video demonstrating the voice control, and a video set to music. I am happy with these results, but I’m also interested in pushing this project further and finding powerful uses for it.





Code

You can find the code online on GitHub.



Thanks

  • William Forsythe
  • MIT Media Lab Speech Interface group
  • Kevin Karol
  • Daniel Shiffman
  • Golan Levin
  • Jenni Oughton
  • Ruben Markowitz
  • Thomas Ford
  • Jimmy Brewer
  • Freddy Miyares
  • Justice Frimpong
  • Ashley Lee