Erkki Kurenniemi, DIMI-O (1971)
Used the body, as tracked by the camera, as the basis of a musical instrument.
DIMI-O is based on an optical interface whose original purpose was to read sheet music graphically; the instrument was played by means of a video camera. DIMI-O was also performed by a dancer, whose movements were transformed into music. [YouTube]

Myron Krueger, Videoplace (1974-1989)
Krueger represented the body in a virtual environment, endowed with new powers. This is not only some of the first full-body computer-based interactive art to use a camera; it is some of the first interactive computer art, period, and some of the first telematic (networked) art as well. Krueger sought to make computing a full-body activity. Keep in mind that in 1974, the mouse had not yet come into widespread use.
Two people in different rooms, each containing a projection screen and a video camera, were able to communicate through their projected images in a “shared space” on the screen. [YouTube1, YouTube2]


David Rokeby, Very Nervous System (~1986-1990)
Another use of the body in a musically instrumental way, but Rokeby eliminates the screen entirely, and instead surrounds the body with a tightly responsive sound environment.
“I created the work for many reasons, but perhaps the most pervasive reason was a simple impulse towards contrariness. The computer as a medium is strongly biased. And so my impulse while using the computer was to work solidly against these biases. Because the computer is purely logical, the language of interaction should strive to be intuitive. Because the computer removes you from your body, the body should be strongly engaged. Because the computer’s activity takes place on the tiny playing fields of integrated circuits, the encounter with the computer should take place in human-scaled physical space. Because the computer is objective and disinterested, the experience should be intimate.”

Rafael Lozano-Hemmer, Surface Tension (1992)
Position is mapped to position: simple and compelling. [YouTube]

Camille Utterback & Romy Achituv, Text Rain (1999)
The body surrounded by responsive virtual objects. [YouTube]

Daniel Rozin, Wooden Mirror (1999), Peg Mirror and Weave Mirror (2007) [YouTube, Vimeo]
Literal mirrors, but with sculpturally expanded concepts of pixel-based screens.

Scott Snibbe, Boundary Functions (1998)
The Voronoi plane partitioning algorithm is used to illustrate personal space. [YouTube]
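The partitioning idea itself is simple to sketch. Below is a minimal NumPy illustration (not Snibbe's implementation): given the tracked 2D floor positions of each person, every pixel of the plane is labeled with the index of its nearest person, producing the Voronoi cells that visualize each person's region of "personal space."

```python
import numpy as np

def voronoi_labels(positions, width, height):
    """Label each pixel of a width x height plane with the index of
    the nearest tracked person, partitioning the floor into Voronoi cells."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.array(positions, dtype=float)  # shape (n, 2), as (x, y)
    # squared distance from every pixel to every person, via broadcasting
    d2 = (xs[..., None] - pts[:, 0]) ** 2 + (ys[..., None] - pts[:, 1]) ** 2
    return d2.argmin(axis=-1)  # (height, width) array of person indices

# two "people" standing on a 64 x 48 floor
labels = voronoi_labels([(10, 10), (50, 30)], width=64, height=48)
```

As people move, the boundaries between cells shift in real time; drawing only the boundary pixels (where the label changes between neighbors) yields the lines seen in the piece.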

Brian Knep, Healing Series (2003-2009)
Where Snibbe explores the meanings made when people interact with the Voronoi algorithm, Knep explores the expressive potential of the body as an input to a reaction-diffusion algorithm. [YouTube]
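For reference, here is a minimal sketch of one common reaction-diffusion system, the Gray-Scott model (an illustrative assumption; the source doesn't specify which model Knep used). Two chemical fields diffuse and react on a grid; a camera-derived body silhouette can be injected into the B field each frame so that patterns grow where a person stands.

```python
import numpy as np

def gray_scott_step(A, B, feed=0.037, kill=0.06, dA=1.0, dB=0.5, dt=0.25):
    """One explicit-Euler update of the Gray-Scott reaction-diffusion model."""
    def lap(Z):  # normalized 5-point Laplacian: average of neighbors minus Z
        return 0.25 * (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                       np.roll(Z, 1, 1) + np.roll(Z, -1, 1)) - Z
    reaction = A * B * B
    A2 = A + dt * (dA * lap(A) - reaction + feed * (1.0 - A))
    B2 = B + dt * (dB * lap(B) + reaction - (kill + feed) * B)
    return A2, B2

# seed: chemical A everywhere, a blob of B where the "body" is
A = np.ones((64, 64))
B = np.zeros((64, 64))
B[28:36, 28:36] = 1.0  # stand-in for a body mask from the camera
for _ in range(200):
    A, B = gray_scott_step(A, B)
```

Rendering one of the fields (e.g. B) as grayscale, updated every frame, gives the slowly healing, organic textures characteristic of such systems.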

Tmema, Blonk + La Barbara, Messa di Voce (2003)
Bodies (and voices) interact with simulations to produce both sound and image. [Vimeo]

Chris O’Shea, Hand from Above (2008)
A giant hand that plays with you on the street. [Vimeo]

Shadow Play
Scott Snibbe, Make Like a Tree (2005) [YouTube]. Check out Snibbe’s Screen Series.

Rafael Lozano-Hemmer, Underscan (2005) [YouTube]

Philip Worthington, Shadow Monsters (2005) [YouTube]

Golan Levin, Interstitial Fragment Processor (2007) [Vimeo]

Influence of the Kinect
The Kinect depth sensor obviated many of the hardest problems in vision-based body understanding. Within days of its release, it was seized upon by new-media artists eager to explore its possibilities.
Robert Hodgin, Body Dysmorphic Disorder (2010) [Vimeo]

Karolina Sobecka, Sniff (2010) [Vimeo]

Chris Milk et al., The Treachery of Sanctuary (2012) [YouTube]

Design-IO, Puppet Prototype (2010).
Just days after the release of the Kinect, Theo Watson and Emily Gobeille created this quick prototype. [Vimeo]

This led to a commission to produce a larger work, Puppet Parade (2011) [Vimeo]

Design-IO, Night Bright (2011)
Night Bright is an interactive installation of nocturnal discovery in which children use their bodies to light up the nighttime forest and discover the creatures that inhabit it. Listening to the creatures’ sounds, children can locate them in the forest as they play a nighttime game of hide-and-seek. [Vimeo]

Design-IO, Weather Worlds (2013) [Vimeo]
Weather Worlds is an interactive installation that grants children weather controlling superpowers.
Utilizing a camera and real-time greenscreening, the installation allows children to see themselves immersed in an interactive and dynamic environment. The custom computer vision system tracks the heads, hands, feet, and movement of children on the platform and also recognizes gestures. Using their bodies, children can conjure a storm, release a twisting tornado, or rain down bolts of lightning from their fingertips. There are mighty wind fields to move through, stomping earthquakes, light-bending sunshine, and blizzards that will make you shiver!

One of the core advantages of the Kinect is that it provides a skeleton for the body. This labels the parts of the body so that one knows, for example, the location of the head, the location of the arms, etc. Once you have this, it’s easy to make conceptual transformations based on such identities. Here, for example, is MoMath: Human Tree, a fractal body experience created by the design studio Blue Telescope: [Vimeo]

Some Dance/Performance and Technology
Klaus Obermaier, Apparition (2004) [YouTube]
One of the first uses of augmented projection on a computationally tracked body.

Obermaier also has a sense of humor, as in his Ego installation [YouTube].

Chunky Move, Mortal Engine (2008) [YouTube]
Adrien M / Claire B, AMCB-introduction (2013) & Pixel (2014) [YouTube]
Bill T. Jones & Google Creative Lab: Body, Movement, Language: AI Sketches (2019). [YouTube]
“A Visual Journey Through Addiction”. Shreeya Sinha with Zach Lieberman and Leslye Davis. New York Times, 12/18/2018. More Info
Opportunities, and Previous Student Work
You could make a body-controlled game. An example is shown above; Lingdong Huang made this “Face Powered Shooter” in 60-212 in 2016, when he was a sophomore. Another example, Face Pinball, is shown below.
You could make a sound-responsive costume. Or you could develop a piece of interactive real-time audiovisual performance software (perhaps similar to Setsuyakurotaki, 2016, by Zach Lieberman + Rhizomatiks).

You could make a creativity tool, like a drawing program. In 2019, Design junior Eliza Pratt built this eye-tracking drawing program in 60-212.
Mary Huang made a project to control the parameters of a typeface with her face.
You may capture more than one person. Your software doesn’t have to be limited to just one body. Instead, it could visualize the relationship (or create a relationship) between two or more bodies (as in Scott Snibbe’s Boundary Functions or this sketch by Zach Lieberman). It could visualize or respond to a duet. It could visualize the interactions of multiple people’s bodies, even across the network (for example, one of Char’s templates transmits shared skeletons, using PoseNet in a networked Glitch application.)
You may focus on just part of the body. Your software doesn’t need to respond to the entire body; it could focus on interpreting the movements of a single part of the body (as in Emily Gobeille & Theo Watson’s prototype for Puppet Parade, which responds to a single arm).
You may focus on how an environment is affected by a body. Your software doesn’t have to re-skin or visualize the body. Instead, you can develop an environment that is affected by the movements of the body (as in Theo & Emily’s Weather Worlds).

You may control the behavior of something non-human. Just because your data was captured from a human, doesn’t mean you must control a human. Just because your data is from a hand, doesn’t mean it has to control a representation of a hand. Consider using your data to puppeteer an animal, monster, plant, or even a non-living object (as in this research on “animating non-humanoid characters with human motion data” from Disney Research, and in this “Body-Controlled Head” (2018) by 60-212 student, Nik Diamant). Here’s a simple sketch for a quadruped which is puppeteered by your hand (here).
You could make software which is analytic. You might instead elect to create an “information visualization” that presents an ergonomic analysis of the body’s movements over time. Your software could present comparisons of different people making similar movements, or could track the accelerations of a violinist’s movements.
You could make something altogether unexpected. Above is a project, What You Missed (2006), by CMU student Michael Kontopoulos. Michael built a custom blink detector, and then used it to take photos of the world that he otherwise missed when blinking.
Cheese by Christian Moeller is an experiment in the “architecture of sincerity”. On camera, six actresses each try to hold a smile for as long as they can, up to one and a half hours. Each ongoing smile is scrutinized by an emotion recognition system, and whenever the display of happiness falls below a certain threshold, an alarm alerts the actress to show more sincerity. The performance of sincerity is hard work.