Project 3 – New Interactions with Kinect and Computer Vision!

by Golan Levin @ 1:19 pm 28 February 2011

Students in our Spring 2011 freestyle arts-computing course at Carnegie Mellon, “Special Topics in Interactive Art and Computational Design,” taught by Prof. Golan Levin with teaching assistant Dan Wilcox, developed eighteen projects exploring the interactive and artistic potential of computer vision and the new Microsoft Kinect depth-sensing camera. All of the projects were created in just over two weeks, using a wide variety of free, open-source development environments for arts engineering, including OpenFrameworks, Processing, and Cinder. Students in the class range from sophomores to graduate students and come from six departments (Art, Design, Architecture, Computer Science, HCII, and Robotics). The original assignment statement can be viewed here.

This course unit was made possible by a microgrant from CMU’s Center for Computational Thinking, with funding from Microsoft Research, which provided 12 Kinect sensors with which the students developed their projects. Additional thanks to the developers of ofxKinect, ofxOpenNI, and OpenNI for their enabling contributions!


Comic Kinect

Maya Irvine, Ward Penney, Emily Schwartzman, and Mark Shuster

Kinect meets comic effects! Punch and kick interactions are visualized in comic book style — with onomatopoeic typography — in realtime.
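
As a rough illustration of the trigger logic such a piece needs (this is a Python sketch, not the team's code; the speed thresholds and words are invented for the example), a comic-book word can be selected by how fast a tracked hand or foot is moving:

```python
# Illustrative sketch: pick an onomatopoeic word from the speed of a
# tracked limb, with bigger words for harder hits. Thresholds (in assumed
# pixels/second) and vocabulary are invented for this example.

WORDS = [(1500, "POW!"), (1000, "BAM!"), (600, "THWAK!")]

def onomatopoeia(speed):
    for threshold, word in WORDS:
        if speed >= threshold:
            return word
    return None  # too slow: draw nothing

print(onomatopoeia(1200))  # -> "BAM!"
```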

Project page / video


Mario Boo

John Horstman and Nisha Kurani

Boo, a Super Mario ghost character, appears when the Kinect senses a person’s body. He’s shy and always stays behind the person he is following, floating gently and laughing maniacally. Boo’s size also depends on the person’s depth.
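
A minimal sketch of the follow-behind behavior (in Python rather than the class's OpenFrameworks/Processing toolchain; the pixel offset and scale constants are assumptions, not the authors' values):

```python
# Illustrative sketch: place a "Boo" sprite just behind a tracked person
# and scale it with the person's depth. Assumes a tracker that yields the
# user's screen-space centroid and depth in meters.

def boo_transform(user_x, user_y, user_depth_m, facing_right):
    offset = 80 if facing_right else -80  # stay behind the user's back
    boo_x = user_x - offset
    boo_y = user_y
    # Nearer people get a bigger Boo: scale inversely with depth,
    # clamped to a sensible range.
    scale = max(0.3, min(2.0, 1.5 / user_depth_m))
    return boo_x, boo_y, scale

print(boo_transform(320, 240, 1.2, facing_right=True))  # (240, 240, 1.25)
```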

Project page / video


Magrathea

Tim Sherman and Paul Miller

Magrathea uses the Kinect to generate a dynamic landscape from any desktop object. The camera reads the depth of whatever is built on the table in front of it, and the result is rendered as a slowly evolving, earthlike terrain.
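
A minimal sketch of the depth-to-terrain idea, assuming a 2-D array of millimeter depth readings and an invented table distance; the exponential moving average stands in for whatever smoothing produces the slow evolution (this is not the project's code):

```python
import numpy as np

# Turn a Kinect-style depth frame into a slowly evolving heightmap:
# closer objects become taller terrain, and an exponential moving average
# makes the landscape morph gradually as objects are rearranged.

TABLE_MM = 1200.0  # assumed distance from camera to the empty table

def update_terrain(terrain, depth, alpha=0.05):
    height = np.clip(TABLE_MM - depth, 0, None)     # height above table
    return (1 - alpha) * terrain + alpha * height   # slow temporal blend

terrain = np.zeros((480, 640))
frame = np.full((480, 640), 1000.0)  # a 20 cm-tall object everywhere
for _ in range(60):                  # ~2 seconds at 30 fps
    terrain = update_terrain(terrain, frame)
print(terrain.max())                 # creeps toward 200 mm
```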

Project page / video


We be Monsters

Caitlin Boyle and Asa Foster

We be Monsters may be the world’s first two-person Kinect-based puppet! Inspired by multi-person Chinese dragon costumes and Snuffleupagus, the project has two participants collaborate to direct the virtual monster’s limbs.
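
One plausible way to fuse two skeletons into a single puppet, sketched in Python with stand-in joint names and coordinates (the project's actual joint mapping may differ):

```python
# Hypothetical sketch: one participant's skeleton drives the front of the
# monster, the other's drives the back. The joints are stand-ins for
# OpenNI skeleton data.

def monster_pose(front_user, back_user):
    return {
        "head":       front_user["head"],
        "front_legs": (front_user["left_hand"], front_user["right_hand"]),
        "back_legs":  (back_user["left_hand"], back_user["right_hand"]),
        "tail":       back_user["torso"],
    }

front = {"head": (200, 80), "left_hand": (150, 300),
         "right_hand": (260, 310), "torso": (210, 200)}
back = {"head": (420, 90), "left_hand": (380, 305),
        "right_hand": (480, 300), "torso": (430, 210)}
print(monster_pose(front, back))
```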

Project page / video


Mix & Match Interactive Toy

Meg Richards

A virtual exquisite corpse built from 1956 Ed-U-Cards and controlled with the Kinect/OpenNI skeleton. Your body is composed of three cards, which you can change by swiping your hands.
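
A swipe can be detected from the tracked hand's horizontal velocity; below is a hedged Python sketch with an assumed speed threshold (not Meg's actual code):

```python
# Minimal swipe detector: fire when the hand's horizontal velocity
# exceeds a threshold. The threshold is an assumed calibration value.

SWIPE_SPEED = 600.0  # pixels/second

def detect_swipe(prev_x, curr_x, dt):
    vx = (curr_x - prev_x) / dt
    if vx > SWIPE_SPEED:
        return "swipe_right"   # advance to the next card
    if vx < -SWIPE_SPEED:
        return "swipe_left"    # go back to the previous card
    return None

print(detect_swipe(200, 260, dt=1 / 30))  # fast motion -> "swipe_right"
```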

Project page / video


Kinect Flock

Alex Wolfe and Honray Lin

Alex and Ray created a particle system that exhibits flocking and swarming behaviors while the participant is moving, and gathers onto the participant’s depth field when they stand still. The resulting simulation ebbs and flows between the recognizable and the abstract.
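
A compact sketch of the two behaviors, assuming the depth silhouette has already been reduced to a set of 2-D points; the steering constants are invented, and this is not the authors' code:

```python
import numpy as np

# Particles steer toward random points sampled from the user's depth
# silhouette when the user is still, and wander freely otherwise.

rng = np.random.default_rng(0)

def step(particles, velocities, silhouette_pts, user_still, dt=1/30):
    if user_still and len(silhouette_pts):
        # each particle is attracted to a random silhouette point
        targets = silhouette_pts[rng.integers(len(silhouette_pts),
                                              size=len(particles))]
        velocities += 0.5 * (targets - particles) * dt
    else:
        velocities += rng.normal(0, 5, particles.shape) * dt  # wander
    return particles + velocities * dt, velocities * 0.99     # mild drag

particles = rng.uniform(0, 480, (200, 2))
velocities = np.zeros((200, 2))
silhouette = rng.uniform(100, 380, (500, 2))  # stand-in for depth pixels
particles, velocities = step(particles, velocities, silhouette, True)
```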

Project page / video


roboScan

Shawn Sims

roboScan is a 3D modeler + scanner that uses a Kinect mounted on an ABB 4400 robot arm. Motions planned in Robot Studio and Robot Master control the robot as well as the 3D position of the camera. The Kinect depth data is then used to produce an accurate model of the environment.
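
The core math of scanning from a moving camera is a rigid transform of each depth point by the robot-reported camera pose, so points from many viewpoints accumulate in one world frame. A small sketch with an invented pose (not the authors' code):

```python
import numpy as np

# Transform depth points from the Kinect's frame into a shared world
# frame using a 4x4 camera-to-world homogeneous transform.

def to_world(points_cam, pose):
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose @ homo.T).T[:, :3]

pose = np.eye(4)
pose[:3, 3] = [0.5, 0.0, 0.2]            # assumed camera offset, meters
points = np.array([[0.0, 0.0, 1.0]])     # one point 1 m ahead of camera
print(to_world(points, pose))            # -> [[0.5, 0.0, 1.2]]
```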

Project page / video


Neurospasta

Huaishu Peng and Charles Doomany

Neurospasta is a free-form game platform for full-body experimentation and play. Each participant controls their own Kinect-based puppet, and can also select among three functions (Scale Body, Repel Head, and Switch Head) to manipulate the other player’s avatar!

Project page / video


Will-o-the-Wisp

Le Wei and James Mulholland

In folklore, the Will-o-the-Wisp is an enigmatic, fairy-like creature that appears as a glowing light to weary travelers in swamps. These travelers follow the light deep into the swamp and mysteriously disappear… However, some of these Wisps are friendly and like to play!

Project page / video


Balloon Grab

Susan Lin and Chong Han Chua

By detecting open-hand versus closed-hand postures, Susan and Chong Han developed a simple storytelling tool based on simulated balloon flight.
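
One plausible heuristic for the open-versus-closed distinction (an assumption, not necessarily the authors' method) is to count the depth pixels near the tracked hand that sit at roughly the hand's depth, since an open hand covers more area than a fist:

```python
import numpy as np

# Classify an open vs. closed hand by blob area around the tracked hand.
# The window size, depth tolerance, and area threshold are assumed
# calibration values.

OPEN_AREA = 900  # pixel-count threshold

def hand_is_open(depth, hand_x, hand_y, hand_z, win=40, tol=80):
    patch = depth[hand_y - win:hand_y + win, hand_x - win:hand_x + win]
    area = np.count_nonzero(np.abs(patch - hand_z) < tol)
    return area > OPEN_AREA

depth = np.full((480, 640), 2000.0)
depth[200:260, 280:360] = 900.0            # a broad, open-hand-sized blob
print(hand_is_open(depth, 320, 230, 900))  # -> True
```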

Project page / video


Hand-Tracking Visualization

Ben Gotow

Ben’s software uses hand gestures to control an audio visualization, incorporating depth data from the Kinect to identify hands in a scene. The position, velocity and other parameters of the participant’s hands are then used to create an interactive visualization of sound.
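
A sketch of how tracked hand data might be turned into visualization parameters, with assumed smoothing and normalization constants (not Ben's actual code):

```python
# Smooth the hand position, estimate its velocity, and map speed to a
# normalized visual intensity value for the audio visualization.

def hand_params(prev, curr, dt=1/30, smoothing=0.7):
    sx = smoothing * prev[0] + (1 - smoothing) * curr[0]
    sy = smoothing * prev[1] + (1 - smoothing) * curr[1]
    vx, vy = (curr[0] - prev[0]) / dt, (curr[1] - prev[1]) / dt
    speed = (vx * vx + vy * vy) ** 0.5
    intensity = min(1.0, speed / 1000.0)   # assumed normalization
    return (sx, sy), intensity

print(hand_params((300, 240), (330, 250)))
```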

Project page / video


Kinect Tracer

Eric Brockmeyer and Jordan Parsons

Eric and Jordan’s system maps the paths of people moving through a space, and projects interpretations of these paths back onto the floor in realtime.
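
A minimal sketch of path accumulation, assuming each person has already been tracked to a floor centroid; the trail length and fading scheme are invented details, not the authors' code:

```python
from collections import deque

# Keep a rolling history of each tracked person's centroid and expose it
# as a fading polyline to project back onto the floor.

TRAIL_LEN = 90  # ~3 seconds of history at 30 fps

trails = {}  # person id -> deque of (x, y) floor positions

def update_trail(person_id, centroid):
    trail = trails.setdefault(person_id, deque(maxlen=TRAIL_LEN))
    trail.append(centroid)
    # older points get lower alpha, producing the fading-path look
    return [(pt, (i + 1) / len(trail)) for i, pt in enumerate(trail)]

for x in range(5):
    segments = update_trail(1, (100 + 10 * x, 200))
print(segments[-1])  # newest point has alpha 1.0
```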

Project page / video


Automatic Spotlight

Samia Ahmed

A Kinect-controlled DMX spotlight automatically tracks people within a space, following anyone it sees and jumping back and forth between them.
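
The essential mapping is from a tracked image position to an 8-bit DMX value for the fixture; here is a hedged sketch with an assumed channel assignment (not Samia's code):

```python
# Convert a person's horizontal position in the depth image into a
# 0-255 DMX pan value for the moving light.

PAN_CHANNEL = 1  # assumed DMX channel assignment

def pan_for_position(person_x, image_width=640, pan_range=255):
    # linear map from image column to the fixture's pan channel
    return int(round(person_x / (image_width - 1) * pan_range))

dmx_frame = [0] * 512                    # one full DMX universe
dmx_frame[PAN_CHANNEL] = pan_for_position(480)
print(dmx_frame[PAN_CHANNEL])            # -> 192, aiming right of center
```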

Project page / video


Kinect Interactive Projection Mapping

Marynel Vázquez and Madeline Gannon

Projection mapping with realtime lighting onto 3D forms: the virtual coordinates are moved to match the physical projection, and the Kinect controls the virtual color and lighting.

Project page / video
 

2D♥3D

Mauricio Giraldo

2D♥3D is a Kinect-based online multi-user interactive environment. The project allows multiple web-based users to interact with and augment the physical space of the Kinect participants with virtual objects.

Project page / video


Kinect VJ’ing

Riley Harmon

Realtime control of interactive color and visuals for VJ’ing. One hand controls the position of the rotating arm, and the other changes the color.
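
A sketch of the two-hand mapping described above, with assumed axes and ranges (not Riley's code): one hand's height sets the arm's rotation, the other's sets the hue of the visuals.

```python
import colorsys

# Map hand heights to a rotation angle and a saturated RGB color.

def vj_params(left_hand_y, right_hand_y, screen_h=480):
    angle = left_hand_y / screen_h * 360.0   # degrees of arm rotation
    hue = right_hand_y / screen_h            # 0..1 around the color wheel
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return angle, (int(r * 255), int(g * 255), int(b * 255))

print(vj_params(120, 360))  # -> (90.0, a saturated color)
```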

Project page / video


The Pet Flower Project

Sharon Hoosein

A webcam lets you interact with a virtual pet flower that has emotional states: pet the flower to make it happy, or it will cry if it’s left alone.
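
The described behavior amounts to a tiny state machine; here is a sketch with an assumed loneliness timeout (not Sharon's code):

```python
# Mood transitions for the pet flower: petting makes it happy, and
# prolonged inattention makes it cry. The timeout is an assumed value.

LONELY_AFTER = 30.0  # seconds alone before crying

def next_mood(mood, petted, seconds_alone):
    if petted:
        return "happy"
    if seconds_alone > LONELY_AFTER:
        return "crying"
    return mood

mood = "neutral"
mood = next_mood(mood, petted=False, seconds_alone=45.0)
print(mood)  # -> "crying"
```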

Project page


You Mirror Me

Dane Pieri and Max Hawkins

You Mirror Me is a site for creating collaborative video animations. Each animation starts with a base sequence: a five-second video recorded by one person. This video is split into 50 frames, which are then served to contributors. Each contributor receives a random frame from the base video and has five seconds to re-create it; a photo is taken by their webcam and sent back to the server.
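
The numbers above imply a 10 fps sampling rate (50 frames over five seconds). A back-of-the-envelope sketch, with illustrative function names that are not the site's actual code:

```python
import random

# Frame pipeline implied by the description: sample the base video at
# 10 fps and hand contributors a random frame index to re-create.

VIDEO_SECONDS = 5.0
FRAME_COUNT = 50
FPS = FRAME_COUNT / VIDEO_SECONDS  # 10 frames per second

def frame_timestamps():
    return [i / FPS for i in range(FRAME_COUNT)]  # 0.0, 0.1, ... 4.9 s

def serve_random_frame():
    return random.randrange(FRAME_COUNT)  # index handed to a contributor

print(FPS, frame_timestamps()[:3], serve_random_frame())
```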

Project page / Project website



This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.