I first thought of this idea a year and a half ago, when I saw the following video created by Tell No One (Luke White and Remi Weekes). I believe they achieved this effect with careful compositing, and I wanted to replicate it interactively and automatically. So, at the risk of further highlighting the flaws in my implementation, sharing it here best explains what I set out to accomplish.
[vimeo 12825278 w=625&h=352]
(I recommend you check out their other videos as well)
In particular, I really like the abstract forms that are generated by freezing motion, and I designed my system with the goal of generating form rather than scenes.
The plot thickens…
I thought this would be an ideal project for figuring out how to use the Kinect with openFrameworks, two things I was familiar with but had never seriously explored. But when I showed Duncan and some other friends the video above, they were immediately reminded of a game made by Double Fine called Happy Action Theater. As part of this game, there’s a mini-game called Clone-O-Matic, which is exactly what I was planning to do, but on a timer and with still images instead of video. I decided to go ahead with my project anyway.
[vimeo 39259832 w=625&h=352]
The video shows the process of recording loops (accelerated) and then the final products. Before you start recording, there’s a setup screen where you can adjust the thresholds and the angle of the Kinect. Having good threshold settings is important if you want the object to appear to float rather than attached to a limb or other prop.
Technically, the software is pretty simple. The depth image from the Kinect is filtered with a median filter to reduce the noise on the edges of objects, then thresholded. The thresholded image is blurred and used as the alpha channel for the color frame. Initially, this step was severely broken because the depth and color images were not aligned. Thankfully, the development version of ofxKinect includes built-in calibration functionality, and once I discovered this, my alignment problem disappeared. Loops are stored in memory as a sequence of uncompressed frames (a 640×480 RGBA frame is roughly 1.2 MB, so a few hundred frames already occupy hundreds of megabytes), which means the software quickly slows down with long loops or high-resolution input.
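The pipeline above (median filter → threshold → blur → alpha) can be sketched in a few lines. Here’s a rough NumPy equivalent of the openFrameworks version; the kernel sizes and the 500–1500 mm threshold band are illustrative placeholders, not the actual settings:

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge replication (pure NumPy)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = np.stack([p[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)

def box_blur(img, r=2):
    """(2r+1)x(2r+1) box blur, standing in for a proper Gaussian blur."""
    h, w = img.shape
    p = np.pad(img.astype(np.float32), r, mode="edge")
    k = 2 * r + 1
    windows = np.stack([p[dy:dy + h, dx:dx + w]
                        for dy in range(k) for dx in range(k)])
    return windows.mean(axis=0).astype(np.uint8)

def mask_color_frame(depth_mm, color_rgb, near=500, far=1500):
    """Depth -> alpha pipeline: median filter the depth image,
    threshold it to a near/far band, blur the binary mask,
    and attach the result as the color frame's alpha channel.
    Assumes depth and color are already aligned (as ofxKinect's
    calibration provides)."""
    smoothed = median3(depth_mm)
    mask = np.where((smoothed > near) & (smoothed < far), 255, 0).astype(np.uint8)
    alpha = box_blur(mask)
    return np.dstack([color_rgb, alpha])
```

The median filter matters more than it might seem: single-pixel depth speckle would otherwise punch holes in (or add flecks around) the silhouette, and the blur softens the mask edge so the cutout composites cleanly over the layers behind it.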
The biggest current problem is usability. First, starting and stopping recording is awkward because you have to use the mouse or keyboard; automatic triggering or a footswitch would be better. Second, you have to look at the computer to see your position in the frame (and the video is mirrored); a projector would help. Finally, the layers are stacked with the oldest layer on top. I suspect this is counterintuitive, and some other z-ordering scheme should be used.
The next thing to improve is the video quality. A higher-resolution camera would be great, but more importantly, the segmentation and looping can be improved. As with most Kinect projects, I had trouble getting clean depth data; cleaner depth would mean cleaner segmentation. I’d also like to explore automatically finding loop points that minimize the visible skip when a video layer loops.
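One simple approach to the loop-point problem (a sketch of the idea, not something the software currently does) is a brute-force search for the pair of frames that differ least, then looping between them, so the jump from the last frame back to the first is as invisible as possible:

```python
import numpy as np

def best_loop_points(frames, min_len=10):
    """Pick (start, end) indices whose frames are most similar, so that
    playing frames[start:end] and jumping from end-1 back to start
    produces the least visible skip. Brute force, O(n^2) in frame count;
    min_len prevents degenerate one-frame loops."""
    flat = np.stack([f.astype(np.float32).ravel() for f in frames])
    best, best_cost = (0, len(frames)), np.inf
    for s in range(len(frames)):
        for e in range(s + min_len, len(frames)):
            cost = np.mean((flat[s] - flat[e]) ** 2)  # MSE between endpoints
            if cost < best_cost:
                best, best_cost = (s, e), cost
    return best
```

For realistic loop lengths the quadratic search is cheap relative to the video processing, and comparing downsampled frames (or just the alpha masks) would make it cheaper still.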
And of course, the output is only as good as the input, so I’d like to see other people give it a try.