Project 4: Final Days…

by Ben Gotow @ 3:16 am 25 April 2011

For the last couple of weeks, I’ve been working on a Kinect hack that performs body detection, extracts individuals from the scene, distorts them using GLSL shaders, and pastes them back into the scene using OpenGL multitexturing. The concept is relatively straightforward. Blob detection on the depth image determines which pixels are part of each individual. The color pixels within each body are copied into a texture, and the non-interesting parts of the image are copied into a second background texture. Since distortions are applied to the bodies in the scene, the holes they leave in the background image need to be filled. To accomplish this, the most distant pixel seen at each point is cached from frame to frame and substituted in when body blobs are cut out.
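
Here’s a minimal sketch of that background cache, the way I think of it. All of the names (BackgroundCache, blobMask, and so on) are mine, and the real code differs:

// Sketch: a "deepest sample wins" background cache. Each frame, any pixel
// that isn't part of a body blob and is farther than anything previously
// seen at that location replaces the cached background there.
#include <stdint.h>
#include <vector>

struct BackgroundCache {
    int width, height;
    std::vector<uint16_t> maxDepth; // deepest depth reading seen per pixel
    std::vector<uint32_t> color;    // RGBA sample captured at that depth

    BackgroundCache(int w, int h)
        : width(w), height(h), maxDepth(w * h, 0), color(w * h, 0) {}

    // Call once per frame with the raw depth/color images and the blob mask.
    void update(const uint16_t* depth, const uint32_t* rgba,
                const unsigned char* blobMask) {
        for (int i = 0; i < width * height; i++) {
            // Skip body pixels; a depth of 0 means "no reading" on the Kinect.
            if (blobMask[i] == 0 && depth[i] > maxDepth[i]) {
                maxDepth[i] = depth[i];
                color[i] = rgba[i];
            }
        }
    }

    // When a body blob is cut out, sample this instead of the live frame.
    uint32_t holeFill(int i) const { return color[i]; }
};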

It proved difficult to pull out the bodies in color. Because the depth camera and the color camera in the Kinect do not align perfectly, using a blob from the depth image as a mask for the color image does not work. On my Kinect, the mask region was off by more than 15 pixels, and color pixels flagged as belonging to a blob were often actually part of the background.

To fix this, Max Hawkins pointed me in the direction of a Cinder project that used OpenNI to re-register the depth image so that it lines up with the color image. Somehow, that impressive feat of computer imaging is accomplished with these five lines of code:


// Align the depth and image generators: ask the depth generator to
// reproject its output into the RGB camera's viewpoint, so that depth
// blobs line up with color pixels.
printf("Trying to set alt. viewpoint\n");
if( g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) )
{
    printf("Setting alt. viewpoint\n");
    // Clear any previous viewpoint, then adopt the color camera's.
    g_DepthGenerator.GetAlternativeViewPointCap().ResetViewPoint();
    if( g_ImageGenerator )
        g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint( g_ImageGenerator );
}

I hadn’t used Cinder before, but I decided to migrate the project to it since it seemed to be a much more natural environment for GLSL shaders. Unfortunately, the Kinect OpenNI drivers in Cinder seemed to be crap compared to the ones in OpenFrameworks et al. The console often reported that the “depth buffer size was incorrect” and that the “depth frame is invalid”. Onscreen, the image from the camera flashed, and occasionally frames appeared misaligned or half missing.

I continued fighting with Cinder until last night, when at 10 PM I found this video in an online forum:

This video is intriguing because it shows real-time detection and unique identification of multiple people with no configuration. AKA, it’s hot shit. It turns out the video was made with PrimeSense, the technology used for hand / gesture / person detection on the Xbox.

I downloaded PrimeSense and compiled the samples. Behavior in the above video: achieved. The scene analysis code is incredibly fast and highly robust. It kills the blob detection code I wrote performance-wise, and it doesn’t require that people’s legs intersect the bottom of the frame (the technique I was using assumed the nearest blob touching the bottom of the frame was the user).
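
For the curious: the scene analysis hands back a per-pixel label map through OpenNI’s UserGenerator. A rough sketch of how a mask for one person can be pulled out of it (the helper and the variable names are mine, not from the samples):

// Sketch: reading the per-pixel user labels produced by the PrimeSense
// scene analysis through OpenNI. The helper and names are illustrative.
#include <XnCppWrapper.h>

xn::UserGenerator g_UserGenerator; // assumed initialized via Create() elsewhere

// Marks each pixel that belongs to the given user ID.
void buildUserMask(XnUserID targetUser, unsigned char* maskOut,
                   int width, int height)
{
    xn::SceneMetaData sceneMD;
    // Passing 0 requests the label map covering every detected user.
    g_UserGenerator.GetUserPixels(0, sceneMD);
    const XnLabel* labels = sceneMD.Data();
    for (int i = 0; i < width * height; i++)
        maskOut[i] = (labels[i] == targetUser) ? 1 : 0;
}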

I re-implemented the project in C++ on top of the PrimeSense sample. I migrated the depth + color alignment code over from Cinder, built the background cache, and rebuilt the display on top of a GLSL shader. Since I was only using Cinder to wrap OpenGL shaders, I decided it wasn’t worth linking it into the sample code. It’s eight source files and it compiles on the command line. It was ungodly fast. I was in love.
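
The multitexturing side of that display is conceptually just two textures handed to one shader. A sketch of the setup, assuming an already-linked program and using my own names for the uniforms and textures:

// Sketch: bind the cached background and the extracted body pixels to two
// texture units and point the shader's samplers at them. Names are mine.
#include <OpenGL/gl.h> // on Mac OS X; <GL/gl.h> elsewhere

void bindSceneTextures(GLuint prog, GLuint backgroundTex, GLuint bodyTex)
{
    glUseProgram(prog);

    // Unit 0: the hole-filled background cache.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glUniform1i(glGetUniformLocation(prog, "background"), 0);

    // Unit 1: the extracted body pixels (alpha = 0 outside the blob).
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, bodyTex);
    glUniform1i(glGetUniformLocation(prog, "body"), 1);
}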

Rather than apply an effect to all the individuals in the scene, I decided it was more interesting to distort just one. Since the PrimeSense library assigns each blob a unique identifier, this was an easy task. The video below shows the progress so far. Unfortunately, it doesn’t show off the frame rate, which is a cool 30–40 fps.
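
Since only the chosen user’s pixels land in the body texture, the shader just has to warp that layer and composite it over the background. Something along these lines, though this is my reconstruction rather than the actual shader source:

// Sketch of the fragment shader: sine-wave the body texture lookup and
// composite it over the background. A reconstruction, not the real thing.
const char* kWavyFragmentShader =
    "uniform sampler2D background;                                   \n"
    "uniform sampler2D body;                                         \n"
    "uniform float time;                                             \n"
    "void main() {                                                   \n"
    "    vec2 uv = gl_TexCoord[0].st;                                \n"
    "    // Offset the body lookup horizontally with a moving sine.  \n"
    "    vec2 wavy = uv + vec2(0.02 * sin(uv.y * 30.0 + time), 0.0); \n"
    "    vec4 person = texture2D(body, wavy);                        \n"
    "    vec4 scene  = texture2D(background, uv);                    \n"
    "    // person.a is 0 outside the extracted blob.                \n"
    "    gl_FragColor = mix(scene, person, person.a);                \n"
    "}                                                               \n";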

My next step is to try to improve the edge of the extracted blob and create more interesting shaders that blur someone in the scene or convert them to “8-bit”. Stay tuned!
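
For what it’s worth, the “8-bit” look is probably only a few lines of GLSL too: snap the texture coordinates to a coarse grid and quantize the colors. A guess at it, untested:

// Sketch of an "8-bit" effect: pixelate by snapping the lookup to a grid,
// then posterize by rounding each channel to a few levels. Untested guess.
const char* kEightBitFragmentShader =
    "uniform sampler2D body;                                    \n"
    "void main() {                                              \n"
    "    vec2 uv = gl_TexCoord[0].st;                           \n"
    "    uv = floor(uv * 64.0) / 64.0;     // 64x64 blocks      \n"
    "    vec4 c = texture2D(body, uv);                          \n"
    "    c.rgb = floor(c.rgb * 4.0) / 4.0; // 4 levels/channel  \n"
    "    gl_FragColor = c;                                      \n"
    "}                                                          \n";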

1 Comment

  1. Comments from the crit 4/25:

    Petra Cortright with a webcam using a slit-scan technique
    https://www.facebook.com/video/video.php?v=597525094774
    COOL YOU DID IT! GO BEN :)
    Add some beats and you’re good to go!
    Dude, I told you it does person detection without the Psi pose! Those demos are really good.
    Hm—I haven’t had problems with the Cinder stuff. You should post a bug report.
    Raw C++, ouch. You should try porting some of that into Cinder to take care of their nice texture/shader abstractions.
    http://machinesdontcare.wordpress.com/
    Steal this guy’s shaders
    Blur shader is super easy:
    http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
    In fact, just steal everyone’s shaders. Then let nerds like me write their own shaders executed in real time while the program runs.
    Also:
    http://kineme.net/search/apachesolr_search/glsl <-- Mostly QC but there are some gems there
    http://3dshaders.com/home/index.php?option=com_weblinks&catid=14&Itemid=34 <-- The Orange Book is classic, one of the better resources for shaders
    I also posted a blog article a few crits ago that implements the photobooth shaders in GLSL. It's really simple math.
    That is freaking awesome. You have to do more with this than making yourself wavy. That's like eh, but now you could do ANYTHING, and all the hard work is done. Find some bigger badder GLSL shaders.
    This is really really cool. No blur! Everyone does blur! Do something cool!
    Woa that is seriously impressive. Lots of potential. Nice.
    You should do something like The Ring-styled ghost/static-y haunting shader, so I can haunt other people.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.