DMGordon – Place

For my place project, I made a point cloud compositing system that condenses multiple point clouds into a single three-dimensional pixel grid.

I began this project intending to capture the freeway I live next to and a creek in Frick Park. I started by attempting to generate point clouds with a stereoscopic video camera and Kyle McDonald’s ofxCv openFrameworks addon. The resulting images were too noisy for compositing, however, so I switched capture devices to the Kinect. While the Kinect provides much cleaner point clouds, it requires an AC outlet for additional power, tethering me to wall sockets. Others have remedied this with portable power supplies, and my next step is to follow their lead and build a mobile capture rig.

The Kinect automatically filters its infrared and visible color data through its own computer vision algorithms to produce denoised depth data, which can then be projected onto the visible color image to create colored, depth-mapped images. I take each pixel of these images and use its depth value as a z-coordinate to populate a three-dimensional color grid. By adding multiple frames of a depth-mapped video into a single color grid, we treat the grid like a piece of photo paper during a long exposure. The resulting images contain the familiar effects of long-exposure photography in a three-dimensional vessel:

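The accumulation step can be sketched roughly as follows. This is a minimal NumPy reconstruction, not the openFrameworks code the project actually uses; the grid resolution, depth range, and function names are my own assumptions:

```python
import numpy as np

def composite_frames(color_frames, depth_frames,
                     grid_shape=(64, 64, 64), depth_range=(0.5, 4.5)):
    """Accumulate depth-mapped color frames into one 3D color grid,
    treating the grid like photo paper during a long exposure."""
    h, w = depth_frames[0].shape
    grid = np.zeros(grid_shape + (3,), dtype=np.float64)  # summed RGB per cell
    counts = np.zeros(grid_shape, dtype=np.int64)         # samples per cell

    gx, gy, gz = grid_shape
    near, far = depth_range
    for color, depth in zip(color_frames, depth_frames):
        # keep only pixels with a valid depth reading
        ys, xs = np.nonzero((depth > near) & (depth < far))
        # depth value becomes the z index; image x/y map onto grid x/y
        zi = ((depth[ys, xs] - near) / (far - near) * (gz - 1)).astype(int)
        xi = (xs / (w - 1) * (gx - 1)).astype(int)
        yi = (ys / (h - 1) * (gy - 1)).astype(int)
        np.add.at(grid, (xi, yi, zi), color[ys, xs])
        np.add.at(counts, (xi, yi, zi), 1)

    occupied = counts > 0
    grid[occupied] /= counts[occupied][:, None]  # average color per cell
    return grid, counts
```

Each additional frame only deepens the exposure: cells hit by many frames converge on an averaged color, which is what produces the motion-smearing effect.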
Beyond holding an extra dimension of space, the grid can be analyzed color point by color point to split and extract subsections of the image based on various factors. Here are some tests experimenting with simple color-range partitioning:

Full Spectrum:

Filtered for Greenness:

Filtered for Not-Greenness:

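A greenness partition like the ones above can be sketched as a simple per-cell channel comparison. Again, a hypothetical reconstruction rather than the project's actual code; the `margin` threshold and the grid layout (an RGB grid plus a per-cell sample count, as in the compositing step) are assumptions:

```python
import numpy as np

def filter_greenness(grid, counts, keep_green=True, margin=0.1):
    """Partition occupied grid cells by a simple greenness test:
    a cell is 'green' when its G channel exceeds both R and B by `margin`."""
    r, g, b = grid[..., 0], grid[..., 1], grid[..., 2]
    green = (g > r + margin) & (g > b + margin)
    mask = (counts > 0) & (green if keep_green else ~green)
    out = np.zeros_like(grid)
    out[mask] = grid[mask]  # keep only the selected partition
    return out, mask
```

Running it once with `keep_green=True` and once with `keep_green=False` yields the complementary "Filtered for Greenness" and "Filtered for Not-Greenness" cross sections.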
Going forward, I see several ways to expand these methods into provocative territory. With a portable capture rig, we could capture more organic and active locations, where movement and changes in light could lead to more dynamic composites. More intensive analysis of the composites, such as filtering blocks of the images based on facial recognition or feature classification, would also produce more compelling cross sections of the data. Adding multiple Kinects compositing their point clouds into multi-perspective frames would open the door to volumetric analysis. Even rigging a single Kinect up to a Vive controller or an IMU could provide a global coordinate space for expanding the composites beyond a single frustum of reference.

Here are a couple more images of backyard experiments:

BBQ + House true-color:

BBQ + House 360:

More BBQ related pictures to come

One comment

  1. golan

    Comments from the Group Review.

    Made long-exposure point clouds using the KinectV2.

    General: Look up Larry’s portable kinect rig powered by a drill battery. Defamiliarization by filtering has a lot of potential. Different ways of seeing the same place – more than just color.

    Might be worth adding a very minuscule amount of randomness into the point cloud rendering so that the points don’t line up quite so well. Occasionally the viewer can see all of the points lining up.

    The STUDIO has some ultra long-distance USB cables which are specially made for the Kinect, so that you could scan larger places. You could get an extra 40–50 feet of distance.

    I’m impressed that the outdoor scenes work as well as they do. I thought there would be too much ambient IR light. Nice work.

    Look into the Point Cloud Library (PCL) for seeing what you can make of these clouds. There’s an OF addon for that too.

    I love your horrifying centipede man.

    I wonder if you took this device to a place where there were a lot people, you would have a bunch of human centipedes roaming around. -airports/malls?

    Data viz

    Are there other filters you can apply during capture? Hyperspectral/multispectral/lens filters/etc. May be neat.

    So interesting with people/moving subjects

    SICK. Interesting colours.
    What are the walls on the outdoor point cloud?

    What are the most interesting natural motions for you? Human movement like dancers or capturing natural time-based phenomena like streets? I think dancers, choreographing this, is more interesting than public places – ie: your human centipede is more interesting than your backyard. Deformed humans have something naturally interesting/engaging/uncanny to look at. For me..

    You should 3d print this in coloured resin +1
    ^^+1++
    Print only the green pixels of a scene in green resin. Print the red pixels of the same scene in red resin. Print the blue in blue.

    What if you did the filtering before making the point clouds e.g face tracking, cutting out people, adding/removing objects

    I love the way you captured your motion with the “human centipede” photogrammetry
    Why did you choose the location you chose?

    Figrid example

    Love the filtering +

    You’re describing this setup but I want to see pictures. You say you looked silly – I want to see you looking silly 🙂

    The filtered for greenness gif is beautiful