Magrathea uses the Kinect camera to dynamically generate a landscape out of any structure or object. The Kinect takes a depth reading of what's built on the table in front of it, which is then rendered live onscreen as terrain using openFrameworks and OpenGL.
The depth image is used as a heightmap for the terrain. A polygon mesh gradually morphs to match the heightmap, creating a smooth rise-and-fall behavior. Textures are dynamically applied based on the height and slope of the mesh. For example, steep slopes are given a rocky texture, and flatter areas a grassy one. As the user builds and removes objects, the landscape correspondingly grows and sinks out of the ocean, shifting into a new configuration.
Landscapes can be made from anything, such as blocks, boxes, the human body, and even a giant mound of dough.
We both learned OpenGL and openFrameworks for this project.
If we were to continue this project, we'd do so by adding more textures with more complex conditions, learning shaders and improving the graphics, populating the world with flora and fauna under certain conditions, and possibly allowing for color-coded objects that could be recognized and rendered as specific features, say, a statue or a giant Yggdrasil-like tree.