The Comic Kinect
Our group came together over the realization that everyone wishes they were in a comic book. We were interested in working with multiple users and encouraging them to interact in new ways.
With this in mind:
Our project aims to create an experience that detects, visualizes, and encourages two users’ physical contact with one another.
Identifying the Challenge
After looking at a lot of comics, TV shows, and movies, we concluded that one of the most significant elements of comic book fighting is the visualization of impact energy, or the impending impact of energy. This effect is achieved through the combined use of several techniques.
These characteristics make up a recognized language of comic book fighting that can be seen across styles and mediums. Here we see similar methods being used in a manga illustration, a contemporary X-Men panel, and a scene from the 1960s Batman TV show.
Scoping: Making It Manageable
There were a lot of ideas we wanted to try with this project, including tracking velocity, giving depth to graphics, and integrating sound. After experimenting with speech bubbles and glow, we decided to focus our energy on typography and exploding symbols as a way to visualize users’ interactions with one another, and on filters to place them in the comic book world.
Given the multifaceted nature of this project, we decided to have a clear separation of tasks among our team.
The movie A Scanner Darkly was a big inspiration. Its application of filters borrows heavily from the hand-drawn comic book aesthetic, despite being produced with code.
Filters: Mark

Mark started by mastering mean-shift filtering in OpenCV, followed by edge detection to create a black outline. Mad props for the valiant attempt at shaders, and for fixing the insane memory leaks.
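For a sense of how this style of filter works: mean-shift filtering flattens an image into regions of nearly uniform color, and edge detection then traces the region boundaries in black. The project used OpenCV for both steps; as a rough, self-contained illustration of the idea (not the actual project code), here is a toy grayscale version that posterizes pixel values into flat bands and inks the band boundaries:

```cpp
#include <cstdint>
#include <vector>

// Quantize pixel values into a few flat bands (a crude stand-in for
// mean-shift's color flattening), then draw a black pixel wherever
// adjacent bands differ -- the core of the cel-shaded comic look.
std::vector<uint8_t> comicFilter(const std::vector<uint8_t>& gray,
                                 int w, int h, int bands = 4) {
    int step = 256 / bands;
    std::vector<uint8_t> flat(gray.size());
    for (int i = 0; i < (int)gray.size(); ++i)
        flat[i] = (gray[i] / step) * step;          // posterize

    std::vector<uint8_t> result = flat;
    for (int y = 0; y < h; ++y)                     // horizontal boundaries
        for (int x = 0; x + 1 < w; ++x)
            if (flat[y * w + x] != flat[y * w + x + 1])
                result[y * w + x] = 0;              // black outline pixel
    for (int y = 0; y + 1 < h; ++y)                 // vertical boundaries
        for (int x = 0; x < w; ++x)
            if (flat[y * w + x] != flat[(y + 1) * w + x])
                result[y * w + x] = 0;
    return result;
}
```

In OpenCV proper, the corresponding calls would be cv::pyrMeanShiftFiltering for the flattening and an edge detector such as cv::Canny for the outline.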
Motion Graphics: Emily and Maya
Emily and Maya focused on creating the dynamic visualizations that occur on impact.
We started by experimenting with several words and symbols, both hand-drawn and generated.
Our effects were created with a number of openFrameworks addons.
Our biggest challenge was integrating these effects with the skeleton interaction detection. There was much discussion about variables and what information we would have access to for each limb. Emily was dubbed the “Addon Ninja.”
Skeleton Interaction Detection: Ward
Ward really took on OpenNI and did a lot of the heavy lifting of detecting limb interactions. We had a lot of entertaining user tests.
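OpenNI reports 3D joint positions (in millimeters, camera space) for each tracked user, so detecting “contact” boils down to a distance check between one user’s hand and another user’s joints. A minimal sketch of that check, with an illustrative Joint struct and threshold rather than the project’s actual code:

```cpp
#include <cmath>

// A tracked joint position in camera space, in millimeters
// (the units OpenNI reports skeleton joints in).
struct Joint { float x, y, z; };

// Euclidean distance between two joints.
float jointDist(const Joint& a, const Joint& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Fire an impact effect when user A's hand comes within `threshold`
// millimeters of one of user B's joints -- e.g. a punch reaching a shoulder.
// The 150 mm default is an illustrative guess, not a tuned value.
bool isImpact(const Joint& handA, const Joint& jointB, float threshold = 150.0f) {
    return jointDist(handA, jointB) < threshold;
}
```

In practice you would loop this check over every hand/joint pair across the two users each frame, and debounce it so one punch triggers one effect rather than thirty.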
Some of our challenges:
Leaking 30 MB of memory every second.
Addon and pointer madness!
A very slow debug cycle.
Integrating many different parts.
Animating over time.
Continually shifting scope.
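On “animating over time”: in a draw-loop framework like openFrameworks, an effect such as an exploding “POW!” has to be driven by elapsed time rather than frame count, or it speeds up and slows down with the frame rate. A small sketch of the idea (the function name and easing curve are illustrative, not the project’s code):

```cpp
#include <algorithm>

// Scale an impact graphic from 0 to full size over `duration` seconds,
// based on the time elapsed since the hit -- frame-rate independent.
// In openFrameworks, `elapsed` would come from ofGetElapsedTimef()
// minus the timestamp recorded at the moment of impact.
float impactScale(float elapsed, float duration = 0.5f) {
    float t = std::min(std::max(elapsed / duration, 0.0f), 1.0f);
    return t * (2.0f - t);  // ease-out: fast pop, gentle settle at full size
}
```

Each active effect keeps its own start time, so many impacts can animate independently and be retired once their duration has passed.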
Our Video After Integration:
There are still a lot of bugs to work out in order to achieve the interaction we wanted, but hopefully those will be resolved in the future.
New Demo Video!
After a bit more tinkering with the code, we were able to get a much clearer result. :D