Project 3: You Control Mario

by Nara @ 11:56 am 3 March 2010

The Idea

For my project, I was inspired by this augmented reality version of the retro game Paratrooper. My first idea was to create an “augmented reality” 2-player Pong, but I decided not to pursue that because I was worried it had been done many times before and that the Pong implementation wouldn’t be challenging enough. Then I started thinking about what else could be done with games, and while searching I found some remakes of retro games that used camera input as the controls, but often the gestures they used weren’t analogous to what the character does in the game. I used that as my jumping-off point and decided I wanted to do something where the player could actually “become” the character, so that when they move, the character moves, and when they jump, the character jumps, and so on. To make sure there was an analogous movement for every one of the game’s controls, the game I decided to implement was Mario.

The Implementation

I knew almost straight away that this project would best be implemented in C++ and openFrameworks, both because any OpenCV implementation would likely be much faster, and because there is a much larger library of open source games available for C++. (Golan gave me permission to hack someone else’s game code for this, since there realistically was no time to implement Mario from scratch.) I even found a Visual Studio project for a Mario game I wanted to try, but I spent basically all of last Saturday trying to get Visual Studio and openFrameworks to work, to no avail. So I ended up using Java and Processing for this project, which is one of the reasons it isn’t as successful as it could’ve been (more on that later). The source code for the Mario implementation I used is from here.

The program basically has three parts: the original Mario source code (untouched, other than making a couple of variables public), a Processing PApplet that sets up an OpenCV camera input and renders it to the screen when asked, and a package of classes I wrote myself for an event listener that does some simple motion detection and sends the right signals to the game to control the character. In essence, when it detects movement in a certain direction, it tells the game that the corresponding arrow key was pressed so that the character responds.
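To give a concrete sense of that last part (the class and method names below are made up for illustration, not taken from my actual code), the core idea is just mapping a detected motion direction to a synthetic arrow-key press. java.awt.Robot is one way to deliver such a keystroke to whichever window has focus:

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;

// Hypothetical bridge that turns detected motion into arrow-key presses.
// The real project talks to the Mario JFrame's own key handling; this sketch
// uses java.awt.Robot, which types keys into whatever window has focus.
public class MotionKeyBridge {
    private final Robot robot;

    public MotionKeyBridge() throws AWTException {
        robot = new Robot();
    }

    // dx/dy: how far the tracked region moved since the last frame, in pixels.
    // threshold: minimum movement before we treat it as an intentional gesture.
    public void onMotion(int dx, int dy, int threshold) {
        if (dx >  threshold) tap(KeyEvent.VK_RIGHT);  // moved right -> walk right
        if (dx < -threshold) tap(KeyEvent.VK_LEFT);   // moved left  -> walk left
        if (dy < -threshold) tap(KeyEvent.VK_UP);     // moved up    -> jump
    }

    // Press and release a key so the game sees an ordinary keystroke.
    private void tap(int keyCode) {
        robot.keyPress(keyCode);
        robot.keyRelease(keyCode);
    }
}
```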

The Problems

First of all, the OpenCV library for Processing is pretty bad. It’s not a full implementation (it doesn’t do any real motion detection), the documentation is vague and unhelpful, and I even read somewhere that it has a memory leak. Just running OpenCV in a Processing applet introduces a slight lag. I had also wanted to use full-body tracking for the motion detection (my ultimate goal, if I got it working, was to use this implementation with a port of Mario War, a multiplayer version of Mario, although I never got that far), but the body tracker was extremely buggy and lost the signal very often, so I ended up just using the face detector, which was the least buggy part.
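For reference, the face detection itself only takes a handful of calls. This is a generic sketch along the lines of the library’s own examples (I’m assuming the hypermedia.video binding that was the standard OpenCV library for Processing at the time, so the exact calls may differ slightly from what’s in my code):

```java
// Processing sketch: webcam face detection with the hypermedia.video OpenCV library.
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                   // open the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // load the frontal-face Haar cascade
}

void draw() {
  opencv.read();                        // grab the current frame
  image(opencv.image(), 0, 0);          // draw it to the screen
  Rectangle[] faces = opencv.detect();  // run the Haar detector
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
```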

Combining the Mario game (which is implemented in a JFrame) and a PApplet in the same window also doesn’t really work well. I read somewhere that even without OpenCV, the fastest framerate you can get when running a JFrame and a PApplet together is about 30fps.
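For context, embedding a sketch in a Swing window looked roughly like this at the time (Processing 1.x style; CameraSketch is a stand-in class so the example is self-contained, not my actual sketch):

```java
import javax.swing.JFrame;
import processing.core.PApplet;

// Rough sketch of putting a PApplet into the same Swing window as the game.
public class GameWindow {

    // Minimal placeholder sketch so this example compiles on its own.
    public static class CameraSketch extends PApplet {
        public void setup() { size(320, 240); }
        public void draw()  { background(0); }
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Mario + camera");
        PApplet camera = new CameraSketch();
        camera.init();                  // starts the sketch's animation thread
        frame.add(camera);              // a PApplet is just an AWT component
        frame.setSize(320, 240);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```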

Because of all of these factors combined, the game technically works (it picks up the movements and Mario responds accordingly), but there is a big lag in the chain from the user moving, to the camera detecting it, to the motion event listener firing, to Mario actually moving: usually at least 1-2 seconds, if not longer. The consequence is that the user is forced to anticipate what Mario will need to do two seconds from now, which is not too bad on a static level, but almost impossible on a level with a lot of enemies. I still haven’t been able to make it more than two-thirds of the way through a level.

The Merits

Even though my implementation wasn’t working as well as I would’ve liked, I’m still really proud of the fact that I did get it working — I’m pretty sure the problem isn’t so much with the code as it is with the tools (Java and Processing and the OpenCV for Processing library). I know that there’s room for improvement, but I still think that the final product is a lot of fun and it certainly presents itself as an interesting critique of video games. I’m a hardcore gamer myself (PS3 and PC) but sometimes it does bother me that all I’m doing is pressing some buttons on a controller or a keyboard, so the controls are in no way analogous to what my avatar is doing. Hopefully Project Natal and the Sony Motion Controller will be a step in the right direction. I have high hopes for better virtual reality gaming in the future.

The code is pretty large — a good 20-30MB or so, I think — so I’ll post a video, though probably not until Spring Break.

1 Comment

  1. Hi Nara – here are the group comments from the crit.
    ——————————–

    well done, great work. please make a vimeo showing ~30 seconds of interaction. let’s try to get that stuff working in visual studio. You may not need face tracking! The face tracker is very error prone, slow, and has a lot of failure modes.

    awesome idea.

    fabulous! just…well done. and a nice demo. -SB

    super fun … super, mario!
    Cool, it reminds me of a Building Virtual Worlds project using motion detection to control mario. That one didn’t use face detection, though.

    Love this idea

    Very, very nice for the time you had! I wonder if displaying what the web cam is filming is what’s slowing up your project? Maybe make the webcam video smaller to help out with that. Otherwise, very nice! I would like to see a demo video too…very cool, nice! 😀 –Amanda

    Comment by golan — 6 March 2010 @ 7:01 pm
