Project 3: Makes You Dance and Sing

by jsinclai @ 10:46 am 3 March 2010

I had given up on my first idea for this project because I just didn’t feel it was interesting (or appropriate) anymore. Looking around for motivation, I revisited my older projects. I saw my Nees sketch and wondered what it would be like to throw a webcam feed on it. Of course, I couldn’t keep the same interaction with the mouse position. I wanted this project to deal with people on their feet!

And so, this project started as an installation that responded to the audience: when there was movement in the audience, the display would shake around and go crazy. I played around with a bunch of different forms, sizes, and representations.

STIA Project 3 – Submission from Jordan Sinclair on Vimeo.

I was fairly pleased with how these worked, but felt that it still lacked some “Jordan.”

Then, someone walked into my room and started singing and dancing to some music I was listening to. I got up as well, and we goofed around for a bit. The video started going crazy and we could see ourselves having lots of fun. That’s when it clicked! I want to encourage people to be in this elated, excited state. Instead of rewarding people for being sedentary, I want to reward people for being active and having fun. This ties back to my first project about Happy Hardcore, the music that I love so dearly. It’s dance music! It’s music that screams “Get up on your feet! Dance and sing along!”

And so I flipped the interaction paradigm around. When you’re having fun (dancing and singing) you can see yourself having fun. When you’re still, you don’t really see anything.

STIA Project 3 – Submission from Jordan Sinclair on Vimeo.

Some implementation details:
-I use frame differencing to detect movement.
-Every frame “fades” out. This creates a background that is something other than the plain grey. When the background is just grey, there is usually not enough data on screen to make an interesting display. You see a few crazy blocks and that’s it.
-Movement alone cannot bring the display into focus. Movement is the biggest contributor (more people dance than sing): it accounts for about 70% of the “focus” (e.g. if there is “maximum” movement, the display is 70% in focus). But if you want to achieve full focus, you need to sing along as well (or at least make some noise)! A rough sketch of how these pieces could fit together follows this list.
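
Here is a minimal sketch of the idea, assuming the project was built in Processing with the processing.video library for the webcam and Minim for the microphone. The actual code isn’t posted here, so the variable names (movementFocus, audioFocus), the mapping constants, and the block rendering are illustrative only, not the real implementation.

import processing.video.*;
import ddf.minim.*;

Capture cam;
Minim minim;
AudioInput mic;
int[] prevFrame;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();                       // Processing 2+; older versions start automatically
  minim = new Minim(this);
  mic = minim.getLineIn();
  prevFrame = new int[width * height];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();

  // Frame differencing: average per-pixel brightness change since the last frame.
  float diffSum = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    diffSum += abs(brightness(cam.pixels[i]) - brightness(prevFrame[i]));
    prevFrame[i] = cam.pixels[i];
  }
  float avgDiff = diffSum / cam.pixels.length;

  // Movement supplies up to 70% of the focus; mic level supplies the remaining 30%.
  // The input ranges (0..30, 0..0.3) are placeholder guesses, not measured values.
  float movementFocus = constrain(map(avgDiff, 0, 30, 0, 0.7), 0, 0.7);
  float audioFocus    = constrain(map(mic.mix.level(), 0, 0.3, 0, 0.3), 0, 0.3);
  float focus = movementFocus + audioFocus;   // 0 = still and quiet, 1 = dancing and singing

  // Fade the previous frame instead of clearing to flat grey.
  noStroke();
  fill(128, 20);
  rect(0, 0, width, height);

  // Stand-in for the Nees-style block rendering: low focus scatters the blocks,
  // high focus lets the webcam image resolve.
  int step = 16;
  for (int y = 0; y < height; y += step) {
    for (int x = 0; x < width; x += step) {
      color c = cam.pixels[y * width + x];
      float jitter = (1 - focus) * step * 2;
      fill(c);
      rect(x + random(-jitter, jitter), y + random(-jitter, jitter), step, step);
    }
  }
}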

TODO:
-Make a full screen version and hook it up to a projector!
-Frame differencing uses a linear mapping to “focus” values. I need to rescale it around the differencing values that actually occur most often.
-The audio detection isn’t as solid as it could be. It should certainly be more inviting so that users know what the audio does. I also would like to implement some sort of “fade in/out focus offset”: currently, the audio only creates focus while you are making noise, so you lose focus in between every word you sing. (One rough take on this is sketched after this list.)
-The colors are a little dulled out. Maybe it’s just the lighting, or maybe I can do something to help.
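
As a sketch of that “fade in/out focus offset” idea (not something in the current code): an attack/release envelope on the mic level would let the audio contribution rise quickly when you sing and decay slowly in the gaps between words. The function name and constants below are made up for illustration.

float smoothedAudioFocus = 0;

float audioFocusSmoothed(float rawLevel) {
  // Map the raw mic level into the 0..0.3 share of focus that audio contributes.
  float target = constrain(map(rawLevel, 0, 0.3, 0, 0.3), 0, 0.3);
  // Fast attack while singing, slow release in the gaps between words.
  float rate = (target > smoothedAudioFocus) ? 0.5 : 0.02;
  smoothedAudioFocus = lerp(smoothedAudioFocus, target, rate);
  return smoothedAudioFocus;
}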

1 Comment

  1. Hi Jordan – here are the group comments from the crit.
    ——————————–

    A) This lacks a certain je ne sais quoi when compared to your original idea from last week. But so it goes. Changed expectations. 😛
    B) Live demo is fantastically effective.
    C) The movement and sound responsiveness is really cool (where you move and it stands still). It really does feel like it belongs in a hip hop video.
    D) The original art piece had the image looking more solid at the top and breaking towards the bottom. Why don’t I see more of that here? (Worth playing with that original idea more if you’re still interested in it – but clearly you’ve wandered away from it, which is certainly fine.)
    –yeah, I did play with it being more solid at the top (actually started by pasting the video feed into the first Nees sketch that I showed), but I didn’t feel like it made sense with the video.

    Woooahh! Really awesome idea! I like how you experimented with different shapes and ideas. The inverse relationship is very interesting and surprising: it almost encourages the audience to interact (and be wacky and loud). I think the streaming is a little slow due to the projection. Really nice project. This would be very cool on a large scale! I do think that the squares can be a little trippy; either increasing your “square resolution” or slowing down the movement of the squares would help with this. Although, the trippiness does make it more fun to interact with as well. 🙂 Actually, increasing the “square resolution” would make it more interesting in any case. –Amanda

    This is a really great idea for a project.
    See the related Mustick: http://www.musticktue.nl/

    I like the relationship. I wish it was a little slower, though. Reminds me of the T-Rex scene from Jurassic Park: if you don’t move, they can’t see you.

    I think if you tweaked the effect so that even a little movement made a small (but noticeable) difference, it would be easier and more friendly.

    I like the inverse relationship between movement and image-destruction. That, for me, is new. You might want to try using openFrameworks; you’ll get better performance.

    Comment by golan — 6 March 2010 @ 6:56 pm
