Monthly Archives: March 2014

Sama Kanbour

30 Mar 2014

Title: Soundscapes Installation

Description: An imaginary landscape is animated in real time as a person whistles into a microphone. The outcome is a unique representation of a natural landscape filled with trees and animals.


Title: Living Image

Description: An image reminiscent of a Picasso painting is animated as sensors detect human movement.


Title: Trying to animate a biker’s leg

Description: Oriol Ferrer Mesià animates a leg image using openFrameworks (ofx) addons.

Spencer Barton

28 Mar 2014

Looking Glass: Idea and Update

I want to explore non-linear storytelling. The reader will guide a character through the story using a display. The character will know where it is on the page and behave accordingly.

The reader will move a see-through display on which the character appears. The display will know where it is over the page, so the character can be animated to interact with the page content.


OLED Display Tests

I got animations working on the OLED display. More details on this piece of hardware…


In this case the character is the caterpillar from The Very Hungry Caterpillar by Eric Carle.

Moving Forward

The key details at this point are:

  • Localization – I’ve got a few ideas
    • Capacitive plate under a page with learning
    • Microphones attached to page to listen for movement with learning
    • Camera mounted above to track
    • IR grid / ID tags on the page
    • Look at color beneath display
  • The story – I can use an existing story or create a new one. I can develop a story idea, but I wouldn’t be able to create the story graphics myself, so I need to talk to others in the class and see if anyone would be interested in working on this part of the project with me.
  • Building the module piece – The current OLED display is flimsy and uninteresting; the reader deserves to hold a more engaging object. I need to create a hand-sized object, relevant to the act of reading, that contains the display, a processor, batteries, and sensors. As a default, I’ll 3D print a form similar to a computer mouse.

I am currently exploring the color detection option. With two or more color sensors it could be possible to know the exact location; that said, the sensors are noisy, so detection will more likely happen by region – for example, knowing that the caterpillar is over the leaf or over a blank part of the page. The key advantage of the color sensors is that the display can become a completely independent object: all of the other ideas involve modifying the book or setting up external hardware. A self-contained OLED display lets the reader still engage with the book in a semi-natural manner, whereas external hardware makes the reading situation more contrived and forces the reader to conform to my project setup.
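To make the region idea concrete, here is a minimal Arduino-style sketch of nearest-region color classification. It assumes a TCS34725 color sensor and the Adafruit_TCS34725 library; the region names and reference colors are hypothetical values that would be sampled from the actual page beforehand.

```cpp
// Classify a color-sensor reading to the nearest known page region.
// Sensor choice (TCS34725) is an assumption; reference colors are
// placeholder values measured from the real book page in practice.
#include <Wire.h>
#include "Adafruit_TCS34725.h"

Adafruit_TCS34725 tcs(TCS34725_INTEGRATIONTIME_50MS, TCS34725_GAIN_4X);

struct Region { const char* name; float r, g, b; };

// Reference colors, normalized against the clear channel (0..1).
Region regions[] = {
  { "leaf",  0.25f, 0.55f, 0.20f },
  { "sky",   0.30f, 0.45f, 0.70f },
  { "blank", 0.85f, 0.85f, 0.80f },
};
const int NUM_REGIONS = sizeof(regions) / sizeof(regions[0]);

void setup() {
  Serial.begin(9600);
  tcs.begin();
}

void loop() {
  uint16_t r, g, b, c;
  tcs.getRawData(&r, &g, &b, &c);
  if (c == 0) return;                         // too dark to classify
  float rn = (float)r / c, gn = (float)g / c, bn = (float)b / c;

  // Nearest neighbor in RGB space: noisy readings still snap to the
  // closest region instead of pretending to be an exact coordinate.
  int best = 0;
  float bestDist = 1e9;
  for (int i = 0; i < NUM_REGIONS; i++) {
    float dr = rn - regions[i].r, dg = gn - regions[i].g, db = bn - regions[i].b;
    float d = dr * dr + dg * dg + db * db;
    if (d < bestDist) { bestDist = d; best = i; }
  }
  Serial.println(regions[best].name);         // drives the character's animation
  delay(100);
}
```

Two sensors a few centimeters apart could vote on the same region list, which would also give a crude orientation estimate.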

This project may be nice for the Pittsburgh Children’s Museum if ruggedized enough.

Sama Kanbour

28 Mar 2014

Idea: Historical figures miming people in real time!

Tools: openFrameworks – ofxPuppet and ofxFaceTracker

Team: Sama & Afnan
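A minimal starting point, assuming Kyle McDonald’s ofxFaceTracker (the portrait file and the wireframe overlay are placeholders; ofxPuppet would ultimately deform the portrait image itself, using the tracked points as control handles):

```cpp
// Track the viewer's face and draw the tracked expression mesh over a
// historical portrait, a first step toward making the portrait "mime"
// the viewer. portrait.jpg is a placeholder asset.
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofImage portrait;

    void setup() {
        cam.initGrabber(640, 480);
        tracker.setup();
        portrait.loadImage("portrait.jpg");   // hypothetical portrait asset
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam)); // fit the face model to the frame
        }
    }

    void draw() {
        ofBackground(0);
        portrait.draw(0, 0);
        if (tracker.getFound()) {
            ofMesh face = tracker.getImageMesh(); // follows the viewer's expression
            face.drawWireframe();
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```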


Andre Le

27 Mar 2014

For my final project, I want to explore the power of computation to recognize biofeedback patterns and present them to the user in a recognizable way.

I’ve been working with the MindWave, a single-channel wireless EEG headset. It outputs EEG power-band values such as Delta, Theta, Alpha, Beta, and Gamma waves. I’ve been able to send these features to Wekinator, a machine learning application that performs discrete classification on input features and emits OSC messages. With this, I have trained the system to recognize colors that I am thinking of.
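As a sketch of that pipeline: Wekinator listens by default for a list of floats on the OSC message /wek/inputs at port 6448, so the openFrameworks side can be tiny. The band values below are placeholders standing in for the real MindWave parsing.

```cpp
// Forward EEG power-band features to Wekinator over OSC.
// /wek/inputs on port 6448 is Wekinator's default input; the
// hard-coded band values stand in for real MindWave readings.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscSender sender;

    void setup() {
        sender.setup("127.0.0.1", 6448);   // Wekinator's default input port
    }

    void update() {
        // Placeholder features: replace with delta/theta/alpha/beta/gamma
        // values parsed from the MindWave serial stream.
        float bands[5] = { 0.1f, 0.2f, 0.3f, 0.2f, 0.1f };

        ofxOscMessage m;
        m.setAddress("/wek/inputs");       // Wekinator's default input address
        for (int i = 0; i < 5; i++) m.addFloatArg(bands[i]);
        sender.sendMessage(m);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

Wekinator’s classifications come back as OSC on /wek/outputs (port 12000 by default), which the memory-display application would listen for.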

What I would really like to do is map a person’s mental state, such as emotions, to a social feed of memories. For example, I could display several images from a person’s Facebook feed to train the system. Then, as the user lets their thoughts roam, the application would pull up other related images, videos, and posts as a way for the user to see what they’re thinking.

Some questions about this:

  • Does seeing these memories allow you to maintain these emotions?
  • Can you control what you’re thinking?
  • Can we build a 3D environment that fully immerses you in your “thoughts and memories”?
  • What if you could invite others to “experience” what you’re thinking?

Reference:

http://frontiernerds.com/brain-hack

http://www3.ntu.edu.sg/home/eosourina/Papers/RealtimeEEGEmoRecog.pdf


Wanfang Diao

25 Mar 2014

For my final project, I would like to explore the interaction and visualization of sound/music. My inspiration comes from the three projects below:

This is an iPad app created by students from the Lifelong Kindergarten group at the MIT Media Lab. It lets people create musical instruments and playable compositions. I feel there is still more room to create interaction and connection between graphic symbols and sound. Colors can be mixed together; sounds can be blended into harmony. Graphics can change from one shape to another; sound can rise from one pitch to another. Shapes can shake, spin, or jump, and so can sound! So the interaction and the mapping should not be limited to tapping a colorful shape to make a sound.

What excites me even more are the two Japanese music videos below:

I am impressed by the matrix of camera flashes synchronized with the wonderful music. The connection between the sparks and the rhythm strongly moved me! These two works showed me that besides something playful and funny, I can also try something touching or exciting. Anyway, I’ll start with some simple experiments in Processing and openFrameworks.
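As a first experiment in that direction, a single value can drive a shape and a sound at the same time. Here is a minimal openFrameworks sketch of that mapping (the choice of circle size and sine pitch is my own placeholder):

```cpp
// One parameter drives both graphics and sound: mouse height sets the
// pitch of a sine tone and, inversely, the size of a circle.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    float phase = 0;    // running oscillator phase
    float freq  = 440;  // current pitch in Hz

    void setup() {
        ofSoundStreamSetup(2, 0, this, 44100, 256, 4); // stereo output
    }

    void update() {
        // Higher mouse position -> higher pitch (two octaves around A440).
        float t = 1.0f - (float)ofGetMouseY() / ofGetHeight();
        freq = ofMap(t, 0, 1, 110, 880, true);
    }

    void draw() {
        ofBackground(30);
        float radius = ofMap(freq, 110, 880, 120, 20); // higher pitch = smaller dot
        ofCircle(ofGetWidth() / 2, ofGetHeight() / 2, radius);
    }

    void audioOut(float* output, int bufferSize, int nChannels) {
        float phaseStep = TWO_PI * freq / 44100.0f;
        for (int i = 0; i < bufferSize; i++) {
            phase += phaseStep;
            float sample = 0.3f * sin(phase);
            output[i * nChannels]     = sample; // left
            output[i * nChannels + 1] = sample; // right
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

From there, shape changes (shaking, spinning, jumping) could each be wired to their own sound parameter in the same way.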

Here is another project I admire that is similar to my idea, called Patatap.

http://tytel.org/lissa/


Haris Usmani

23 Mar 2014

I am looking into implementing a novel approach to 2D camera rectification as an OFx plugin. This method of rectification requires no input from the user (provided your image/camera has EXIF data, which is almost always the case) – forget about making and using a checkerboard to rectify your image!


I learned about this technique when I took the Computer Vision course (taught by Dr. Sohaib Khan) in Fall 2012 at LUMS. We covered one of the recent publications by the CV Lab at LUMS: “Shape from Angle Regularity” by Zaheer et al., in Proceedings of the 12th European Conference on Computer Vision (ECCV), October 2012. The paper uses ‘angle regularity’ to automatically reconstruct structures from a single view. As part of their algorithm, Zaheer et al. first identify the planes in the image and then automatically 2D-rectify the image relying solely on angle regularity. That’s the part I’m interested in.

Angle regularity is a geometric constraint based on the fact that in the structures around us (buildings, floors, furniture, etc.), straight lines in 3D meet at particular angles, most commonly 90 degrees. Look around your room and start counting the 90-degree angles you can find, and you’ll see what I mean. Zaheer et al. use the distortion of this angle under projection as a constraint for 3D reconstruction. Quite simply, if you look at a plane from a fronto-parallel view, you will see the maximum possible number of 90-degree angles. That’s what we search for: the “… homography that maximizes the number of orthogonal angles between projected line-pairs” (Zaheer et al.).

Following the algorithm used in the paper (and the MATLAB code available at http://cvlab.lums.edu.pk/zaheer2012shape/), I plan to generate a few more results to see how practical it is; judging from the results in the paper, it seems very promising. The 2D rectification algorithm searches for lines in the image, assumes line-pairs to be perpendicular, and uses RANSAC to separate inlier from outlier line-pairs while optimizing two variables, camera pan and tilt (focal length is known from the EXIF data). The MATLAB code relies on toolboxes provided within MATLAB (for example, for RANSAC), which I should be able to replace with open-source C++ implementations. The algorithm, although conceptually straightforward, might not be as easy to implement and optimize in C++, so I will work toward it and judge the time commitment it requires.
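To make the search concrete, here is a condensed C++/OpenCV sketch of the core scoring idea. It is my own simplification: an exhaustive pan/tilt grid stands in for the paper’s RANSAC-driven optimization, and the focal length, file names, and thresholds are placeholders.

```cpp
// 2D rectification by angle regularity (after Zaheer et al.): find the
// pan/tilt whose rectifying homography makes the most line pairs
// orthogonal. Grid search stands in for the paper's RANSAC/optimization.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Homography induced by rotating the camera: H = K * R * K^-1,
// with focal length f taken from EXIF in the real plugin.
static cv::Matx33d rectifyingH(double pan, double tilt, double f,
                               double cx, double cy) {
    cv::Matx33d K(f, 0, cx,  0, f, cy,  0, 0, 1);
    cv::Matx33d Rx(1, 0, 0,
                   0, std::cos(tilt), -std::sin(tilt),
                   0, std::sin(tilt),  std::cos(tilt));
    cv::Matx33d Ry(std::cos(pan), 0, std::sin(pan),
                   0, 1, 0,
                  -std::sin(pan), 0, std::cos(pan));
    return K * (Rx * Ry) * K.inv();
}

// Count line pairs that become nearly orthogonal under H.
static int orthogonalPairs(const std::vector<cv::Vec4i>& lines,
                           const cv::Matx33d& H) {
    std::vector<cv::Point2f> pts, warped;
    for (const auto& l : lines) {
        pts.push_back(cv::Point2f(l[0], l[1]));
        pts.push_back(cv::Point2f(l[2], l[3]));
    }
    cv::perspectiveTransform(pts, warped, cv::Mat(H));
    int count = 0;
    for (size_t i = 0; i + 1 < warped.size(); i += 2)
        for (size_t j = i + 2; j + 1 < warped.size(); j += 2) {
            cv::Point2f a = warped[i + 1] - warped[i];
            cv::Point2f b = warped[j + 1] - warped[j];
            double na = std::hypot(a.x, a.y), nb = std::hypot(b.x, b.y);
            double cosang = std::abs(a.dot(b)) / (na * nb + 1e-9);
            if (cosang < 0.05) count++;   // within ~3 degrees of 90
        }
    return count;
}

int main() {
    cv::Mat img = cv::imread("facade.jpg"), gray, edges;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 40, 5);

    double f = 800;                                // from EXIF in practice
    double cx = img.cols / 2.0, cy = img.rows / 2.0;
    int best = -1; double bestPan = 0, bestTilt = 0;
    for (double pan = -0.6; pan <= 0.6; pan += 0.02)
        for (double tilt = -0.6; tilt <= 0.6; tilt += 0.02) {
            int s = orthogonalPairs(lines, rectifyingH(pan, tilt, f, cx, cy));
            if (s > best) { best = s; bestPan = pan; bestTilt = tilt; }
        }

    cv::Mat rectified;
    cv::warpPerspective(img, rectified,
                        cv::Mat(rectifyingH(bestPan, bestTilt, f, cx, cy)),
                        img.size());
    cv::imwrite("rectified.jpg", rectified);
    return 0;
}
```

The quadratic pair loop is the obvious hotspot; the paper’s RANSAC over assumed-perpendicular pairs is what keeps the real algorithm tractable.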

Once I’m done coding the plugin, I would like to make some cool examples that demonstrate the power of this algorithm. If the frame-to-frame optimization is fast (i.e., the last frame’s homography seeds the initial value for the next one), I could try to make this real-time.

I have not yet come across a C++ implementation of this technique, and the only OFx plugin for camera rectification that exists right now (ofxMSAStereoSolver) depends on the checkerboard approach.

Paper: http://cvlab.lums.edu.pk/sfar/
Aamer Zaheer, Maheen Rashid, Sohaib Khan, “Shape from Angle Regularity”, in Proceedings of the 12th European Conference on Computer Vision, ECCV, October 2012

Related OFx Plugin: ofxMSAStereoSolver by memo

Shan Huang

23 Mar 2014

Ideas for final project

For my final project I want to do something fun with projectors, for several reasons:

1. I have a projector at home and I immensely enjoy it.
2. Among all forms of displays, projectors probably have the least defined shape and scale. A projector can magnify something that occupies only a few pixels on screen into a wall-sized (even building-sized) image. I think that is really magical.
3. I found some really good inspiration among the projects Golan showed in class. One of them is Light Leaks by Kyle McDonald and Jonas Jongejan:

This project reminds me of something almost everyone used to do: using a mirror to reflect light spots onto something or someone. I still do that with the reflective Apple logo on the back of my cellphone. I think the charm of this project comes from the exact calculations behind the beautiful light patterns. Though reflecting light spots with disco balls is a cool enough idea on its own, the project would not be as stunning if the light leaks had simply bounced onto the walls in random directions.

A little experiment

I was really amazed by the concept that ‘projection can be reflected’, so much so that I carried out this little experiment at home:

[Photos: the projector on its shelf]

This is the projector I have at home. We normally just put it on a shelf and point it at a blank wall.

[Photos: a mirror placed in the projector’s beam]

The other night I put a mirror in front of the projected beam. The mirror reflected a partial view of the projected image onto a completely different wall (the one facing my bedroom, perpendicular to the blank wall). It’s hard to tell from the photo, but the reflected image was actually quite clear. As I rotated the mirror in its holder, the partial image moved around the room along a predictable elliptical trajectory. That meant I could roughly control the location of the image in 3D space by setting the orientation of the mirror. If I could mount a mirror on a servo and control the servo’s orientation, with some careful computation (that I don’t know how to do yet) I’d be able to cast the projection anywhere in a room. Furthermore, the projected image doesn’t have to be a still image – it could be a video, or be interactively determined by the orientation of the mirror.
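The core of that computation is likely just the mirror reflection formula: a ray with direction d bounces off a mirror with unit normal n as r = d − 2(d·n)n, and intersecting r with a wall plane gives the spot’s landing point. A tiny self-contained sketch with made-up room coordinates:

```cpp
// Where does a mirror send the projector's beam? Reflect the incoming
// ray about the mirror normal (r = d - 2(d.n)n), then intersect the
// reflected ray with a wall plane. All coordinates are made up.
#include <cstdio>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(Vec3 o) const { return x * o.x + y * o.y + z * o.z; }
};

int main() {
    Vec3 projector = {0, 1, 0};                    // positions in meters
    Vec3 mirror    = {2, 1, 0};
    Vec3 d = mirror - projector;                   // incoming beam direction
    double len = std::sqrt(d.dot(d));
    d = d * (1.0 / len);                           // normalize

    Vec3 n = {-std::sqrt(0.5), 0, std::sqrt(0.5)}; // mirror tilted 45 degrees
    Vec3 r = d - n * (2.0 * d.dot(n));             // reflected direction

    // Intersect with the wall plane z = 3: mirror.z + t * r.z = 3.
    double t = (3.0 - mirror.z) / r.z;
    Vec3 hit = mirror + r * t;
    std::printf("spot lands at (%.2f, %.2f, %.2f)\n", hit.x, hit.y, hit.z);
    return 0;
}
```

A servo would then simply set n over time, sweeping the spot across the room; the elliptical trajectory I observed falls out of rotating n around a fixed axis.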

This opens up an exciting possibility: using the reflected light spot as a lens into some sort of “hidden image”. For example, the partial image could show the scene behind the wall it’s projected onto. The light spot in a sense becomes a movable window into the scene behind – and it’s interactive, inviting people to move it around to explore more of the total view. Or the projector plus mirror could become a game in which the game world is mapped onto the full room: players see only part of the world at a time through the projected light spot, and they move their characters by rotating the mirror or by interacting with other handles that manipulate it.

If all I want is to project an image to an arbitrary surface, why not just move the projector?

Well, the foremost reason is that moving a mirror is far easier than moving a projector. The shape of the projection can also follow the shape of the mirror, so we don’t always get an ugly distorted rectangle, and the idea can be set up relatively easily with the tools we have at hand. Another motivation is that with multiple mirrors, each reflecting a certain region of the raw projection, the original image can be broken into parts and reflected in diverging directions. The parts can all move independently while ultimately using light from the same projector. Light Leaks uses this advantage very well to send light spots in numerous directions.


So that’s what I’ve thought of so far. There are still many undetermined parts, and I’m not sure how challenging it would be implementation-wise. I had only played with an Arduino in middle school, and it was such a disaster that I completely stayed away from hardware in college – but I’m willing to learn whatever is needed to make the idea come true. I’m still researching other works for inspiration, and also trying to make sure my idea hasn’t already been done by someone else.

Other inspirations:

Chase No Face – face projection. Also discussed in class.

More to come…

Update:

Taeyoon and Kyle pointed me to this project.

Chanamon Ratanalert

19 Mar 2014

Capstone Research

Making this blog post is a little difficult for me, considering I don’t know yet what I want my capstone project to be. For now I’ll assume it will be along the lines of my interactivity project, extended further. I assume this because my general thoughts about the interactivity project are what I want for any project (this will hopefully make sense below). Building on it will also deepen the learning process, because I can carry what I learn from the interaction project into the capstone project.

What I imagine for my project is based on creation. I want users to interact with the project in a way that lets them create something. I also want to pick a specific mood for the project to convey, most likely calm and happy, so that the interaction is a soothing and memorable experience.

Here are a few projects that inspire me and whose ideas I hope to build on (a couple are repeated from previous blog posts):

The Little Prince
What I really enjoy about this project is its overall experience. Emotionally, the game keeps the innocent cuteness of the children’s book on which it is based. It provides an interactive entrance into an internationally beloved story and executes it with the same vibe the book gives off. Aesthetically, the illustrations and animations are flawless: the hand-drawn illustrations remain true to the book, and each animation and movement flows well with the illustration style and with the interactions that trigger it. Technically, the project covers a wide range of interactions, from clapping to mic-blowing to face positioning. It truly immerses the person in the project and, more importantly, in the story.
I want my project to likewise immerse the player in the “story” (be it an actual story or just a general theme) and give off the emotion I intend.

Your line or mine – Crowd-sourced animations at the Stedelijk Museum
This project is wholly amazing in its root idea. Culturally, it provides a means of communication and collaboration for anyone who visits the exhibit. Aesthetically, I think it could be a little more communicative between individual drawings, since in the video you mostly watch dots moving back and forth. The idea that everyone’s pictures get put together into one video is nice, but as a contributor I would like to see my drawing for more than half a second. I love that this project is technically very simple, yet speaks volumes with just a scanned image and a video.
The project fuels my inspiration (and determination) to create a project in which the user creates something. If I could make it as contributory as this one, where everyone who has ever interacted with the project forms one whole creation, that would be great – but I have to discover that idea first. Nevertheless, I really like the idea of my project’s interaction being the user creating something.

Le Monde des Montagnes

Continuing with my interactive storybook idea, this project captures much of the essence and feeling of what I would like to achieve. I really like the magical feel of what it creates from just a book being seen by a camera. The illustrations and animations themselves are visually spectacular and quite mesmerizing. Technically it is fairly simple (in a sense), but it makes great moves for what it is.

Rise and Fall

Another storybook project, this one shows the other aspect of what I would like to achieve: telling a story. It is closer to what I’m looking for because the user unfolds the story through their interactions. I really want to capture the essence of a story by surrounding the user with interactions and letting them expand the story, as if they were creating it themselves. The animations and illustrations for this project are astonishing, and I would like to look further into how they were created parametrically, especially the part where the birds follow and loop around the balloon. Technically, the project starts from a very simple idea – flipping a book right-side up and upside down (as well as showing its back) – and transforms it into a great interactive experience.

Alz

This slightly haunting “game” is similar to what I want to create in that the user unfolds the story themselves. Granted, this one is just a progression of scenes and only requires the right arrow key and the space bar, and the story was already built before the player arrived – but the player opens up the story and runs its course on their own.

Emily Danchik

17 Mar 2014

I’m still figuring out what I want to work on for my final project. There are a few qualities that I’d like my project to have, so I’ll focus on those for now:

1. The ability to be collaborative

I would like to make a project which multiple people can use at once, and coordinate if they’d like to. I’d also like my project to be worthwhile for a single individual to interact with.
CLOUD, shown above, has this quality, although I’m not sure if it’s intentional. The cloud is made up of light bulbs, many of which can be turned off and on with a pull. Individual people can walk through and interact with the object in this way. Around the one minute mark, the crowd coordinates turning on all of the bulbs at once, and then cheers at its accomplishment. The artists intended for the people interacting with their art to feel a sense of wonder and collaboration, and it seems to have worked!
Aesthetically: I think the cloud looks beautiful, with its simple color palette and consistent constituent shapes. I also like the idea of interacting with a physical object, rather than a gesture.
Technically: It’s a bunch of light bulbs with pulls.
Culturally: Children come together to create wonderful experiences all the time. CLOUD invites people of all ages, presumably mostly adults, to relive that experience. I think that that’s pretty wonderful in itself.

2. Large physical movements

Outside of walking and exercise, I honestly don’t move much, and I feel like other adults don’t, either. I would like my project to call for large physical movements that aren’t too awkward, but that we definitely don’t perform every day as desk-bound adults.
White, shown above, is an art installation that is completely climbable and, like CLOUD, explores collaborative themes. Climbing is so out of the ordinary for adults, and such a wonderful experience, that you can even see the artists smiling as they explore their own creation in the video.
I don’t plan to build a jungle gym in the Studio, but I would like to look into this more!
Aesthetically: Like CLOUD, I appreciate the simple, consistent shapes that constitute the project.
Technically: It’s a whole lot of plastic, looped together.
Culturally: Like CLOUD, it brings child-like, wonderful experiences to an older audience. We need more installations like this!


Above is 21 Balançoires, another installation that follows the themes of collaboration and physical movement. When multiple people coordinate their motions on the swing set, the installation produces pleasant music.
Aesthetically: This installation has a brighter, more colorful color scheme, but doesn’t overdo it. The parts of the swingset look clean and modern.
Technically: I’d guess that there are accelerometers hidden in the chunky bottoms of the swings, with wires leading up through the wide ropes that hold each swing. The readings could then be sent to a computer that determines the movement of the swings and plays sound clips appropriately (a rough sketch of this guess follows below).
Culturally: More child-like, wonderful, collaborative experiences. To bring these to desk-bound adults would be a breath of fresh air.
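That guess could be prototyped with a sketch like this: one analog accelerometer axis per swing, triggering a note whenever the swing reverses at the top of its arc. The wiring, pins, and thresholds are hypothetical, and this is only my reading of how 21 Balançoires might work, not the artists’ actual implementation.

```cpp
// Hypothetical swing-to-music trigger: read one accelerometer axis,
// detect the moment the swing reverses at an extreme of its arc, and
// tell a host computer (over serial) to play a note.
const int SWING_PIN = A0;   // analog accelerometer axis for one swing

int prev = 512;             // previous reading (mid-scale = at rest)
int prevDelta = 0;          // previous sample-to-sample change

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SWING_PIN);
  int delta = raw - prev;

  // A sign change in the derivative marks an extreme of the arc;
  // the amplitude gate ignores an idle, barely-moving swing.
  if (prevDelta > 0 && delta <= 0 && abs(raw - 512) > 60) {
    Serial.println("NOTE swing0");  // host picks and plays a sound clip
  }

  prevDelta = delta;
  prev = raw;
  delay(20);                        // ~50 Hz sampling
}
```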

If all else fails, I’ll just fill the Studio with a zillion balloons and call it a day.

Andre Le

16 Mar 2014

I’m still working out the details for my final project, but I’ve always been fascinated by the ability to gain “superpowers” from technology – for example, being able to perceive something that coexists with us in the real world but is undetectable by human senses.

The following projects have inspired me to think about how else we can map the invisible world, such as electromagnetic fields, radiation, or air quality. What if we used an Oculus Rift and a sensor array to map, overlay, and experience all of the world’s real-time sensor data?

What can these technologies tell us about ourselves? From a quantified-self approach, what if a wearable heart-rate or galvanic skin response sensor could detect your stress or excitement level and relay it to your Pebble watch?

Does knowing this otherwise undetectable information change your behavior? Does the behavior change last even without the augmentation? Is it possible for wearables to re-wire our brains and act as extensions of our bodies?

EIDOS
(http://timbouckley.com/work/design/eidos.php)

Eidos Vision is a project by Tim Bouckley, Millie Clive-Smith, Mi Eun Kim, and Yuta Sugawara that overlays visual echoes on top of the user’s vision, letting them perceive time visually and become aware of their temporal surroundings.


The Creators Project: Make it Wearable: Becoming Superhuman
(http://thecreatorsproject.vice.com/blog/make-it-wearable-part-4-becoming-superhuman)

The Creators Project has a great blog post on several other wearable technologies that let people sense the world in ways that were previously impossible. A notable one features Neil Harbisson, who compensates for his colorblindness with a device that maps color to sound.

Spider Sense Suit

The Spider Sense Suit is a collection of ultrasonic distance sensors and servos attached at various points on the body to give the wearer feedback on the proximity of their surroundings. It was created by Victor Mateevitsi and showcased at the Augmented World Expo 2013, where I witnessed a live demo. Aesthetically it wasn’t much to look at, but the possibilities were impressive: by mapping distance readings to pressure, the wearer’s body could quickly and automatically react to the stimuli around them.
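As an illustration of that distance-to-pressure mapping (not Mateevitsi’s actual code; I’m assuming an HC-SR04-style ultrasonic sensor and a hobby servo, with made-up pins and ranges), the closer an object gets, the harder a servo arm presses on the skin:

```cpp
// Illustrative distance-to-pressure mapping in the spirit of the
// Spider Sense Suit: nearer objects make a servo arm press harder.
// Pins, ranges, and the single-sensor setup are assumptions.
#include <Servo.h>

const int TRIG_PIN = 8;
const int ECHO_PIN = 9;
Servo pressureArm;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pressureArm.attach(6);
}

long readDistanceCm() {
  // Standard HC-SR04 ping: 10 us trigger pulse, echo time -> distance.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // 0 if nothing in range
  return duration / 58;                           // microseconds -> cm
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 100) {
    int angle = map(cm, 0, 100, 170, 10); // nearer = more pressure
    pressureArm.write(angle);
  } else {
    pressureArm.write(10);                // nothing nearby: relax the arm
  }
  delay(50);
}
```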