Category Archives: Looking-Outwards

Sama Kanbour

30 Mar 2014

Title: Soundscapes Installation

Description: An imaginary landscape is animated in real time as a person whistles into a microphone. The outcome is a unique representation of nature, filled with trees and animals.

 

Title: Living Image

Description: An image that looks like a Picasso painting is animated as sensors detect human movement.

 

Title: Trying to animate a biker’s leg

Description: Oriol Ferrer Mesià animates a leg image using ofx addons.

Wanfang Diao

25 Mar 2014

For my final project, I would like to explore the interaction and visualization of sound/music. My inspiration comes from the three projects below:

This is an iPad app created by students from the Lifelong Kindergarten group at the MIT Media Lab. It allows people to create musical instruments and playable compositions. I feel that there is still more unexplored space for creating interaction and connection between graphic symbols and sound. Colors can be mixed together; sounds can blend into harmony. Graphics can change from one shape to another; sound can rise from one pitch to another. Shapes can shake, spin, or jump, and so can sound! So I think the interaction and the mapping should not be limited to tapping a colorful graphic shape to make a sound.

What excites me even more are the two Japanese music videos below:

I am impressed by the matrix of camera flashes synchronized with the wonderful music. The connection between the sparks and the rhythm strongly touched me! These two works remind me that, besides something playful and funny, I can also try something touching or exciting. In any case, I’ll start with some simple experiments in Processing and openFrameworks.

Here is another project I admire that is similar to my idea, called Patatap.

http://tytel.org/lissa/

 

Haris Usmani

23 Mar 2014

I am looking into implementing a novel approach to 2D camera rectification as an OFx plugin. This method of rectification requires no input from the user (provided your image/camera has EXIF data, which is almost always the case) – forget about making and using a checkerboard to rectify your image!


I learned about this technique when I took the Computer Vision course (taught by Dr. Sohaib Khan) in Fall 2012 at LUMS. We covered one of the recent publications by the CV Lab at LUMS: “Shape from Angle Regularity” by Zaheer et al., in Proceedings of the 12th European Conference on Computer Vision, ECCV, October 2012. This paper uses ‘angle regularity’ to automatically reconstruct structures from a single view. As part of their algorithm, Zaheer et al. first identify the planes in the image and then automatically 2D-rectify the image, relying solely on ‘angle regularity’. That’s the part I’m interested in.

Angle regularity is a geometric constraint that relies on the fact that in the structures around us (buildings, floors, furniture, etc.), straight lines in 3D meet at a particular angle, most commonly 90 degrees. Look around your room, start counting the number of 90-degree angles you can find, and you’ll see what I mean. Zaheer et al. use the distortion of this angle under projection as a constraint for 3D reconstruction. Quite simply, if you look at a plane from a fronto-parallel view, you will see the maximum possible number of 90-degree angles. That’s what we search for: we look for the “… homography that maximizes the number of orthogonal angles between projected line-pairs” (Zaheer et al.).

Following the algorithm used in the paper (and the MATLAB code available at http://cvlab.lums.edu.pk/zaheer2012shape/), I plan to generate a few more results to see how practical it is; from the results given in the paper, it seems very promising. The algorithm for 2D rectification relies on finding the lines in the image, assuming line-pairs to be perpendicular, and then using RANSAC to separate inlier and outlier line-pairs in order to optimize two variables, camera pan and tilt (focal length is known from EXIF data). The MATLAB code relies on toolboxes provided within MATLAB (for example, RANSAC), for which I should be able to find open-source C++ implementations. The algorithm, although conceptually straightforward, might not be as easy to implement and optimize in C++. I will work towards it and judge the time commitment it requires.
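To make the idea concrete, here is a minimal C++/OpenCV sketch of the core search, not the paper’s implementation: instead of the RANSAC-plus-optimization pipeline it brute-force searches pan and tilt for the rotation homography K·R·K⁻¹ that maximizes the number of near-orthogonal projected line pairs, and it uses a hard-coded focal length in place of EXIF parsing. All of those simplifications are mine.

```cpp
// Coarse stand-in for the paper's RANSAC-based optimization: grid-search camera
// pan/tilt for the homography K*R*K^-1 that maximizes near-orthogonal line pairs.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

static cv::Matx33d rotX(double a) {
    return cv::Matx33d(1, 0, 0,
                       0, std::cos(a), -std::sin(a),
                       0, std::sin(a),  std::cos(a));
}
static cv::Matx33d rotY(double a) {
    return cv::Matx33d(std::cos(a), 0, std::sin(a),
                       0, 1, 0,
                      -std::sin(a), 0, std::cos(a));
}

// Rotation-only homography for focal length f (pixels), principal point at the image center.
static cv::Matx33d rectifyingHomography(double pan, double tilt, double f, cv::Size sz) {
    cv::Matx33d K(f, 0, sz.width / 2.0,
                  0, f, sz.height / 2.0,
                  0, 0, 1);
    return K * rotX(tilt) * rotY(pan) * K.inv();
}

// Count segment pairs whose directions, after warping by H, lie within tolDeg of 90 degrees.
static int countOrthogonalPairs(const std::vector<cv::Vec4i>& segs, const cv::Matx33d& H, double tolDeg) {
    if (segs.empty()) return 0;
    std::vector<cv::Point2f> pts, warped;
    for (const auto& s : segs) {
        pts.emplace_back((float)s[0], (float)s[1]);
        pts.emplace_back((float)s[2], (float)s[3]);
    }
    cv::perspectiveTransform(pts, warped, cv::Mat(H));
    std::vector<cv::Point2f> dirs;
    for (size_t i = 0; i + 1 < warped.size(); i += 2) {
        cv::Point2f d = warped[i + 1] - warped[i];
        float n = std::sqrt(d.x * d.x + d.y * d.y);
        if (n > 1e-3f) dirs.push_back(d * (1.0f / n));
    }
    int count = 0;
    for (size_t i = 0; i < dirs.size(); ++i)
        for (size_t j = i + 1; j < dirs.size(); ++j) {
            double ang = std::acos(std::min(1.0, std::fabs((double)dirs[i].dot(dirs[j])))) * 180.0 / CV_PI;
            if (std::fabs(ang - 90.0) < tolDeg) ++count;
        }
    return count;
}

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: rectify <image>\n"); return 1; }
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    double f = 1000.0;  // placeholder: the real pipeline would read the focal length from EXIF

    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);
    std::vector<cv::Vec4i> segs;
    cv::HoughLinesP(edges, segs, 1, CV_PI / 180, 60, 40, 5);

    // Coarse, slow grid search (radians); the paper optimizes these two variables properly.
    double bestPan = 0, bestTilt = 0;
    int bestScore = -1;
    for (double pan = -0.6; pan <= 0.6; pan += 0.02)
        for (double tilt = -0.6; tilt <= 0.6; tilt += 0.02) {
            int score = countOrthogonalPairs(segs, rectifyingHomography(pan, tilt, f, img.size()), 2.0);
            if (score > bestScore) { bestScore = score; bestPan = pan; bestTilt = tilt; }
        }

    cv::Mat rectified;
    cv::warpPerspective(img, rectified, cv::Mat(rectifyingHomography(bestPan, bestTilt, f, img.size())), img.size());
    cv::imwrite("rectified.png", rectified);
    std::printf("pan=%.3f tilt=%.3f orthogonal pairs=%d\n", bestPan, bestTilt, bestScore);
    return 0;
}
```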

Once I’m done coding the plugin, I would want to make some cool examples that demonstrate the power of this algorithm. If the frame-to-frame optimization is fast (i.e. the last frame’s homography seeds the initial value for the next one), I could try to make this real-time.

I have not yet come across a C++ implementation of this technique, and the only OFx plugin for camera rectification that exists right now (ofxMSAStereoSolver) depends on the checkerboard approach.

Paper: http://cvlab.lums.edu.pk/sfar/
Aamer Zaheer, Maheen Rashid, Sohaib Khan, “Shape from Angle Regularity”, in Proceedings of the 12th European Conference on Computer Vision, ECCV, October 2012

Related OFx Plugin: ofxMSAStereoSolver by memo

Shan Huang

23 Mar 2014

Ideas for final project

For my final project I want to do something fun with projectors, for several reasons:

1. I have a projector at home and I immensely enjoy it.
2. Among all forms of displays, projectors probably have the least defined shape and scale. A projector can magnify something that occupies only a few pixels on screen into a wall-sized (even building-sized) image. I think that is really magical.
3. I found some really good inspiration among the projects Golan showed in class. One of them is Light Leaks by Kyle McDonald and Jonas Jongejan:

This project reminds me of something almost everyone used to do: using a mirror to reflect light spots onto something or someone. I still do that with the reflective Apple logo on the back of my cellphone. I think the charm of this project comes from the exact calculation behind the beautiful light patterns. Though reflecting light spots with disco balls is a cool enough idea on its own, the project would not be as stunning as it is if the light leaks had simply been scattered onto the walls in random directions.

A little experiment

I was really amazed by the idea that ‘projection can be reflected’, so much so that I carried out this little experiment at home:


This is the projector I have at home. We normally just put it on a shelf and point it at a blank wall.


The other night I put a mirror in front of the light beam it projected. The mirror reflected a partial view of the projected image onto a completely different wall (the one facing my bedroom, perpendicular to the blank wall). It’s hard to tell from the photo, but the reflected image was actually quite clear. As I rotated the mirror around its holder, the partial image started to move around the room along a predictable elliptical trajectory. That meant I could roughly control the location of the image in 3D space by setting the orientation of the mirror. If I could mount a mirror on a servo and control the servo’s orientation, then with some careful computation (that I don’t know how to do yet) I’d be able to cast the projection anywhere in the room. Furthermore, the projected image doesn’t have to be a still image – it could be a video, or be interactively determined by the orientation of the mirror.
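The “careful computation” is mostly textbook geometry, so here is a tiny self-contained sketch with invented numbers: reflect the projector’s beam direction d about the mirror normal n using r = d − 2(d·n)n, then intersect the reflected ray with a wall plane to see where the spot lands as a servo sweeps the mirror. Nothing here is specific to my actual room or hardware.

```cpp
// Minimal geometry sketch (hypothetical numbers): where does a projector beam land
// after bouncing off a mirror whose orientation a servo controls?
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of direction d about unit normal n: r = d - 2(d.n)n
static Vec3 reflect(Vec3 d, Vec3 n) { return sub(d, mul(n, 2.0 * dot(d, n))); }

// Intersect the ray origin + t*dir with the plane (p - planePoint) . planeNormal = 0.
static Vec3 hitPlane(Vec3 origin, Vec3 dir, Vec3 planePoint, Vec3 planeNormal) {
    double t = dot(sub(planePoint, origin), planeNormal) / dot(dir, planeNormal);
    return add(origin, mul(dir, t));
}

int main() {
    const double PI = std::acos(-1.0);

    // Hypothetical room, in meters: the projector shoots along +x toward a mirror.
    Vec3 mirrorPos = {2.0, 0.0, 1.0};
    Vec3 beamDir   = {1.0, 0.0, 0.0};

    // A wall perpendicular to the blank wall the projector faces, 2.5 m to the side.
    Vec3 wallPoint  = {0.0, 2.5, 0.0};
    Vec3 wallNormal = {0.0, 1.0, 0.0};

    // Sweep the mirror's yaw (as a servo would) and see where the spot lands.
    for (double yawDeg = 110.0; yawDeg <= 160.0; yawDeg += 10.0) {
        double yaw = yawDeg * PI / 180.0;
        Vec3 n = {std::cos(yaw), std::sin(yaw), 0.0};   // mirror normal, unit length
        Vec3 r = reflect(beamDir, n);
        Vec3 spot = hitPlane(mirrorPos, r, wallPoint, wallNormal);
        std::printf("yaw %5.1f deg -> spot (%.2f, %.2f, %.2f)\n", yawDeg, spot.x, spot.y, spot.z);
    }
    return 0;
}
```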

This opens an exciting possibility of using the reflected light spot as a lens into some sort of “hidden image”. For example, the partial image could show the scene behind the wall it’s projected onto. The light spot in a sense becomes a movable window into the scene behind – and it’s interactive, inviting people to move it around to explore more of the total view. Or, the projector + mirror could become a game where the game world is mapped to the full room. Players can only see part of the world at once through the projected light spot, and they move their characters by rotating the mirror / interacting with other handles that manipulate the mirror.

If all I want is to project an image to an arbitrary surface, why not just move the projector?

Well, the foremost reason is that moving a mirror is much easier than moving a projector. The shape of the projection can also follow the shape of the mirror, so we don’t always get an ugly distorted rectangle. The idea can be set up relatively easily with the tools we have at hand. Another motivation is that with multiple mirrors, each reflecting a certain region of the raw projection, the original image can be broken into parts and reflected in diverging directions. The parts can all move independently while ultimately using light from the same projector. Light Leaks uses this advantage very well to cast light spots in numerous directions.

 

So that’s what I’ve thought of so far. There are still many undetermined parts, and I’m not sure how challenging it would be implementation-wise. I played with Arduino only in middle school, and it was such a disaster that I completely stayed away from hardware in college. But I’m willing to learn whatever is needed to make the idea come true. I’m currently still researching other works for inspiration, and also trying to make sure that my idea hasn’t already been done by someone else.

Other inspirations:

Chase No Face – face projection. Also discussed in class.

More to come…

Update:

Taeyoon and Kyle pointed me to this project.

Emily Danchik

17 Mar 2014

I’m still figuring out what I want to work on for my final project. There are a few qualities that I’d like my project to have, so I’ll focus on those for now:

1. The ability to be collaborative

I would like to make a project which multiple people can use at once, and coordinate if they’d like to. I’d also like my project to be worthwhile for a single individual to interact with.
CLOUD, shown above, has this quality, although I’m not sure if it’s intentional. The cloud is made up of light bulbs, many of which can be turned off and on with a pull. Individual people can walk through and interact with the object in this way. Around the one minute mark, the crowd coordinates turning on all of the bulbs at once, and then cheers at its accomplishment. The artists intended for the people interacting with their art to feel a sense of wonder and collaboration, and it seems to have worked!
Aesthetically: I think the cloud looks beautiful, with its simple color palette and consistent constituent shapes. I also like the idea of interacting with a physical object, rather than a gesture.
Technically: It’s a bunch of light bulbs with pulls.
Culturally: Children come together to create wonderful experiences all the time. CLOUD invites people of all ages, presumably mostly adults, to relive that experience. I think that that’s pretty wonderful in itself.

2. Large physical movements

Outside of walking and exercise, I honestly don’t move much, and I feel like other adults don’t, either. I would like for my project to call for large, physical movements that aren’t too awkward, but that we definitely don’t perform every day as desk-bound adults.
White, shown above, is an art installation that is completely climbable and, like CLOUD, explores collaborative themes. Climbing is so out of the ordinary for adults, and such a wonderful experience, that you can even see the artists smiling as they explore their own creation in the video.
I don’t plan to build a jungle gym in the Studio, but I would like to look into this more!
Aesthetically: Like CLOUD, I appreciate the simple, consistent shapes that constitute the project.
Technically: It’s a whole lot of plastic, looped together.
Culturally: Like CLOUD, it brings child-like, wonderful experiences to an older audience. We need more installations like this!


Above is 21 Balançoires, another installation that follows the themes of collaboration and physical movement. When multiple people coordinate their motions on the swing set, the installation produces pleasant music.
Aesthetically: This installation has a brighter, more colorful color scheme, but doesn’t overdo it. The parts of the swingset look clean and modern.
Technically: I’d guess that there are accelerometers hidden in the chunky bottoms of the swings, with wires leading up from them through the wide ropes holding each swing. These readings could then be sent to a computer that determines the movement of the swings and plays sound clips appropriately; a rough sketch of that guessed pipeline follows after this list.
Culturally: More child-like, wonderful, collaborative experiences. To bring these to desk-bound adults would be a breath of fresh air.
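Purely as speculation matching the guess above (I have no knowledge of how 21 Balançoires is actually built), here is a small sketch of such a pipeline: per-swing accelerometer magnitudes arrive at a computer, each swing triggers a note at a local peak, and several swings peaking together could unlock an extra musical layer. The function and variable names are all invented.

```cpp
// Speculative sketch: accelerometer frames -> per-swing peak detection -> sound triggers,
// with a bonus "harmony" layer when several swings peak in the same frame.
#include <cstdio>
#include <vector>

struct Swing {
    double lastValue = 0.0;
    bool   rising    = false;
};

// Placeholder for whatever audio call a real installation would use.
static void playClip(int swingIndex) {
    std::printf("play clip for swing %d\n", swingIndex);
}

// Feed one frame of accelerometer magnitudes (one value per swing).
// A swing "triggers" when its value stops rising, i.e. at a local peak.
static void processFrame(std::vector<Swing>& swings, const std::vector<double>& accel) {
    int triggered = 0;
    for (size_t i = 0; i < swings.size(); ++i) {
        bool nowRising = accel[i] > swings[i].lastValue;
        if (swings[i].rising && !nowRising && accel[i] > 0.2) {  // 0.2: arbitrary noise floor
            playClip((int)i);
            ++triggered;
        }
        swings[i].rising = nowRising;
        swings[i].lastValue = accel[i];
    }
    if (triggered >= 3)  // several swings peaking together: add a bonus layer
        std::printf("coordinated peak: play harmony layer\n");
}

int main() {
    std::vector<Swing> swings(3);
    // Fake data standing in for three swings sampled over time.
    std::vector<std::vector<double>> frames = {
        {0.1, 0.1, 0.1}, {0.5, 0.4, 0.6}, {0.9, 0.8, 0.9}, {0.6, 0.7, 0.7}, {0.3, 0.5, 0.4}};
    for (const auto& f : frames) processFrame(swings, f);
    return 0;
}
```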

If all else fails, I’ll just fill the Studio with a zillion balloons and call it a day.

Ticha Sethapakdi

11 Mar 2014

To be honest, I still don’t know what exactly I want to make for my capstone project–but I do know that I want to create something tangible, interactive, and playful.

MIDI Sprout

MIDI Sprout is a Kickstarter project that aims to convert the activity of plants into music. By sending a small electrical current through a plant and measuring the plant’s resistance to that current, the MIDI Sprout is able to create music from the plant’s natural biorhythms. While the project is still in its early development phase, I feel it has a very poetic concept with a lot of potential. One trait of MIDI Sprout that caught my attention is the way it recognizes the omnipresence of music in our daily lives and actually manifests it in some way; another is its ability to depict plants as sentient beings capable of making music rather than just…plants. Because of these characteristics, the project sparks a lot of curiosity in me and makes me want to test the device on every plant I can find to see the results. I hope I can evoke a similar feeling in people with my capstone project.

 

Fine Collection of Curious Sound Objects by Georg Reil

This project exhibits the same ‘playful’ characteristic I described above. What I particularly like about this project is that the behavior of each object is not immediately apparent from the object’s appearance. That characteristic adds more wonder to each object and encourages viewers to discover each object’s function. If I am unable to come up with other ideas, I may end up doing a project that will be very similar to this one–I just need to find ways to distinguish my project from Reil’s.

 

Mew by Emily Groves

What captivated me about this project is its poetic simplicity. Mew is an interactive sound piece that purrs as the viewer approaches and makes distorted cat sounds when the viewer strokes its fur. The piece is simultaneously charming, playful, uncanny, and unnerving–and it’s only a lump of fur on a crudely made wooden stand. Its unassuming appearance keeps viewers from having many preconceived expectations of the object, so its behavior when the fur is stroked comes as a weird surprise. Ideally, I would like to include a similar element of surprise in my capstone project, but ‘surprise’ is difficult to achieve because it’s hard to come up with a novel idea that not many people have seen before.

 

MacKenzie Bates

06 Mar 2014

I’m going to write words later … just keeping a list of links at the moment as I look

http://tritri.triobelisk.com

 

http://www.indiegamemag.com/corporate-lifestyle-simulator-review-the-working-dead/

 

http://www.indiegamemag.com/ether-one-review-more-than-meets-the-minds-eye/

 

Papers, Please

 

http://www.godswillbewatching.com/#about

Collin Burger

06 Mar 2014

Jason Salavon – Every Playboy Centerfold, The Decades (Normalized) (2000)

[Image averages for the 1960s, 1970s, 1980s, and 1990s]

Every Playboy Centerfold, The Decades (Normalized) by Jason Salavon is a series of image averages of Playboy centerfolds from the 1960s through the 1990s. They reveal interesting trends in the popular images of women in the West, such as the increasing lightness of skin and hair, as well as the preference for skinnier women, as the decades pass. The changing perception and objectification of women is a subject with an immense scope, but the specific choice of material used to convey an aspect of this subject works greatly in the piece’s favor.
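The underlying “image average” technique is simple to sketch. Below is a minimal C++/OpenCV version as my own illustration of the general idea, not Salavon’s actual process: convert each roughly aligned, same-subject image to floating point, accumulate, and divide by the count.

```cpp
// General image-averaging sketch: sum a set of images in floating point, divide by N.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    std::vector<std::string> paths(argv + 1, argv + argc);  // images passed on the command line
    cv::Mat acc;
    int count = 0;
    for (const auto& p : paths) {
        cv::Mat img = cv::imread(p, cv::IMREAD_COLOR);
        if (img.empty()) continue;
        if (acc.empty()) acc = cv::Mat::zeros(img.size(), CV_32FC3);
        cv::Mat f;
        img.convertTo(f, CV_32FC3);
        cv::resize(f, f, acc.size());   // crude normalization: force a common size
        acc += f;
        ++count;
    }
    if (count == 0) return 1;
    cv::Mat avg;
    acc.convertTo(avg, CV_8UC3, 1.0 / count);  // divide by the number of images
    cv::imwrite("average.png", avg);
    return 0;
}
```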

Memo Akten and Quayola – Forms (2012)

Forms, by Memo Akten and Quayola, is a series of audio-visual, computer-generated, graphical sculptures based on the motion of athletes. The finely tuned movements of these athletes are abstracted into dynamic forms composed of meshes and particles that mimic their movement through space, accompanied by audio effects. Culturally, the motion of high-caliber athletes is something that fascinates most people, making the content very enthralling. I think the audio, which is composed of echoing mechanical noises, accompanies the visuals well, but the music in the background is an odd choice. Also, the lack of an interesting background behind the kinetic sculptures seems like an oversight.

Jim Campbell – Ambiguous Icon 5: Running and Falling (2000)

Jim Campbell’s Ambiguous Icon 5: Running and Falling is another work that deals with the abstraction of human motion; however, it does not seek to glorify it or portray it in a beautiful manner. The work is a video of a man running and falling and running again, played on a very low-resolution binary display composed of red LEDs. The work explores the capability of the human mind to recognize patterns and forms with which it is familiar. In its low-resolution format, the images are nearly impossible to recognize when the frames are taken individually; however, the temporal aspect of the video enables the viewer to discern the subject of the work. Unfortunately, I think the content of the video is lacking. I understand that it was important to choose video of something the viewer had never seen, but I think choosing something more aesthetically interesting would add to the work.

Spencer Barton

05 Mar 2014

For my final project I am thinking about controlled storytelling. The reader will control the pace of a story by dragging a see-through display over a map showing the whole setting of the story. As the display is dragged over the map, the characters underneath will be animated, bringing the story to life.
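A rough sketch of the core mapping, with the details assumed since nothing is built yet: track the display’s position over the physical map and show it the corresponding window of a larger scene image. Sizes, the tracked position, and the stand-in “scene” are all placeholders; OpenCV is used only for convenience.

```cpp
// Sketch: map a tracked display position over the map to a crop of a larger scene image.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdio>

// Pick the window of the scene centered on the tracked position (display assumed
// smaller than the scene).
static cv::Mat viewportAt(const cv::Mat& scene, cv::Point center, cv::Size displaySize) {
    int x = std::clamp(center.x - displaySize.width / 2, 0, scene.cols - displaySize.width);
    int y = std::clamp(center.y - displaySize.height / 2, 0, scene.rows - displaySize.height);
    return scene(cv::Rect(x, y, displaySize.width, displaySize.height)).clone();
}

int main() {
    // Stand-in scene: in the real piece this would be the animated story layer
    // registered to the printed map.
    cv::Mat scene(600, 800, CV_8UC3, cv::Scalar(30, 30, 30));
    cv::circle(scene, {400, 300}, 40, cv::Scalar(0, 200, 255), -1);  // a "character"

    cv::Size display(128, 64);    // e.g. a small OLED panel (placeholder size)
    cv::Point tracked(390, 310);  // would come from tracking the display over the map
    cv::Mat view = viewportAt(scene, tracked, display);
    cv::imwrite("display_view.png", view);
    std::printf("showing %dx%d window at (%d, %d)\n", view.cols, view.rows, tracked.x, tracked.y);
    return 0;
}
```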

Imisphyx V: A tabletop novel experience

Imisphyx is a non-linear story where viewers explore conversations between the characters by placing various tiles on the tabletop.

I saw this project last year and remembered it as a nice example of the reader dictating the story. The tabletop setup is similar to what I plan to employ, and that format was accessible. This story was all about text; I will instead focus on the visual story, with audio potentially included as well.

Augmented Shadow

This project explores the interaction between multiple people who share control over a scene. There are a number of parameter cubes that set the scene, but nothing is illuminated until the light cube is used.

I like the use of a physical object to explore the scene. The display that I plan to use will provide a similar illuminating effect by uncovering the scene underneath. The Augmented Shadow project drew the viewer’s attention to what was going on in the periphery – the shadows – instead of focusing on the light. Using only an LED display in my project will lose this effect; the display will be more of a window that focuses attention. It may be wise to include projection as well to mitigate some of the window effect.

Ouija board

While not a project, Ouija boards may serve as an interesting new take on the idea of uncovering the unknown. When users of an Ouija board navigate the planchette around the board, they are uncovering secrets. The OLED display presents a similar opportunity. It may be better to focus on the mystery and discovery aspect of the display as opposed to the novelty of the technology.

Shan Huang

21 Feb 2014

CONTACT: augmented acoustic

Can it get any cooler? This project turns a table into an interactive surface by collecting the sounds generated from interaction with the wooden surface. The project coincides with an idea I had a while ago – making a sort of electronic drum kit by collecting the sounds of “table drumming.” While it’s a little disappointing to find that it has already been done, I admire how responsive and accurate the system seems to be. The visuals and audio do a good job of augmenting the surface and giving users feedback on their actions. The only thing I find a bit redundant is the Leap Motion sensor; I don’t fully get what it is doing there. Though the system also reacts to waving your hand in the air, that interaction is far less intuitive than just knocking on a solid object.
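For the “table drumming” idea, one plausible approach (not necessarily what CONTACT does) is simple energy-based onset detection on the contact-mic signal: compare the energy of each short block of samples against a running average and trigger a drum sample when it spikes. The sketch below fakes the audio input and leaves playback as a placeholder.

```cpp
// Energy-based knock/onset detection sketch; audio I/O and sample playback are placeholders.
#include <cstdio>
#include <vector>

struct OnsetDetector {
    double avgEnergy = 0.0;  // slowly-updated background energy
    int    holdoff   = 0;    // blocks to wait before re-triggering

    // Returns true if this block of samples looks like a knock.
    bool process(const float* block, int n) {
        double e = 0.0;
        for (int i = 0; i < n; ++i) e += (double)block[i] * block[i];
        e /= n;
        bool onset = (holdoff == 0) && (e > 4.0 * avgEnergy + 0.01);  // 4x + 0.01: arbitrary threshold/noise floor
        avgEnergy = 0.95 * avgEnergy + 0.05 * e;                      // exponential moving average
        if (onset) holdoff = 10; else if (holdoff > 0) --holdoff;
        return onset;
    }
};

static void triggerDrumSample() { std::printf("knock -> play sample\n"); }  // placeholder

int main() {
    // Fake signal: mostly quiet, with one loud burst standing in for a knock.
    const int blockSize = 256;
    std::vector<float> quiet(blockSize, 0.01f), loud(blockSize, 0.5f);

    OnsetDetector det;
    for (int i = 0; i < 20; ++i) {
        const std::vector<float>& block = (i == 12) ? loud : quiet;
        if (det.process(block.data(), blockSize)) triggerDrumSample();
    }
    return 0;
}
```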

Rain Room at MOMA

Rain Room is an installation at MoMA in which people can walk through a rainy room without getting wet. The room has 3D cameras that track people’s movement, so the rain temporarily stops in the region where a person passes. I find it an inspiring illustration of how technology can surprise people by simulating nature in an inauthentic way. The lighting setup in the room also creates a mysterious atmosphere that turns rain into something unfamiliar. So even though rain is ordinary, rain falling in an unexpected way in an unfamiliar environment becomes a piece of art.

The Kids Stay in the Picture + more of Yorgo Alexopoulos’s work

Yorgo Alexopoulos brings still images to life by cutting them into planes, overlaying them with shapes, and moving them around to create a parallax effect. His work strikes me by showing how much power rests in still images. By moving the planes of an image in different directions, he can control how audiences experience it: he can easily shift the audience’s focus around, and in doing so he turns images into narratives. I suppose the project doesn’t show much interactivity, because the movements are all predefined, but I think his idea could become even cooler by integrating some interactive technology.
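The 2.5D parallax idea itself is easy to sketch with invented numbers: each cut-out layer is assigned a depth and shifts in proportion to it as a virtual camera pans, so near layers slide farther than distant ones. The layer names and values below are purely illustrative, not taken from Alexopoulos’s work.

```cpp
// Small 2.5D parallax sketch: each layer's offset is proportional to its depth.
#include <cstdio>
#include <vector>

struct Layer {
    const char* name;
    double depth;   // smaller = farther background, larger = closer to the camera
};

// Horizontal offset (in pixels) of a layer for a given camera pan.
static double parallaxOffset(const Layer& layer, double cameraPanPx) {
    return cameraPanPx * layer.depth;   // closer layers move more
}

int main() {
    std::vector<Layer> layers = {
        {"sky",        0.1},
        {"mountains",  0.4},
        {"house",      1.0},
        {"foreground", 2.0},
    };
    for (double pan = 0.0; pan <= 30.0; pan += 10.0) {
        std::printf("camera pan %.0f px:", pan);
        for (const auto& l : layers)
            std::printf("  %s %+.1f", l.name, parallaxOffset(l, pan));
        std::printf("\n");
    }
    return 0;
}
```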