Category Archives: LO-5

Matthew Kellogg – Looking Outwards 5 – OpenCV

The Abovemarine

This project by Adam Ben-Dror uses OpenCV to track the position of a fish in a mobile fishbowl. It then moves the bowl based on the direction the fish is swimming, allowing the fish to roam “freely” on land.
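Ben-Dror hasn’t published his tracking code, but the core loop is easy to imagine with OpenCV: segment the fish from the bright bowl, take its centroid, and compare it against the previous frame to get a heading. A minimal sketch, assuming a dark fish against a light background (the threshold value and camera setup are my guesses):

```python
import cv2

# Minimal sketch of fish tracking, assuming a dark fish against a bright bowl.
# Ben-Dror's actual pipeline is not published; this is just the general idea.
cap = cv2.VideoCapture(0)  # camera looking into the bowl
prev_center = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dark pixels are candidate fish; the threshold value would need tuning.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        fish = max(contours, key=cv2.contourArea)  # largest dark blob wins
        m = cv2.moments(fish)
        if m["m00"] > 0:
            center = (m["m10"] / m["m00"], m["m01"] / m["m00"])
            if prev_center is not None:
                # Heading: where the fish moved since the last frame. These
                # deltas would be sent to the motors to steer the bowl.
                dx = center[0] - prev_center[0]
                dy = center[1] - prev_center[1]
            prev_center = center
```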

I like this project because it is an interesting use of computer vision, and of technology as a whole. It reminds me of the “reverse SCUBA suit” created by Dr. Wernstrom in the eighth episode of Futurama. While the concept is similar, Adam Ben-Dror’s project is much simpler and more elegant. I also appreciated the no-frills design used in this project. The documentation is also very professionally done.


Bit Planner – A Cloud-Synchronized Lego Calendar

This project, created by Vitamins, is a calendar that organizes workflow: projects are represented by different colors, people by rows, and half-days by columns. I like it for its simplicity, and because they built an app that updates their Google Calendars from a photo of the wall using some simple OpenCV processing.
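Vitamins hasn’t released the app’s source, but the OpenCV part could be as simple as sampling the average color of each grid cell in a straightened photo of the wall and matching it to the nearest project color. A sketch under those assumptions (the palette and grid dimensions are invented):

```python
import cv2
import numpy as np

# Sketch of the grid-sampling idea behind the Lego calendar, assuming the
# photo has already been cropped and deskewed to just the board. The palette
# below is made up; Vitamins' real app and colors aren't published.
PALETTE = {  # label -> BGR color of the brick
    "project_a": (40, 40, 200),
    "project_b": (40, 200, 40),
    "empty": (180, 180, 180),
}

def read_board(image_path, rows, cols):
    board = cv2.imread(image_path)
    h, w = board.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    schedule = []
    for r in range(rows):          # rows = people, per the board layout
        row = []
        for c in range(cols):      # cols = half-days
            cell = board[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            avg = cell.reshape(-1, 3).mean(axis=0)  # average BGR of the cell
            # Nearest palette color decides which project occupies the slot.
            label = min(PALETTE, key=lambda k: np.linalg.norm(avg - np.array(PALETTE[k])))
            row.append(label)
        schedule.append(row)
    return schedule  # this grid would then be pushed to Google Calendar
```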


As I enjoy Legos and am in constant need of more organization, this project really struck me. Where a simple Google Calendar would have sufficed, this goes beyond it by letting people interact with the schedule in the physical world (and with Legos).

Tunetrace – Drawings to Music

Ed Burton’s app, Tunetrace, lets users take pictures of drawings, then creates a tune based on the drawings’ shapes, their topology, and a simple set of rules. I like it because it shows another interesting use of OpenCV, and because it is an iOS app created in openFrameworks. I like to see that oF is capable of making streamlined apps for mobile devices.
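Burton hasn’t published Tunetrace’s exact rule set, but the general pipeline (threshold the photo, extract the drawn shapes, map geometric features to notes) can be sketched. The pitch and duration rules below are my own inventions, purely for illustration:

```python
import cv2

# Toy sketch of a drawing-to-tune pipeline in the spirit of Tunetrace.
# Burton's actual rules operate on the traced line network; here each
# contour's position and size pick a pitch and duration instead.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def drawing_to_notes(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Dark ink on light paper becomes white shapes on a black mask.
    _, ink = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    notes = []
    # Read the drawing left to right, like a score.
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):
        x, y, w, h = cv2.boundingRect(c)
        degree = int(y / img.shape[0] * len(C_MAJOR)) % len(C_MAJOR)
        pitch = C_MAJOR[degree]               # vertical position picks the scale degree
        duration = max(0.1, w / img.shape[1])  # wider shapes last longer
        notes.append((pitch, duration))
    return notes
```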

I thought, though, that the melodies it created were not very interesting. It also seemed to me that the user has no control over what tones will be produced. Searching online, I found no posts of tunetraces that their makers thought were interesting, which makes me assume that no one managed to make something amazing with this system, or cared enough about it to share a result. I would like to see a system like this that elegantly creates music from a drawing while giving the user more control.

Amy Friedman

14 Feb 2015

City Pulse

City Pulse – a biometric interface from Kalle Hübinger on Vimeo.

City Pulse is an installation created by Kalle Hübinger in January 2014 for Interior Design Week Köln 2014. It uses visitors’ pulse rhythms to inform the light and sound of the space; the biometric data is analyzed, stored, and displayed. The piece’s description states, “The user is part of an experimental interfaces at the edge between action/reaction, inside/outside and part of the overall pulse of the city”. I don’t know if I truly understand how this connection is made: the more people who participate, the larger the data set becomes, except that the installation only remembers the last three users and displays only the most recent, so I don’t understand the connection to the city as a whole. The installation uses MaxMSP, LEDs, and Arduinos to create the ambiance. It seems to create an immersive atmosphere, but I don’t know if it succeeds at encapsulating the pulse of the “city” itself. This reminds me of an installation I posted about in the past, “Qi Visualizer” by Yuan Yifan.
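To make the described behavior concrete, here is a rough outline in Python of what “remember the last three users and let the newest one drive the lights” might look like; the real piece runs on MaxMSP and Arduinos, and the BPM-to-color mapping is my assumption:

```python
from collections import deque

# Outline of the described behavior: keep the last three visitors' pulse
# data and drive the lights from the most recent one. The hardware details
# and the color mapping are stand-ins, not Hübinger's patch.
recent_users = deque(maxlen=3)  # the installation "remembers" three visitors

def bpm_to_color(bpm):
    # Assumed mapping: faster pulse shifts the light from blue toward red.
    t = min(max((bpm - 50) / 70.0, 0.0), 1.0)     # clamp 50-120 bpm to 0..1
    return (int(255 * t), 0, int(255 * (1 - t)))  # (R, G, B)

def on_new_user(beat_intervals_ms):
    bpm = 60000.0 / (sum(beat_intervals_ms) / len(beat_intervals_ms))
    recent_users.append(bpm)
    r, g, b = bpm_to_color(recent_users[-1])  # newest visitor drives the light
    # (r, g, b) would go to the LEDs, and bpm to the sound engine, over serial/OSC
    return (r, g, b)
```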

Humanthesizer

Calvin Harris – Humanthesizer by CInq7

Calvin Harris takes a new perspective on synthesizers by using human touch and foot contact to create music. Using MaxMSP, Arduinos, and body-safe conductive paint, Harris is able to program and perform his own music: human contact closes the circuits, which triggers the instruments. I think this installation succeeds in creating an interactive experience, but I don’t know if it is necessary to have half-naked dancers for the music to succeed. It was an interesting take, and it allows people to personalize their own music based on what they are programmed to produce when a circuit closes. It reminds me of the Piano Steps by The Fun Theory, but this creates music on people. It would be intriguing to build this into clothing, so that a group of people could orchestrate their own music and create their own rhythms through others.
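Electrically, the piece reduces to switches: skin plus conductive paint completes a circuit, a microcontroller notices the pin change, and a note fires. A rough sketch of what the host side might look like, with invented port names, wire format, and pad-to-note mapping (the real rig routes everything through MaxMSP):

```python
import serial  # pyserial
import mido    # MIDI output; needs a backend such as python-rtmidi

# Sketch of the host side of a paint-switch instrument: an Arduino reports
# which painted pad closed its circuit, and we fire a note per pad. The port
# name, the wire format ("pad 3 down"), and the pad-to-note map are all
# invented for illustration.
PAD_NOTES = {0: 36, 1: 38, 2: 42, 3: 46}  # pad index -> drum MIDI note

port = serial.Serial("/dev/ttyUSB0", 115200)
out = mido.open_output()

while True:
    line = port.readline().decode().strip()  # e.g. "pad 3 down"
    _, pad, state = line.split()
    note = PAD_NOTES[int(pad)]
    if state == "down":  # skin bridged the conductive paint: circuit closed
        out.send(mido.Message("note_on", note=note, velocity=100))
    else:                # contact broken
        out.send(mido.Message("note_off", note=note))
```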


Zack Aman

13 Feb 2015

D.O.R.T.H.E. by Lasse Munk and Søren Andreasen

D.O.R.T.H.E. takes input from a typewriter, analyzes the words, and uses both their easily quantifiable aspects (such as the number of letters in a word) and their more intangible aspects (sentiment analysis), then uses Max/MSP to route that input to noise-generating machines built from scrap electronics.
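As a rough illustration of the text-analysis half (the Max/MSP routing and the scrap-metal machines are the real magic), the word features might be extracted like this; the tiny sentiment lexicon and parameter names are mine, not theirs:

```python
# Toy version of D.O.R.T.H.E.'s text-to-parameters step. The real piece does
# this in Max/MSP; the sentiment lexicon and output mapping are invented.
POSITIVE = {"love", "joy", "bright", "warm"}
NEGATIVE = {"dark", "cold", "broken", "alone"}

def word_to_params(word):
    w = word.lower().strip(".,!?")
    length = len(w)                                # easily quantifiable aspect
    sentiment = (w in POSITIVE) - (w in NEGATIVE)  # crude -1/0/+1 "analysis"
    return {
        "machine": length % 4,               # which scrap machine gets the event
        "intensity": min(length / 10, 1.0),  # longer words hit harder
        "mood": sentiment,                   # could pick timbre: fan vs. plastic
    }

for word in "the broken fan sings alone".split():
    print(word, word_to_params(word))
```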

The part I find most inspiring about the project is the output; the use of actual electronics and mechanics gives the sound a much richer, more expressive tone. My favorite part is when the dangling plastic gets moved onto the fan for a quick percussive roll. There’s as much art in finding great sounds as there is in composition (harkening back to found-sound pieces starting in the early 1900s), and these guys nail the sound selection. It reminds me of a project I wanted to do: recording the sounds of 3D printers, composing music based on that output, and then printing physical representations of the music. It’s important to remember that generative form involves both input and output, and both sides are solid in D.O.R.T.H.E.


City Symphonies by Mark McKeague

This project uses a traffic simulation built in Processing, together with MaxMSP, to create generative music based on the connections and placements of different cars.

The concept of this piece is great: cars are an interesting, dynamic spatial data set, ripe for audio synthesis. Where I think the project falls down is on the output side; the sounds themselves are uninteresting and the relation between input and output is unclear. The simple sine-wave form of the synthesized output is clean (too clean), a pure signal that harkens back to dial-up modems. It gives the impression of music by computers, for computers; if that is the intention, it would be great if the sound could hold enough information for the visual to be reconstructed from the sound alone.
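That pure-sine quality is easy to reproduce. Here is a sketch of the kind of mapping I imagine, with car positions driving oscillator frequencies; McKeague’s actual Processing/MaxMSP mapping isn’t documented, so this is only a guess:

```python
import numpy as np
from scipy.io import wavfile

# Sketch of position-to-sine-tone synthesis like the sound in City Symphonies.
# The mapping (x-position -> frequency) is my guess, not McKeague's patch.
SR = 44100
duration = 2.0
t = np.linspace(0, duration, int(SR * duration), endpoint=False)

# Fake "cars": x positions in 0..1 along a road segment.
car_positions = [0.12, 0.35, 0.58, 0.81]

mix = np.zeros_like(t)
for x in car_positions:
    freq = 220 + x * 660  # map position along the road to 220-880 Hz
    mix += np.sin(2 * np.pi * freq * t)

mix /= len(car_positions)  # normalize so the sum doesn't clip
wavfile.write("traffic.wav", SR, (mix * 32767).astype(np.int16))
```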

Making music from transit has a long history, such as Pierre Schaeffer’s “Étude aux chemins de fer,” but in each case the musical output should be something interesting. Schaeffer was revolutionary for suggesting that the everyday sounds of trains could be art. McKeague attempts to find beauty in traffic patterns, but he neither finds beautiful patterns nor translates their complexity into intriguing sounds.

dantasse

13 Feb 2015

Machine Learning in the Arts

Computers Watching Movies by Benjamin Grosser takes computer vision algorithms and applies them to movies. At first it’s just kind of cute, but on closer look there’s something really interesting there. For example, compare the Matrix and Inception clips with Annie Hall and American Beauty: the traces for The Matrix and Inception cover more of the screen; they look more “epic,” maybe. Maybe before watching a movie, you could watch the computer watching the movie. Or maybe there’s something you can tell about people’s movie tastes from the CV outputs of what they watch. (I know Netflix would love to know about that.)
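Grosser hasn’t published his exact method, but interest-point detection gives the flavor: plot only the points the algorithm attends to, on an otherwise black canvas. A sketch using Shi-Tomasi corners as a stand-in:

```python
import cv2
import numpy as np

# Sketch of "watching" a movie with computer vision: draw only the points an
# interest-point detector attends to in each frame. Grosser's actual algorithm
# isn't published; Shi-Tomasi corner detection is a stand-in.
cap = cv2.VideoCapture("movie.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01, minDistance=10)
    canvas = np.zeros_like(frame)  # black canvas: show only where the machine "looks"
    if pts is not None:
        for p in pts.reshape(-1, 2):
            cv2.circle(canvas, (int(p[0]), int(p[1])), 3, (255, 255, 255), -1)
    cv2.imshow("computer watching", canvas)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
```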

Genetic Algorithm Walkers by Rafael Matsunaga.

Genetic algorithms are a lot of fun, but it’s particularly fun when you apply them to something silly like humanoid walking. The demo shows the algorithm in action in a way that’s pretty easy to understand, and it maps the variables to easily remembered names. It’s a neat way to show evolution and to teach how genetic algorithms can come up with something that works pretty well for difficult tasks like walking with joints.
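The loop behind walkers like these is surprisingly short. A toy version with a stand-in fitness function (the real demo scores distance walked in a physics simulation):

```python
import random

# Toy genetic algorithm in the shape of the walker demo. Real walkers score
# "distance traveled" in a physics engine; fitness here is a stand-in so the
# skeleton stays self-contained.
GENES, POP, GENERATIONS = 8, 50, 100

def fitness(genome):
    # Stand-in: prefer genomes summing to 4 (imagine: joint torques that walk).
    return -abs(sum(genome) - 4.0)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.3) if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 5]  # keep the top 20% as breeders
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
print("best:", max(pop, key=fitness))
```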


Ron

13 Feb 2015

DJ Light Peru

This Max MSP-based sound and light installation lets users serve as DJs, controlling an audio and visual performance over a large physical area. Using a thermal camera to detect movement, the project translates live arm movements and hand gestures into sound and light color. I think the concept is well done; the light is not only influenced by the user’s gestures but is also generated from the audio feed. To me, though, the sound the system generates seems too cacophonous and dissonant; it clashes with the lighting’s soft colors. The installation was part of a Christmas celebration in Lima, Peru, and I expected the effect to be more celebratory and harmonious.
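The detection itself can be as plain as frame differencing: subtract consecutive frames, threshold, and take the centroid of whatever moved. A rough sketch with a regular webcam standing in for the thermal camera, and invented sound/light mappings:

```python
import cv2
import numpy as np

# Rough sketch of gesture input via frame differencing, in the spirit of the
# installation's thermal tracking. The camera and the mappings below are
# stand-ins; the real piece routes this into MaxMSP for sound and light.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)  # where did anything move since last frame?
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(motion)
    if len(xs) > 0:
        cx = xs.mean() / gray.shape[1]  # normalized motion centroid
        cy = ys.mean() / gray.shape[0]
        pitch = 200 + cx * 800  # horizontal arm position -> pitch (Hz)
        hue = int(cy * 179)     # vertical position -> light hue
        # pitch and hue would go out over OSC/serial to sound and lights
    prev = gray
```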

LOVE – Interactive 3D Mapping

A monochromatic version of Robert Indiana’s LOVE sculpture stands at the Pratt Institute in Brooklyn, NY. At the time of this Max MSP-based installation, the sculpture wasn’t well lit at night, which led the artist to add an interactive element. Looped videos were projected three-dimensionally onto the sculpture, and the colors shifted based on proximity sensors that detected a person’s position as she walked by. The implementation brings a certain liveliness to an existing monochromatic sculpture without physically altering it, and gives passers-by an element of surprise. From the video, it was a little difficult to tell how a user’s position mapped to the shifting colors; I think a more obvious mapping would make it more engaging. I also felt the projection’s movements could be slowed down to seem less busy and distracting.