Category Archives: LO-5

rlciavar

12 Feb 2015

This week I’m going to focus on Max/MSP-based projects.

AudioDome SoundBlox is an interactive sequencer installation. Different sound inputs are controlled by flipping and positioning large blocks. The blocks are tracked from above, and sounds are played back based on which image tracker is exposed.

I found this project most interesting because of its scale. There is a history of sound-generating projects using image tracking or object positioning. However, these projects usually remain tabletop pieces and retain many of the characteristics of classic digital sound manipulation tools. The SoundBlox are interesting because the blocks themselves begin to take on anthropomorphic qualities, owing to their sound and the positioning of the speaker inside each block. Their position in relation to a person experiencing the installation becomes relevant, and the method of sound manipulation feels more connected to the sound itself.

I wish that the image trackers were more indicative of the type of sound they produced. It often became difficult to tell which block was responsible for which sounds, or how they would change. It could also be interesting if the shape of the box itself reflected this. Maybe the boxes aren’t box-shaped at all.

NOISY JELLY from Raphaël Pluvinage on Vimeo.

Noisy Jelly is a fun, silly project that uses capacitive touch sensing in molded, colorful jello shapes to generate sounds through a Max patch. I think the most interesting moments with this project are when the shapes create an unexpected sound in relation to their form, and when people begin to experiment with the shapes in new ways — particularly when the shapes are broken to create new shapes, or stacked on top of each other to activate multiple sounds at once.

sejalpopat

12 Feb 2015

For this looking outwards I mainly looked at papers that related to extracting patterns from 2d visuals.

Pattern Recognition Using Genetic Algorithms
In this paper the author recounts his approach to designing “creatures” in a genetic algorithm, and how they perform at recognizing patterns in 2D visuals. I thought this was interesting because the author emphasizes drawing from existing visual systems in animals and refers to that in his design. One problem with this paper is that it reads like a journal entry about ideas that may be more fully explained later but are not quite fleshed out yet; given this, it was hard to follow some of the paragraphs that trail off into different possible explanations for the observed results.

A Language for Representing and Extracting 3D Semantics from Paper-Based Sketches
I liked this paper a lot more because the application of the research was clear; I think it’s really interesting to think of pattern recognition in terms of recognizing parts of a 3D geometry and not just the repetition of 2D patterns, as in the previous paper. This paper also appealed to me because I find the idea of paper-based programming, and of languages that are spatially organized rather than linear, super fascinating. The goal of the paper is to allow sketching, in conjunction with annotations that define operations (i.e. “extrude”, “sweep”, “revolve”), to result in 3D forms.

mileshiroo

12 Feb 2015

Caffe / ofxCaffe

Caffe is an open source deep learning framework Yangqing Jia developed during his PhD at UC Berkeley. The framework can be used for image classification, and a demo on the site lets you submit images to the system and get words back in return. I submitted an illustration of a man to the service and got back the words “consumer goods, commodity, clothing, covering, garment.” Since I don’t know a lot about machine learning or neural networks, it’s difficult for me to understand exactly what this framework is; I just have to read more at this point. The site is comprehensive and includes links to tutorials, examples, and other documentation. Parag Mital made a wrapper for this library called ofxCaffe, which he describes as follows on the GitHub page: “openFrameworks addon for visualizing and interfacing with pre-trained models in Caffe.” I’d like to try to use this library in a future project, but I have to read up first.

“NSA-Tapped Fiber Optic Cable Landing Site, 
Mastic Beach, New York, United States” by Trevor Paglen

“NSA-Tapped Fiber Optic Cable Landing Site, 
Mastic Beach, New York, United States” is an interactive diptych by artist and geographer Trevor Paglen, included in the Data Issue of Dis Magazine. One interacts with the diptych using the Google Maps interface, which is a smart UI choice and an ironic gesture in light of the subject matter. The left side of the diptych features an image of Mastic Beach, one of several NSA-tapped fiber-optic cable landing sites in the US. On the right side is a collage of images and documents relating to the site — gathered from the Snowden archive and other sources — with annotations that appear when you mouse over them. The base document is a map used for marine navigation, which indicates the location of undersea cables. Paglen’s diptych avoids the abstract metaphors of mass surveillance, and instead draws from the methodologies of experimental geography. I appreciate this work’s emphasis on the physical sites and infrastructure of surveillance, and its clear presentation of multiple layers of a complex subject.

Yeliz Karadayi

12 Feb 2015

Twitter Bot: “The Sorting Hat Bot” by Darius Kazemi. 2015

sortinghat

The clever thing about this bot is that it takes a popular character that everyone wishes they could interact with, and allows them to interact with it. Everyone wants to know which Hogwarts house they belong in, and that’s what makes this bot so engaging. Throw in the rhyming and it’s a home run. The only problem I have with this is that after a while of looking at posts, I start to see some bad rhymes or repeated rhymes. It could have been smarter, but who is going to put in the effort to do that, honestly? This was good enough to make it a huge hit.


“EMERGING FACADE – swarm-designed structure in Grasshopper” by Jan Pernecky. 2015

EMERGING FACADE – swarm-designed structure in Grasshopper from Novedge on Vimeo.

You know what’s insane? I posted my swarm jewelry… February 10th? And this video was posted around the same time. Great minds think alike, I suppose… Jump to exactly 1:02:00 to see what I’m talking about. It’s EXACTLY the same as what I made, except he rendered it better. I have no words. Well, I do have words. Mine was a necklace, and his is a ring, I think. Not that that makes a difference. Yeah, no, I really have no words.

ST

12 Feb 2015

My Looking Outwards this week is about narrative and the unique timeline techniques that computationally delivered stories can employ.

The first is Taboo, created in 2008 by Carmen Olmo-Terrasa. The work consists of web pages of ASCII art. The imagery is drawn from religion and sexual fetish.

lo2

Each image has several hyperlinks embedded that take the viewer to a new page and a new image. It reminds me of interactive fiction, in that there is an ending, and a point at which the narrative must start over. This point is denoted by this awesome page:

lo1

The project is mostly in Spanish, so I wasn’t able to get the whole sense of the narrative. However, I did enjoy the relationships between the text that I could understand and the imagery. This relationship was even more interesting because the image was made of text.

 

The next project is Short Story by Jon Thomson and Alison Craighead.

This story was arranged into seven steps, each with two distinct options. Clicking the image changed the option, and clicking the text transported you to the next step. So, depending on how many options you toggled along the way, a single reading could encounter anywhere from 7 to 14 of the texts! It was also looping, so besides the enumeration, there was no clear beginning and end.
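The interaction described above can be modeled in a few lines of code. This is only an illustrative sketch of the structure — not the artists’ actual implementation — with placeholder text standing in for the piece’s real content:

```python
# A toy model of Thomson & Craighead's "Short Story" navigation:
# seven steps, each with two alternate texts. Clicking the image
# toggles which text is shown; clicking the text advances to the
# next step, wrapping around because the piece loops.
# (Resetting to option A on each new step is an assumption.)

STEPS = [("step %d, option A" % i, "step %d, option B" % i)
         for i in range(1, 8)]

class ShortStory:
    def __init__(self):
        self.step = 0    # which of the seven steps is showing
        self.option = 0  # which of the step's two texts is showing

    def click_image(self):
        """Toggle between the current step's two texts."""
        self.option = 1 - self.option

    def click_text(self):
        """Advance to the next step, looping after the last one."""
        self.step = (self.step + 1) % len(STEPS)
        self.option = 0

    def current(self):
        """Return the text currently on screen."""
        return STEPS[self.step][self.option]
```

A reader who never clicks an image sees seven texts per loop; one who toggles every step sees fourteen, which is where the 7-to-14 range comes from.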

lo4 lo3

The story was fairly interesting. I found some steps more intriguing than others, especially enjoying the ones that featured dialogue transcripts or described the image they were paired with.