Looking Outwards: Final Project

(apologies for posting this so late)

For the past year or so, I’ve been very interested in surveillance conducted by machines using hidden/mysterious/proprietary algorithms and giant databases. These three projects are very relevant to this idea and gave me a lot of inspiration for my final ‘panopticon’ project.


“Data Masks” by Sterling Crispin, 2013-present

“Sterling Crispin’s “Data Masks” use raw data to show how technology perceives humanity…Reverse-engineered from surveillance face-recognition algorithms and then fed through Facebook’s face-detection software, the Data Masks confront viewers with the realization that they’re being seen and watched basically all the time.”


“Stranger Visions” by Heather Dewey-Hagborg, 2012-2013

“In “Stranger Visions”, Heather Dewey-Hagborg analyses DNA from found cigarette butts, chewed gum and stray hairs to generate portraits of each subject based on their genetic data…While not so exact as to readily identify an individual, the portraits demonstrate the disquieting amount of information that can be derived from a single strand of a stranger’s hair and the disturbing potential for surveillance of our most personal information.”


“CAPTCHA Tweet” by Shin Seung Back and Kim Yong Hun, 2013

“‘CAPTCHA Tweet’ is an application that users can post tweets as CAPTCHA. Since computers can hardly read it, humans can communicate behind their sight.”

Final Project Sketch: Panopticon


Here are some links that gave me the idea for this project:

An article about the history of privacy and surveillance and the NSA

Angels with a bunch of faces

Art about modern surveillance

Art about being watched by bots and algorithms

The Panopticon is an installation that simulates a digital creature whose body is a hundred-faced polyhedron. It steals the faces of those who view it (using depth and color data from a Kinect) and uses them as tools of surveillance, acting as the center of a giant spherical panopticon. The database of faces is never purged, and you cannot remove yourself from the sphere once you are a part of it.
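
A minimal sketch of the face-stealing step, assuming a plain webcam stands in for the Kinect's color camera (the depth channel is omitted) and OpenCV's stock frontal-face detector stands in for whatever the installation would actually use; the database directory name is made up:

```python
import os
import time

import cv2

FACE_DB = "panopticon_faces"  # hypothetical, never-purged database directory
os.makedirs(FACE_DB, exist_ok=True)

# Stock Haar cascade that ships with OpenCV for frontal-face detection
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Crop each detected face and file it away; nothing is ever deleted.
        face = frame[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(FACE_DB, f"{time.time():.3f}.png"), face)
    cv2.imshow("panopticon", frame)
    if cv2.waitKey(100) == 27:  # Esc stops the watching; the database remains
        break
cam.release()
cv2.destroyAllWindows()
```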

With this project, I want to take the unseen and secret processes of modern surveillance and manifest them as a mythical digital object/creature.

I also want to explore the idea of “surveillance by identity” with this project. Today, surveillance is not just conducted by physical cameras/microphones, but also by the mass collection of data and the use of algorithms to parse the identity of people in the database and analyze them for threats.

LO & Final Project

I’ve recently been interested in the text-based gaming revival led by Porpentine, who works in Twine, a platform where anyone can create their own text-based video game (some examples: http://aliendovecote.com/intfic.html). Her games sit on the border between immersive story/plot and contemporary poetic elements.


After thinking about this, I was inspired to create a sort of facial simplification via prose: software that would recognize elements of your face (eyebrow level, face structure, expression, facial direction, colour of the shirt below the face) and then give the participant a small snippet of prose based on these elements.
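
A minimal sketch of the translation layer only, assuming the facial measurements come from some upstream face tracker; every name, threshold, and line of prose here is hypothetical:

```python
def face_to_prose(brow_raise, smile, yaw, shirt_rgb):
    """Compose a prose snippet from coarse facial features.

    brow_raise, smile: 0.0-1.0 scores from the tracker (assumed)
    yaw: head rotation in degrees, negative = looking left (assumed)
    shirt_rgb: average color sampled just below the face box
    """
    lines = []
    lines.append("your brows lift toward something unsaid"
                 if brow_raise > 0.5 else "your brows settle, unconvinced")
    lines.append("a smile arrives before you do"
                 if smile > 0.5 else "your mouth keeps its own counsel")
    if abs(yaw) > 15:
        lines.append("you are already looking somewhere else")
    r, g, b = shirt_rgb
    lines.append("dressed in a warm color, like an ending"
                 if r > b else "dressed in a cool color, like a beginning")
    return ", ".join(lines) + "."

print(face_to_prose(brow_raise=0.7, smile=0.2, yaw=-20, shirt_rgb=(40, 60, 180)))
```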

Looking Outwards: Final Project

Audience – rAndom International with Chris O’Shea

Audience consists of a set of robotic mirrors that orient themselves to face an individual who steps near them. I really enjoy how much character these machines have. Seeing them move with and without a target makes them seem easy to interact with, and audience members clearly share this sentiment. Their semi-random arrangement, coupled with their low placement, helps put the viewer in an imaginative space.

I would say that the mirrors themselves seem too small and disparate to be engaging, and that their movement can be slightly uncanny at times.

Pulse Machine – Alicia Eggert & Alexander Reben

Pulse Machine is an artwork with a lifespan. It consists of a drum that kicks at 60 bpm and a counter that started with enough beats for the piece to ‘live’ for 78 years. Each beat decrements the counter by one, and the drum stops when the counter hits 0.
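
As a quick sanity check on the scale of the piece, 60 bpm sustained for 78 years works out to roughly 2.46 billion beats (assuming a naive 365.25-day year; the artists' actual starting count may differ):

```python
# One beat per second, running continuously for 78 years.
beats_per_year = 60 * 60 * 24 * 365.25   # 31,557,600 beats
print(int(beats_per_year * 78))          # 2,461,492,800 beats
```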


This is a strange insight into humanity, or at least what makes a life. It’s a simple comment on cause and effect in existence, playing with inevitability, and, to an extent, fate.

I think that the drum itself may be a slightly jarring element.

Collected Works – Zimoun

Zimoun works around a central theme of creating physical and audio spaces out of simple forms and machines on a huge scale. There is a simple power to these pieces, and they speak well in relation to each other as a series. I also find the element of directed randomness in these pieces almost meditative. Seeing the machines stop is also powerful.

I do have to say that I would like to see more variance in the work.

Skylines III: Point Cloud City – Patricio Gonzalez Vivo

Skylines is a series of 3D renderings of the landscapes of major cities, and this video is a flythrough of one of them. I think this way of representing a city street is wonderful, and the scale and approach of the camera's view make the scene seem more imaginative, almost magical. It's an excellent example of environment building.

I would really like to see something done with this technology rather than a simple demonstration of a city.

Sketch for final

I know that I would like to use passive audio recordings in my final project, but I'm not exactly sure what I will do with them. My current best idea seeks to primitively figure out when “interesting audio” is being made in the environment my object is placed in. When it believes interesting things are happening, it sends the audio to Mechanical Turk to be evaluated by workers, who decide the “tone” of the audio: for example, the Turkers respond with an “emotion” that corresponds to the clip. That emotion is then parsed into some kind of visual or audio response that can be recorded throughout the day.
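
A minimal sketch of the “interesting audio” gate, assuming plain RMS loudness stands in for interestingness; the threshold, sample rate, and filenames are hypothetical, and the Mechanical Turk round-trip is omitted:

```python
import time

import numpy as np
import sounddevice as sd
import soundfile as sf

RATE = 16000          # sample rate in Hz
CHUNK_SECONDS = 5     # listen in five-second windows
RMS_THRESHOLD = 0.02  # hypothetical; tune to the room's noise floor

while True:
    # Record one chunk from the default microphone.
    chunk = sd.rec(int(RATE * CHUNK_SECONDS), samplerate=RATE,
                   channels=1, dtype="float32")
    sd.wait()
    # RMS loudness as a crude "is anything happening?" measure.
    rms = float(np.sqrt(np.mean(chunk ** 2)))
    if rms > RMS_THRESHOLD:
        # Loud enough to be interesting: save the clip for human labeling.
        sf.write(f"interesting_{int(time.time())}.wav", chunk, RATE)
        print(f"saved clip (rms={rms:.4f})")
```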

Here’s an odd sketch.


Project Ideas

Ghost Narrative

Using a fog machine, projectors, and lenses, I hope to make an animated light sculpture in which I will narrate a series of events from my past using spirit-like figures.


Window into another world

In this piece, I will create an interactive scene display. I will use a Kinect or other similar device to track the user's movement so that the user appears to be looking through a window rather than at a rear-projection screen, monitor, or TV. The projected scene would change over time in order to narrate a small story.
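
A minimal sketch of the head-coupled math behind the window illusion, assuming the Kinect reports the viewer's head position in meters relative to the center of the screen (all names here are hypothetical):

```python
def window_parallax(head_x, head_y, head_z, object_depth):
    """Where to draw an object centered behind the 'window' (screen plane).

    head_x, head_y: viewer's head offset from screen center (m)
    head_z: viewer's distance from the screen (m)
    object_depth: how far behind the screen plane the object sits (m)
    Returns the on-screen offset (m) at which to draw the object.
    """
    scale = object_depth / (head_z + object_depth)
    return head_x * scale, head_y * scale

# Viewer steps 0.3 m right at 1.5 m from the screen: a tree 4 m behind
# the window is drawn about 0.22 m right of center, tracking the viewer.
# Objects on the glass itself (depth 0) never move, while objects at
# infinity track the head exactly; that difference is the parallax that
# sells the illusion of looking through a window.
print(window_parallax(0.3, 0.0, 1.5, 4.0))
```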


Looking Outward Final Project

“Nomis is a musical instrument created with the aim of making loop based music more expressive and transparent through gesture and light.” For this project, the artist, Jonathan Sparks, uses Max/MSP to let a viewer become a music creator by using their hands to activate certain positions on a circle and on two vertical columns.

“Voice Lessons is an electronic, audio device that interrogates the popular myth that every musical instrument imitates the human voice. Touching the screen allows the participant to manipulate the visuals and vocalizations of the “voice teacher” as he recites vocal warm up exercises.” Even though we have already viewed this piece in class, it is important to my final project because it engages the viewer's hand movements to control different aspects of the video's sound, including speed, pitch, and vowel sound. However, it is less successful for me because the viewer is forced to engage with a screen and does not have quite as free a range of motion as I would like my piece to have.

“Move is a technology garment designed by Electricfoxy that guides you toward optimal performance and precision in movement in an ambient, precise, and beautiful way.” I chose this piece because I enjoy that it is a wearable object, similar to the glove I plan on using, that coordinates with the performer's movements and visually displays them. Through these displays, the performer is also able to see the improvements the clothes calculate they could make in order to add precision to their form.

All of these projects are interesting to me because they actively engage the viewer and force a gesture or movement of the body in order to create a sound. This translation from a tactile sense to an auditory/visual sense is a theme I am looking for in my final project.


Looking Outwards Final Project

For my final project, I'm not exactly sure what I want to do; I only know that I want to work with sound. Without getting my expectations too high, I'd like to attempt to create a work that reacts to the conversations and interactions in a room. The projects I have selected each react to information presented to them, some with sound and some with visuals.

Conversnitch – Conversnitch, by Brian House and Kyle McDonald, is a device that listens to conversations around it, secretly uploads them to Mechanical Turk to be transcribed, and then tweets to the world what was supposed to be a private conversation. The integration of turking into this project is extremely interesting because it is very difficult for computers to transcribe audio themselves. Integrating a “silent human” element into the work is extremely powerful because it makes the process still seem automated, even though a majority of the difficult work is done by humans.


Descriptive Camera – Descriptive Camera by Matt Richardson is a device that snaps an image of an area and, rather than outputting that image, “develops” it into a description of the scene in words. This project also uses Mechanical Turk to transcribe information, but what's most interesting about it is that it changes what we expect: when a photo is taken, we as digital individuals expect a lasting snapshot, and when we are returned a description we are both jarred and freed. Freed in the sense that we can now use this information as we wish.

Giver Of Names – Giver of Names by David Rokeby literally gives names to objects that are placed in its view. What intrigues me about this piece is that the computer is actively attempting to describe what's in front of it with a name. It immediately responds to information presented to it and then lets the participants in the room know its interpretation. I hope to achieve this in my final project.


Final Ideas


Both ideas utilize the same mechanism setup, diagrammed in the sketch above.

Sound and Body

I got this idea when we were first learning how to use the sound programming software Max/MSP. I wanted to control the pitch and tone of a sound with the movement of my body: I want the sound to change when I bend my fingers and tilt my hand. This would require a flex sensor and a tilt sensor from Adafruit. For the sound piece, I would use one flex sensor to indicate volume and the tilt sensor to indicate pitch. As one turns their palm upward, the pitch lowers, and as the palm turns downward, the pitch becomes higher. And as the pointer finger flexes, the volume lowers; as the finger relaxes, the sound becomes louder.
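
A minimal sketch of that mapping, assuming the two sensor readings arrive already scaled to the 0.0–1.0 range (the names, pitch range, and scaling are hypothetical, and the Arduino serial plumbing is omitted):

```python
def hand_to_sound(flex, tilt):
    """Map hand pose to (volume, pitch in Hz).

    flex: 0.0 = pointer finger relaxed, 1.0 = fully bent
    tilt: 0.0 = palm down, 1.0 = palm up
    """
    volume = 1.0 - flex                      # bending the finger lowers volume
    pitch = 880.0 - tilt * (880.0 - 220.0)   # palm up -> lower pitch
    return volume, pitch

print(hand_to_sound(flex=0.25, tilt=0.8))  # mostly relaxed finger, palm up
```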

Color and Body

This idea is similar to the sound one, except it would use Processing to create interesting visuals. It would require three flex sensors: one to control the red value, one the green value, and one the blue value. The tilt sensor would then be used to control the tone. As the fingers flex, the RGB values would decrease, and as they relax, the values would increase; a clenched fist would create black, and a relaxed hand would create white. I came up with these ideas because I am interested in creating an intimacy with the viewer and forcing them to move, thus making the viewer more aware of their control over their own physical actions.
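
A minimal sketch of the flex-to-color mapping, under the same hypothetical 0.0–1.0 sensor scaling; in Processing, the resulting values would feed straight into fill() or background():

```python
def hand_to_rgb(flex_r, flex_g, flex_b):
    """Relaxed fingers (0.0) give white; a fully clenched fist gives black."""
    def channel(f):
        return int(round((1.0 - f) * 255))
    return channel(flex_r), channel(flex_g), channel(flex_b)

print(hand_to_rgb(0.0, 0.0, 0.0))  # open hand -> (255, 255, 255)
print(hand_to_rgb(1.0, 1.0, 1.0))  # clenched  -> (0, 0, 0)
```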