
Final Project Sketches

There is a lot of great pseudoscience around what one’s body resistance or frequency can reveal or do, and how it can be changed. Some examples include:

http://www.templeofwellness.com/rife.html

http://www.lermanet.com/e-metershort.htm

http://www.highfrequencyhealing.org/

Like my automaton, I wanted to work with moiré patterns again. I simply would like to make a visualization of the participant’s body resistance; that’s all I have right now, and it’ll get more flesh as I work on it. Here’s a prototype of what the sketch will generate:

[Prototype screenshot]

The color is generated by taking the average color of the participant from the Kinect’s camera. I’m also using the Kinect so the piece can stand by itself as an installation, and will only be activated when someone is standing riiight in front of it. While the person stands in front of it, the Kinect takes in information about that person. After the person walks away (as detected by the Kinect), it generates the picture using all the information it gathered and posts it to a Tumblr blog, with a number so the participant can see which picture represents them.
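As a minimal sketch of the color-averaging step, the Processing code below sums the pixels of the incoming frame. It uses Processing’s built-in Capture class as a stand-in for the Kinect’s color feed, since the averaging itself is the same whichever camera supplies the image.

```java
// Processing: show the camera feed and a swatch of its average color.
// Capture stands in for the Kinect's color image; swap in your Kinect
// library's frame wherever cam is used.
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  fill(averageColor(cam));
  rect(0, 0, 80, 80);  // the participant's average color
}

color averageColor(PImage img) {
  img.loadPixels();
  int n = img.pixels.length;
  if (n == 0) return color(0);
  float r = 0, g = 0, b = 0;
  for (int i = 0; i < n; i++) {
    color c = img.pixels[i];
    r += red(c);
    g += green(c);
    b += blue(c);
  }
  return color(r / n, g / n, b / n);
}
```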


Final Project Sketch: Panopticon


Here are some links that gave me the idea for this thing:

An article about the history of privacy and surveillance and the NSA

Angels with a bunch of faces

Art about modern surveillance

Art about being watched by bots and algorithms

The Panopticon is an installation that simulates a digital creature whose body is a hundred-faceted polyhedron of faces. It steals the faces of those who view it (using depth and color data from a Kinect) and uses them as tools of surveillance, acting as the center of a giant spherical panopticon. The database of faces is never purged, and you cannot remove yourself from the sphere once you are a part of it.
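A hedged Processing sketch of the geometry: it distributes face images evenly over a sphere with a Fibonacci-sphere layout and draws each as a small textured quad. The placeholder tiles stand in for Kinect-captured face crops, and true billboarding is left as a refinement.

```java
// Processing: a growing collection of faces arranged on a sphere.
// Placeholder tiles stand in for face crops captured from the Kinect.
ArrayList<PImage> faces = new ArrayList<PImage>();

void setup() {
  size(800, 800, P3D);
  noStroke();
  for (int i = 0; i < 100; i++) faces.add(placeholderFace());
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  rotateY(frameCount * 0.005);
  int n = faces.size();
  for (int i = 0; i < n; i++) {
    // Fibonacci-sphere layout: spreads n points evenly on a sphere
    float y = 1 - 2 * i / float(n - 1);      // -1 .. 1
    float rad = sqrt(1 - y * y);
    float theta = i * 2.39996;               // golden angle in radians
    pushMatrix();
    translate(250 * rad * cos(theta), 250 * y, 250 * rad * sin(theta));
    PImage f = faces.get(i);
    beginShape(QUADS);                       // one flat textured facet
    texture(f);
    vertex(-20, -20, 0, 0, 0);
    vertex( 20, -20, 0, f.width, 0);
    vertex( 20,  20, 0, f.width, f.height);
    vertex(-20,  20, 0, 0, f.height);
    endShape();
    popMatrix();
  }
}

PImage placeholderFace() {
  PImage img = createImage(32, 32, RGB);
  img.loadPixels();
  color c = color(random(255), random(255), random(255));
  for (int i = 0; i < img.pixels.length; i++) img.pixels[i] = c;
  img.updatePixels();
  return img;
}
```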

With this project, I want to take the unseen and secret processes of modern surveillance and manifest them as a mythical digital object/creature.

I also want to explore the idea of “surveillance by identity” with this project. Today, surveillance is not just conducted by physical cameras/microphones, but also by the mass collection of data and the use of algorithms to parse the identity of people in the database and analyze them for threats.

LO & Final Project

I’ve recently been interested in the text-based gaming revival led by Porpentine, who works in a platform called Twine that lets anyone create their own text-based video game (some examples: http://aliendovecote.com/intfic.html). Her games sit on the border between immersive story/plot and contemporary poetic elements.


After thinking about this, I was inspired to create a sort of facial simplification via prose: software that would recognize elements of your face (level of eyebrows, face structure, expression, facial direction, colour of the shirt below the face). The data would then give the participant a small snippet of prose based on these elements.
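One way to prototype this in Processing is with Kyle McDonald’s FaceOSC, which broadcasts face metrics over OSC (port 8338 by default), received here with the oscP5 library. The thresholds and snippets below are placeholders to tune by hand, not part of the plan itself.

```java
// Processing + oscP5: map FaceOSC metrics to a snippet of prose.
// Thresholds and the snippets themselves are placeholders.
import oscP5.*;

OscP5 osc;
boolean found = false;
float browLeft = 0, mouthHeight = 0;

void setup() {
  size(600, 200);
  textSize(18);
  osc = new OscP5(this, 8338);  // FaceOSC's default port
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/found")) found = m.get(0).intValue() == 1;
  else if (m.checkAddrPattern("/gesture/eyebrow/left")) browLeft = m.get(0).floatValue();
  else if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
}

void draw() {
  background(255);
  fill(0);
  text(snippet(), 20, height / 2);
}

String snippet() {
  if (!found) return "no face, no story";
  if (browLeft > 8.5) return "your brows lift like a question you won't ask";
  if (mouthHeight > 3) return "something half-said hangs in your open mouth";
  return "your face is a closed door, politely";
}
```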

Sketch for final

I know that I would like to use passive audio recordings in my final project, but I’m not exactly sure what I will do with them. My current best idea seeks to primitively figure out when “interesting audio” is being made in the environment my object is placed in. When it believes interesting things are happening, it sends the audio to be evaluated on Mechanical Turk, where workers decide what the “tone” of the audio is: the Turkers respond with an “emotion” that corresponds to the audio. That emotion is then parsed into some kind of visual or audio response, which can be recorded throughout the day.
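A crude first pass at the detector, assuming “interesting” simply means louder than a moving baseline: the Processing sketch below (using the Minim library) tracks ambient level and flags spikes. The flagged moments are where a clip would be saved and queued for the Turkers.

```java
// Processing + Minim: flag moments louder than a slow-moving baseline
// of ambient level. Flagged moments are where a clip would be saved
// (e.g. with minim.createRecorder) and queued for Mechanical Turk.
import ddf.minim.*;

Minim minim;
AudioInput in;
float baseline = 0;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  in = minim.getLineIn();
}

void draw() {
  float level = in.mix.level();
  baseline = lerp(baseline, level, 0.01);  // slow average of ambient loudness
  boolean interesting = level > baseline * 3 && level > 0.05;
  background(interesting ? color(255, 60, 60) : color(30));
  if (interesting) {
    println("interesting audio at " + millis() + " ms, level " + level);
  }
}
```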

Here’s an odd sketch.

[Sketch]

Project Ideas

Ghost Narrative

Using a fog machine, projectors, and lenses, I hope to make an animated light sculpture in which I will narrate a series of events from my past using spirit-like figures.


Window into another world

For this I will create an interactive scene display. I will use a Kinect or other similar device to track the user’s movement, making it appear that the user is looking through a window rather than at a rear-projection screen, monitor, or TV. The projected scene would change over time in order to narrate a small story.
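The window illusion comes from moving the virtual camera with the viewer’s head. The Processing sketch below approximates it with a simple camera shift (a proper implementation would use an off-axis projection), with the mouse standing in for the Kinect’s head position.

```java
// Processing: the virtual camera follows the (stand-in) head position,
// so the screen reads as a window onto a deeper scene. A full version
// would use an off-axis projection and Kinect head tracking.
void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(20);
  // the mouse stands in for the tracked head; map it to an eye offset
  float eyeX = map(mouseX, 0, width, -200, 200);
  float eyeY = map(mouseY, 0, height, -150, 150);
  camera(eyeX, eyeY, 600,   // eye moves with the viewer's head
         0, 0, 0,           // always look through the window's center
         0, 1, 0);
  lights();
  // boxes at different depths sell the parallax
  for (int i = 0; i < 5; i++) {
    pushMatrix();
    translate(sin(i * 2.1) * 150, cos(i * 1.7) * 100, -i * 150);
    box(60);
    popMatrix();
  }
}
```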


Final Ideas

[Mechanism diagram]

Both ideas utilize the same mechanism setup, diagrammed above.

Sound and Body

I got this idea when we were first learning how to use the sound programming software Max. I wanted to control the pitch and tone of a sound with the movement of my body: I want the sound to change when I bend my fingers and tilt my hand. For this I would need a flex sensor and a tilt sensor from Adafruit. For the sound piece, I would use one flex sensor to control volume and the tilt sensor to control pitch. As one turns their palm upward, the pitch would lower, and as the palm turns downward, the pitch would rise. As the pointer finger flexes, the volume would lower, and as the finger relaxes, the sound would become louder.
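The same mapping can be sketched outside Max, too; here it is in Processing using Minim’s oscillator, with the mouse standing in for the flex and tilt sensors (the real values would arrive over serial from the microcontroller).

```java
// Processing + Minim: one oscillator, volume and pitch mapped the way
// the sensors would drive them. Mouse Y stands in for the flex sensor,
// mouse X for the tilt sensor.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil wave;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  out = minim.getLineOut();
  wave = new Oscil(440, 0.5, Waves.SINE);
  wave.patch(out);
}

void draw() {
  background(0);
  // flexing the finger (mouse down the screen) lowers the volume
  wave.setAmplitude(map(mouseY, 0, height, 0.8, 0.0));
  // tilting the palm (mouse across the screen) shifts the pitch
  wave.setFrequency(map(mouseX, 0, width, 110, 880));
}
```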

Color and Body

This idea is similar to the sound one, except it would use Processing to create interesting visuals. I would need three flex sensors: one to control the red value, one the green value, and one the blue value. The tilt sensor would then be used to control the tone. As the fingers flex, the RGB values would decrease, and as they relax, the values would increase. Therefore, a clenched fist would create black and a relaxed hand would create white. I came up with these ideas because I am interested in creating an intimacy with the viewer and prompting them to move, thus making them more aware of their control over their own physical actions.
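A minimal Processing sketch for the color half, assuming the microcontroller prints the three flex readings (0 to 1023 each) as comma-separated lines such as “512,300,900” over serial; that wire format is an assumption, not a decided protocol.

```java
// Processing: read "r,g,b" flex readings over serial and show the color.
// Assumes the microcontroller is the first device in Serial.list().
import processing.serial.*;

Serial port;
float r = 255, g = 255, b = 255;

void setup() {
  size(400, 400);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == 3) {
    r = map(float(parts[0]), 0, 1023, 0, 255);  // flexed -> darker
    g = map(float(parts[1]), 0, 1023, 0, 255);
    b = map(float(parts[2]), 0, 1023, 0, 255);
  }
}

void draw() {
  background(r, g, b);
}
```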

Sketch


Bloom

My original idea was inspired by my Looking Outwards post, RUAH by Giulia Tomasello. Instead of emphasising the importance of diaphragmatic breathing, I wanted to explore the interaction between two people through flowers blooming. Like the free-hugs movement that was once popular, I wanted to explore interaction between two people, especially the act of hugging, since a hug can potentially not only cheer a person up but also ease the sadness in their heart. I would use a motor and a pressure sensor to make this project happen. I would mostly focus on making the motor open up the back of the clothing so that the inside is revealed, and on making the flowers look as if they are alive, blooming more as the person is hugged more firmly and tightly.

My biggest step would be figuring out how to make the motor and the fabric work together so the opening looks natural, as if the piece were really something alive rather than something human-made.
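On the “alive” problem, one software-side trick is easing the petals toward their target rather than jumping. The Processing sketch below assumes an Arduino running Firmata (driven through the Arduino library for Processing), with the pressure sensor on analog pin 0 and the opening servo on pin 9; the pins, pressure range, and servo angle are all placeholders.

```java
// Processing + the Arduino (Firmata) library: pressure sensor on analog
// pin 0, opening servo on pin 9 -- both placeholders. Easing toward the
// target keeps the motion organic instead of mechanical.
import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
float bloom = 0;  // 0 = closed, 1 = fully open

void setup() {
  arduino = new Arduino(this, Arduino.list()[0], 57600);
  arduino.pinMode(9, Arduino.SERVO);
}

void draw() {
  int pressure = arduino.analogRead(0);  // 0 .. 1023
  float target = constrain(map(pressure, 100, 900, 0, 1), 0, 1);
  // ease toward the target so the petals creep open like a living thing
  bloom = lerp(bloom, target, 0.05);
  arduino.servoWrite(9, int(bloom * 90));  // 0 .. 90 degrees
}
```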

Waves

My idea came from waves and how they reflect the sky on their surface. I wanted to create a simulated nature where there would be a small reflective piece on each motor, and as a group they would interact with humans or show the simulated nature. I originally imagined the reflective piece as aluminium, glass, or acrylic, but Golan also gave me the idea of using dichroic glass and lighting.

My biggest step in this project would be figuring out how to show this with the lighting and the projection that might be cast onto the pieces. How both the lights and the projection would be scattered and reflected, and what effect that would have on viewers, is also something to think about.


Bird’s Eye View: Concept and Sketches

Bird’s Eye View is a map that one explores using their head. My plan is to use FaceOSC to create a control scheme allowing one to move around a map using head position. I also have to make a program that splits my giant map image into tiles and then displays the relevant tiles based on where the motion of the user’s head places the viewport.
Big thanks to Golan Levin for explaining to me how a map-tile loader should work.
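A sketch of the tile-splitting step in Processing (“bigmap.png” and the 256-pixel tile size are placeholders): it carves the big image into tiles named by column and row. At display time, dividing the viewport’s corner coordinates by the tile size gives the range of tile indices to load.

```java
// Processing: split a giant map image into 256-pixel tiles named by
// column and row. "bigmap.png" is a placeholder filename.
int TILE = 256;

void setup() {
  PImage source = loadImage("bigmap.png");
  int cols = ceil(source.width / float(TILE));
  int rows = ceil(source.height / float(TILE));
  for (int cx = 0; cx < cols; cx++) {
    for (int cy = 0; cy < rows; cy++) {
      // clamp so border tiles don't reach past the image edge
      int w = min(TILE, source.width - cx * TILE);
      int h = min(TILE, source.height - cy * TILE);
      PImage tile = source.get(cx * TILE, cy * TILE, w, h);
      tile.save("tile_" + cx + "_" + cy + ".png");
    }
  }
  exit();
}
```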

Below are a few sample images from the map I’ll be using. It was stitched together from snapshots of satellite images available through Google Maps (in other words, even though I have edited these images, I do not claim ownership of them, and am using them for purely academic purposes).
[Map samples]

Assignment 13 Sketch

For this final project, I intend to construct a 2.6-foot humanoid robot named Halley that exists to emulate a single student in a classroom. The robot would be remote-controlled from another computer and able to perform tasks such as raising a hand and asking questions. Halley would also be equipped with an Android phone for a face, allowing a wide range of emotions to be displayed.

Here is a concept sketch:

[Concept sketch]

And here is a concept rendering:

[Concept rendering]

This would probably not be possible to build in the course of two weeks. Fortunately for me, however, I have been planning it since the beginning of last year and assembling it since the beginning of this semester, thanks to the support of an FRFAF microgrant (I am about 80% done with raw assembly), which means I can get it done just in time for a final project.

While I do not expect to complete the full functionality of a robot student until after Winter Break, I do think I will be able to get some interesting behaviors working. Primarily, I expect to have actions mapped to certain keys, such that when a key is pressed, the robot plays that action. Some actions I will probably set are shaking hands, raising a hand, looking up/down/left/right, standing up, and sitting down.
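A minimal Processing sketch of that key-to-action layer, with each key sending a one-line command over serial; the command names, key choices, and serial settings are placeholders for whatever protocol Halley’s controller actually speaks.

```java
// Processing: each key sends one named action to the robot over serial.
// Command strings and serial settings are placeholders.
import processing.serial.*;

Serial robot;

void setup() {
  size(300, 300);
  robot = new Serial(this, Serial.list()[0], 115200);
}

void draw() {
  background(0);  // the window just needs focus to receive key events
}

void keyPressed() {
  switch (key) {
    case 'h': robot.write("RAISE_HAND\n");   break;
    case 's': robot.write("SHAKE_HAND\n");   break;
    case 'u': robot.write("LOOK_UP\n");      break;
    case 'd': robot.write("LOOK_DOWN\n");    break;
    case ' ': robot.write("STAND_TOGGLE\n"); break;
  }
}
```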

Something funny to take away from this, the way I see it, is that students are so detached in some classrooms and lecture halls that they may as well be robots.