ya-FinalProject

looper is an immersive depth-video delay that facilitates an out-of-body third-person view of yourself.

Using a Kinect 2 and an Oculus Rift headset, participants see themselves in point cloud form in virtual reality. Delayed copies of the depth video are then overlaid, allowing participants to see and hear themselves from the past, from a third-person point of view. Each delayed copy is rendered with a quarter as many points as the previous one, allowing the past selves to disintegrate after several minutes.
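
The delay-and-decimation logic boils down to something like the following sketch (written here as standalone TypeScript purely for illustration; all names and numbers are mine, not the project's actual code):

```typescript
// Sketch of the delay/decimation idea (names and constants are illustrative).
type Point = { x: number; y: number; z: number; r: number; g: number; b: number };
type Frame = Point[];

const FPS = 30;
const DELAY_SECONDS = 30;              // delay between successive echoes
const FRAMES_PER_DELAY = FPS * DELAY_SECONDS;
const MAX_ECHOES = 4;                  // how many past selves to keep at once

const history: Frame[] = [];           // buffer of captured point-cloud frames

function pushFrame(frame: Frame): void {
  history.push(frame);
  // Keep only as much history as the oldest echo needs.
  const maxFrames = FRAMES_PER_DELAY * MAX_ECHOES + 1;
  if (history.length > maxFrames) history.shift();
}

// Keep every 4^n-th point so each echo has a quarter as many points
// as the one before it.
function decimate(frame: Frame, echoIndex: number): Frame {
  const stride = Math.pow(4, echoIndex);
  return frame.filter((_, i) => i % stride === 0);
}

// Returns the live frame plus its delayed, progressively sparser copies.
function framesToRender(): Frame[] {
  const out: Frame[] = [];
  for (let echo = 0; echo < MAX_ECHOES; echo++) {
    const index = history.length - 1 - echo * FRAMES_PER_DELAY;
    if (index < 0) break;
    out.push(decimate(history[index], echo));
  }
  return out;
}
```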

I initially experimented with re-projecting color and depth data taken from a Kinect sensor in point cloud form, so that I could see my own body in virtual reality. I repurposed this excellent tool by Roel Kok to convert the video data into a point cloud-ready format.
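
For reference, re-projecting a depth pixel into 3D is just an inverted pinhole projection. A minimal sketch of that step, with placeholder intrinsics rather than the values from the actual tool:

```typescript
// Re-project a depth image into a point cloud using pinhole camera intrinsics.
// The intrinsics are placeholders; real values come from the depth sensor's SDK.
interface Intrinsics { fx: number; fy: number; cx: number; cy: number }

function depthToPointCloud(
  depth: Float32Array,          // depth in meters, row-major
  width: number,
  height: number,
  k: Intrinsics
): Float32Array {
  const points = new Float32Array(width * height * 3);
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const z = depth[v * width + u];
      const o = (v * width + u) * 3;
      // Invert the pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy
      points[o + 0] = ((u - k.cx) / k.fx) * z;
      points[o + 1] = ((v - k.cy) / k.fy) * z;
      points[o + 2] = z;
    }
  }
  return points;
}
```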

While it was compelling to see myself in VR, I couldn't see myself from a third-person point of view. So I made a virtual mirror.

The virtual mirror was interesting because unlike in a real mirror, I could cross the boundary to enter the mirror dimension, at which point I would see myself on the other side, flipped because of the half-shell rendering of the Kinect point cloud capture.

However, the mirror was limiting in the same ways as a regular mirror: any action I made was duplicated on the mirror side, restricting the perspectives from which I could view myself.

I then started experimenting with video delay.

An immediate discovery was that, in the VR headset, viewing yourself from a third-person point of view is striking. The one-minute delay meant that the past self was removed enough from the current self to feel like another, separate presence in the space.

I also experimented with shorter delay times; these resulted in more dance-like, echoed choreography, which was compelling on video but did not work as well in the headset.

I then added sound captured from the microphone on the headset. The sound is spatialized from the position the headset occupied when it was captured, so participants hear their past selves from where they were standing at the time.
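
The essential idea is to store the headset's pose alongside each captured audio clip and play the delayed clip back from that stored position. The piece itself doesn't necessarily use the Web Audio API; this browser-audio sketch is only an analogy for that playback step:

```typescript
// Play a delayed audio clip from the position the headset occupied when it was recorded.
// Illustrative analogy only; buffer and pose values are placeholders.
interface CapturedClip {
  buffer: AudioBuffer;                           // the recorded voice snippet
  position: { x: number; y: number; z: number }; // headset position at capture time
}

function playFromPast(ctx: AudioContext, clip: CapturedClip): void {
  const source = ctx.createBufferSource();
  source.buffer = clip.buffer;

  const panner = ctx.createPanner();
  panner.panningModel = 'HRTF';              // binaural spatialization for headphones
  panner.positionX.value = clip.position.x;  // place the sound where the speaker stood
  panner.positionY.value = clip.position.y;
  panner.positionZ.value = clip.position.z;

  source.connect(panner).connect(ctx.destination);
  source.start();
}
```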

During the class exhibition, I realized that the one-minute delay was too long; participants often did not wait long enough to see themselves from one minute ago, and often did not recognize their other selves as being from the past. For the final piece, I lowered the delay time to 30 seconds.

The project is public on GitHub here: https://github.com/yariza/looper


ya-DrawingSoftware

My project is an audiovisual interactive sculpting program that lets participants create shapes using their hands as input. I wanted to explore the act of drawing through pressure sensitivity and motion, using the Sensel Morph as my input device.

A main source of visual inspiration was Zach Lieberman's blob family series; I wanted to take the concept of never-ending blob columns and allow participants to make their own blobs in a way that visualized their gestural motions on a drawing surface. The resulting sculptures are ephemeral; when a participant finishes a gesture on the tablet surface, the sculpture slowly descends until it is out of sight. The orientation of the tablet also controls the camera angle, so that sculptures can be seen from different perspectives before they disappear.
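
Conceptually, each touch sample becomes a ring of vertices whose radius follows the pressure, and finished sculptures sink over time. A simplified sketch of that idea (my own reconstruction, not the project's code):

```typescript
// Turn a gesture (x, y, pressure samples over time) into a stack of rings
// that can be triangulated into a tube. Names and numbers are illustrative.
interface TouchSample { x: number; y: number; pressure: number } // pressure in [0, 1]

const RING_SEGMENTS = 16;
const MAX_RADIUS = 0.05;     // world units at full pressure

function sampleToRing(sample: TouchSample, height: number): number[][] {
  const radius = sample.pressure * MAX_RADIUS;
  const ring: number[][] = [];
  for (let i = 0; i < RING_SEGMENTS; i++) {
    const a = (i / RING_SEGMENTS) * Math.PI * 2;
    ring.push([
      sample.x + Math.cos(a) * radius,
      height,                          // each new sample extrudes the trail upward
      sample.y + Math.sin(a) * radius,
    ]);
  }
  return ring;
}

// Once the finger lifts, the whole sculpture descends a little every frame
// until it drifts out of view.
function sink(vertices: number[][], dt: number, speed = 0.1): void {
  for (const v of vertices) v[1] -= speed * dt;
}
```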

The final experience also contains subtle audio feedback; the trails left by the participant are accompanied by a similar trail of gliding sound.

Early sketches. I explored other avenues of visualizing pressure, such as flow maps and liquid drops, before gravitating towards extruded trails.
Other avenues of exploration, including using extruded terrain as a bed for growing organic lifeforms.

ya-LookingOutwards-2

Bleep Space is an iOS app and arcade machine in which players explore a sequencer with unfamiliar buttons to create noise-pop music. Each button is tied to a unique sound and visual, and players can assign buttons to sequencer slots to create their own rhythms and melodies.

In the article on Creative Applications Network, Andy Wallace explains that the inspiration for the work came from his experience playing with a Korg synthesizer, a device that he didn't fully understand. This theme of exploring an unfamiliar space also appears in one of Andy's other works, Terminal Town, in which players explore the unfamiliar interface of a command-line tool to solve a puzzle.

Perhaps the most compelling part of this work is that the buttons are highly tactile and that hitting them always produces some kind of sound; a common frustration with exploring synthesizers is that some knobs don't seem to have an immediate effect on the sound, because different synthesizer "modes" turn off certain features. However, the interaction of simply triggering audio samples seems simplistic, and there are other aspects of audio synthesis that could be explored using tactile inputs and explorative play. Works in this area include Rotor by Reactable Systems, which uses physical objects on a reactive screen to explore synthesizer systems.

ya-mask

For this assignment, I created a musical instrument using my face. While not strictly a visual mask, I liked the idea of an aural mask that augments audio input, transforming the human voice into something unrecognizable.

The final result is a performance in which I use the movements of my mouth to control the audio effects that manipulate the sounds coming from my mouth. An overall feedback delay is tied to the openness of my mouth, while tilting my head or smiling distorts the audio in different ways. I also mapped the orientation of my face to the stereo pan, moving the audio mix left and right.

One interesting characteristic of real-world instruments, compared to purely digital ones, is the interdependence of their parameters. While an electronic performer can map individual features of sound to independent knobs and control them separately, a piano player has overlapping control over the tonality of the notes: hitting the keys harder results in a brighter tone, but also an overall louder sound. While this may seem like an unnecessary constraint, it often results in performances with more perceived expression, as musicians must take extreme care if they intend to play a note in a certain way. I wanted to mimic this interdependence in my performance, so I purposefully mapped multiple audio parameters to the same inputs on my face. Furthermore, the muscles of my face affect one another, which further constrained the space of control I had over the sound. The end result is me performing some rather odd contortions of my face to get the sounds I wanted.
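
As a toy illustration of these overlapping mappings (the feature names, ranges, and curves below are hypothetical, not the values in my actual patch):

```typescript
// One face feature drives several audio parameters at once, mimicking the
// coupled controls of acoustic instruments. All ranges and curves are illustrative.
interface FaceFeatures {
  mouthOpenness: number;  // 0 = closed, 1 = wide open
  smile: number;          // 0 = neutral, 1 = full smile
  headTilt: number;       // radians
  headYaw: number;        // radians, left/right orientation
}

interface AudioParams {
  delayFeedback: number;   // 0..1
  delayTime: number;       // seconds
  distortionDrive: number; // 0..1
  pan: number;             // -1 (left) .. 1 (right)
}

function mapFaceToAudio(f: FaceFeatures): AudioParams {
  return {
    // Mouth openness intentionally affects both feedback and delay time,
    // so the two cannot be performed independently.
    delayFeedback: 0.2 + 0.7 * f.mouthOpenness,
    delayTime: 0.05 + 0.4 * f.mouthOpenness,
    // Smiling and tilting both push the distortion, and smiling physically
    // changes mouth openness too, coupling everything further.
    distortionDrive: Math.min(1, 0.8 * f.smile + 0.5 * Math.abs(f.headTilt)),
    pan: Math.max(-1, Math.min(1, f.headYaw / (Math.PI / 4))),
  };
}
```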

The performance setup is a mix of hardware and software. First, I attached a contact microphone to my throat, using a choker to secure it in place. The sound input is routed to Ableton Live, where I run my audio effects. A browser-based JavaScript environment tracks and visualizes the face from a webcam using handsfree.js, and sends parameters of the facial expression out as OSC over WebSockets. Because Ableton Live can only receive UDP, a local server instance passes the WebSocket OSC data over a UDP connection, which Ableton receives using custom Max for Live devices.
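
The bridge itself is small: the browser sends binary OSC packets over a WebSocket, and a Node script relays each packet to the UDP port that the Max for Live device listens on. A minimal sketch (the ports and the exact package choices are illustrative, not necessarily what my setup uses):

```typescript
// Relay OSC packets arriving over WebSocket to a local UDP port for Ableton/Max for Live.
import { WebSocketServer } from 'ws';
import dgram from 'node:dgram';

const WS_PORT = 8080;        // browser connects here
const UDP_PORT = 9000;       // Max for Live device listens here
const UDP_HOST = '127.0.0.1';

const udp = dgram.createSocket('udp4');
const wss = new WebSocketServer({ port: WS_PORT });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Forward the raw OSC bytes untouched; a pure relay needs no OSC parsing.
    const buf = Array.isArray(data)
      ? Buffer.concat(data)
      : data instanceof ArrayBuffer
        ? Buffer.from(data)
        : data;
    udp.send(buf, UDP_PORT, UDP_HOST);
  });
});

console.log(`Relaying ws://localhost:${WS_PORT} -> udp://${UDP_HOST}:${UDP_PORT}`);
```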

An Ableton Live session and a Node.js server running in the background.

For the visuals of the piece, I wanted something simple that showed the mouth abstracted away and isolated from the rest of the face, as the mouth itself was the main instrument. I ended up using paper.js to draw smooth paths of the contours of my tracked lips, colored white and centered on a black background. For reference, I also included the webcam stream in the top corner; in a live setting, however, I would probably not show the webcam stream as it is redundant.
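
The drawing itself is little more than a smoothed, closed paper.js path rebuilt every frame from the tracked lip landmarks. A simplified sketch (the landmark source is stubbed out; in the actual piece it comes from handsfree.js):

```typescript
// Draw the tracked lip contour as a smooth white closed path on black.
// getLipLandmarks() is a stand-in for the values coming from the face tracker.
import paper from 'paper';

declare function getLipLandmarks(): Array<{ x: number; y: number }>;

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
paper.setup(canvas);

const background = new paper.Path.Rectangle(paper.view.bounds);
background.fillColor = new paper.Color('black');

const lips = new paper.Path({ strokeColor: 'white', strokeWidth: 3, closed: true });

paper.view.onFrame = () => {
  const landmarks = getLipLandmarks();
  lips.removeSegments();
  for (const p of landmarks) {
    lips.add(new paper.Point(p.x, p.y));
  }
  lips.smooth();   // interpolate a smooth curve through the landmark points
};
```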


ya-2Dphysics

Try this experience here.

flow is an interactive experience where you can control a fluid simulation with your mouse or webcam. As you move around, the fluid particles get pushed around by your movement, revealing your own image as they pass by.

I've long been a fan of fluid simulations, so I first wanted to explore Google's LiquidFun library. One thing I noticed in the LiquidFun demos, however, was that the particles always seemed to be controlled by means other than themselves: they were shot out of a spawner, or pushed around by rigid bodies. Because I wanted a more direct interaction with the particles themselves, I started by moving them around with the mouse cursor.
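
The mouse interaction amounts to giving nearby particles a velocity kick in the direction of mouse motion. A library-agnostic sketch of that idea (not LiquidFun's actual API; names and constants are illustrative):

```typescript
// Push particles near the cursor in the direction the cursor is moving.
// Positions/velocities are flat arrays, as most particle systems expose them.
const INFLUENCE_RADIUS = 60;   // pixels
const PUSH_STRENGTH = 0.4;     // how strongly mouse motion transfers to particles

function pushParticles(
  positions: Float32Array,     // [x0, y0, x1, y1, ...]
  velocities: Float32Array,    // same layout as positions
  mouseX: number,
  mouseY: number,
  mouseDX: number,             // mouse movement since last frame
  mouseDY: number
): void {
  const r2 = INFLUENCE_RADIUS * INFLUENCE_RADIUS;
  for (let i = 0; i < positions.length; i += 2) {
    const dx = positions[i] - mouseX;
    const dy = positions[i + 1] - mouseY;
    const d2 = dx * dx + dy * dy;
    if (d2 > r2) continue;
    // Falloff: particles right under the cursor get the full push.
    const falloff = 1 - Math.sqrt(d2) / INFLUENCE_RADIUS;
    velocities[i] += mouseDX * PUSH_STRENGTH * falloff;
    velocities[i + 1] += mouseDY * PUSH_STRENGTH * falloff;
  }
}
```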

The colors were chosen to make the particles feel organic and natural: instead of making the particles look like water, I wanted them to feel like fireflies in a night sky, or a flock of creatures moving around.

Although the mouse interaction felt fluid, I wanted an even closer connection between the player's movement and the experience. I then investigated optical flow libraries: algorithms that take color video as input and analyze movement to produce a velocity map. I found a public library called oflow by GitHub user anvaka, and decided to integrate a webcam stream into the experience.
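
Hooking the flow field up to the particles is a small step from the mouse version: instead of sampling one velocity at the cursor, each particle samples the flow at its own position. A sketch, with a hypothetical flowAt() accessor standing in for the per-frame flow field that oflow computes:

```typescript
// Drive each particle by the optical-flow velocity sampled at its own position.
// flowAt() is a hypothetical accessor over the flow field computed each frame.
declare function flowAt(x: number, y: number): { u: number; v: number };

const FLOW_STRENGTH = 2.0;    // scales camera motion into particle velocity

function applyOpticalFlow(
  positions: Float32Array,    // [x0, y0, x1, y1, ...] in screen space
  velocities: Float32Array
): void {
  for (let i = 0; i < positions.length; i += 2) {
    const flow = flowAt(positions[i], positions[i + 1]);
    velocities[i] += flow.u * FLOW_STRENGTH;
    velocities[i + 1] += flow.v * FLOW_STRENGTH;
  }
}
```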

With the webcam stream, the experience takes on a very different character from the single-color particles moved around with the mouse. When particles are pushed around, they now occlude and reveal different areas of the screen, creating a constantly evolving mirror.

Resources used:

Google liquidfun

oflow by anvaka

sketch.js by soulwire

ya-reading1

Reading Mary Flanagan's article, the notion of critical play leading to new kinds of play, and of making familiar types of play unfamiliar, aligns most closely with my own goals.

In games and in other interactive work, the mode of play is often defined by the medium being used: rectangular displays, keyboards and mice, or video game controllers all define the mode of input and output expected of the genre. In the player's mind, all input is interpreted by the game as corresponding to actions in the virtual play environment: moving the joystick is understood to move the character around in space. While these mappings are often ingrained to the point of feeling like second nature to players, they nonetheless constitute a barrier between the physical world and the virtual world of the game. Removing these barriers, through systems that map more directly to human perception and interaction, can allow familiar experiences in the physical realm to be translated into the virtual realm. My explorations in virtual and augmented reality reflect this proposition.

ya-lookingoutwards01

New Nature by Marpi is primarily an interactive exhibition, featuring display panels and surround sound that immerse guests in a world filled with virtual creatures: abstract-looking trees, plants, and flowers which react to guests' presence and hand movements via Kinect and Leap Motion sensors. While the full experience is on exhibit at Artechouse in Washington, DC, accompanying experiences are available as mobile apps on iOS and Android. The project was a collaboration with Kevin Colorado (technical direction), Bent Stamnes (sound design), and Will Atwood (3D art), with documentation by Daniel Garcia and Jeremy Shanahan. New Nature was made using Unity, in conjunction with external software for the Kinect and Leap Motion.

I admire the procedural nature of the generative plants and creatures, as well as the physical, tactile nature of “touching” the creatures with your hands. As a person working within Augmented and Virtual Reality, I am also interested in exploring this physicality of virtual objects. Having virtual objects react to your movement through sensors adds a level of involvement and connection to the virtual work that would not otherwise be possible.

Marpi, New Nature