cdslls_Extra_Project

Since I felt like I had hit a wall and could not progress any further with the schlieren mirror, I finally decided to re-edit my LIDAR photography video and do an extra project for my final. With the edgertronic high-speed camera, I captured a couple of my friends eating traditional diner food at extra-high speed. Thank you to Ritter’s Diner in Bloomfield for letting me film in their facilities.

After a long reflection, I decided to add an eating audio track, at low volume and extra-high speed, because it pushed that cringe-worthy feeling to its maximum. About halfway through the video, I switch to playing the footage in reverse while the sound plays at low speed. After editing all the footage, I decided everything looked more annoying, off-putting, and gross with the reddish tint, so I went back to it.

Without being a huge comment on society, this idea stemmed from talking to my friend about fast food in America and how much this culture differs from the rest of the world. After reflection, we came to the conclusion that this part of our culture is also the beauty of this country.

“This is it”

Thank you Golan for a great semester!

PS: There is an error with the rendering at 1:39; I am trying to figure out what it is!

 

 

supercgeek-FinalProcess

While working on my final project (update to my Place project), I spent my time on three main areas:

  1. Creating the Portal Effect from Bioshock Infinite that originally inspired the project.
  2. Getting Vuforia to automatically pin the geometry to real space.
  3. Making the portal “look cool”

01 // The first stage went well: I was able to get a portal-like effect working, which allows the HoloLens wearer to see the 3D geometry only through a ‘tear in reality.’ [1]

02 // Vuforia implementation went considerably less well. Though I was able to get some objects registering onto image targets, performance wasn’t robust enough to depend on, and the size of the Great Hall made all of these issues worse. In the end, I completely scrapped this part of the project and reverted to simply placing the old CFA manually, but I think this significantly undermined the effect I was going for.

03 // This final stage of the project was squeezed for time by all the searching for Vuforia solutions. That said, I did manage to experiment briefly with particle simulators and effects. [2]

Geep-Final

Sphinct!

The lifestyle sphinctometer

 

Sphinct! is the newest innovation in fitness technology. Sphinct is a sphinctometer that is comfortably inserted into your rectum. It then captures data on pressure, muscular performance, and stress levels. Sphinct redefines the way we approach fitness. Sphinct also syncs to your smartphone, where you can track your progress and take control of your body. Compare your results to friends and even unlock achievements.

My primary inspiration for Sphinct came from discovering the medical sphinctometer and wanting to own one; from there, I wanted to take the idea to a different platform. The health and tech industries cater their content toward the cisgendered and able-bodied individual. I wanted to queer this narrative by branding this as a high-end, performance-enhancing technological innovation. I created it to exist in the space of speculative design while still capturing an audience. I wanted to take the tool I made and create something with more substance.

I looked first at speculative designers like Anthony Dunne and Fiona Raby and their Park interactives piece. They created objects that mimicked objects belonging to that space, staging a public intervention that twists the habitual narratives of the place.

What worked: I physically got it to work, which in itself is wild for me. The concept, branding, and logo design were what really saved this; I needed to use my strengths to make something presentable.

What failed: I shot for the moon but landed among the stars. I got in over my head with fabricating and coding the device and getting accurate, good data, and I wasn’t able to make a proper video or a sleeker product. I would have loved to have done this with funding or with a dedicated team that I could direct. Honestly, I’m meant to make ideas and be the boss.

To make this tool I did research on teledildonics and talked to different engineers and artists. I had to hack an inflatable buttplug with an air-pressure sensor. I got the code from one of the people I was in contact with.
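Since the actual code came from a collaborator, here is only a minimal sketch of the idea (reading the analog air-pressure sensor and streaming values over serial); the pin, range, and scaling are my assumptions:

```cpp
// Minimal Arduino-style sketch: read an analog air-pressure sensor and stream
// the values over serial. Pin choice and scaling are assumptions, not the
// collaborator's actual code.
const int PRESSURE_PIN = A0;   // assumed analog input from the pressure sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(PRESSURE_PIN);         // 0-1023 from the 10-bit ADC
  long percent = map(raw, 0, 1023, 0, 100);   // rough percentage of the sensor's range
  Serial.println(percent);                    // e.g. read by a laptop/phone app for graphing
  delay(50);                                  // ~20 readings per second
}
```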

Process photo of us working in the lab (sorry, I wish I had others; this was unfortunately the only one I took).

Code

8 second video of the plug being squeezed and it reading the data.

IMG_0419.MOV

PDF

https://drive.google.com/file/d/0B4elSq8KsGgfRHZiVmlqbjVLZFk/view?usp=sharing

iciaiot-final

Memory Rooms

This is an interactive projection where the artist used photogrammetry, clay, ink, pigment, and paper to recreate her childhood bedroom.

My project was an attempt to explore the parallels between capture technology, interactive media, and virtual space on the one hand and memory on the other. I wanted to capture, in the most honest way, my childhood room as it exists to me. I drew and modeled in clay everything I could remember from the space using no sensory references: the walls, the furniture, the floor, etc. I then took all the pieces I had created and integrated them into one coherent representation of the room. In Unity, the ink drawings became planes that represented walls in the virtual space. I used photogrammetry to capture the clay models I made and put them into the virtual room as well.

Due to photogrammetry errors, the misalignment of angles and perspectives in my drawings, the imprecision of my memory, and the personal hand-drawn, hand-molded aesthetic of all the assets, the visual nature of the piece was less realistic than most representations of real world places. I used the same techniques and colors when creating the drawings and models so the whole room has a cohesive feel to it, as opposed to how the room would have looked in reality: the eclectic collection of manufactured clothes, books, bed sheets, and furniture. I think these qualities are significant because this is the version of the room that is integrated into my identity; the things I remember and the way I remember them is all that matters in this representation.

I exaggerated the immateriality and intangible qualities of the piece by placing the wall and furniture planes at incorrect scales and positions in Unity, so that the room appears “normal” from only one perspective. If that perspective is changed, elements of the room that were remembered and drawn separately (the furniture, doors, windows, etc.) jump out at different depths and must be viewed independently of the context of the whole room. In this way, the experience of the user is fragmented and incorrect, deceptively concrete, like the experience of remembering something or experiencing anything in a virtual space. I also projected onto a warped, semi-translucent surface in order to emphasize, again, the lack of objectivity and precision, but also to emphasize the domestic, personal subject matter: my own bedroom.

My project is interesting because I find that people’s identities are rooted deeply in their memories (of childhood, of their hometowns, houses, possessions, etc) and this is especially true for me. The evolution of capture technologies (like photography, cinematography, audio recording, photogrammetry, videogrammetry, 360 video, 3d interactive experiences, etc) has and will undoubtedly change the way people remember and self-identify. For example, photographs affect the way we see ourselves and form perceptions of who we are. Before cameras, people could only re-experience their childhood via memory.

The expectations we have of capture technologies are in some ways similar to those that we have of memory, and in many ways people expect to use tech as a surrogate or reinforcement for memory. I think a misconception about this tech is that we’ll be able to record and re-experience the events objectively, that recordings are more trustworthy than memory and that our perceptions will be more impartial. But the pitfalls of memory and technology as a way to store our identities and re-experience them at will are similar. Just as we can never remember a moment completely accurately, we can’t (yet) capture the exact experience of a moment with technology. We will always be looking, remembering, and recording through the biased frame of human perception.

For this work, I was most inspired by Displacements by Michael Naimark. His work is a time-based projection piece where the past of a room is made visible in the present by projecting footage of the room onto itself. I love how his work is oriented physically in one space and how the passing of time is exaggerated and anatomized by the contrast: the past and present states are both extremely clear and easy to differentiate, but they coexist in the same space and blur together. Besides being visually appealing, the piece is suspenseful because the projection never reveals everything at once, but rather only one window at a time. Displacements provokes thought about the recording of past events, the re-experiencing of captured events, the strong spatial connection between past and present, and most importantly, the inherent dependence of a present state on its history.

My piece drew on Displacements in a number of ways. Like Displacements, I wanted my work to induce a sense of misalignment, I hoped to provoke thought about capturing and re-experiencing something, and I used projection to do so. However, my work is much more personal, so that although it is heavily dependent on location, the connection is only truly apparent to me, as the only one who has lived in the real room. I tried, as in Displacements, to reveal connections between what exists now and can be seen, and what existed in the past or cannot be seen. Unlike Displacements, however, my work also strove to encompass ideas about personal identity as it relates to remembered spaces and recorded or reconstructed spaces.

I was also inspired by Fritz Panzer’s wire sculptures of furniture. His work is described as “gestural contour drawings, creating the volume of an object through a gossamer-like outline that seems to gradually dissipate…an almost ethereal experience, requesting the viewer to rely partly on memory and recognition.” His work, for me, evokes a feeling of spatial vacancy. His sculptures point to something that is NOT present, and I wanted the same quality to exist in my work, especially with the furniture. In my renderings of furniture, I made the background invisible so that the ink line drawing in space imitated one of Panzer’s wire sculptures. Like his sculptures, my room is only coherent from a certain perspective.


For me, this project is not finished. I would love to integrate more rooms, more furniture, and more movement. I would love to exhibit the work in a setting where I can furnish the area around the projection to be reminiscent of the virtual content. Also, I am considering including the original room in my work in some way (perhaps as a series of photos, a 360 photo, or a 3D scan in a companion project) so that the connection between the real room and this remembered/recreated project is stronger.

Making Of

In the beginning, I planned to simply re-model my room in Unity as a full, solid single object with walls, a floor, and a ceiling, and I planned to use photogrammetry to capture my actual furniture. This is what it looked like:

I changed my mind as I realized that these techniques did not suit my project, which was about the vagaries of memory, virtual space, and capture tech. Instead of the concrete and uniform room, I used separate drawings as planes, which looked more ethereal and pieced together, and I used clay models made from memory. These techniques were more in keeping with my concept: to create from memory and highlight the imprecision of memory and the falseness of virtual spaces and capture technology.

Around this time, only tangentially related to this project, I had been making a lot of sketches using ink, pigment, and whiteout depicting different fragments of memories I had. Here’s one of my sketches:

I used these sketches as inspiration for how the room would look. So I made a bunch of drawings which I scanned and used to texture planes in Unity:

Finally, I placed the assets at incorrect depths and scales to achieve the effect below (first gif below) when the user moves to explore the room.
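For reference, the placement trick is basically scale compensation: a plane pushed to a new distance keeps its apparent size from the single anchor viewpoint if it is scaled by the ratio of the distances, and every other viewpoint breaks the illusion. A tiny sketch with made-up numbers:

```cpp
// Scale compensation for forced perspective: from the anchor viewpoint, a plane
// moved from its drawn distance to a new distance looks unchanged if scaled by
// newDistance / originalDistance. The numbers below are hypothetical.
#include <cstdio>

float compensatingScale(float originalDistance, float newDistance) {
    return newDistance / originalDistance;
}

int main() {
    // Example: a dresser plane drawn as if 2 m away is pushed back to 6 m.
    float scale = compensatingScale(2.0f, 6.0f);
    std::printf("scale the plane by %.1fx to look unchanged from the anchor view\n", scale);
    return 0;
}
```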

I originally planned to use a headset to present my work, but I found that the headset was too cumbersome and confining, and I didn’t want the user to be totally immersed in a virtual reality environment. Instead I wanted my project to be grounded in a real space so that the user could more easily consider the relationships between present and past, virtual and real, seen and remembered. I opted for projection onto a sheer, warped fabric, with a Kinect to track head movement, so that the user would have an intuitive way to explore the scene and be naturally immersed without being taken out of the present moment completely.

cdslls-FinalProgress

For my last project, I was hoping to create something new, as I have reached the technical limitations of the schlieren mirror and what can be achieved with it. I had started to play with the high-speed camera in my previous project and was hoping to explore it further. My current idea is the creation of an exquisite-corpse-type face, or the creation of an entirely new being, by combining different body parts filmed with the high-speed camera.

Similar projects:

Golan’s project: http://www.flong.com/projects/reface/

Smokey’s project:

fourth-portrait

Geep-FinalProposal

I’m going to continue working on my sphinctometer and make sure it works properly to collect data. I’m also going to shoot a promotional video and use motion graphics.

On top of that, Caitlin and I have talked about doing a small collab with the DepthKit in the pool, using the pool, props, and some original music for a mini music video.

Bernie-Event

As we all know, I’ve been playing with cameras and a robot all semester. My inspiration for using a robot to do paintings with light came from Chris Noel who created this KUKA light painting robot for Ars Electronica.

Since light painting and light-painted animations have already been done, my partner Quan and I decided to still use the robot to light paint, but to light paint using computational focus blur. Quan is the designer and I am the programmer, so we had very distinct roles in this project. This truly was an experiment, since neither of us knew what to expect. All we had seen were these pictures of fireworks being focus-blurred by hand:

 

 

 

 

My original plan for computationally controlling focus was to use the Canon SDK, which I had used before to take pictures, but controlling the focus turned out to be much more complicated. We then tried a simpler solution: 3D printing one gear to fit around the focus ring of a DSLR and another for the end of a servo, so the servo could turn the focus ring. This was a solid solution, but a cleaner one ended up being the Black Magic Micro Cinema Camera, a very hackable camera that allowed me to computationally control the focus blur with a PWM signal.
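As a rough illustration of that PWM control, a minimal Arduino-style sketch that turns an incoming focus value into a servo-style pulse might look like this; the pin, pulse range, and serial protocol are assumptions about the wiring, not documented values for the camera:

```cpp
// Minimal sketch: generate a servo-style PWM pulse for the camera's focus input
// from a focus byte received over serial. The pin, the 1000-2000 us pulse range,
// and the one-byte protocol are all assumptions.
#include <Servo.h>

Servo focusSignal;   // the Servo library is only used here to generate the pulse train

void setup() {
  Serial.begin(115200);
  focusSignal.attach(9);                          // assumed output pin to the camera
}

void loop() {
  if (Serial.available() > 0) {
    int focus = Serial.read();                    // 0-255 focus value from the controlling app
    int pulse = map(focus, 0, 255, 1000, 2000);   // microseconds, typical RC pulse range
    focusSignal.writeMicroseconds(pulse);
  }
}
```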

Then I created an app using ofxTimeline to control the focus of the BMMCC and the colors of an LED attached to the end of the robot arm. The robot arm would then move in predetermined shapes as we computationally controlled the focus. Focus blur is usually done manually and on events that cannot be controlled, like fireworks; this was an entirely controlled situation in which we could play with every variable. Quan then used Echo in After Effects to average the frames and create these “long exposure” images.
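A minimal sketch of what an app like that could look like in openFrameworks, with the track names, serial device, and byte protocol all assumed for illustration:

```cpp
// Sketch of an ofxTimeline app with a keyframed focus curve and LED color track,
// sampled every frame and sent to a microcontroller over serial. Track names,
// serial port, and the 4-byte packet format are assumptions.
#include "ofMain.h"
#include "ofxTimeline.h"

class ofApp : public ofBaseApp {
public:
    ofxTimeline timeline;
    ofSerial serial;

    void setup() {
        timeline.setup();
        timeline.setDurationInSeconds(30);
        timeline.addCurves("focus", ofRange(0, 1));      // keyframed focus amount
        timeline.addColors("led");                       // keyframed LED color
        timeline.play();
        serial.setup("/dev/tty.usbmodem1421", 115200);   // assumed Arduino port
    }

    void update() {
        float focus = timeline.getValue("focus");
        ofColor led = timeline.getColor("led");
        unsigned char packet[4] = { (unsigned char)(focus * 255), led.r, led.g, led.b };
        serial.writeBytes(packet, 4);                    // microcontroller splits focus vs. LED
    }

    void draw() {
        timeline.draw();                                 // editable timeline UI
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```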

The first tests we did were with random focusing, and they looked interesting, but they also looked computer generated.  In the second shoot, we aimed to integrate the streaks with real objects.

Test Shot:

App:

Final Setup:

Final Gifs:

Outline of a reflective object:

Reflected by a Mirror

Through a Glass Bowl

Through a Large Plastic Container

 

a — event process

Event — Progress


so far ‡

I have the multi-person pose estimation code running, and am able to feed into it the IP camera images from across the world. As the cameras run at a low frame rate, and the code as yet only runs at ~5 fps, I need to develop some method of temporal interpolation to get smooth-ish movement. I also need to optimise the code, as right now it waits for the whole round trip (input webcam to server to display) to complete in order to prevent backpressure. I will probably reimplement this as a gRPC service running in the cloud.
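A minimal sketch of the kind of temporal interpolation I mean, linearly blending between two low-rate detections so the rendered skeleton moves smoothly; the joint count and struct layout are assumptions rather than the actual pose-estimation output format:

```cpp
// Linear interpolation between two pose detections. With detections arriving at
// ~5 fps, the renderer can call this with t advancing from 0 to 1 between them.
// The 18-joint layout is an assumption (e.g. COCO-style keypoints).
#include <algorithm>
#include <array>

struct Joint { float x, y, confidence; };
using Pose = std::array<Joint, 18>;

Pose interpolatePose(const Pose& prev, const Pose& next, float t) {
    Pose out;
    for (size_t i = 0; i < out.size(); ++i) {
        out[i].x = prev[i].x + (next[i].x - prev[i].x) * t;
        out[i].y = prev[i].y + (next[i].y - prev[i].y) * t;
        // Keep the lower confidence so joints that dropped out stay visibly uncertain.
        out[i].confidence = std::min(prev[i].confidence, next[i].confidence);
    }
    return out;
}
```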

Here is a debug image of the pose estimation on a choir. Note that it also correctly gets the orientation of the pose skeleton (look at the bone colours).

supercgeek-EventProcess

Idea Track 1

With regard to the scrolling project I proposed in my EventProposal, I did a few explorations around this:

Idea Track 2

Earlier this year I did a bunch of microscope explorations (1, 2) for a different class and I was thinking about trying to revisit some of this stuff in slo-motion. I was also inspired by Kim Pimmel’s work in this area:

Other References [1, 2]

fatik-eventProcess

I had the opportunity to go to a death metal show at the Smiling Moose. I Facebook-messaged this local punk band and got their permission to film. I must say, it was a wild experience. There were a lot of older white males with beers. The music was also spectacular. I have all the footage, and now I just need to look through it all and make a more cohesive video.

 

 

Geep-EventUpdate

So I had a meeting with Kyle about the sphinctometer. I’ve been doing tons of research on how to make this and he’s connected me with some people to talk to. I have a list of things to get to help make a DIY sphinctometer.

A few inspirations:

kGoal Boost

http://wearablex.com/fundawear/

http://orgasmscience.com/

https://docs.google.com/spreadsheets/d/1Hk2PILgUwJlcvIrqn_NkxJOibY8Z0XcxBSzQ61qCGhU/edit#gid=0

a — place proposal

Place Project


I am interested in capturing people through walls and around corners.

how ≠– capture

I plan on capturing footstep sounds through a contact microphone array. When placed on a hard surface, it should enable the detection of footsteps from long distances. Using time-delay-of-arrival estimation, I should be able to triangulate the approximate location of the footstep sources — i.e., the people.
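A rough sketch of the pairwise delay estimation this relies on, via brute-force cross-correlation of two microphone channels; the sample rate, window size, and wave speed in the floor are guesses:

```cpp
// Estimate the time-delay-of-arrival between two contact-mic channels by finding
// the lag that maximizes their cross-correlation, then convert it to a path
// difference. Sample rate and propagation speed are assumptions.
#include <cstdio>
#include <vector>

// Returns the lag (in samples) at which channel b best lines up with channel a.
int estimateDelaySamples(const std::vector<float>& a,
                         const std::vector<float>& b,
                         int maxLag) {
    int bestLag = 0;
    double bestCorr = -1e30;
    for (int lag = -maxLag; lag <= maxLag; ++lag) {
        double corr = 0.0;
        for (int i = 0; i < (int)a.size(); ++i) {
            int j = i + lag;
            if (j >= 0 && j < (int)b.size()) corr += a[i] * b[j];
        }
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
    }
    return bestLag;
}

int main() {
    const double sampleRate = 48000.0;   // assumed audio interface rate
    const double waveSpeed  = 2000.0;    // m/s, rough guess for vibration in a hard floor

    std::vector<float> micA(4800), micB(4800);   // 100 ms windows (placeholder data)
    // ... fill micA / micB from two contact microphones in the array ...

    int lag = estimateDelaySamples(micA, micB, 480);
    double dt = lag / sampleRate;                // seconds of delay between the two mics
    double pathDifference = dt * waveSpeed;      // metres closer to one mic than the other
    std::printf("delay %f s, path difference %f m\n", dt, pathDifference);
    // With three or more mics, these pairwise differences constrain the source position.
    return 0;
}
```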

how ≠– media artifact

I plan on projecting a map of the people walking around above the STUDIO for Creative Inquiry onto the roof of the STUDIO for Creative Inquiry.

a — event proposal

Event Project


I am interested in capturing horizontal planes of commonality across disparate / disconnected human existences in the world.

To express this desire, I will use live camera feeds from around the world, extract semantic/significant/poignant/interesting features, and place them in a single space, a World Playhouse

how ± capture

I plan on using implementations of real-time multi-person pose estimation to extract live pose-skeletons from webcams in selected spaces around the world.

how ± media artifact

To visualise the pose-skeletons, I plan on creating a World Playhouse. Within the Playhouse, I will map the captured pose-skeletons to avatars. Inter-avatar interactions, the horizontal threads connecting these distant, originally non-overlapping rooms, will be amplified through as-yet-undetermined methods including but not limited to — physics, ragdoll physics.

blue-EventProposal

I’ve been working with data from a shoot I did recently in the Panoptic Studio (a PhD project in Robotics and CS). The Panoptic Studio is a dome with 480 VGA cameras, 30+ HD cameras, 10 Kinects, hardware-based sync, and calibration for multi-person motion capture.

The output is in the form of skeletons (like traditional mocap), and also dense point clouds that can be meshed. I filmed two dancers in this dome, and am working on ways to express the data, primarily working with the point clouds and meshing these to create an animation.

The event I’d be capturing is the interaction between the two dancers. I’m interested in this as a prototype for understanding how to work with this data, as there is not much documentation on it. I’ve been working with this data using Meshlab and Blender, but am interested in potentially working with OF to create spheres on the individual points in the point cloud, to create usable geometry.
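As a starting point, a minimal openFrameworks sketch of that spheres-on-points idea could look like the following; the filename, units, and sphere radius are assumptions:

```cpp
// Load one exported point-cloud frame and draw a small sphere at every point,
// turning the cloud into renderable (and eventually meshable) geometry.
// The PLY filename and sphere radius are placeholders.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofMesh cloud;
    ofEasyCam cam;

    void setup() {
        cloud.load("dancers_frame0001.ply");   // hypothetical exported frame
    }

    void draw() {
        ofBackground(0);
        ofEnableDepthTest();
        cam.begin();
        for (auto& v : cloud.getVertices()) {
            ofDrawSphere(v, 0.5f);             // radius in the cloud's own units
        }
        cam.end();
        ofDisableDepthTest();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```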

Here’s what I’ve been doing so far:

Geep-EventProposal

I’m going to make a sphinctometer but it will be like a fitbit. I will brand it as the new health tech craze. It will have social media capabilities and you can share the graph data with your friends.

Scenario: You check to see your friends’ pressure scores. Carol’s butthole is at an unusual pressure rating. Looks like someone had anal!! You now have the tightest butthole out of all of your friends!!

The campaign includes potential work outs, lifestyle advice, etc.

I’m going to make a mock app interface, a hopefully working prototype, and a promotional video/kick starter campaign.

 

supercgeek-EventProposal

Idea 1

Everything scrolls under subject: a day in the life of the modern human. I have this idea to tell a subject-based story of modern life by looking at someone’s day through a locked perspective on particular objects.

I’m particularly interested in exploring the comparison between a finger scrolling a phone and a person biking across a land.

fatik-eventProposal

I had the one idea of stitching and editing a lot of movies that said the word “fuck” because of the incident with my mother.

But I did want to really play with the Kinect and capture depth and movement as well as 3D space. I was inspired by the French music video and the student project of the guy playing with the Kinect in the studio.

I really want to try capturing a big pool of people at a concert or something. I know that there are a lot of weird events and things in Pittsburgh so I want to try finding these people.

blue-Place

I am interested in the idea of “the topography of our intimate being” discussed by the theorist Gaston Bachelard in his book “The Poetics of Space.” In this book, he examines and explores our relationship to elements of home, such as attics, cellars, drawers, nests, and shells. I was also inspired by Sophie Calle’s photography of other people’s hotel rooms, and the video documentaries of what’s in people’s pockets.

I have a particular interest in the use and contents of drawers, specifically top drawers. Personally, I’ve always had a strong association with my own top drawers, mostly dresser and bedside table. When I was younger, this was where I would stash my special objects, my secret things, my collected totems. Now, my association remains similar, yet slightly more disorganized, as the drawers can become catch-alls for special items and also random junk alike. These drawers seem to me to be a particular manifestation in material form of my thought processes, my life over time, and the ephemera of personal rituals.

Therefore, I set out on the nosy task of creating a collection of other people’s top drawers. I wondered if my hypothesis of reaching the “intimate topography” of the drawer-owners through the process of creating 3D photogrammetry models of their drawers would be validated. Bachelard says, “A house that has been experienced is not an inert box. Inhabited space transcends geometrical space.” I’m interested in the way this relates to the process of creating 3D geometry from this inhabited space.

I’m also interested in the spatial memory or association of someone’s house in their own mind. When I did the scans, I would ask the drawer-owner to draw for me a rough floorplan of their house or office. I’ve included one of these floorplans, extruded using Blender, as the “floor” of the Unity environment. The idea of capturing the “room tone” audio of every room where the drawer is located was suggested, and I love this idea. In a further iteration, I would explore this. I’m interested in the way this project may blur the concept of “portrait” through a personal place or object, and in creating a stylized / art documentary about a person through an investigation into looking (at the people themselves, their environments, and their objects). This concept has come up in my artwork before, and it’s interesting to see it crop up again unconsciously. My work seems to create opportunities for me to experience and almost curate social rituals – for myself and for the participants involved. Sometimes I would like to be able to share the process of making the work (the social experience of “collaborating” with the drawer owners) with the viewer in addition to the final product.

Through putting these models into an explorable VR environment, I wanted to give the viewer an intimate encounter with the contents of the drawers – the ability to explore and investigate a place which is normally entirely private to the owner. This was my first time using photogrammetry and Unity, and I set myself the technical challenge of creating an environment to explore using Google Cardboard. I am happy to have tackled this, and am continuing to use these techniques in other projects now.

Quan-Place

Full process

First Try —

I went to a large tunnel for my first iteration of steel wool photography, with the intent of creating a spatial mapping of layered pictures. A single photo looked like this:

and the stacked photo looks like this:

Second Try —

I realized how distracting those circles were in long exposure, so I wanted to eliminate them. One option was photoshop, but a cleaner and more interesting method was with the use of video. This is one ‘slice’ of the tunnel that I chose to map.

I did this every 5 feet for the entire length of the tunnel, and then texture-mapped these videos with alpha values on planes in space.
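A rough openFrameworks sketch of that setup, with the filenames, slice count, spacing, and blend mode all assumed:

```cpp
// Stack the tunnel "slices" as video-textured planes in depth, drawn with
// additive blending so the light streaks accumulate like a long exposure.
// Filenames, slice count, and spacing are placeholders.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    std::vector<ofVideoPlayer> slices;
    ofEasyCam cam;

    void setup() {
        for (int i = 0; i < 10; ++i) {                              // assumed number of slices
            slices.emplace_back();
            slices.back().load("slice_" + ofToString(i) + ".mov");  // hypothetical files
            slices.back().setLoopState(OF_LOOP_NORMAL);
            slices.back().play();
        }
    }

    void update() {
        for (auto& v : slices) v.update();
    }

    void draw() {
        ofBackground(0);
        cam.begin();
        ofEnableBlendMode(OF_BLENDMODE_ADD);                 // streaks add up instead of occluding
        for (size_t i = 0; i < slices.size(); ++i) {
            ofPushMatrix();
            ofTranslate(0, 0, -150.0f * i);                  // ~5 ft spacing in arbitrary units
            slices[i].draw(-320, -240, 640, 480);
            ofPopMatrix();
        }
        ofDisableBlendMode();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```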

Final Result—

Vid 1

Vid 2

supercgeek-PlaceProcess

Previous Blog Post: PlaceProposal

CONCEPT

historical photos of Carnegie-Mellon X a dynamic (i.e. user-driven) augmented reality viewport that allows one to time travel by looking through visual tears in their perception of space

PIECES OF THE PUZZLE

TECHNOLOGY: Working through the HoloLens academy tutorials (I tested my dev setup last night with a HoloLens in the studio and was able to place objects and demo live in holographic)

MATERIAL: I’m going to the CMU Historical Archives tomorrow to find some interior photos that will jibe better with the HoloLens’ limited recognition range. Some of the original photos I was planning on using would have required standing outside, 30-40 feet away from buildings/markers, which seems to break the HoloLens’ ability to establish presence.

TECHNICAL ART: Adding depth data to flat 2D images by manually sculpting them around basic 3D geometry and generating 3D scenes from that process.

DESIGN: Creating location-based portal effects (see BioShock Infinite Tears) and designing affordances for both the general portal location and to infer when one is going to enter/exit a portal by going beyond the adequate range of view.

TECHNOLOGY

I started by going through some of the HoloLens Academy lessons to get a general handle on the pipeline and how to build custom apps and push them to a live HoloLens. Here’s a quick video of capturing the results of that setup process:

[this area of work is going]

reference:

  • https://forums.hololens.com/discussion/1951/align-hologram-s-with-real-world-objects-and-or-room
  • https://www.youtube.com/watch?v=jy8XHQAFyU0
  • https://www.youtube.com/watch?v=iUmTi3_Ynus
  • https://developer.microsoft.com/en-us/windows/holographic/spatial_mapping#using_the_surface_observer
  • https://github.com/Microsoft/HoloToolkit-Unity
  • https://developer.microsoft.com/en-us/windows/holographic/spatial_mapping_in_unity
  • https://youtu.be/C7mLH_5QzvU
  • https://forums.hololens.com/discussion/1033/using-spatial-mapping-to-recognize-a-pre-scanned-space?

From Case Study on Looking Through Holes

 

MATERIAL

On March 6th, I visited the University Archives to begin searching through historical content for my primary material. It was an absolutely awe-inspiring experience that I won’t forget soon. Some highlights from my March 6th trip are below; I’ll be returning to the archives on March 7th for further review.

[this area of work is going]

TECHNICAL ART

As of March 6th, I’ve begun learning Maya to do 3D environment recreation from the 2D historical photo content.

[this area of work is going]

DESIGN

Talking to Austin Lee about my project for this class inspired me to take the Design of the ‘temporal shifting’ interface seriously. He showed me two projects which particularly sparked my imagination: the Khronos Projector and Art+Com’s Timescope.

[this area of work is going]

Other References

* == from fellow student A

blue-Portrait

For the portrait project, I was interested in exploring ways to capture the physicality / physical presence of my portrait partner. In my work, this is something I am almost always searching for, and in this course we have been exposed to many different ways of capturing, or making visible, the human body. I gravitated almost immediately to the Edgertronic ultra high speed camera, for its high image quality and ability to significantly stretch time in visual form. I was interested in capturing my partner’s breathing and pulse after undergoing an act of physical exertion, in a way that also reflected traditional portraiture. For this project, I found touchstones in the work of Collier Schorr, Marilyn Minter, Rineke Dijkstra, and Bill Viola. I wanted to see what the high speed camera could capture that I wasn’t able to capture on a DSLR at 60fps. I ended up shooting these portraits at 900fps. A 7 second recording was then stretched to over 3 minutes.
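(At 900 fps played back at a typical 30 fps, each captured second lasts 30 seconds on screen, so a 7-second take runs about 7 × 30 = 210 seconds, which is how the recording stretches past three minutes.)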

We shot through a window, and I was interested in visualizing her breath on the glass, and also the trickle of water down the glass. I wanted to create a portrait that allowed the viewer to sink into the image, to slowly realize that what they were seeing was not in fact still or static, but was living and breathing. I used a very shallow depth of field, and chose to focus most clearly on the water droplets, to emphasize the surface of the image, an attempt to recreate the feeling of looking through a pane of glass. By showing the portrait on a vertical monitor that is the exact aspect ratio of the image, I aimed to turn the monitor itself into the window – a further attempt to highlight physicality in space.

I also created a triptych version of the portrait:

In addition, I was planning to film my partner after she ran / physically worked her body up to a higher pulse/breathing rate and capture this embodied process, but this was not possible on the day of filming. We tried, but it became clear that in order to capture what I was hoping to witness, the “physical exertion” process would have to be significantly more intense than we were able to do that day. So we returned to the visualization of breath / moisture, and focused on the window shoot. I was happy with what we were able to capture, and feel I did succeed in my goal of capturing an aspect of my partner’s physical presence.

In the future, I would like to continue exploring physicality through capture methods, and would like to get MIT’s Eulerian Video Magnification scripts working with these videos. I attempted to do this, but was not satisfied with the visual quality of the openFrameworks addon, and was unsuccessful in getting the MATLAB or Python versions of the script to work. Eulerian Video Magnification, or EVM, magnifies and amplifies small color changes or movements in video, and has been used to visualize pulse and breathing in babies and in medical situations. I would be interested to see what effect this might have on a portrait like the one I created. I’m also interested in working with near-infrared to visualize pulse, and potentially MRI / ultrasound data — and specifically in finding ways to integrate these computational / data-driven processes into immersive/sensual/visual culture imagery and experiences.
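If I revisit this, a heavily simplified sketch of the EVM idea (blur spatially, bandpass temporally, amplify, add back) might look like the following; the parameters are placeholders, and the real MIT pipeline uses proper Laplacian/Gaussian pyramids and a tuned temporal filter rather than this crude approximation:

```cpp
// Crude color-magnification sketch in the spirit of Eulerian Video Magnification:
// spatially downsample each frame, temporally bandpass it with two running
// averages, amplify the band, and add it back. All parameters are placeholders.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("portrait.mov");              // assumed input file
    if (!cap.isOpened()) return 1;

    const double alpha = 50.0;                         // amplification factor
    const double wSlow = 0.05, wFast = 0.4;            // two lowpass weights; their difference acts as a bandpass

    cv::Mat frame, small, slow, fast;
    while (cap.read(frame)) {
        frame.convertTo(frame, CV_32FC3, 1.0 / 255.0);

        // Blur/downsample so broad color changes (e.g. flushing skin) are amplified, not pixel noise.
        cv::pyrDown(frame, small);
        cv::pyrDown(small, small);

        if (slow.empty()) { small.copyTo(slow); small.copyTo(fast); }
        cv::addWeighted(slow, 1.0 - wSlow, small, wSlow, 0.0, slow);  // slow exponential moving average
        cv::addWeighted(fast, 1.0 - wFast, small, wFast, 0.0, fast);  // fast exponential moving average
        cv::Mat band = fast - slow;                                   // rough temporal bandpass

        cv::Mat up1, up2, magnified;
        cv::pyrUp(band, up1);
        cv::pyrUp(up1, up2);
        cv::resize(up2, magnified, frame.size());
        cv::Mat out = frame + alpha * magnified;       // add the amplified signal back

        cv::imshow("magnified", out);
        if (cv::waitKey(1) == 27) break;               // Esc to quit
    }
    return 0;
}
```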