blue-Final

body, my body is both the title of my final project and the title of a song by the Pittsburgh-based dance/music duo slowdanger, for which I’ve made a music video as well as a VR dance piece.

For much of the semester, I’ve been working with as many methods of motion capture as I could get access to. At Carnegie Mellon, I’m fortunate to have multiple motion capture research labs at my fingertips, as well as many other methods for capturing bodies in space and motion. I began collaborating with slowdanger in March, as we are collectively interested in sensory perception, kinesthetic experience and understanding, and ways to imbue visual, digital, and mixed reality experiences with feelings of embodiment. This project has allowed me to further investigate my core research questions of human perception, representation, and communication within experiences built with evolving technology – focused specifically on the notion of embodiment, as experienced by a performer and translated to (sensed by, interacted with, and responded to by) a user/audience/witness.

In addition to these conceptual questions, I also went through a very technical investigation into the capture and display (both interactively and statically) of the data generated by the various motion capture processes available to me. I was able to work with CMU Robotics / CS student Hanbyul Joo and the Panoptic Studio, a massively multiview system for markerless motion capture using 480 VGA cameras, 30+ HD cameras, and 10 RGB-D sensors (Kinects), with hardware-based sync and calibration. I am very interested in the emergent field of volumetric capture – being able to “film” people volumetrically for use in immersive experiences such as VR, MR, and interactive installation. I want to be able to capture people as they are – without motion capture suits and retroreflective markers – and to capture multiple people in physical contact with each other, which is extremely difficult to do in traditional motion capture. Dance is the perfect form to explore this, and with slowdanger we definitely pushed the limits of each system we were working with. For the capture in the Panoptic Studio, I was told that yes, the dancers could touch, but that hugging would be very difficult. So the two dancers began in a hug. Then, in the Motion Capture Research Lab, with the leotards and markers, I was told that one dancer picking the other up would probably not work. So were born two pietàs, each dancer carrying the other. The hug from the Panoptic Studio worked (at least for my purposes, since I was only using the dense point clouds, not the skeletons), but the two pietàs resulted in a rainfall of retroreflective balls from the leotards to the floor. I did not end up using this capture in my final piece, but I’m interested in experimenting with it later to see what happens when a motion capture skeleton suddenly disintegrates.

Here’s a look into the capture process:

One of the big technical hurdles I encountered during this project was working with the PLY files generated by the Panoptic Studio. These are text files of x, y, z and r, g, b data describing a series of points that form a dense point cloud – a three-dimensional visualization of the captured object or person. Displaying these point clouds, or turning them into textured meshes, is a reasonably well-documented workflow in software such as Meshlab or Agisoft PhotoScan. However, scripting this process to generate high-resolution textured meshes from thousands of PLY files – i.e., thousands of frames of a capture – is extremely difficult, and documented workflows for it are virtually non-existent. Each PLY file I received is about 25 megabytes, and a 3-minute capture contains roughly 8,000 frames. This means that scripting either the display of these point clouds (relatively unaltered) or the creation of decimated meshes with high-resolution textures re-projected onto them pushes the limits of the processing power of our current computers. 3D applications such as Maya and Unity do not import PLYs natively. This project required a good amount of collaboration, and I’m grateful to Charlotte Stiles, Ricardo Tucker, and Golan, who all worked with me to try various methods of displaying and animating the point clouds. What ended up working was an openFrameworks app that used ofxAssimpModelLoader and ofMesh to load the point clouds, and ofxTimeline to edit them with keyframes on a timeline. When I first tried to display the point clouds, they were coming in with incorrect colors (all black); with some research we determined that the RGB values had to be reformatted from 0-255 integer values to 0-1 floats. I wanted to get these point clouds into the other 3D software I was using, Maya and Unity, but openFrameworks was the only program that could load and display them in a usable way, so I captured these animations from the ofApp using an Atomos Ninja Flame recorder, and then composited those videos with my other 3D animation into the final video using After Effects.
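For anyone attempting a similar pipeline, here is a minimal sketch of that loading-and-color-fix step, assuming a standard openFrameworks project with the ofxAssimpModelLoader addon; the file name is a placeholder, and this shows the core idea rather than the exact app we built:

```cpp
// Minimal openFrameworks sketch: load one PLY point cloud with
// ofxAssimpModelLoader, copy it into an ofMesh, and rescale the vertex
// colors from 0-255 integers to the 0-1 floats oF expects.
// "frame_0001.ply" is a placeholder file name in bin/data.
#include "ofMain.h"
#include "ofxAssimpModelLoader.h"

class ofApp : public ofBaseApp {
public:
    ofxAssimpModelLoader loader;
    ofMesh cloud;
    ofEasyCam cam;

    void setup() {
        loader.loadModel("frame_0001.ply");
        cloud = loader.getMesh(0);           // the PLY holds a single mesh
        cloud.setMode(OF_PRIMITIVE_POINTS);  // draw as points, not triangles

        // The RGB values arrive as 0-255 numbers; ofFloatColor expects 0-1,
        // so rescale each channel (this was the fix for the incorrect colors
        // described above).
        for (auto& c : cloud.getColors()) {
            c.r /= 255.f;
            c.g /= 255.f;
            c.b /= 255.f;
        }
    }

    void draw() {
        ofBackground(0);
        cam.begin();
        glPointSize(2);
        cloud.draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

Dividing each channel by 255 is the reformatting step that brought the all-black clouds back to photographic color.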

Here’s a gif from the music video:


From the music video for the song ‘body, my body’ by slowdanger (Anna Thompson and Taylor Knight), from the album body, released on MISC Records, 2017.

In addition to the music video, I was curious to create an immersive experience of the dance, using VR or AR. I had worked with Unity and Google Cardboard in a previous project for this course, but had not created anything for Oculus or Vive yet, so I decided to dive in and try to make a VR version of the dance for Oculus. For this, I worked with the motion capture data that I captured in the traditional mocap lab, using skeleton/joint rigs. For the figures, I took 3D scans of the dancers’ bodies – a full-body scan and a closer head/shoulders scan – and rigged these to the motion capture data using Maya. For the Maya elements of the project, I worked in collaboration with Catherine Luo, as the slowdanger project stretched across two classes and Catherine was in my group for the other class. She is also fabulous in Maya and very interested in character modeling, so we learned a lot together about rigging, skin weighting, and building usable models from 3D scans. Once we had these rigged models, I was able to import them from Maya into Unity, create an environment for the dancers to exist in (using 3D scans of trees taken with the Skanect Structure Sensor), and build the project for VR.

Witnessing this VR version of the dance, and witnessing others experience it, was extremely fascinating. Putting the dancers in VR allows the user to place themselves right in the middle of this duet, sometimes even passing through the dancers’ bodies (as they did not have colliders). This is usually not possible in live dance performance, and it created a fascinating situation: some users started to “dance” with the dancers, putting their bodies in similar positions and clearly “sensing” a physical connection to them, while other users were occasionally very surprised when a dancer would leap towards them in VR. This created a collision of personal space and virtual space, with bodies that straddled the line between the uncanny valley and being perceived as individual, recognizable people, because of the 3D-scanned textures and the real captured movement. The reactions I received were more intense than I expected; people largely responded physically and emotionally to the piece, saying that the experience was very surreal, or more intense because the bodies felt in many ways like real people – there was a sense of intimacy in being so close to the figures (who were clearly unaware of the user). All of this is very fascinating to me, and something I want to play with more. I also showed the VR piece to slowdanger themselves, which led to one of the most fascinating observations of all: watching the actual people experience their motion-captured, 3D-scanned avatars in virtual reality.

I’m curious what would happen if I were able to put a temporal visualization of the dancers into VR, where the textures changed over time, photographically – so facial expressions would be visible and the texture would not be static, as it was with the 3D scan rigged to a mocap skeleton. I’d like to work with the point cloud data further to get it to be compatible with Unity and Maya. I did find a tutorial on Sketchfab that loaded a point cloud into Unity, and was able to get it working, though the point cloud was rendered as triangles and would have benefited from a denser capture (higher-resolution data), and I was not able to adapt the scripts to load and display many frames in sequence to animate them.

Overall, I am very excited about the possibilities of this material, especially working with 3D scans rather than computer-modeled assets, which creates a very different experience for the user/participant/witness. I plan to work with motion capture further, especially to create situations where embodiment is highlighted or explored, and I’d really like to do some experiments in multi-person VR, MR, or AR that is affected or triggered by physical contact between people, along with other explorations of human experience enmeshed in digital experience.

blue-ForCatalog

Body, My Body is a motion capture project created in collaboration with the Pittsburgh-based movement/music duo, slowdanger, utilizing 3D body scans rigged to mocap data to create an immersive way to view a dance.

blue_Final-Progress

I’m continuing to work with the material I shot a while ago in the Panoptic Studio – determined, for some obsessive reason, to get that difficult file format, the PLY, to become usable geometry in a 3D editing program (Unity or Maya), and to play (i.e. display and swap out) thousands of frames to create a volumetric animation that can be exported for and viewed in a VR/AR device. Last time, I got the PLYs to animate in openFrameworks. While this was exciting, my subsequent goal has been to get this material into a program like Unity or Maya so I can treat these animations – PLY sequences – as just another 3D asset and create an environment around the dancers (the subjects of my Panoptic Studio capture). In openFrameworks, while I could get the PLYs to display and animate, I could not add other 3D assets, such as the 3D scans of trees that I created and edited in Maya.

The question of how to capture, display, and animate point clouds, and generate usable meshes from this data, is a very current challenge in computer graphics. Much of the information I’m finding about this is being published concurrently with my explorations. I’m also discovering a large gap between the research communities and production communities when it comes to this material. One of the most fascinating parts of this project, for me, is exploring this disconnect and inserting myself in the middle of this communication stream, attempting to bring these communities (or at least the work of these communities) together. Can I dive into the (non-arts / creative coding) research while coming up with something actually usable and communicable to the arts / creative coding communities? This is a goal I’ll pursue for my thesis.

I’ve done some smaller experiments with the PLYs and 3D scans, and I got Vuforia working to test augmented reality:

Here’s an unaltered PLY in Meshlab:

I also got this tutorial working to display a PLY in Unity. I hope to continue with this and write a script to load and display multiple (ideally thousands, lol) PLYs to create an animation – and export this for Oculus for the final project. Generating the PLYs as triangles creates usable geometry that Unity understands, so I can treat these game objects as expected and will hopefully be able to create a VR project in a fairly streamlined way.
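As a rough, standalone illustration of the “points as triangles” idea (not the tutorial’s actual script), a small converter could read an ASCII PLY and write an OBJ in which every point becomes a tiny triangle, since both Unity and Maya import OBJ natively. It assumes vertex lines begin with x y z, drops the color data, and uses an arbitrary triangle size:

```cpp
// Hypothetical PLY -> OBJ converter: every point in an ASCII PLY becomes a
// tiny triangle, producing geometry that imports into Unity or Maya directly.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: ply2tris input.ply output.obj\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::ofstream out(argv[2]);
    if (!in || !out) { std::cerr << "could not open files\n"; return 1; }

    // Read the header to find the vertex count, then skip to the data.
    std::string line;
    long vertexCount = 0;
    while (std::getline(in, line) && line.rfind("end_header", 0) != 0) {
        std::istringstream ss(line);
        std::string a, b;
        ss >> a >> b;
        if (a == "element" && b == "vertex") ss >> vertexCount;
    }

    const float s = 0.002f;   // half-size of each triangle; tune to the capture scale
    long written = 0;
    for (long i = 0; i < vertexCount && std::getline(in, line); i++) {
        std::istringstream ss(line);
        float x, y, z;
        if (!(ss >> x >> y >> z)) continue;   // skip malformed lines

        // Emit a tiny triangle centered on the point (OBJ indices are 1-based).
        out << "v " << x - s << ' ' << y - s << ' ' << z << '\n';
        out << "v " << x + s << ' ' << y - s << ' ' << z << '\n';
        out << "v " << x     << ' ' << y + s << ' ' << z << '\n';
        long base = written * 3;
        out << "f " << base + 1 << ' ' << base + 2 << ' ' << base + 3 << '\n';
        written++;
    }
    std::cout << "wrote " << written << " triangles\n";
    return 0;
}
```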

Here’s a screenshot of the Unity editor, with the parameters for editing the PLY in the Inspector:

blue-Event

I’ve been working with footage shot in the Panoptic Studio at CMU, a markerless motion capture system developed by CMU CS and Robotics PhD students. I’m interested in volumetric capture of the human body – not rigging a model to a skeleton as in traditional motion capture, but capturing the actual photographic situation in 3D, in my case the human form. I am collaborating with the Pittsburgh dance and music duo slowdanger, composed of Anna Thompson and Taylor Knight. I’m interested in capturing actual video of real people, volumetrically, and creating situations in which to experience and interact with them.

The research question of how to work with and display this data is a challenge from multiple perspectives. First, a capture system must exist to generate the data at all. The Panoptic Studio uses 480 cameras and 10 Kinects to capture video and depth data in 360 degrees. Second, the material is extremely expensive to process in terms of a computer’s RAM, CPU, and GPU. I worked for multiple weeks to convert the resulting point clouds – i.e., series of (x, y, z, r, g, b) points that describe a three-dimensional form – into textured meshes, and then into OBJ sequences that could be manipulated in a 3D program such as Maya or Unity. This had minimal success: I was able to get a few meshes to load and animate in Unity, but without textures. I then decided to work with the point clouds themselves, to see what could be done with those. The resulting tests load the PLY files – in this case 900 of them, or 900 frames – and draw them to the screen one after another, creating an animation. I experimented with skipping points to create a less dense point cloud, and with displaying nearly every point to see how close I could get to photographic representation.
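The playback logic of those tests can be condensed into a sketch like the following, again using ofxAssimpModelLoader; the folder name, the frame rate, and the point-skipping stride are placeholder values rather than the settings of the actual tests:

```cpp
// Condensed sketch of the PLY-sequence playback described above. Each frame
// is loaded up front, thinned by keeping every Nth point, color-corrected
// from 0-255 to 0-1, and then drawn one per display frame.
#include "ofMain.h"
#include "ofxAssimpModelLoader.h"

class ofApp : public ofBaseApp {
public:
    std::vector<ofMesh> frames;
    std::size_t current = 0;
    std::size_t stride = 4;    // keep every 4th point; 1 keeps the full cloud
    ofEasyCam cam;

    void setup() {
        ofDirectory dir("ply_sequence");   // placeholder folder inside bin/data
        dir.allowExt("ply");
        dir.listDir();
        dir.sort();

        for (std::size_t i = 0; i < dir.size(); i++) {
            ofxAssimpModelLoader loader;
            loader.loadModel(dir.getPath(i));
            ofMesh full = loader.getMesh(0);

            // Thin the cloud by copying every Nth vertex/color pair.
            ofMesh thin;
            thin.setMode(OF_PRIMITIVE_POINTS);
            for (std::size_t v = 0; v < full.getNumVertices(); v += stride) {
                thin.addVertex(full.getVertex(v));
                if (v < full.getNumColors()) {
                    thin.addColor(full.getColor(v) / 255.f);   // 0-255 -> 0-1
                }
            }
            frames.push_back(thin);
        }
        ofSetFrameRate(30);   // one point-cloud frame per drawn frame
    }

    void update() {
        if (!frames.empty()) current = (current + 1) % frames.size();
    }

    void draw() {
        ofBackground(0);
        cam.begin();
        if (!frames.empty()) frames[current].draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

Pre-loading every frame is memory-hungry at these file sizes, which is part of why skipping points (a larger stride) mattered in practice.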

The resulting artifacts are proofs of concept rather than artworks in and of themselves. I was not originally thinking of this, but the footage has been likened to the early film tests of Edison, Muybridge, and Cocteau. It’s interesting to me that such a new technology generates material which feels very old – but at the same time, it is oddly appropriate, as we are basically at a similar point in creating visual content with this medium as they were with the early film tests of the late 1800s and early 1900s. It is such a challenge simply to process and display this content that we are experimenting with the form in similar ways.

https://www.youtube.com/watch?v=-CM9W6pYSEo

In a further iteration of this material, I would like to get this content into an Oculus to emphasize its volumetric qualities, giving the viewer the ability to move around the forms in 360 degrees.

Here are the tests:

Creating volumetric films, or “4D holograms,” is catching investor and industry attention as a way to take virtual, augmented, and mixed reality into a new domain beyond CGI – and there is a race to see who will do it best/first/most convincingly. 8i and 4D Views are two such companies. I do feel that a lot of assumptions and exaggerated claims are currently being made around this technology. It’s interesting to look at the types of content that come out of a very nascent technology – and to draw parallels between the early filmmaking / photography community and this industry / research. Who is making what, who is capturing whom, and why? For whom?

The Panoptic Studio at CMU, however, does not come from a filmmaking / VFX motivation, but rather from a machine learning research question: detecting skeletons in order to interpret social interactions through body language. As a result, the question of visually reconstructing these captures has not been heavily researched.

blue-EventProcess

I’m in the midst of a deep, dark rabbit hole of meshing point clouds using Meshlab and automating the whole process through scripting. I am attempting to create a workflow that starts in Meshlab, uses a filter script and Meshlab Server to batch process the point clouds (.ply) into usable .obj files, and then brings these OBJs into Unity to animate using a package called Mega Cache, which takes in .obj sequences.
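For context, the batch step might be driven by something as small as the following, which walks a folder of .ply frames and shells out to Meshlab Server with a saved filter script; the folder names and the .mlx file are hypothetical, and the -i / -o / -s flags should be checked against the installed MeshLab version:

```cpp
// Hypothetical batch driver for the Meshlab Server step described above:
// run meshlabserver on every .ply in a folder with a saved .mlx filter
// script (exported from MeshLab's filter-script dialog).
// Requires C++17 for <filesystem>.
#include <cstdlib>
#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

int main() {
    const fs::path inDir  = "ply_frames";              // placeholder input folder
    const fs::path outDir = "obj_frames";              // placeholder output folder
    const fs::path script = "mesh_and_decimate.mlx";   // placeholder filter script
    fs::create_directories(outDir);

    for (const auto& entry : fs::directory_iterator(inDir)) {
        if (entry.path().extension() != ".ply") continue;

        fs::path out = outDir / entry.path().stem();
        out += ".obj";

        // Build and run: meshlabserver -i <in.ply> -o <out.obj> -s <script.mlx>
        std::string cmd = "meshlabserver -i \"" + entry.path().string() +
                          "\" -o \"" + out.string() +
                          "\" -s \"" + script.string() + "\"";
        std::cout << cmd << std::endl;
        if (std::system(cmd.c_str()) != 0) {
            std::cerr << "meshlabserver failed on " << entry.path() << std::endl;
        }
    }
    return 0;
}
```

The resulting .obj sequence is the kind of input a package like Mega Cache expects to read in.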

I’ve just discovered some material that seems to suggest that Unity can deal with point clouds using plugins, and I will pursue this next.

I’m meeting with the Panoptic Studio team tomorrow evening to talk through their data output and the workflow I’ve been investigating.

My highest priority right now is to achieve functional playback, in animation form, with meshes that maintain a relatively high level of fidelity to the original point cloud. I’d also like to display the point clouds themselves as an animation, but have not figured this out yet. The content is exciting, but challenging to work with.

blue-EventProposal

I’ve been working with data from a shoot I did recently in the Panoptic Studio (a PhD project in Robotics and CS). The Panoptic Studio is a dome with 480 VGA cameras, 30+ HD cameras, 10 Kinects, and hardware-based sync and calibration, built for multi-person motion capture.

The output takes the form of skeletons (as in traditional mocap) and also dense point clouds that can be meshed. I filmed two dancers in this dome, and am working on ways to express the data, primarily by working with the point clouds and meshing them to create an animation.

The event I’d be capturing is the interaction between the two dancers. I’m interested in this as a prototype for understanding how to work with this data, as there is not much documentation on it. I’ve been working with the data using Meshlab and Blender, but am interested in potentially using openFrameworks to create spheres on the individual points of the point cloud, to create usable geometry (a rough sketch of that idea is below).
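Here is a rough sketch of that sphere-per-point idea, assuming the cloud is already loaded into an ofMesh (for instance with ofxAssimpModelLoader, with colors already rescaled to 0-1); the radius, stride, and sphere resolution are arbitrary placeholders:

```cpp
// Sketch: copy a tiny low-poly sphere onto every Nth point of a point cloud
// so the result is solid, indexed triangle geometry rather than bare points.
#include "ofMain.h"

ofMesh spheresFromPoints(const ofMesh& cloud, float radius = 0.5f, std::size_t stride = 10) {
    ofMesh out;
    out.setMode(OF_PRIMITIVE_TRIANGLES);

    // Very low-resolution template sphere, indexed as plain triangles.
    ofMesh unit = ofMesh::sphere(radius, 6, OF_PRIMITIVE_TRIANGLES);

    for (std::size_t i = 0; i < cloud.getNumVertices(); i += stride) {
        glm::vec3 p = cloud.getVertex(i);
        std::size_t base = out.getNumVertices();

        // Copy the template sphere, offset to this point, tinted with its color.
        for (std::size_t v = 0; v < unit.getNumVertices(); v++) {
            out.addVertex(unit.getVertex(v) + p);
            if (i < cloud.getNumColors()) out.addColor(cloud.getColor(i));
        }
        // Re-index the copied sphere so its triangles survive the offset.
        for (std::size_t n = 0; n < unit.getNumIndices(); n++) {
            out.addIndex(static_cast<ofIndexType>(base) + unit.getIndex(n));
        }
    }
    return out;   // out.save("spheres.ply") writes the result back out as a mesh
}
```

Because the output is plain indexed triangles, it can be saved out with ofMesh::save() and then converted for Maya or Unity.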

Here’s what I’ve been doing so far:

blue-Place

I am interested in the idea of “the topography of our intimate being” discussed by the theorist Gaston Bachelard in his book “The Poetics of Space.” In this book, he examines and explores our relationship to elements of home, such as attics, cellars, drawers, nests, and shells. I was also inspired by Sophie Calle’s photography of other people’s hotel rooms, and the video documentaries of what’s in people’s pockets.

I have a particular interest in the use and contents of drawers, specifically top drawers. Personally, I’ve always had a strong association with my own top drawers, mostly dresser and bedside table. When I was younger, this was where I would stash my special objects, my secret things, my collected totems. Now, my association remains similar, yet slightly more disorganized, as the drawers can become catch-alls for special items and also random junk alike. These drawers seem to me to be a particular manifestation in material form of my thought processes, my life over time, and the ephemera of personal rituals.

Therefore, I set out on the nosy task of creating a collection of other people’s top drawers. I wondered if my hypothesis of reaching the “intimate topography” of the drawer-owners through the process of creating 3D photogrammetry models of their drawers would be validated. Bachelard says, “A house that has been experienced is not an inert box. Inhabited space transcends geometrical space.” I’m interested in the way this relates to the process of creating 3D geometry from this inhabited space.

I’m also interested in the spatial memory or association of someone’s house in their own mind. When I did the scans, I would ask the drawer-owner to draw for me a rough floorplan of their house or office. I’ve included one of these floorplans, extruded using Blender, as the “floor” of the Unity environment. The idea of capturing the “room tone” audio of every room where the drawer is located was suggested, and I love this idea. In a further iteration, I would explore this. I’m interested in the way this project may blur the concept of “portrait” through a personal place or object, and in creating a stylized / art documentary about a person through an investigation into looking (at the people themselves, their environments, and their objects). This concept has come up in my artwork before, and it’s interesting to see it crop up again unconsciously. My work seems to create opportunities for me to experience and almost curate social rituals – for myself and for the participants involved. Sometimes I would like to be able to share the process of making the work (the social experience of “collaborating” with the drawer owners) with the viewer in addition to the final product.

Through putting these models into an explorable VR environment, I wanted to give the viewer an intimate encounter with the contents of the drawers – the ability to explore and investigate a place which is normally entirely private to the owner. This was my first time using photogrammetry and Unity, and I set myself the technical challenge of creating an environment to explore using Google Cardboard. I am happy to have tackled this, and am continuing to use these techniques in other projects now.

blue-PlaceProposal

I am interested in intimate spaces. I’m interested in places where we stash things, hide things, forget things. I’m interested in the topography of our top drawers.

Gaston Bachelard wrote, in his book The Poetics of Space (1957/1964), about the “topography of our intimate being.” He wrote about the phenomenology of attics, basements (cellars), nests, cabinets, drawers, and the house as a whole.

I would like to create a system with which to immerse a viewer in the landscape of someone’s top drawer. This could be a top drawer of a desk, or dresser, or kitchen. I know, personally, that the back of the top drawer is often a place of hidden things, and whenever I move homes (which in the past decade has been all too often), I am always surprised at what I find there when packing up my belongings.

Technically, I am interested in figuring out a way to create a 3D scan of the contents and space of a top drawer, and I would like to place this in a 3D viewer, ideally a Google Cardboard, where a person could have an intimate, immersive encounter with this often overlooked but richly revealing space.

blue-Portrait

For the portrait project, I was interested in exploring ways to capture the physicality / physical presence of my portrait partner. In my work, this is something I am almost always searching for, and in this course we have been exposed to many different ways of capturing, or making visible, the human body. I gravitated almost immediately to the Edgertronic ultra-high-speed camera, for its high image quality and its ability to significantly stretch time in visual form. I was interested in capturing my partner’s breathing and pulse after an act of physical exertion, in a way that also reflected traditional portraiture. For this project, I found touchstones in the work of Collier Schorr, Marilyn Minter, Rineke Dijkstra, and Bill Viola. I wanted to see what the high-speed camera could capture that I wasn’t able to capture on a DSLR at 60fps. I ended up shooting these portraits at 900fps; a 7-second recording, played back at a standard frame rate, stretches to over 3 minutes.

We shot through a window, and I was interested in visualizing her breath on the glass, as well as the trickle of water down it. I wanted to create a portrait that allowed the viewer to sink into the image, to slowly realize that what they were seeing was not in fact still or static, but was living and breathing. I used a very shallow depth of field and chose to focus most clearly on the water droplets, emphasizing the surface of the image – an attempt to recreate the feeling of looking through a pane of glass. By showing the portrait on a vertical monitor with the exact aspect ratio of the image, I aimed to turn the monitor itself into the window – a further attempt to highlight physicality in space.

I also created a triptych version of the portrait:

In addition, I was planning to film my partner after she ran / physically worked her body up to a higher pulse/breathing rate and capture this embodied process, but this was not possible on the day of filming. We tried, but it became clear that in order to capture what I was hoping to witness, the “physical exertion” process would have to be significantly more intense than we were able to do that day. So we returned to the visualization of breath / moisture, and focused on the window shoot. I was happy with what we were able to capture, and feel I did succeed in my goal of capturing an aspect of my partner’s physical presence.

In the future, I would like to continue exploring physicality through capture methods, and would like to get MIT’s Eulerian Video Magnification scripts working with these videos. I attempted to do this, but was not satisfied with the visual quality of the openFrameworks addon, and was unsuccessful in getting the MATLAB or Python versions of the script to run. Eulerian Video Magnification (EVM) magnifies and amplifies small color changes or movements in video, and has been used to visualize pulse and breathing in babies and in medical situations. I would be interested to see what effect this might have on a portrait like the one I created. I’m also interested in working with near-infrared to visualize pulse, and potentially with MRI / ultrasound data – and specifically in finding ways to integrate these computational / data-driven processes into immersive / sensual / visual culture imagery and experiences.

blue-PortraitPlan

I’m interested in using the high-speed and possibly the thermal camera to capture Faith in a way that focuses on her physicality when put through a process of exertion, physical & mental. I’m interested in creating a situation where I’m able to visualize this exertion in the form of sweat, breath, tears, or saliva. I’m very interested in capturing her breathing in detail – to this end, I am planning to have her run to work up a sweat, and then to film her breathing when she stops running. I would like to situate this portrait outdoors, with the environment around her, and work WITH the cold weather to help visualize this physical process. I’m drawing influence from the artists Marilyn Minter, Collier Schorr, Rineke Dijkstra, and Bruce Nauman, and also from advertising imagery of athletes.

blue-SEM

I was curious what something from my kitchen might look like under the electron microscope. So I chose some quinoa, because I suspected its internal structure might be revealed; since it unfurls when you cook it, maybe I could find out how that works on a micro-level. I also had three colors of quinoa – black, white, and red – and thought they might look different from each other. Turns out, all colors of quinoa look the same to electrons…

Here is the quinoa from a familiar view:

Then we zoomed in to find some interesting terrain:

The most interesting part of the quinoa turned out to be the place where the seed had separated from the stalk:

I would like to know more about what exactly is going on in this image, but it seems to show signs of having been separated (or torn, cut, etc.) from other biological material. I feel like this image could be a set piece from some sci-fi movie, too.

Here’s the closest we zoomed in:

We could see individual cells in the quinoa. Pretty awesome.

Here’s a quinoa plant for reference:

Next, in my series of “trendy foods under the microscope” I’ll examine a kale smoothie.

blue-Introduction

Hi lovely people,

I’m in the first year of the Masters in Tangible Interaction Design (MTID) program. My first “capture” love was photography (specifically portraiture and narrative photography), followed by video, and then video (media) for live performance. I’ve worked as a projection designer for theatre/dance, and also as a producer and creative for experiential design. I’m very interested in exploring ideas of embodiment and communication in digital experiences, and in finding ultra-sensory ways to knit together interactivity, digital experiences, and physical environments.

Excited for the fun stuff ahead!