jamodei-FinalProject

Point Cloud Gardens

point cloud garden_banner

A collection of depth-capture collages from daily life arranged into an altar for personal reflection and recombination.

point cloud gardens - banner 2

Overview:

This project began with a desire to take the drippy tools of depth capture and turn them into a means of facilitating a daily drawing practice with point clouds. As I worked on creating this, ideas of rock gardens and altars came to me from a desire to create personal, intimate spaces for archive, arrangement, and rest. How can I carry the experience of my crystal garden in my pocket? A fully functional mobile application is the next step for this project. What are the objects in one's everyday depth garden? For me, it was mostly my friends and interesting textures. What would you like to venerate in your everyday?

point cloud selection #5 point cloud selection #6

Narrative:

This project was a continuation of my drawing software, with the aim of activating the depth camera now included in new iPhones as a tool for collaging everyday life – drawing a circle around a segment of reality and capturing the drippy point clouds for later reconcatenation in a 3D/AR-type space.

In this version of the project, I narrowed the scope to creating a space that could be navigated in 3D – a space that would allow viewers to 'fly' through the moments captured from life. In a world with so much noise and image-overload, I wanted to create a garden space that would let people collect, arrange, and care for reflections from their everyday – to traverse senses of scale and arranged amalgamations outside of our material reality.

point cloud garden_banner_03

In developing this project, I had to face the challenge of getting the scans in .PLY format from the app Heges into my Unity-based driving space. Figuring out how to make this workflow possible was a definite challenge for me. The final pipeline solution was to use Heges to capture the scan in an unknown PLY format; then to bring it into MeshLab to decimate the point count and convert it to the binary little-endian PLY format that Keijiro Takahashi's Pcx Unity package expects. From there I could start to collage and build a space in Unity that could then be explored in play mode.
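For reference, the conversion step can also be scripted instead of done by hand in MeshLab's GUI. Below is a minimal TypeScript (Node) sketch of the format change Pcx needs: reading a vertex-only ASCII PLY (one "x y z r g b" line per point, colors as 0–255 integers) and rewriting it as binary little-endian. The file names are placeholders, and a real Heges export may carry extra properties that MeshLab would normally strip first – this is a sketch of the idea, not my actual pipeline.

```typescript
// ply-to-binary.ts — convert a vertex-only ASCII PLY ("x y z r g b" per line)
// into the binary_little_endian layout that Pcx reads. Assumes no faces or
// normals; "scan-ascii.ply" / "scan-binary.ply" are placeholder names.
import { readFileSync, writeFileSync } from "fs";

const input = readFileSync("scan-ascii.ply", "utf8").split(/\r?\n/);

// --- parse the header: find the vertex count and where the body starts ---
let vertexCount = 0;
let bodyStart = 0;
for (let i = 0; i < input.length; i++) {
  const line = input[i].trim();
  if (line.startsWith("element vertex")) vertexCount = parseInt(line.split(/\s+/)[2], 10);
  if (line === "end_header") { bodyStart = i + 1; break; }
}

// --- rebuild a binary little-endian header with float xyz + uchar rgb ---
const header =
  "ply\n" +
  "format binary_little_endian 1.0\n" +
  `element vertex ${vertexCount}\n` +
  "property float x\nproperty float y\nproperty float z\n" +
  "property uchar red\nproperty uchar green\nproperty uchar blue\n" +
  "end_header\n";

// --- pack each vertex line as 3 float32 + 3 uint8 (15 bytes per point) ---
const body = Buffer.alloc(vertexCount * 15);
for (let v = 0; v < vertexCount; v++) {
  const [x, y, z, r, g, b] = input[bodyStart + v].trim().split(/\s+/).map(Number);
  const o = v * 15;
  body.writeFloatLE(x, o); body.writeFloatLE(y, o + 4); body.writeFloatLE(z, o + 8);
  body.writeUInt8(r, o + 12); body.writeUInt8(g, o + 13); body.writeUInt8(b, o + 14);
}

writeFileSync("scan-binary.ply", Buffer.concat([Buffer.from(header, "ascii"), body]));
console.log(`wrote ${vertexCount} points`);
```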

people playing point cloud garden #2, people playing point cloud garden #1

While I was happy with the aesthetics of the result, I wanted to take this project much further in the following ways: first, I wanted the process described above to be able to happen 'offscreen' in one click. I also wanted the garden space to be editable as well as traversable. Lastly, in the stretch-goal version, I wanted it to be a mobile application that could be carried around in one's pocket. As I continue to explore depth-capture techniques and their everyday applications, these are steps I will strive toward.

Above is video documentation of a fly-through of the point cloud garden.


jamodei – Interactive Manufactory – Weird Shapes in Public – check in


Packing and Cracking – Getting Weird Shapes Out in Public

NC Gerrymandered Districts
NC’s Gerrymandered Districts ready for laser cutting into coasters (and other weird shape interactions)


I will be taking my pass for this project. Nonetheless, here is where my research currently stands, and where I am interested in taking these ideas in the future.

I am quite busy with tech rehearsals for my video/projection design of Atlas of Depression in the School of Drama (which runs April 17–19).

Atlas of Depression tech still
Tech process photo! Using ISF shaders to create affective landscapes and manipulate live video feeds, too.

Anyway:

‘Packing and Cracking’

I am working on creating/writing/producing an interactive, map-based theater project called 'Packing and Cracking' that confronts and explores the harsh realities of gerrymandering in my home state of North Carolina. My collaborator, Rachel Karp, and I have described this project as:

“A multimedia mapmaking event, ‘Packing and Cracking’ explores redistricting–and the widespread manipulation of redistricting known as gerrymandering–in America today. ‘Packing and Cracking’ focuses on redistricting in that state, whose maps have been so racially and partisanly manipulated that it has led to the state no longer being considered a democracy. Set on a theater-sized map of North Carolina, with the audience arranged across it to match the state’s population demographics and distribution, ‘Packing and Cracking’ uses cutting-edge redistricting software and North Carolina’s particular redistricting story to draw and redraw district lines around audience members in real time, demonstrating how easy and precise districting can be and how little the people affected are involved.”

Weird Shapes

The main idea behind this project is to put the weird, gerrymandered shapes that constitute these districts into everyday objects that people can interact with. The first impulse was to make these odd shapes visible so that discussion could happen around them and what they are. I was inspired by this project that does this with jewelry. At first I wanted to use Shapeways to mass-produce a cheaper, more distributable version of this project – or one where people could upload their own districts. After our discussion, just replicating that project was not interesting enough on its own, and I moved on to the idea of mixing failure with these weird shapes. I did, however, make and order a cheaper version of this necklace with an engraved hashtag, which arrives tomorrow.

NC-6 Necklace
“A diamond is forever, but a district lasts a decade.”
Failure

Currently, my interest is in creating a website where people can order a variety of household/everyday objects cut out in gerrymandered shapes. The hope is that the shapes of these objects will make using them result in failure – and hopefully draw attention to the immense complexity surrounding gerrymandering via humorous failure. I want people to begin placing their own voter disenfranchisement – a result of gerrymandering – into their own bodies through their performance with the failed objects. For example, I am hoping to cut the first image in this post out as a set of drinking-cup coasters in proportional scale to each other. This would make some of the very compact districts useless as coasters, and some of the large districts with odd holes in them silly to use, too. I am in the process of getting laser cutter training at the School of Drama, and will make this particular item available on my Glitch-based site via the Ponoko API (a sketch of tracing district boundaries into cut paths follows the list below). Other ideas for failed objects include:

  • Silicone oven mitts whose gerrymandered shapes make it hard not to burn yourself.
  • Weird shaped pillows that make it hard to sleep.
  • Disposable tissues that make it hard to blow your nose.
  • Custom-cut sticky-note pads that make it hard to take notes.
  • Tote bags that are not good for holding items.
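As a sketch of the fabrication step referenced above: laser-cutting services generally take vector files, so the district outlines need to become cut paths. Below is a minimal TypeScript sketch that traces a district boundary – assumed here to be a GeoJSON-style polygon already projected to flat coordinates – into an SVG outline suitable for upload. The scale factor, file names, and the tiny example district are all placeholders; this is not the Ponoko API itself, just the shape-to-vector step.

```typescript
// district-to-svg.ts — trace a district boundary into an SVG cut path.
// Assumes a GeoJSON-style Polygon whose coordinates are already projected to
// a flat plane with the origin at the lower-left; scale and names are
// placeholders.
import { writeFileSync } from "fs";

type Ring = [number, number][];
interface DistrictPolygon { coordinates: Ring[]; } // outer ring, then holes

function ringToPath(ring: Ring, scale: number): string {
  const pts = ring.map(([x, y]) => `${(x * scale).toFixed(2)},${(y * scale).toFixed(2)}`);
  return `M ${pts[0]} L ${pts.slice(1).join(" ")} Z`;
}

function districtToSvg(district: DistrictPolygon, scale = 0.001): string {
  const all = district.coordinates.flat();
  const w = Math.max(...all.map(([x]) => x)) * scale;
  const h = Math.max(...all.map((p) => p[1])) * scale;
  // Each ring becomes its own subpath, so interior holes get cut as well —
  // which is exactly what makes the oddly shaped districts fail as coasters.
  const d = district.coordinates.map((ring) => ringToPath(ring, scale)).join(" ");
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${w}mm" height="${h}mm" viewBox="0 0 ${w} ${h}">
  <path d="${d}" fill="none" stroke="blue" stroke-width="0.1"/>
</svg>`;
}

// Hypothetical example: a tiny square "district" with a square hole in it.
const district: DistrictPolygon = {
  coordinates: [
    [[0, 0], [10000, 0], [10000, 10000], [0, 10000], [0, 0]],
    [[4000, 4000], [6000, 4000], [6000, 6000], [4000, 6000], [4000, 4000]],
  ],
};
writeFileSync("district-coaster.svg", districtToSvg(district));
```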


http://www.ismydistrictgerrymandered.us/


jamodei–lookingOutwards03

I am interested in making a simple, poetic intervention/noise creation/corruption of the transactional, capitalist systems of desire production embedded into our everyday (digital) communications.

Here are a few pieces of research that I have found exploring this:

Drone Triptychs

drone triptych 1

drone triptych 2, drone triptych 3


by Tivon Rice, 2016
photogrammetric digital prints
Link Here

Rice describes this project as:

“These images and texts represent Rice’s studies of Seattle’s rapid change. As many sites and landscapes in the city disappear, a new kind of visuality emerges: one shaped by economic forces, the influx of tech, and developments that often favor these interests rather than those of the diverse communities that call Seattle home.

In Drone Triptychs, these scenes and locations are explored through a digital process – photogrammetry – which generates a virtual 3D model by analyzing hundreds of two-dimensional photos. In order to access all possible perspectives, many of the photos were captured using a drone, an airborne camera funded by 4Culture’s 2015 Tech Specific grant.

The models that result from photogrammetry can then be scaled, rotated, inverted, animated, textured, or rendered as a wireframe. This act of virtualizing a space, which often creates a glitchy, hollow, or flattened shell of the original site, seems similar to many of the large-scale image-making processes at work in the city: regrading, demolition, faux preservation, façadism.

The accompanying texts further explore a virtual or uncanny representation of Seattle’s image. Working in collaboration with Google AMI – Artists and Machine Intelligence, a computer was trained to “speak” by analyzing over 250,000 documents from Seattle’s Department of Planning and Development. Ranging from design proposals and guidance, to public comments and protest, the vocabulary that resulted from this training was used by the software to automatically generate captions and short stories about each photo. In these stories, the “voices” of city planners and the public are put into a virtual dialogue (or argument) with each other as they describe each scene.”

What I enjoy about this project is the pairing of disappearing visual landscapes with a poetic reinterpretation of the language that acts as a force creating that disappearance.

Fifteen Unconventional Uses of Voice Technology

Article Link Here

This is an interesting article Golan showed me about a course taught to explore creative uses of voice technology. The GitHub repo and syllabus from the class are filled with interesting resources.

–Objects summoned in VR by voice in Aidan Nelson’s “Paradise Blues”–


jamodei-DrawingSoftware


Exploring Depth Capture Collage Tools


Collages Created:

The aim of this project is to activate the depth camera now included in new iPhones and turn it into a tool for collaging everyday life – to be able to draw a circle around a segment of reality and let the drippy point clouds be captured for later reconcatenation in a 3D/AR-type space. This turned out to be quite a challenging process, and it is still ongoing. At this point, I have pieces of three different sketches running, and I am working to house everything under one application. I spent a lot of time in my research phase doing Swift tutorials so that I could get these developer sketches running. I am continuing this research as I move toward making my own app by combining and piping together the pieces of all the research featured in this post.


In the video 'Point Cloud Streamer Example,' you can see an example of the live streaming point cloud capture that I am striving to implement. This was the piece of research that first set me down this path. This Apple developer sketch has by far the best detail of any depth-camera capture I have seen, and is of the fidelity that I want to use in the final app.
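For context on what a streamer like this is doing under the hood: a depth frame becomes a point cloud by unprojecting each pixel through the pinhole camera model using the lens intrinsics. The TypeScript sketch below shows that standard math in a platform-agnostic way (the Apple sample does the equivalent on-device); the intrinsic values and the tiny test frame are made up for illustration.

```typescript
// unproject.ts — how a depth frame becomes a point cloud: each pixel (u, v)
// with depth z is unprojected via the pinhole model using the intrinsics
// (fx, fy = focal lengths in pixels; cx, cy = principal point).
interface Intrinsics { fx: number; fy: number; cx: number; cy: number; }

function depthToPoints(
  depth: Float32Array, // meters, row-major, width * height values
  width: number,
  height: number,
  k: Intrinsics
): Float32Array {
  const pts = new Float32Array(width * height * 3);
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const z = depth[v * width + u];
      const o = (v * width + u) * 3;
      pts[o]     = ((u - k.cx) / k.fx) * z; // x
      pts[o + 1] = ((v - k.cy) / k.fy) * z; // y
      pts[o + 2] = z;                       // z
    }
  }
  return pts;
}

// Tiny 2x1 example frame with made-up intrinsics:
const cloud = depthToPoints(Float32Array.from([1.0, 2.0]), 2, 1,
  { fx: 500, fy: 500, cx: 1, cy: 0.5 });
console.log(cloud);
```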


In the video 'Depth Capture Background Removal Example,' you can see another developer sketch that uses the depth camera to remove the background and offers the viewer the chance to replace it with a picture of their choice – a sort of live green-screening via depth. In the final version of the app, I hope to give the user the freedom to scale the 'Z' value and select how much depth (meaning distance from the lens to the back of the visible frame) they want to keep in the captured selection, for later collage.
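To make that Z-scaling idea concrete, here is a minimal, platform-agnostic TypeScript sketch of the thresholding I have in mind (the actual app would do the equivalent in Swift against the camera's depth buffers): given a depth map in meters and an RGBA frame, every pixel farther than the user's chosen cutoff is made transparent. All names and values here are illustrative.

```typescript
// depth-clip.ts — sketch of depth-based background removal: keep only pixels
// closer than maxDepthMeters, clearing the rest to transparent.

/**
 * rgba: width*height*4 bytes; depth: width*height floats (meters from lens).
 * Pixels beyond the cutoff get alpha 0, producing the "green-screen" effect.
 */
function clipByDepth(
  rgba: Uint8ClampedArray,
  depth: Float32Array,
  maxDepthMeters: number
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba); // copy so the source frame is untouched
  for (let i = 0; i < depth.length; i++) {
    if (!(depth[i] <= maxDepthMeters)) { // NaN (no depth reading) also clears
      out[i * 4 + 3] = 0;
    }
  }
  return out;
}

// Example: keep everything within 1.2 m of the lens on a 2x2 test frame.
const frame = new Uint8ClampedArray(16).fill(255);
const depthMap = Float32Array.from([0.5, 1.0, 2.0, NaN]);
console.log(clipByDepth(frame, depthMap, 1.2)); // far + NaN pixels get alpha 0
```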

Finally, I found an app called 'Heges' that has onboard depth-camera capture. It captures depth in a proprietary format, and then lets users export the depth data as .PLY files. This is what I used to make my collages, and to mock up the kind of space I hope to manipulate in a final version for 3D creation. I had to use MeshLab to open the .PLY files, and then I rotated and adjusted them before exporting stills that I collaged in Photoshop. The app was pretty effective, but the range of its depth capture was less extensive than what seemed available in the 'Point Cloud Streamer Example.' It does have built-in 3D and AR viewports, which are a plus, and I will eventually try to incorporate such viewports into my own depth-capture tool.


Point Cloud Streamer Example

Depth Capture Background Removal Example

Heges Application Documentation


jamodei-mask

On Being: Dorito Dreaming

On Being: Dorito Dreaming, still

This project is an attempt to pay a sublime, transubstantiative homage to a product with which I have a deep connection – in my case, Doritos. I was inspired to consider the concept of a mask in a more relational sense. We are asked to embody the values of many products many times per day – and instead of letting that subtextual marketing interpellation happen in the background, I wanted to embrace it, to own it, to become it. To sync up with my material reality in hopes of finding space to make choices among the many, many, many consumption-based asks that hit me in invisible waves. To create a visually realized version of the masks we are asked to wear every time we look at the bag of Doritos, the Starbucks coffee, the Crest toothpaste, or the Instagram logo, etc.


Research-wise, I was initially interested in face-mapping and face-swapping technologies. A deep dive eventually led me to machine-learning 'style transfer,' which is essentially training a neural network to repurpose an image into the style of another input image. I found an excellent tutorial that demonstrated how to do style transfer on webcam video. I trained the model on GPUs in the cloud via Paperspace's Gradient; the model then ran in my browser with ml5.js and p5.js. I was interested in using this technique to achieve the homage described above, as I wanted to use a live camera to capture reality as a Dorito (bag) might.

This is the image I trained the model on for the ‘style transfer.’  
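The browser side of that pipeline is fairly small. Below is a rough TypeScript rendering of the p5.js/ml5.js glue, patterned on ml5's documented styleTransfer example from that era; the model folder path is a placeholder for whatever checkpoint the Paperspace training run exports, and the p5/ml5 globals are declared loosely rather than properly typed. Treat it as a sketch of the structure, not my exact code.

```typescript
// dorito-cam.ts — live style transfer in the browser with ml5.js + p5.js
// (both loaded via <script> tags). "models/dorito-bag/" is a placeholder for
// the exported training checkpoint.
declare const ml5: any;
declare const VIDEO: string;
declare function createCapture(type: string): any;
declare function createCanvas(w: number, h: number): void;
declare function image(img: any, x: number, y: number, w: number, h: number): void;

let video: any;
let style: any;
let resultImg: HTMLImageElement | null = null;

function setup(): void {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.hide();
  // Load the trained checkpoint; start transferring once it is ready.
  style = ml5.styleTransfer("models/dorito-bag/", video, () => transferFrame());
}

function transferFrame(): void {
  // transfer() hands back one styled frame; draw it, then request the next.
  style.transfer((err: Error | null, result: { src: string }) => {
    if (err) return console.error(err);
    const img = new Image();
    img.onload = () => { resultImg = img; transferFrame(); };
    img.src = result.src;
  });
}

function draw(): void {
  if (resultImg) image(resultImg, 0, 0, 320, 240);
}
```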

Once the style-transfer camera was complete, I began to look for a performative starting point. Thinking about re-concatenating the world, I was inspired by a performance style similar to Dadaist Hugo Ball's 'Elefantenkarawane' in its simplicity, costume, and direct confrontation with reality. I was using my software to reconfigure reality capture, and I wanted to work in a performative tradition that also tried to configure reality toward some sort of meaningful understanding (even if in the absurd). In On Being: Dorito Dreaming, to take the breakdown of language a step further and to remain focused on material embodiment, I created an ASMR (autonomous sensory meridian response)-inspired soundtrack for the performance, composed of sounds created by consuming Doritos and carefully observing the package next to a microphone.

I am interested in continuing this process with other people, and creating a series of vignettes of people trying to embody a product of their choice.

Below are some stills from the film studio and the costume/makeup construction:

OB:DD process - studio 1

OB:DD process - costume 1, OB:DD process - costume 2

Hugo Ball performing Elefantenkarawane, 1916

In my research, it seemed that a lot of live-camera style transfer had been done with famous painting styles. An example is below:

style transfer example

As for getting the model trained and then up and running, I want to thank Jeena and Aman for helping me through some technical issues and questions at this stage of the project!

On Being: Dorito Dreaming GIF

jamodei-lookingOutwards02

Elegy: GTA USA Gun Homicides

by Joseph Delappe 

CW: animated graphic violence

Elegy: GTA USA Gun Homicides

Link to Actipedia documentation

Link to live Twitch stream

This project is a self-playing version of Grand Theft Auto V that performs as a data visualization for "a daily reenactment of the total number of USA gun homicides since January 1st, 2018." One interacts with the work by watching the 24/7 live stream on Twitch. As the camera slowly pans backwards, one sees characters in the video game killing each other every few minutes (or more often, I guess, depending on the day) as a way of marking something that can feel invisible. Elegy is challenging to watch (even though I do not generally find first-person shooters that triggering). The mix of mediums – real gun violence vs. video game gun violence vs. statistics on gun violence – presented in a never-ending slow scroll over chill-but-patriotic music creates a performance with the viewer that forces into being a complex and unanswerable dialectic around the reality of the large number of gun homicides in the USA and the apparent impossibility of change. This complication of data is what I find to be the most fruitful aspect of the work. I find the work's attempt to repurpose material observations about our reality – and communicate them in familiar cultural forms in order to visualize the political nature of data – helpful and inspiring.

Elegy-GTA USA Gun Homicides 2.


jamodei-2Dphysics

Interpellation Mouth Tracker

My goals for this project were to work with a live video feed, find an amusing way to interact with the matter.js physics system, and introduce myself to working with p5.js/JavaScript.


“Interpellation is a process, a process in which we encounter our culture’s values and internalize them. Interpellation expresses the idea that an idea is not simply yours alone (such as ‘I like blue, I always have’) but rather an idea that has been presented to you for you to accept.”


What I ended up making is a face-tracking interaction (using clmtrackr.js) where every time the viewer opens their mouth, the word 'interpellation' spills out and bounces around until the mouth closes. I thought this would be a humorous way to engage with a relatively simple physics interaction. The words have a floor and ceiling they interact with, and I was able to get moderate control over the manner/speed/animation style with which they bounced around and interacted with the other word-rectangles. Overall, I wish I had been able to get further into physics systems, but I am happy with the progress I made in this (to me) new creative coding environment. I am interested in future possibilities for face-tracking, too.
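For anyone curious about the mechanics, this is roughly the core of the interaction as a TypeScript sketch: clmtrackr's face model exposes numbered landmarks (in its 71-point model, point 60 is the upper inner lip and point 57 the lower inner lip), so mouth openness is just the vertical distance between them, and each frame above a threshold spawns a matter.js rectangle for the word. The globals are declared loosely, and the threshold, sizes, and spawning rate are placeholders rather than my actual tuned values.

```typescript
// mouth-tracker.ts — core loop of the interpellation mouth tracker, as a
// sketch: clmtrackr gives lip landmarks; an open mouth spawns bouncing
// "interpellation" word bodies in matter.js. Rendering is omitted.
declare const clm: any;    // clmtrackr, loaded via <script>
declare const Matter: any; // matter.js, loaded via <script>

const engine = Matter.Engine.create();
const ctrack = new clm.tracker();
ctrack.init();

const video = document.querySelector("video")!;
ctrack.start(video);

// Static floor and ceiling that the word-rectangles bounce between.
Matter.Composite.add(engine.world, [
  Matter.Bodies.rectangle(320, 480, 640, 20, { isStatic: true }), // floor
  Matter.Bodies.rectangle(320, 0, 640, 20, { isStatic: true }),   // ceiling
]);

function tick(): void {
  const p = ctrack.getCurrentPosition(); // false until a face is found
  if (p) {
    // 71-point model: 60 = upper inner lip, 57 = lower inner lip.
    const mouthOpen = p[57][1] - p[60][1];
    if (mouthOpen > 20) {
      // Spawn one word-rectangle at the mouth; restitution makes it bouncy.
      const word = Matter.Bodies.rectangle(p[60][0], p[60][1], 110, 18, {
        restitution: 0.6,
        label: "interpellation",
      });
      Matter.Composite.add(engine.world, word);
    }
  }
  Matter.Engine.update(engine, 1000 / 60); // step the physics
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```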


GIF documentation:

jamodei-lookingoutwards01

The Future

Created by Anonymous Ensemble

I had the pleasure of seeing this immersive, live theater performance just a few weeks ago, before I came back to school for the start of the semester. In this piece, you walk in and sit in a room full of the other members of the audience. What appears to be a human in a space-suit-like costume walks over and gestures that they would like to put headphones on you. When you consent, they place them over your ears, and then you hear a voice call out, "What is your name?" You watch the performer gesticulate to the movements of the computerized voice; as you respond, you hear yourself replying, and everyone else replying in turn. Behind this performer is a row of other performers operating various machines and instruments who will speak to you soon. They then ask an audience member to offer a breath for a breathing ritual; another, the sound of a heartbeat. As folks volunteer, you hear the breath and heartbeat become the underlying track to a 1000-breath countdown image (appearing on small screens in the corner of your view) that will denote the length of the performance.


All the audio is created between the computer-like people asking questions and the audience responding. Eventually they put you into dialogue (through them) with other members of the audience. The piece is at its most difficult when you start to question who these seemingly benevolent robot-people are, and it starts to feel confusing and potentially dangerous. The piece is at its best when you learn the rules of the live audio system and share in moments of learned and earned interaction. These moments conclude at the end of segments such as 'politics' and 'environment,' when you hear an audio collage-song generated from everyone's responses played back as the computer people nod and drift their eye contact. A sensation of being uploaded and analyzed pleasantly washes over you.

I know this piece relied heavily on Max/MSP to interface with the live audio and video system in the space, as well as to set up the audio I/O between all of the performers and audience members.


Info and pictures below:


jamodei-reading1

When Flanagan states, “[g]ames—like film, television, and other media—are created by those who live in culture and are surrounded by their own cultural imaginary, and are a cultural medium that carries embedded beliefs, whether intended or not,” I hear what I consider to be an echo of (one of) my foundational questions when I begin a new art project. What do I want to explore, and why? How are the potential proposed aesthetics and politics interpellating each other? Why does the medium I am using matter to the experience of the viewer/participant who will encounter it? What expectations will the framework of the location of the work leave the viewer to start with? What will their relationship to my viewpoint offer them? What process, what method, and to what end?

The exposition or complication of a norm or idea is often the hope, or maybe even the goal, in work I make or projects for which I design theatrical environments. I was excited to encounter this idea in relation to the making of art-games, to which I have only relatively recently been exposed. Flanagan's third point – concerning employing criticality to create new forms of play – was immediately recognizable to me as a technique that interests me in performance mediums (including the digital). It made me think further about the 'double-performance.' An example of this Postdramatic technique is when a theater production uses a live camera onstage but foregrounds the interplay between the filming, the filmed, and the live (re)mediation of the filmed. This multifaceted liveness tells an audience member who might be accustomed to sitting in the dark watching a narrative scene unfold that they have multiple points of reproduction to track at the same time. Often, my hope with this technique is that the disruption of how the audience is supposed to pay attention to the form comes under question as the content of the production unfolds, and that in the in-between spaces of interpreting the double (or triple, depending on how you want to look at it) performance, new ways of experiencing the idea can be accessed. I think there are a lot of intriguing possibilities for considering how new forms of play relate to attention, and how such relationships help convey specific critiques of dominant (or assumed) values.