dechoes – final project documentation

Walking: A Retraced Map of Walks Executed in Response to Grief

Stills from the five-minute video walkthrough

 

For my final project, I continued work I had set down a couple of months prior. Early in January, I was confronted with multiple events that caused me to grieve simultaneously. As a way to process those events, I walked roughly three hours a day over the course of a week. Because those walks felt so significant in reclaiming physical and mental space, I had the foresight to record my paths over that week: I kept track of my routes and their timestamps, how long they took me, and how long I stayed in any given place.

To create this 3D experience, I decided to use photogrammetry to reconstruct physical spaces virtually. I specifically chose to work with footage from Google Earth walkthroughs, because digital space is a theme I have consistently returned to and been interested in working with. How does digital space retain physical events? How does one make a digital landscape emotional and relatable?

Still of the keyframing of the walks in Google Earth Studio

 

Recorded Path Through Photogrammetry in PhotoScan Pro

 

Rendering Time

 

I had originally planned for the video to play over a whole week, synced to real time, which unfortunately was not achievable in this time frame due to the intense rendering time. I might revisit this concept later (outside the context of grief) and develop a more fully fleshed-out video piece.

I’m also going to link to my website documentation for my documentary Dedications I-V, which I largely made in this class (even though I never used it as a class deliverable).

dechoes – visualization/manufactory

Infinite Cities Generator

This project is based on Italo Calvino’s book Invisible Cities, a novel which recounts the tales of Marco Polo’s travels, as told to the emperor Kublai Khan. “The majority of the book consists of brief prose poems describing 55 fictitious cities that are narrated by Polo, many of which can be read as parables or meditations on culture, language, time, memory, death, or the general nature of human experience.” (Thanks, Wikipedia)

What interested me about this novel was how readily it lent itself to generative storytelling and big datasets. I noticed as I read on how closely the author was following specific rule sets, and how those same rules could be used to generate a vast number of new stories. I was fascinated by the complexity, detail, and visual quality of each city that Calvino created, and decided to create more of my own.

I started by decomposing the structure of his storytelling, separating his individual texts into multiple categories such as Title, Introduction, Qualifiers, Actions, Contradictions, and Morals. I sampled actual thoughts, sentences, and names from his book, but also added my own to the mix. I programmed my Infinite Cities Generator in p5.js using Kate Compton’s Tracery (Thanks, Kate!).
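The category structure above can be sketched as a tiny grammar. This is a hand-rolled expander in plain JavaScript rather than Tracery itself, and the rule names and sample phrases are illustrative placeholders, not the actual rules or Calvino’s text:

```javascript
// A toy Tracery-style grammar: each key maps to a list of alternatives,
// and every "#symbol#" token in a rule is replaced by a random pick.
// Category names mirror the ones above; the phrases are placeholders.
const grammar = {
  origin: ["#title#. #introduction# #qualifier#, #action#. #contradiction# #moral#"],
  title: ["Isaura", "Zobeide", "Moriana"],
  introduction: ["The traveler who reaches the city sees",
                 "Beyond six rivers the city appears, with"],
  qualifier: ["a thousand wells", "walls of mirrored glass"],
  action: ["its people dig ever deeper", "its streets fold back on themselves"],
  contradiction: ["Yet no one remembers building it.",
                  "Yet the city exists only for those who leave it."],
  moral: ["Desire, too, is a form of memory.", "Every map is a farewell."]
};

function expand(rule, rules, rng = Math.random) {
  // Recursively replace each #symbol# with a random expansion of that symbol.
  return rule.replace(/#(\w+)#/g, (_, symbol) => {
    const options = rules[symbol];
    const pick = options[Math.floor(rng() * options.length)];
    return expand(pick, rules, rng);
  });
}

const story = expand("#origin#", grammar);
console.log(story);
```

Tracery’s actual grammar objects use this same shape (symbol names mapped to arrays of expansions), which is what makes it easy to grow a rule set category by category.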

Over the course of the next few weeks, I would like to add complexity to my rule sets, as well as create generative maps for each new city as a way to offer a visual escape into them. In addition to that, I would like to generate PDFs and actually print the book, so that I have a physical, believable artifact by the end of the project.

Below are a couple samples of the kind of stories my Infinite Cities Generator can create:

In addition to this project, I have been working on a 3D map experience, retracing all the places I walked in a single week while dealing with grief. I walk when I have things to deal with or think through, and that week I walked an average of two hours a day. I’m thinking of displaying this instead of, or in addition to, the Infinite Cities Generator. It would be displayed on the Looking Glass as a 3D video playing in real time, with the camera traveling the exact paths I did.
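The real-time camera idea boils down to interpolating between timestamped waypoints. A minimal sketch, assuming the recorded routes are stored as latitude/longitude points with a time offset in seconds (an assumed data shape, not the actual log format):

```javascript
// Sketch: given timestamped waypoints from a recorded walk, return the
// camera position at an arbitrary time t by linear interpolation.
// The {lat, lng, t} shape is an assumed format, not the actual log.
function positionAt(waypoints, t) {
  if (t <= waypoints[0].t) return { ...waypoints[0] };
  const last = waypoints[waypoints.length - 1];
  if (t >= last.t) return { ...last };
  for (let i = 1; i < waypoints.length; i++) {
    const a = waypoints[i - 1], b = waypoints[i];
    if (t <= b.t) {
      const f = (t - a.t) / (b.t - a.t); // fraction of the way from a to b
      return { lat: a.lat + f * (b.lat - a.lat),
               lng: a.lng + f * (b.lng - a.lng), t };
    }
  }
}

// Hypothetical two-point walk: start, then ten minutes later.
const walk = [
  { lat: 40.4433, lng: -79.9436, t: 0 },
  { lat: 40.4443, lng: -79.9456, t: 600 },
];
console.log(positionAt(walk, 300)); // midway between the two waypoints
```

Driving the camera from the wall clock then just means calling this with the elapsed time each frame, so the playback stays synced to real time however long the piece runs.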

 

 

And in addition to THAT, I have been slaving over my thesis project Dedications I-V, a volumetric documentary on storytelling in the context of progressive memory loss. It will take the form of five individual chapters on memory, with five different protagonists. Although I can’t really show it just yet, this is where all of my energy has been going.

 

(i’m overcompensating because i haven’t produced anything real in this class yet — whoopsie)

 

dechoes – LookingOutwards3

The House of Dust by Alison Knowles is one of the first generative computer text pieces. The computerized poem is built on a very specific structure, “consisting of the phrase “a house of” followed by a randomized sequence of 1) a material, 2) a site or situation, 3) a light source, and 4) a category of inhabitants taken from four distinct lists.” In 1968, Knowles received a Guggenheim fellowship to create a physical structure based on her generative poem. It was later moved to CalArts in California, where she used it as her teaching space.
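The quoted four-list structure is simple enough to sketch in a few lines of JavaScript. The list entries below are illustrative stand-ins, not Knowles’s full lists:

```javascript
// The House of Dust structure: one random pick from each of four lists,
// stacked into a quatrain. Entries here are illustrative placeholders.
const materials   = ["dust", "leaves", "paper", "sand", "broken dishes"];
const situations  = ["on open ground", "among small hills", "by a river"];
const lights      = ["using natural light", "using candles", "using electricity"];
const inhabitants = ["inhabited by friends and enemies",
                     "inhabited by people who sleep very little",
                     "inhabited by vegetarians"];

const pick = (list) => list[Math.floor(Math.random() * list.length)];

function houseOfDustStanza() {
  return ["a house of " + pick(materials),
          pick(situations),
          pick(lights),
          pick(inhabitants)].join("\n");
}

console.log(houseOfDustStanza());
```

Four short lists already yield hundreds of distinct quatrains, which is what gave the original FORTRAN printout its endless, incantatory quality.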

I will also be working on a generative text project, inspired by Italo Calvino’s Invisible Cities. I will be using Kate Compton’s Tracery and RiTa.js as tools to generate sequences of new cities, accompanied by their individualized city maps.

dechoes – mask

Performative Mask

Overview. From the start, I was interested in doing some work with music videos, and I adapted this assignment to my own constraints. I’ve been working with two bands from Montréal on a potential music video and album cover collaboration, and this assignment felt like an opportunity to prototype some visual ideas. For the sake of this warm-up exercise, I worked with the song A Stone is a Stone by Helena Deland, in which she sings about goodbyes and about core issues in a person’s character or in a relationship.

The lyrics made me think of permanence. When you go through life with a specific idea, character trait or person for a long time, you sometimes lose track of their presence. I was interested in working with the idea of fading into one’s environment, having been somewhere or someone for so long that you become one with it. After a while, you can barely be distinguished from the space, metaphorically or physically, that you evolve in.

Ideation Process. My process was hectic, as I had a very hard time thinking of an idea that got me excited. I started by wanting to make an app that erased your face any time you smiled awkwardly in an uncomfortable situation, but decided this kind of work had been overdone in the wearables world. I then decided to make a language grapher that used mouth movements to create a visual representation of texts featuring a “Chloé” as the protagonist (as a way to study the name, how other people perceive it in literature, and how that influences personal character). I went through with this idea, hated it by the end, and decided to start over from scratch. Hence the current idea described in the overview.

The hoodies that allow you to hide from uncomfortable situations. 

 

A visualization of the introduction of the character “Chloé” in the novel L’Écume des jours by Boris Vian.

Physical Process. I essentially worked using the environment as a way to inform the content, creating bidirectionally. I researched a couple of backgrounds, photographed them, and performed in front of them with my nice camera. I then applied a simple mapping filter of the aforementioned image to the prerecorded video. This project didn’t actually require any scripting, since I chose to do it in Spark AR. I originally started in Processing but wasn’t happy with the way it looked; Spark AR had a more advanced mesh-tracking option that allowed for warping around facial features, so I stuck with it.


Testing patterns and functionality. 

 

I proceeded to film a total of six scenes, selected by texture and color. I wore clothes that fit the background scheme, and photographed each material before filming myself lip-synching with a DSLR. I then converted the videos to a smaller format (which was the only import option; sorry for the loss of quality) and layered the mask on in Spark AR. The final edit rendered the video below, featuring only a small chunk of the song.

 

 

(I removed the video from youtube, sorry Golan. I left you a gif though)

Improvements. With more time, I probably would have:

1) found a way to also include the ears and the neck, which look silly without the mask layer

2) reconsidered my idea, because I felt like the overall effect was uncomfortable and funny, while the song really isn’t

3) found a way to change the scale of my map image, because it didn’t fade as much into the background as I had wanted

dechoes – LookingOutwards-1

Uncanny Rd. – Drawing tool to interactively synthesise street images

Uncanny Road is a drawing tool created by Anastasis Germanidis and Cristobal Valenzuela as a way to generate new interactions between humans and machines. Their tool uses GANs to synthesize photorealistic street images from the semantic maps a user draws. The images it produces are beautiful, poetic, and, as the title indicates, uncanny. To me, this project represents the infinite possibilities for creation when one is open to experimentation and unconventional tools. One issue I found with Uncanny Road is the lack of control the user has over the tool; however, that lack of control can also lead to unpredictably delightful images. Unfortunately, this project is still just a tool, and it might need to be taken further in order to be called an art piece.

Gif:

Complete Demo Video:

 

dechoes – reading1

I felt very attracted to Flanagan’s idea that ‘Critical Play Can Mean Toying with the Notion of Goals’, a notion that hadn’t crossed my mind before enrolling in art school. The article mentioned Molleindustria a couple of times, which did not surprise me, as one of their games made me realize that rules need not apply. A Short History of the Gaze, which was shown at Weird Reality: Art && Code in 2016, made me think about games in a way I never had before. The game progresses through different levels of gazing, all of which are intended to make the player feel invasive, uncomfortable, vulnerable, or even violent.

I have been having a hard time merging the Critical and the Play in my work, which is something I would really like to work on this last semester at CMU. I feel like my pieces fluctuate between empty/fun and deep/serious, whilst all of my favorite interactive artworks merge both. The notion of critical play is important because it is engaging, and thus enables thought and progress. Unfortunately, it is much harder to achieve within a piece of work than it is to talk about.

dechoes – 2D physics

This gargle-activated sketch can be found online here.

 

I cannot say that this warm-up had any particularly important content driving it; I was mainly interested in getting used to Box2D and merging it with other interactive qualities. I first tried a few approaches using face tracking (as I wanted the water to be generated through a gargling gesture), mouseIsPressed, and mouseX/mouseY, and finally settled on having it be voice-activated. The size and quantity of the water droplets spurting out of the faucet are driven by the audio level passing a threshold, generated by a gargling spectator (although there is no sound in the documentation, one can imagine me frantically gargling alone in my bedroom at 3am).
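The level-to-droplet mapping can be isolated as a pure function. This is a sketch under assumptions: the mic level is in the 0–1 range that p5.sound’s getLevel() returns, and the threshold and scaling constants are illustrative, not the sketch’s actual values:

```javascript
// Map a microphone amplitude (assumed 0..1, as from p5.sound's
// getLevel()) to how many droplets to spawn this frame and how big
// they should be. Constants here are illustrative placeholders.
function dropletsForLevel(level, threshold = 0.05) {
  if (level < threshold) return { count: 0, size: 0 }; // too quiet: no water
  // Normalize loudness above the threshold to 0..1.
  const loudness = Math.min((level - threshold) / (1 - threshold), 1);
  return {
    count: 1 + Math.floor(loudness * 9), // 1..10 droplets per frame
    size: 4 + loudness * 12              // droplet radius in pixels, 4..16
  };
}

// Each frame, the sketch would spawn `count` Box2D circle bodies of
// radius `size` at the faucet mouth and let physics do the rest.
console.log(dropletsForLevel(0.02)); // below threshold: nothing spawns
console.log(dropletsForLevel(0.5));
```

Keeping this logic separate from the physics made it easy to swap input sources (face tracking, mouse, microphone) while the Box2D side of the sketch stayed untouched.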

The creation of this assignment was heavily supported by Dan Shiffman’s Box2D documentation.