dorsek – FinalProject

Tweetable Sentence: 

“Butter Please: A gaze-based interactive compilation of nightmares which aims to mimic the sensation of dreaming through playing with your perception of control.”

 

Overview

Butter Please is an interactive sequence of nightmares transcribed during a 3-month period of insomnia, and the result of an exacerbated depression. The work is an exploration of the phenomenological properties of dreaming and their evolutionary physiology, in addition to being a direct attempt at marrying my own practices in fine art and human-computer interaction. The work finds parallels with mythology and folklore: the way people seem to ascribe a sense of sentimentality to such fantastical narratives.

Butter Please mimics the sensation of dreaming through playing with your perception of control. Your gaze (picked up via the Tobii Eyetracker 4c) is what controls how you move through the piece; it is how the work engages with, and responds to, you.

 

 

Narrative

Butter Please, as mentioned above, was inspired by a 3-month period of insomnia I experienced in the midst of a period of emotional turmoil; the nightmares resulted from an overwhelming external anxiety brought on by a series of unfortunate events, and served only to exacerbate the difficulty of that time. The dreams themselves became so bad for me that I would do everything in my power to avoid falling asleep, which in turn birthed a vicious cycle. My inspiration for pursuing this was to try to dissect the experience a bit by replicating the dreams themselves (all of which I vividly transcribed during the time that this happened, as I thought they would be useful to me later on).

This project was something I felt strongly enough about to want to pursue to the end, so I decided to work on it with the hopes of displaying it in the senior exhibition (where it presently resides). In addition to being a mode for me to process such an odd time in my life, it also became a way to experiment with combining my practices in Human-Computer Interaction and Fine Art: creating a piece of new media art that people could engage and interact with in a meaningful way.

It was a long process – from drawing each animation and aspect of the piece on my trackpad with a one-pixel brush in Photoshop (because I am a bit of a control freak), to deciding on the interactive portion of the piece (a.k.a. using the eye tracker, and specifically gaze, as the sole mode of transition through images)… and beyond that, even deciding on how to present it in a show. I think I had the most difficulty with getting things to run quickly, simply because there was so much pixel data being drawn and re-drawn from scene to scene. It was also a bit difficult to glean the eye-tracker data from the Tobii 4c at first, but as soon as I managed to do that, the process of coding became much smoother. In this way the project did not meet my original expectation for fluidity and smoothness…

On the other hand, it exceeded my original expectations on so many levels: I never would have expected to have coded a program which utilized an eye tracker even just 4 months ago, when I was searching for the best mode of interaction with this piece. I think that being in this class really developed my ability to source information for learning how to operate and control unusual technology, and for that I am actually pretty proud of myself (especially knowing how unconfident I was and how little I felt I knew at the beginning of the semester…).
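
To give a concrete sense of how the gaze drives the piece, here is a minimal Processing sketch of the gaze-dwell transition idea. It is only an illustration: the hotspot, colors, and 1.5-second dwell time are made-up values, and gazeX/gazeY are assumed to be fed by the bridge app that streams the Tobii 4c data into Processing (the mouse stands in for it here).

```
// Gaze-dwell scene transitions (illustrative sketch, not the piece's actual code).
float gazeX, gazeY;                              // assumed to come from the eye-tracker bridge
color[] scenes = {#1a1a2e, #3d0000, #0b3d2e};    // stand-ins for the hand-drawn dreamscapes
int current = 0;
int dwellStart = -1;

void setup() {
  size(800, 600);
}

void draw() {
  gazeX = mouseX;   // mouse stands in for the tracker while testing
  gazeY = mouseY;
  background(scenes[current]);
  ellipse(width * 0.5, height * 0.5, 240, 240);  // the "hotspot" you must hold your gaze on
  if (dist(gazeX, gazeY, width * 0.5, height * 0.5) < 120) {
    if (dwellStart < 0) dwellStart = millis();
    if (millis() - dwellStart > 1500) {           // 1.5 seconds of sustained gaze
      current = (current + 1) % scenes.length;    // advance to the next dream
      dwellStart = -1;
    }
  } else {
    dwellStart = -1;                              // looking away resets the dwell timer
  }
}
```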

In all honesty, I’m elated to have gotten the project to a finished-looking state; there were a few points where I wasn’t sure that I would be able to create it in time for the senior show.

That being said, I am extremely indebted to Ari (acdaly) for all of the help she provided in working out the kinks of the code (not to mention the tremendous moral support she gave me…); I really can’t thank her enough for her kindness and patience in working through things with me, and I couldn’t have finished it on time, at the level of polish it has, without her help.

Beyond that I really owe it to all of my peers for the amazing feedback that they provided throughout the year (both fellow art majors in senior critique seminar from Fall/Spring semester in addition to the folks from this Interactive Art course). It’s because of them that I was able to refine things to the point they are at.

Golan is also to thank (that goes without saying) for being such an invaluable resource and allowing me to borrow the tech I needed in order to make this project come to fruition.

Aaaaaaand finally I just want to give a shoutout to Augusto Esteves for creating an application to transmit data from the Tobii into Processing (that saved me a vast amount of time in the end).

 

________________________

Extra Documentation

Butter Please in action!

     

Shots from the piece:

________________________

dorsek – Looking Outwards – 3

So initially I was interested in doing something relating to genome art or in other words….

genetic code (art) — (hahaha)

but I didn’t really find anything that struck my fancy (that hadn’t already been presented by Golan – i.e. the artist testing bacteria in NYC subway lines). BUT I did find a piece that really influenced the way I was thinking about communication, which is this (it’s also not technically an art piece, but I think it raises a lot of funny questions just by the nature of its existence):

Braillify

Basically, this is a code tool used for converting images into braille! Now, it doesn’t actually translate anything into braille, but rather translates the image into a second image made out of braille-based patterns. What’s quite funny to me is how braille’s original intent is being rendered completely useless in two ways… A) it’s not actually translating anything, but rather being used to pictorially represent something that is already an image, and B) even if somebody wanted to read the braille, they couldn’t, because it exists on a digital screen (though I’m sure somebody could create a way to stream that information so it can be 3D-printed or something similar…); the irony here is painfully potent.
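
For the curious, the general technique behind a tool like this is fairly compact: sample the image in 2×4 pixel blocks and map each block to one of the 256 braille pattern characters (which start at Unicode U+2800). The Processing sketch below is my own rough reconstruction of that idea, not Braillify’s actual code; the input filename and the brightness threshold are arbitrary.

```
// Rough image-to-braille-pattern conversion (reconstruction, not Braillify's code).
PImage img;

void setup() {
  img = loadImage("dream.png");   // hypothetical input image
  img.filter(GRAY);
  // Bit values for the 8 braille dots, ordered left column (dots 1,2,3,7) then right (4,5,6,8).
  int[] bitForCell = {0x01, 0x02, 0x04, 0x40, 0x08, 0x10, 0x20, 0x80};
  StringBuilder out = new StringBuilder();
  for (int y = 0; y + 4 <= img.height; y += 4) {
    for (int x = 0; x + 2 <= img.width; x += 2) {
      int bits = 0;
      for (int c = 0; c < 2; c++) {
        for (int r = 0; r < 4; r++) {
          if (brightness(img.get(x + c, y + r)) < 128) {  // dark pixel -> raised dot
            bits |= bitForCell[c * 4 + r];
          }
        }
      }
      out.append((char) (0x2800 + bits));  // braille patterns start at U+2800
    }
    out.append('\n');
  }
  print(out.toString());
}
```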

Some other interesting resources: 

An article about injecting a synthetic copy of your DNA into your art in order to prove it’s not a forgery

https://news.artnet.com/art-world/dna-art-forgery-954971

~~

An open-source platform meant to “Kandinskyfy your Genome,” turning your DNA into a Kandinsky-style digital painting (this is really funny). The website provides a link to their GitHub, where you can view all of the code they use to analyze DNA and generate “findings.”

https://www.impute.me/kandinsky/

https://github.com/lassefolkersen/impute-me

~~

 

dorsek – FinalProposal

For the final project in this course, I will be finishing up the piece I have been working on for the senior show: the work itself is a digital compilation of dreamscapes based on several dreams I had during a 3-month period of insomnia. It is interactive, and you transition through and interact with the different dreamscapes through the use of the Tobii 4c eye tracker (i.e. your gaze is a subtle, almost unrecognizable controller of sorts…). Much like real dreams, the interaction will be very distorted, sometimes triggering responses that distance you from what you intend (for example, when you look at something it stops moving or doing whatever interesting interaction it was doing before)…
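
A minimal sketch of that “it stops when you look at it” behavior, written in Processing: an orb drifts in a circle only while your gaze is far from it. The gaze coordinates are assumed to come from the Tobii bridge; here the mouse stands in, and the 150-pixel radius is an arbitrary choice.

```
// "Looking at it freezes it" interaction (illustrative sketch).
float gazeX, gazeY;   // assumed to be supplied by the eye-tracker bridge
float orbAngle = 0;

void setup() {
  size(800, 600);
}

void draw() {
  gazeX = mouseX;     // mouse stands in for the tracker while prototyping
  gazeY = mouseY;
  background(0);
  float orbX = width / 2 + cos(orbAngle) * 200;
  float orbY = height / 2 + sin(orbAngle) * 200;
  // The orb only keeps drifting while you are NOT looking at it.
  if (dist(gazeX, gazeY, orbX, orbY) > 150) {
    orbAngle += 0.02;
  }
  ellipse(orbX, orbY, 40, 40);
}
```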

I will be putting it into the Ellis, but since there are two eye trackers, I hope I will be able to use one for the class exhibition as well, on a separate NUC.

At the moment I am compiling all of the dream sequences into Processing (from p5.js, so that they are compatible with the eye tracker…) and putting the finishing touches on the interactions.

dorsek – project4Feedback

Much of the feedback I received during critique revolved around various other interventions I could include, and it was in every sense quite helpful in generating ideas for how to continue this project if I so please. I was glad to have it looked at in this somewhat unfinished state, because the feedback was restricted to conceptual questions of what the work can ask in and of the world, especially if it exists in a context outside of class.

I also got some useful feedback which cemented my original feelings/intent with regards to the documentation of the project; that is to say, suggestions which confirmed my initial intuition to use two people interacting over this medium as opposed to myself (so as to communicate the idea better), including making the interactions or documentation a bit more “dad-specific” (as Josh put it) so as to communicate the inspiration a bit better as well.

 

Perhaps I got lucky, seeing as how I was first and thus might have received the brunt of people’s energy, but the amount of feedback I received for the project was large. Though there wasn’t a lot of content yet, people really expanded on the concept of the piece and provided various suggestions, in addition to food for thought with regard to the societal and relational implications it makes/reflects on in today’s relationships.

dorsek – 04 Telematic

This project took quite some re-working from the start, and in the end it made for a very effective challenge for me with regard to learning how to navigate a difficult, poorly documented backend plumbing issue. The process itself involved learning how to rip pixels from a screen and send the data over a local server into a program that would then communicate them to Processing in a format that it could both understand and replicate.

In the end, because I spent so much time on learning and working through the issue of trying to get the video data into Processing, I had to Wizard-of-Oz the final interaction, seeing as how, for some reason, the pixel data was not being recognized as video- or photo-readable by the face-detecting library available to Processing (OpenCV). Other than that bug, the eye-tracking data from the Tobii 4c, the video feed from Skype, and the sound were all working perfectly well.
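
For reference, this is roughly the detection path I was aiming for with the OpenCV for Processing library, sketched under the assumption that the ripped Skype pixels have already been copied into a proper PImage (called remoteFrame here, which is a made-up name); it was that copy-into-a-PImage step that never behaved for me.

```
// Face detection on an incoming frame with OpenCV for Processing (illustrative sketch).
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
PImage remoteFrame;   // assumption: filled elsewhere with the pixels ripped from the Skype window

void setup() {
  size(640, 480);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  remoteFrame = createImage(width, height, RGB);   // placeholder until real pixels arrive
}

void draw() {
  image(remoteFrame, 0, 0);
  opencv.loadImage(remoteFrame);          // OpenCV wants a proper PImage, not raw bytes
  Rectangle[] faces = opencv.detect();
  noFill();
  stroke(255, 0, 0);
  for (Rectangle face : faces) {
    rect(face.x, face.y, face.width, face.height);
  }
}
```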

Beyond all of that, however, I was able to successfully construct an imagined future wherein the potential for monitoring the gaze during video conferencing was realized, and I also allowed for a bit of play with the intervention I posed: a sort of meter that humorously rates the quality of eye contact you make with the form of the other person (and which would certainly work had it not been for the OpenCV issues…).

 

 


dorsek – telematic (check-in)

At the beginning of this project I was attempting to ideate concepts for the interactive manufactory prompt, trying to come up with a way to use my genetic code and literal code to create a manufactory piece of art. After much preliminary research on what people have already been doing in relation to genome art, I decided that it’s too easy to come up with another dumb idea, and this is the kind of project that should brew in the back of your mind for a while…

So I switched gears and started to work on the foundation for a telematic project.

To lay down a basis for my project I should first talk about what sparked my interest in doing this… In recent months, I’ve found myself quite frustrated with the fact that, during video chatting sessions, my father always seems to be looking at himself. It’s clear that his gaze isn’t directed towards me, because he only looks at me when I say something that really grabs his attention. Through conducting interviews I found that this seems to be a major pain point for the population of people who have interacted with him over Skype.

In light of this frustration I wanted to make a video chatting interface specifically meant for communicating with him (or people like him who are blatantly checking themselves out the entire time you are engaging with them…). I decided that it would be fun to go down the path of obstructing his own view of himself during these sessions so that he would be forced to stop looking at himself. Some examples being:

  • His face gets smaller and smaller the longer he stares at himself (see the sketch after this list)
  • His face disappears and re-appears in each of my pupils on his screen so that he will be more inclined to look me in the eye
  • His phone (fondly referred to as “Lil’ Debbie”) vibrates violently whenever he looks at his own image.
  • The sound on his end goes completely mute and he hears nothing until he is making eye contact again
  • And other such fun interactions (always open to suggestions)
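
As a taste of what the first idea on the list might look like, here is a rough Processing sketch in which a placeholder self-view thumbnail shrinks while the gaze sits on it and slowly grows back otherwise. The gaze point, shrink rate, and thumbnail size are all stand-in values, and the mouse substitutes for the gaze tracker.

```
// "Your face shrinks the longer you stare at yourself" (illustrative sketch).
float gazeX, gazeY;                 // assumed to come from the gaze tracker
float selfW = 160, selfH = 120;     // current size of the self-view thumbnail
float selfX = 20, selfY = 20;

void setup() {
  size(800, 600);
}

void draw() {
  gazeX = mouseX;                   // mouse stands in for the gaze tracker while prototyping
  gazeY = mouseY;
  background(30);
  // If the gaze is parked on the self-view, shrink it; otherwise let it recover slowly.
  boolean staringAtSelf = gazeX > selfX && gazeX < selfX + selfW &&
                          gazeY > selfY && gazeY < selfY + selfH;
  if (staringAtSelf) {
    selfW = max(10, selfW * 0.99);
    selfH = max(7.5, selfH * 0.99);
  } else {
    selfW = min(160, selfW * 1.005);
    selfH = min(120, selfH * 1.005);
  }
  fill(200);
  rect(selfX, selfY, selfW, selfH); // placeholder for the self-view video
}
```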

So far I am in the process of developing the gaze tracking locally (through the use of OpenCV and Processing). I hope to have that completely worked out by Friday so that I might begin to establish a way of interfacing and sharing video between him and me, and from there modify the way that the program responds to his obsession with his likeness.

Dorsek – DrawingSoftware

Some screenshots of the “drawings” after 20 minutes of napping post-video and after implementing a 3rd ‘trigger’/training session

Brain User Interface based Drawing

Stripped down to the bare bones, this project was my first attempt at creating a drawing-based brain user interface (BUI) using a commercially available brain-wave-sensing headband (the Muse 2).

My interest in creating such a piece originally lay in the desire to develop a program that could transcribe your dreams as illustrations while you were unconscious, allowing you to wake up to an image that was supposed to be a transcribed dream journal of sorts… Specifically, I wanted to use the brainwaves of a sleeping user to begin to draw a thing, which would then be completed by SketchRNN (when it decided it was certain it knew what was being rendered), and then move on to the start of the next drawing, repeating over and over until the user wakes up in the morning only to see a composition of “their dream” (or rather, what the program believed their dream to be). Unfortunately, this was not achievable in the time given for this project due to a few factors:

a.) The fact that the brain sensing headband didn’t arrive until about 6 days before the project was due

b.) my own unfamiliarity with the programs necessary to make a program like that become reality

Considering the first obstacle in particular, I found it wise to narrow down my scope as much as possible to this: a program with which you could paint using the raw EEG data of your brainwaves, essentially using your focus on particular thoughts to manipulate a digital painting tool – though as you will see, this too was much more difficult than I initially expected.

Capturing the drawing of a gnarly yawn
A “close-up” look at how the brush moves when I focus on the concept of chocolate-covered bananas (low position/red) as opposed to the sky (high position/blue)…

 

Process

Much like Golan warned me, the “plumbing” for this project seemed to suck up the most time – it was a great deal of work trying to get information out of the headband in the form of OSC data (so that I could forward it into Wekinator, use machine learning to “train” the drawing program, and then from there implement it in Processing).
The plumbing actually required a few extra steps, one of the most important being getting to know OSCulator (an extremely valuable tool recommended to me by Golan). Even though I had the ability to export the headset’s OSC data via the 3rd-party app museMonitor, all of the OSC data was being exported as several separate messages (a format that Wekinator didn’t seem to recognize), so I used OSCulator to reformat the OSC data into the single float-list format that Wekinator accepts. Though there is a great deal of information on Muse headset data and on Wekinator alone, there is hardly any information on the use of Wekinator, OSCulator, and Muse in combination, so much of my time was spent doing research simply on how to get the information from one platform to another.
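
The last link in that chain, receiving Wekinator’s output in Processing over OSC, ended up looking roughly like the sketch below. It assumes Wekinator’s defaults (output messages at /wek/outputs on port 12000) and uses the oscP5 library; the mapping of the output value to a brush position is illustrative rather than my exact drawing code.

```
// Listening for Wekinator's continuous output and using it as a brush's vertical position.
import oscP5.*;
import netP5.*;

OscP5 oscIn;
float brushY = 0.5;   // 0..1 value trained in Wekinator (e.g. "bananas" vs "sky")
float x = 0;

void setup() {
  size(800, 400);
  background(255);
  oscIn = new OscP5(this, 12000);            // Wekinator's default output port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wek/outputs")) { // Wekinator's default output address
    brushY = msg.get(0).floatValue();
  }
}

void draw() {
  stroke(lerpColor(color(200, 0, 0), color(0, 0, 200), brushY));
  point(x, brushY * height);                  // low values paint red/low, high values blue/high
  x = (x + 0.5) % width;
}
```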

Overall, this slightly frustrating and certainly trying process took me the better part of 5 days, and as a result I unfortunately wasn’t able to spend as much time on the concept or the actual training of the application. In retrospect, I’m glad I was able to accomplish as much as I did, considering how little information there was on such a niche method, in addition to how little I initially felt I could comprehend. It was definitely an amazing learning experience.

Future Iteration…

So, even though I do have a BUI that functions somewhat coherently, I think I would have liked to spend more time fleshing out my original concept, or even implementing features to turn this into a clever game (such as a very difficult game of “snake,” or some sort of a response to fugpaint that takes frustration with interfaces to a whole new level by providing nearly impossible workarounds). I will be spending more time on this because it’s been a pretty engaging idea to play with and develop.

 

Special thanks to:

Golan (for introducing me to some very helpful tutorials on how to use Wekinator, for turning me onto OSCulator which I eventually used to get the OSC data into Wekinator, and for encouraging me to pursue the development of this project!)

Tatyana (for suggesting Wekinator to me when I initially pitched my idea to her before we shared our research in class for the midway point)

Grey (for making some very helpful suggestions as to how I could get the OSC data into Wekinator without the use of OSCulator, and for offering his assistance to me)

Tom (for acquiring the muse headband!)

 

dorsek – Looking Outwards – 2

This project, though not technically an art piece, certainly influenced my vision for the DrawingSoftware project.

To set the stage a bit: about a year ago, four scientists from Kyoto University’s Kamitani Lab released the results of a research study on using artificial intelligence to decode people’s brain scans (specifically, not the machine-learning methods that have been used for this type of decoding before with some success, but “deep neural networks”). They showed their participants natural images (for varying lengths of time), artificial and geometric shapes, and letters of the alphabet, recording their brain scans as they did so, in addition to recording scans when participants were told to think of a specific image, or even while looking at several of the images together. According to the researchers, once the brain waves were scanned, they would then use a computer program to “de-code” the image, or as they like to say, ‘reverse-engineer’ it.

What most intrigued me regarding this project was the fact that brain scans were being used to re-generate imagery; that, and the technology they used to accomplish this (which is undoubtedly beyond my capacity of understanding and my own capabilities at the moment). Reading about this is partially what inspired me to try to pursue the creation of a project which would render your dreams out for you as you slept.

Now – what’s wrong with this project? Well, as a research piece, I can’t point out anything specific, but in general my biggest critique is that this isn’t an art piece; the technology isn’t being used in a way that might challenge how we think about the world – there’s no opportunity for revelation or new perspectives with regard to the concept behind the project (which could simply be due to the fact that they are still developing this new way of processing and re-generating imagery via brain scans), and that seems to detract from the interesting nature of the project. I also don’t believe that this will age well because of that, for the simple fact that once you get past the initial “woah” of the technology, there’s not really anything else there that they’ve provided as brain food (on purpose, at least…).

 

 

Dorsek-2DPhysics

For this project I wanted to develop something that closely aligned with a portion of my senior project which has to do with the experience of dreaming; specifically, the sensation of needing or wanting to do something but being frustrated by your inability to do it; the harder you focus, the harder whatever it is you want to do becomes.

Struggling to get the teeth into the basket
Original sketch of idea…

I took one of many dreams that I recorded during a 3-month bout of insomnia and decided to re-create it using matter.js as my physics engine of choice.

Initially, I had plans for many interactions based on matter.js, but it turned out to take me a bit longer than expected to familiarize myself with the library.