lumar – FinalProject


iLids from Marisa Lu on Vimeo.

 

A phone-powered sleeping mask with a set of cyborg eyes and an automatic answering machine to stand in for you while you tumble through REM cycles.

Put it on and conk out as needed. When you wake up, your digital self will give you a rundown of what happened.

The project was originally inspired by the transparent eyelids pigeons have that enable them to appear awake while sleeping. This particular quirk has interesting interaction implications in the context of chatbots, artificial voices, computer vision, and the idea of a ‘digital’ self embodied by our personal electronics. We hand off a lot of cognitive load to our devices. So how about turning on a digital self when you’re ready to turn off? The project is a speculative satire that highlights the increasingly entwined function and upkeep of self and algorithm. The prototype works as an interactive prop for the concept, delivered originally at the show as a live-sleep performance.

  • Narrative: In text of approximately 200-300 words, write a narrative which discusses your project in detail. What inspired you? How did you develop it? What problems did you have to solve? By what metrics should your project be evaluated (how would you know if you were successful)? In what ways was your project successful, and in what ways did it not meet your expectations? (If applicable) what’s next for this project? Be sure to include credits to any peers or collaborators who helped you, and acknowledgements for any open-source code or other resources you used.

Butterflies, caterpillars, and beetles have all developed patterns to look like big eyes. Pigeons appear constantly vigilant with their transparent eyelids, and in the mangrove forests of India, fishermen wear face masks on the backs of their heads to ward off attacks from Bengal tigers stalking from behind. These fun facts fascinated me as a child. In Chinese paintings, the eyes of imperial dragons are left blank – because to fill them in would be to bring the creature to life. In both nature and culture, eyes carry more gravitas than other body parts because they are most visibly affected by and reflective of an internal state. And if we were to go out on a limb and extrapolate further, making and changing eyes could be likened to creating and manipulating self/identity.

Ok, so eyes are clearly a great leverage point. But what did I want to manipulate them for? I knew there was something viscerally appealing about having and wearing eyes that, in a twist, weren’t reflective of my true internal state (which would be sleeping). There was something just plain exciting about the little act of duplicity.

The choice to use a phone is significant beyond its convenient form factor match with the sleeping mask. If anything in the world was supposed to ‘know’ me best, it’d probably be my iPhone. For better or for worse, its content fills my free time, and what’s on my mind is on my phone — and somehow that is an uncomfortable admission to make.

And so part of what I wanted to achieve with this experience was a degree of speculative discomfort. People should feel somewhat uncomfortable watching someone sleep with their phone eyes on, because it speaks to an underlying idea that one might offload living part (or more) of their life to an AI to handle.

Of course, however uncomfortable it might be, I do still hope people find a spark of joy in it, because there is something exciting about technology advancing to this implied degree of computer as extension of self.

In more practical terms of assessing my project, I think it hit uncanny and works well enough to get parts of the point across, but I struggle with what tone to use for framing or ‘marketing’ it. I would additionally love to hear more about how to set more considered metrics for evaluating this. On the technical side, the eye gaze tracking could still be finessed. The technical nuance of mapping eye gaze so the phone always looks like it’s staring straight at any visitors/viewers is an exciting challenge. For now, the system is very much based on how a different artist achieved it in a large scale installation of his — insert citation — but it could be done more robustly by calculating backwards from depth data the geometry and angle the eye would need. Part of that begins with adjusting the mapping to account for the front camera on the iPhone being off center from the screen, which is what most people are actually making ‘eye contact’ with.
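Roughly what I mean by that mapping, as a little sketch (a JavaScript stand-in rather than the actual Swift app, with made-up constants that would need per-device calibration):

```javascript
// Rough sketch (not the app's actual code): map a detected face position in the
// camera frame to a pupil offset on screen, nudged to compensate for the
// front camera sitting off-center from the display.
// CAMERA_OFFSET and MAX_PUPIL_TRAVEL are hypothetical, per-device values.
const CAMERA_OFFSET = { x: 0.0, y: -0.08 };   // camera sits slightly above screen center
const MAX_PUPIL_TRAVEL = 40;                  // pixels the pupil may move from center

function pupilOffset(face, frameWidth, frameHeight) {
  // face.x / face.y: center of the detected face in camera pixels.
  // Normalize to [-1, 1] with (0,0) at the frame center; mirror x for the selfie view.
  const nx = -((face.x / frameWidth) * 2 - 1) + CAMERA_OFFSET.x;
  const ny = ((face.y / frameHeight) * 2 - 1) + CAMERA_OFFSET.y;

  // Clamp and scale into the pupil's travel range.
  const clamp = (v) => Math.max(-1, Math.min(1, v));
  return {
    x: clamp(nx) * MAX_PUPIL_TRAVEL,
    y: clamp(ny) * MAX_PUPIL_TRAVEL,
  };
}
```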

In my brainstorms, I had imagined many more features to support the theme of the phone taking on the digital self. These included different emotion states as computational filters, microphone input to run NLP on, and automated responses so the phone could converse with people trying to talk to the sleeping user.

The next iteration of this would be to clean this up to work for users wanting their own custom eyes.

But for now — some other technical demos

Credits:

Thank you Peter Shehan for the concise name! Thank you Connie Ye, Anna Gusman, and Tatyana for joining in on a rich brainstorm session on names and marketing material. Thank you Aman Tiwari, Yujin Ariza, and Gray Crawford for that technical/UX brainstorm at 1 in the morning on how I might calibrate a user’s custom eye uploads. I will make sure to polish up the app with those amazing suggestions later on in life.

  • Repo Link. 

Disclaimer: Code could do with a thorough clean. Only tested as an iOS 12 app built on iPhone X.

lumar-FinalProposal

‘wide-awake’ sleeping mask

or ‘Sleep with one eye open’…’sleep walking’? …’seeing through the eyes of the computer?’ see as computers do? digitally enabling the unconscious?

technically —

I’m thinking of having a live performance of someone napping in the studio wearing a custom iPhone holding sleeping mask.

The tentative technical plan is to have a native app running full screen on the phone, tracking people watching the sleeping person through the front-facing camera, surreptitiously turned on the entire time. The eyes on display would react to people and optical flow in the environment…potentially…either way, we will see. Apple’s Core ML has some finicky aspects to it.

 

lumar–Project4feedback

The feedback was good to get. It’s evident that the concept communicated clearly, but I do need to work on having a more specific framing or tone for the experience: whether the connected hearts take on a sentimental, sincere angle, or dark humor à la Kawara and his series I Am Still Alive. It’s hard to gauge which to lean into, but it’s clear from the feedback that a stronger stance would make the project less banal.

The other interesting extrapolation would be whether the experience could be better contextualized physically as well; imagine the phone inserted into a stuffed animal or some other more form-appropriate manifestation.

lumar + greecus barcode project

Symphony of Geagle lanes

(abridged demo with a single Entropy lane)

Checking out to the theme of Game Of Thrones from Marisa Lu on Vimeo.

 

Greecus and Lumar imagined a whole row of checkout lanes in the grocery store all playing to the same timeline, each item checked out playing the next note in the song.

Unfortunately, Giant Eagle said they were ‘private property’ and would have to ‘contact corporate for permission’.

(The video here has the beeps overlaid on top because it’s actually rather difficult for a single person to scan things quickly enough for the music to work. The original concept of all the lanes working in unison doesn’t come across so well with a single Entropy employee.)

inspired by a New York turnstiles project

lumar-Telematic

I can calculate someone’s heart rate from their finger over the phone’s rear camera with the flash turned on. So can I make a program for people to send, receive, and exchange heartbeats?
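The math side is simple enough to sketch out (illustrative JavaScript, not the actual app code): average the red channel of each frame while the finger covers the lens, then count peaks in that brightness signal over time.

```javascript
// Rough sketch of the heart-rate idea, not the app's implementation:
// each frame's mean red value rises and falls with blood flow under the finger,
// so counting peaks over a time window gives beats per minute.
function estimateBPM(samples, fps) {
  // samples: array of mean red-channel values, one per frame
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;

  // Count upward crossings of the mean as "beats" (a crude peak detector).
  let beats = 0;
  for (let i = 1; i < samples.length; i++) {
    if (samples[i - 1] < mean && samples[i] >= mean) beats++;
  }

  const seconds = samples.length / fps;
  return (beats / seconds) * 60;
}

// e.g. ten seconds of frames at 30 fps: estimateBPM(redMeans, 30)
```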

There’s at least plenty of wordplay to twist around with this: “Holding your heart in the palm of my hand,” vulnerability, dark humor (Golan imagined someone on their death bed; I can also see it as something you use while having a “heart to heart,” or as an alternative to holding hands for a remote couple), and then I wonder…would it be strangely intimate to hold a stranger’s heart?

Kyle Machulis shared the thumb-kiss app, and it was delightfully simple and elegant. If I could get some of that Taptic Engine action here with the above heart program, that’d be fun on the technical side too.

_________________________________________________
WIP demos
_________________________________________________

Heart monitor from camera feed from Marisa Lu on Vimeo.

Started setting this up as a native app talking to a node.js server.
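For the record, the server side doesn’t need to be much more than a relay. A hypothetical sketch using the `ws` WebSocket package (not necessarily what the app will actually end up speaking):

```javascript
// Minimal sketch of a heartbeat relay, assuming the `ws` package (npm install ws).
// Not the project's actual server — just the shape of the idea: whatever one
// phone sends (a beat timestamp, a BPM), forward it to everyone else connected.
const WebSocket = require('ws');
const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (socket) => {
  socket.on('message', (message) => {
    // message might be JSON like {"bpm": 72, "t": 1541540000000}
    for (const client of server.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message.toString());
      }
    }
  });
});
```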

I wanted to stay within a single view with no other pages or hidden hamburger menus, but how do I make sure to stay communicative?

I occasionally gave my work-in-progress apps to unsuspecting classmates to see if they could figure out either the intent of the app or how to use it. Most of the compositional UI changes grew organically from that rather than from a well-defined and designed spec sheet, because I was wary of what I’d be able to achieve my first time in Swift.

One of the bigger UX changes came with a larger compromise in battery level, but haha, I think it’s worth it — when your finger is on the camera, the system detects it, toggles the flashlight on, and begins to read your heart rate automatically.
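The detection heuristic itself is nothing fancy; roughly (sketched in JavaScript here, though the app does it natively), a covered lens with the flash on reads as an overwhelmingly red, uniform frame:

```javascript
// Rough sketch of the "is a finger on the lens?" heuristic, not the app's code:
// with the flash on and a finger over the camera, frames are dominated by red,
// so a cheap color statistic over the frame is enough.
function fingerOnLens(pixels) {
  // pixels: flat RGBA array (e.g. from a canvas ImageData)
  let rSum = 0, gSum = 0, bSum = 0, count = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    rSum += pixels[i];
    gSum += pixels[i + 1];
    bSum += pixels[i + 2];
    count++;
  }
  const r = rSum / count, g = gSum / count, b = bSum / count;

  // Thresholds here are made up; they'd need tuning per device.
  return r > 80 && r > g * 1.8 && r > b * 1.8;
}
```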

Finger on and off from Marisa Lu on Vimeo.

lumar-LookingOutwards03

Heart monitor from camera feed from Marisa Lu on Vimeo.

I can calculate someone’s heart rate from their finger over the phone’s rear camera with the flash turned on. But what should I do with this heart rate?

Posting to social media with heart rates tagged in

^ I find this completely….un-compelling. What’s a simple but elegant use of heart rate?

I’m wary of using it as a way to create visuals, because as often seems to be the case, the system is more creative than whatever traditional form of art it produces.

Turning heart rate into a visualization, but really it’s just data turned into ink blobs that are kind of arbitrary and meaningless to me. What use is this visualization? For me, the humanity of the heart rate is gone in this.

Heart bot turning pulse into art

The thumb-kiss app Kyle Machulis shared makes me wonder if meditating to the heartbeat of your partner might be an interesting experience. Get some of that Taptic Engine action here with the above heart program. There’s something elegant about holding someone’s heart in the palm of your hand. Ok, maybe more cheesy, but there’s plenty of angles to take this.


The benefit of meditating in pairs

While not entirely similar, this is still in the realm of pulses and beats and body signals —

Electro-neurographic signals to morse code from Marisa Lu on Vimeo.

lumar-DrawingMachine

FINAL:

So. I didn’t end up liking any of my iterations or branches well enough to own up to them. I took a pass when my other deadlines came up, but I had a lot of fun during this process!

PROCESS

Some sketches —

Some resources —

Potential physical machines…

a Google experiment for a projector lamp: http://nordprojects.co/lantern/

1st prototype —

^ the above inspired by some MIT Media Lab work, including but not limited to —

Some technical decisions made and remade:

Welp. I really liked the self-contained nature of a CV-aided projector as my ‘machine’ for drawing, so I gathered all 20+ parts —

when your cords are too short.

printed some things, lost a lot of screws…and decided my first prototype was technically a little jank. I wanted to try and be more robust, so I got started looking for better libraries (WebRTC) and platforms. I ended up flashing the Android Things operating system (instead of Raspbian) onto the Pi. This OS is one Google has made specially for IoT projects, with integration and control through a mobile Android device —

and then along the way I found a company that has already executed on the projection table lamp for productivity purposes —

LAMPIX — TABLE TOP AUGMENTED REALITY

they have a much better hardware setup than I do

^ turning point:

I had to really stop and think about what I hoped to achieve with this project because somewhere out in the world there was already a more robust system/product being produced. The idea wasn’t particularly novel even if I believed I could make some really good micro interactions and UX flows, so I wasn’t contributing to a collective imagination either. So what was left? The performance? But then I’d be relying on the artist’s drawing skills to provide merit to the performance, not my actual piece.

60 lumens from Marisa Lu on Vimeo.

 

…ok so it was back to the drawing board.

Some lessons learned:

  • Worry about the hardware only after the software interactions are MVP, UNLESS! Unless the hardware is specially made for a particular software purpose (e.g. the Pixy Cam, with firmware and optimized HSB detection on-device)

ex: So, 60 lumens didn’t mean anything to me before purchasing all the parts for this project, but I learned that the big projector used in the Miller for exhibitions is 1500+ lumens. My tiny laser projector does very poorly in optimal OpenCV lighting settings, so I might have misspent a lot of effort trying to make everything a cohesive, self-contained machine…haha.

ex: the Pixy Cam is hardware-optimized for HSB object detection!

HSB colored object detection from Marisa Lu on Vimeo.

 

  • Some other library explorations

ex: So, back to the fan brush idea: testing some HSB detection and getting around to implementing a threshold-based region-growing algorithm for extracting the exact shape…
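For reference, the region-growing part is basically a flood fill with a tolerance; something like this sketch (plain JavaScript over a grayscale buffer, a stand-in for whatever the real pipeline ends up using):

```javascript
// Sketch of threshold-based region growing: starting from a seed pixel,
// keep absorbing 4-connected neighbors whose value is within `tolerance`
// of the seed. Returns a binary mask of the grown region.
function growRegion(gray, width, height, seedX, seedY, tolerance) {
  const mask = new Uint8Array(width * height);
  const seedValue = gray[seedY * width + seedX];
  const stack = [[seedX, seedY]];

  while (stack.length > 0) {
    const [x, y] = stack.pop();
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    const i = y * width + x;
    if (mask[i]) continue;                                   // already visited
    if (Math.abs(gray[i] - seedValue) > tolerance) continue; // outside threshold

    mask[i] = 1;
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return mask;
}
```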

 

  • Some romancing with math and geometry again

Gray showed me some of his research papers from his undergrad! Wow, such inspiration! I was bouncing around ideas for the body as a harmonograph or cycloid machine, and he suggested prototyping formulaic mutations, parameters, and animation in GeoGebra, and life has been gucci ever since.
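For anyone curious, the classic damped-pendulum harmonograph is just a couple of summed, decaying sine terms per axis, which makes it easy to mutate parameter by parameter. A quick JavaScript sketch with arbitrary constants:

```javascript
// Sketch of a two-pendulum-per-axis harmonograph: each axis is a sum of
// damped sinusoids. Tweaking frequency, phase, and damping gives the
// family of curves I was playing with in GeoGebra. Constants are arbitrary.
function harmonographPoint(t) {
  const damped = (A, f, p, d) => A * Math.sin(f * t + p) * Math.exp(-d * t);
  return {
    x: damped(100, 2.00, 0.0, 0.004) + damped(100, 3.01, Math.PI / 2, 0.006),
    y: damped(100, 3.00, Math.PI / 4, 0.005) + damped(100, 2.02, 0.0, 0.004),
  };
}

// Sample the curve:
// const points = Array.from({ length: 5000 }, (_, i) => harmonographPoint(i * 0.02));
```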

 

lumar-lookingoutwards2

I saw this recent work. I thought it was fun to see the Google Draw experiments made tangible and interactive, but…I actually included this piece because I wanted to bring up a potential critique: beyond the physical form just making it easier to exhibit, what does the tangible nature of the machine do for this experience? Does it fundamentally change or enhance the interaction? What the machine is doing is something the digital form can do just as easily. The way the user inputs things here is more or less the same as they would on the web version of this (a mouse on screen there, instead of a stiff, vertical pen on paper here): they begin to draw a doodle, and the machine tries to guess what it is and ‘autocomplete’ it. Where it doesn’t line up or guess correctly, your drawing ends up recreated as a strange hybrid of something visually similar. Do I want to keep the end product? Not really. Do I really cherish the experience? I don’t know; it doesn’t bring much new to the table that Google’s web version didn’t already, in terms of getting a sense of how good (or bad) the AI system behind it has gotten at image/object classification and computer vision.

So what is it that it brings? Is it the experience of seeing machine collaborate intelligently in realtime with you?

Kind of like Sougwen’s work — (see below) ?

Sougwen Chung, Drawing Operations Unit: Generation 2 (Memory), Collaboration, 2017 from sougwen on Vimeo.

lumar-mask

A day in the life of Me, Myself, and I

concept: I wear a mask that replaces everyone’s faces with my own. (pass-through VR in a modified Cardboard)

Some personal drivers for this assignment were that yes, the prompt is a “digital mask” that we perform with, but how do we take it out of the screen? I wanted to push myself to take advantage of the realtime aspect of this medium (face tracking software) — perhaps something that couldn’t be done better through post-processing in video software, despite the deliverable being a video performance.

My personal critique, though, is that I got so caught up in these thoughts, and in how to make the experience of performing compelling, that I neglected how it would come off as a performance to others. It feels like the merit/alluring qualities of the end idea (^ an egocentric world where you see yourself in everyone) get lost when the viewer is watching someone else immersed in their own “bubble.” What the performer sees (everyone with their face) is appealing to them personally, but visually uninteresting to anyone else.

Where to begin?

Brain dump onto the sketchbook. Go through the wonderful rabbit hole of links Golan provides. And start assessing the tools/techniques available.

Some light projection ideas (because it felt like light projection had an allure to it, a quality of light that isn’t as easily recreated in post-processing. Projecting a face of light onto a face has a surreal quality to it, blurring the digital with the physical)

And in case a performance doesn’t have time to be coordinated:

Pick an idea —

I’ll spend a day wearing the Google Cardboard with my phone running pass-through VR, masking everyone I see by replacing their face with my own. A world of me, myself, and I. Talk about mirror neurons!

Some of the resources gathered:

openFrameworks ARKit add-on to go with a sample project with ARKit face tracking (that doesn’t work, or at least I couldn’t figure out a null vertex issue)

openFrameworks iOS face tracking without the ARKit add-on, by Kyle McDonald

^ the above needs an alternative approach to the openFrameworks OpenCV add-on

Ok, so ideally it’d be nice to have multiple faces tracked at once — and Tatyana found a beautiful SDK that does that

CreateJS is a nice suite of libraries that handles animation, canvas rendering, and preloading in HTML5

Start —

— with a to do list.

  1. Make/get cardboard headset to wear
  2. Brainstorm technical execution
  3. Audit software/libraries to pick out how to execute
  4. Get cracking:
    1. Face detection on the REAR-facing camera (not the TrueDepth front-facing one), ideally in iOS
    2. Get landmark detection for rigging the mask to features
    3. Ideally, get estimated lighting…see if the TrueDepth front-facing face mask can be estimated from the rear
    4. Either just replace the 2D texture of the estimated 3D mask from #3 with my face, OR…shit…if only #2 is achieved…make a 3D model of my face with texture and then try to put it in AR attached to the pose estimation of the detected face
    5. Make sure to render in the Cardboard stereo (side-by-side) view (figure out how to do that)
  5. Get some straps
  6. And of course, googly eyes

 

…sad. What? NooooooooooooooooooooooooooooooooooooooOOoo…….

Re-evaluate life because the to do list was wrong and Murphy’s law strikes again —

Lol. Maybe. Hopefully it doesn’t come to this part. But I have a bad feeling…that I’ll have to shift back to JavaScript

Converting a JPG to base64 and using it as a texture…but only the eyebrows came in…lol…how to do life

Ok, so it turns out my conversion to base64 was off. I should just stick with using an online converter, like so —
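(Note to future me: the browser can do this conversion itself. A sketch of one way, drawing the image into a canvas and reading back a data URL instead of hand-rolling the base64 encoding. The `face.jpg` path and the `someTexture` target are placeholders:)

```javascript
// Sketch: get a base64 data URL for an image in the browser instead of
// converting by hand. The resulting string can be used directly as an
// image/texture source.
function imageToDataURL(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  return canvas.toDataURL('image/jpeg'); // "data:image/jpeg;base64,..."
}

// Hypothetical usage:
// const img = new Image();
// img.onload = () => { someTexture.src = imageToDataURL(img); };
// img.src = 'face.jpg';
```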

Though this method of UV-wrapping a 2D texture isn’t particularly satisfying, despite lots of trial and error doing incremental tweaks and changes. The craft is still low enough that it distracts from the experience, so either I get it high enough that it doesn’t detract, or I do something else entirely…

Two roads diverge…

Option 1: Bring the craft up higher…how?

  • Sample the video feed for ‘brightness’, namely the whites of the eyes in a detected face. Use that as a normalizing point to adapt the brightness of the image texture being overlaid
  • Replace entire face with a 3D model of my own face? (where does one go to get a quick and dirty but riggable model of one’s face?)
  • …set proportions? Maybe keep it from being quite so adaptable? The fluctuations very easily break the illusion
  • make a smoother overlay by image cloning! Thank you again, Kyle, for introducing me to this technique (see the sketch right after this list) — it’s about combining the high frequency of the overlay with a low-frequency (blurred) version of the bottom layer. He and his friend Arturo made a face substitution demo with it.
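Very roughly, the cloning trick looks like this (a grayscale JavaScript sketch with a hypothetical `blur()` helper, nowhere near the real shader version):

```javascript
// Sketch of the frequency-split cloning idea: keep the low frequencies
// (overall lighting) of the underlying face and add back only the high
// frequencies (detail) of the overlay. `blur` is a hypothetical helper,
// e.g. a box or Gaussian blur over a flat grayscale array.
function cloneBlend(overlay, base, width, height, blur) {
  const overlayLow = blur(overlay, width, height);
  const baseLow = blur(base, width, height);
  const out = new Float32Array(overlay.length);
  for (let i = 0; i < overlay.length; i++) {
    const highFreqDetail = overlay[i] - overlayLow[i]; // overlay detail only
    out[i] = baseLow[i] + highFreqDetail;              // detail on base lighting
  }
  return out;
}
```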

Option 2: Something new:

 

 

Made another set of straps to hold the phone to the front of the face


I was inspired by Tatyana’s idea of making a ‘Skype’ add-on feature that would shift the eyes of the caller to always appear to be looking directly at the ‘camera’ (aka, the other person on the end of the call)…only in this case, I’d have a “sleeping mask” strapped to my face to disguise my sleepy state as wide awake and attentive. The front-facing camera would run face detection on the camera view and move my eyes to ‘look’ like I’m staring directly at the nearest face in my field of view. When there’s no face, I could use computer-vision-calculated “optical flow” (as inspired by Eugene’s use of it for last week’s assignment) to get the eyes to ‘track’ movement and ‘assumed’ points of interest. The interesting bit would be negotiating between what I should be looking at — at what size of a face do I focus on that more than movement? Or vice versa?
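A rough sketch of how that negotiation could go (JavaScript pseudocode more than anything; the thresholds and the shapes of `faces` and `flow` are all made up):

```javascript
// Sketch of the "what should the eyes look at?" decision: prefer the nearest
// (largest) detected face, but if no face is big enough, fall back to the
// centroid of strong optical flow. Thresholds are invented placeholders.
const MIN_FACE_AREA = 0.02;     // fraction of the frame
const MIN_FLOW_MAGNITUDE = 1.5; // arbitrary motion-strength cutoff

function pickGazeTarget(faces, flow) {
  // faces: [{x, y, area}, ...] with area as a fraction of the frame
  // flow: {x, y, magnitude} — centroid and strength of detected motion
  if (faces.length > 0) {
    const nearest = faces.reduce((a, b) => (a.area > b.area ? a : b));
    if (nearest.area > MIN_FACE_AREA) return { x: nearest.x, y: nearest.y };
  }
  if (flow && flow.magnitude > MIN_FLOW_MAGNITUDE) {
    return { x: flow.x, y: flow.y };
  }
  return null; // nothing interesting: let the eyes idle and drift
}
```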

And the performance could be so much fun! I could just go and fall asleep in a public place, with my phone running the app to display the reactive eyes with the front facing camera on recording people’s reactions. Bwahahhaha, there’s something really appealing about then crowdsourcing the performance from everyone else’s reactions. Making unintentional performers out of the people around me.

And I choose the original path

A Japanese researcher has already made a wonderful iteration of the ^ aforementioned sleeping mask eyes.

Thank you Kyle McDonald for telling me about this!

And then regret it —

So switching from previewing on localhost on desktop to a mobile browser had some differences. An hour later, it turned out it’s because I was serving over HTTP instead of over an encrypted connection, HTTPS. No camera feed for the nonsecure!
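For future reference, the fix is just serving the page over TLS, e.g. a bare-bones Node HTTPS server with a self-signed certificate (the cert paths below are placeholders):

```javascript
// Sketch: serve the demo over HTTPS so the camera APIs work on mobile.
// key.pem / cert.pem are a self-signed pair (e.g. generated with openssl);
// the file paths here are placeholders.
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(fs.readFileSync('index.html'));
}).listen(8443, () => console.log('https://localhost:8443'));
```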

Next step: switching over to the rear-facing camera instead of the default front-facing one!

Damn. Maybe I should have stuck to Native

Read up on how to access different cameras on different devices and best practices for how to stream camera content! In my case I was trying to get the rear-facing camera on mobile, which turned out to be rather problematic…often causing the page to reload “because a problem occurred,” but the problem never got specified in the console…hmmm…
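What eventually made sense conceptually (sketched here from memory, not copied from the project): ask for the environment-facing camera via a `facingMode` constraint, and fall back if the device refuses the exact match.

```javascript
// Sketch: request the rear ("environment") camera, falling back to any camera
// if the exact constraint is rejected. Feed the stream into a <video> element.
async function startRearCamera(videoEl) {
  let stream;
  try {
    stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: { exact: 'environment' } },
      audio: false,
    });
  } catch (err) {
    // Some browsers/devices reject the exact constraint; take whatever we can get.
    stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  }
  videoEl.srcObject = stream;
  await videoEl.play();
}
```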

hmmm….

 

I swear it works ok on desktop. What am I doing wrong with mobile?

Anywhoooooooooo — eventually things sort themselves out, but next time? Definitely going to try and go native.

lumar-2DPhysics

Bouncy creatures are always fun. That’s where my brain went first with this assignment. I had previously done a springy pigeon that was incredibly satisfying to play with, so I wondered: could I build on that with Box2D?

(background SVG art not made for this assignment; pigeon remade in Matter.js, first with soft bodies and then with constraints)

 

Thoughts for physical-device-driven interactions — applying web events (device orientation and device motion) to the forces in the Matter.js engine.

Variation on springs and bouncy birds

But then I figured I had a week, so I should take more time to think of more interesting inputs and outputs before committing to something I already knew would have a satisfying, albeit rather expected, interaction —

General ideas that would make good use of physics
Thoughts on input methods / when and why some would be more appropriate than others
Trying to integrate input with interesting output

And what about an idea that I could build on later? Maybe the beginning of a series of things?

Or I should just think of it as: which simulations do I want to do? Because the fluid simulation library looks so fun, but I haven’t figured out a narrative/interaction/input/output that is compelling and justifies it. But again, this is an exercise, so maybe I should just do it because I want to.

 

 

…..5 days later….

 

 

Ok. So I turned the ideas around in my head, and something just made me really happy thinking about a future where I make a collection of different games for different muscle groups. I imagined some future where the opponents in the exercise games were actually other users or friends you had challenged remotely, and then it’s like crowdsourcing human computation to generate new/unique emergent behavior for the opponents every time you play. There’s an app that plays zombie noises to make a regular jog feel like a chase, but that sort of gamification-for-motivation gets old pretty quickly. A concern I do have though is…well, aren’t Wii games essentially games that made you move? But they fell out of favor…hmm…

….ok, had an interesting conversation with Jacquar and Gray about why the Wii felt gimmicky and eventually lost favor vs. traditional minimal-movement game controls, and then touching on Wii physical motion in games vs. motion in VR games. There’s an interesting dichotomy to analyze here. Will circle back on these thoughts after I make an MVP that doesn’t look like this:

Pigeon with Matter.js physics gone wrong from Marisa Lu on Vimeo.

So it started off with recreating the physics in Matter.js. I figured it would be simple enough to get a chain mechanism for the pigeon leg pinned to a particular location, and then! Then it had an unexpectedly animate reaction that happens to self-sustain infinitely.
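For anyone recreating it, the setup was conceptually along these lines (a hedged Matter.js sketch with arbitrary sizes and stiffnesses, not the exact code from the video):

```javascript
// Sketch of a chain "leg" pinned to a fixed point in Matter.js.
// Module names follow Matter.js; sizes, counts, and stiffness values are arbitrary.
const { Engine, Bodies, Composite, Composites, Constraint, Runner } = Matter;

const engine = Engine.create();

// A vertical stack of small links for the leg, chained together.
const leg = Composites.stack(200, 100, 1, 6, 0, 4, (x, y) =>
  Bodies.rectangle(x, y, 10, 24)
);
Composites.chain(leg, 0, 0.5, 0, -0.5, { stiffness: 0.9, length: 2 });

// Pin the top link to a fixed world point (the pigeon's hip, say).
Composite.add(engine.world, [
  leg,
  Constraint.create({
    pointA: { x: 200, y: 80 },
    bodyB: leg.bodies[0],
    pointB: { x: 0, y: -12 },
    stiffness: 1,
  }),
]);

Runner.run(Runner.create(), engine);
```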

…..I wonder, does this ^ count as emergent behavior? The way a double pendulum’s movement might, as a chaotic system.

 

So, back to an initial idea: a pigeon hanging precariously over an ocean of blood, relying on the user jerking the ‘spring’ (their device) up to keep the poor bird out of the shark’s jaws.

Bloodbath from Marisa Lu on Vimeo.

Later I tied it to device orientation, and any acceleration got converted into an ‘add force’ on the bodies in the Matter engine. (Kinda, and then I broke it while figuring out some browser issues, but yay, if y’all ever need a full-screen browser, I highly recommend BrowserMax.)
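The device-motion hookup, roughly (a sketch with a made-up force scale, assuming a Matter `engine` is already in scope like in the chain sketch above):

```javascript
// Sketch: turn device acceleration into forces on Matter.js bodies.
// FORCE_SCALE is arbitrary and would need tuning per device and body mass.
const FORCE_SCALE = 0.0005;

window.addEventListener('devicemotion', (event) => {
  const acc = event.accelerationIncludingGravity;
  if (!acc) return;

  for (const body of Matter.Composite.allBodies(engine.world)) {
    if (body.isStatic) continue;
    Matter.Body.applyForce(body, body.position, {
      x: acc.x * FORCE_SCALE,
      y: -acc.y * FORCE_SCALE, // flip y: screen coordinates point down
    });
  }
});
```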

feel free to try it in a device browser; I’d recommend BrowserMax for fullscreen