lchi-Mask

It is partly an audiovisual performance and partly an attempt to reflect on the famous excerpt from Dziga Vertov’s Man with a Movie Camera that was included in John Berger’s Ways of Seeing TV series. Both works try to analyse “seeing” and how the camera changes the way we see.

In the beginning, I approached this project with one question in mind: what does the face tracking algorithm “see”? Trying to escape the tracking with facial expression alone (without tilting my head), I squashed and stretched my face as much as possible. Without much success, and without a solid idea of how to continue, I started remapping the face to find inspiration.

I settled on the cropped eyes because they have always looked eerie and intriguing to me: the false sense of eye contact, the information we can extract from them or project onto them. This is a performance of my eyes watching John Berger’s Ways of Seeing introducing Dziga Vertov’s Man with a Movie Camera, reflecting on what the “mechanical eye” has become, controlling two masks with my two eyes, and trying to make it interesting.

Credits:
Music: Neu! – Super 16
Voice: John Berger – Ways of Seeing

 

lass-mask

  

The first version of my program took my face and made it smooth. I thought this was pretty cool, and my friend Eileen Lee told me it reminded her of noppera bo monsters (faceless ghosts). However, I realized that I tailored the program very specifically to my own face. It wasn’t smooth enough when I tested it on people with a visible nose bridge, but by trying to remove the nose bridge I ended up erasing my glasses. It was also hard to erase things like cheekbones, since it might end up blurring into things outside of the face and produce the wrong colors. Pretty much, it worked well for me because my face is flat to begin with. :’)

I made a second version using a similar idea, but instead of choosing areas to smooth based on facial landmarks, the user can select which areas need to be “corrected” using colored markings. I was heavily inspired by the Peter Campus performance  where he paints green paint on his face and uses chroma key to show another image of his face underneath. I think this version is better for a performance because you can slowly increase the amount of smoothing on your face.

I made this project because I was thinking about insecurity about one’s face, and about our weird obsession with “nice” or “smooth” skin. When I see my relatives, they usually greet me by commenting on my face or skin. Things like this often make me wish I could hide my face completely.

All of the blending was achieved with OpenCV seamless clone, and the face tracking in the first version was done with dlib. Special thanks to that school cancellation that gave me 5 more days to work on this!

jaqaur – Mask

Chime Mask

I wasn’t super inspired for this project, but I had a few concepts I wanted to try to work in:

  • Masquerade-style masks, especially asymmetrical ones like this: 
  • Lots of shiny pieces (I really liked the glittery look of tli’s 2D Physics project, and wanted to make something that kind of looked like that)
  • Interaction, besides just the mask sticking to your face. I wanted moving parts, or something that you could do with the mask on besides just looking at yourself.

I drew up a few ideas (a little below), but what I ended up with was this: something metallic that would sparkle and could be played like an instrument.

My initial design for Windchime Mask, based on the FaceOSC raw points. Points I added are colored black, but points I moved, like the eyes and eyebrows, are not shown.

My idea was that by opening your mouth (or maybe nodding, or some other gesture), you could play a note on a chime above your head. By rotating your head, you could change which note was selected. In this way, I would make a face instrument and perform a short song on it. The fact that the chimes on top were different lengths was both accurate to how they are in real life and reminiscent of the asymmetrical masks I love so much.

My early idea sketches:

I started by making the part around the eyes, which I did by receiving the raw FaceOSC data. To get the proper masquerade look, I needed to use different points than FaceOSC was tracking. So I made my own list of points. Some, like those on the cheeks, are the average of two other points (sometimes midpoint, sometimes 2/3 or something: I played with it). Others, like the eyes and eyebrows, are just exaggerations of the existing shapes. For example, masquerade masks generally have wider eye-holes than one’s actual eye size, so I pushed all of the eye points out from the center of their respective eye (twice as much). I did the same for the eyebrows. This results in a shape that more accurately resembles a real mask and also is more expressive when the user moves their eyes/eyebrows.
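Roughly, the point math looks something like this (a sketch in JavaScript rather than the original Processing code; names are made up):

```js
// Hypothetical sketch (not the original Processing code): exaggerate eye
// landmarks by pushing each point away from the eye's centroid.
function exaggerate(points, factor = 2.0) {
  // centroid of the eye contour
  const cx = points.reduce((s, p) => s + p.x, 0) / points.length;
  const cy = points.reduce((s, p) => s + p.y, 0) / points.length;
  // move each point outward from the centroid by `factor`
  return points.map(p => ({
    x: cx + (p.x - cx) * factor,
    y: cy + (p.y - cy) * factor,
  }));
}

// e.g. rawLeftEye is an array of {x, y} landmarks from FaceOSC
// const maskLeftEye = exaggerate(rawLeftEye, 2.0);
```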

An early stage of Windchime Mask

I made the mask shape out of a bunch of triangles that shift their color based on the rotation of the user’s head. I started them with different colors so that the light and dark parts would be spread apart. I was hoping that it would look like it was made out of angled metal bits, and I think that comes across in the final product. The pieces make it look somewhat 3D even though they are all in 2D, and I’m happy about that.
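The shading idea, very roughly (a hypothetical sketch, not the actual code; the rotation-to-brightness formula is made up):

```js
// Hypothetical sketch: shade each mask triangle based on head rotation so the
// flat 2D pieces read like angled metal catching the light.
function shade(baseGray, headYaw, headRoll, phase) {
  // each triangle gets its own phase offset so light and dark pieces interleave
  const light = Math.sin(headYaw * 2.0 + phase) * 0.5 + Math.cos(headRoll * 2.0 + phase) * 0.5;
  // map [-1, 1] to a brightness offset around the triangle's base gray value
  return Math.max(0, Math.min(255, baseGray + light * 60));
}
```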

When it came time to do the interactions, I found the one I had initially described somewhat challenging, because it involved extrapolating points so far above the head, and they would have to move in a realistic way whenever the head did. I also had another idea that I began to like more: dangling chimes. I thought that if I dangled metal bits below the eye-mask, they could operate like a partial veil, obscuring the mouth. They also could clang together like real wind chimes.

I used Box2D for the dangling chimes, and much like with my Puppet Bot, the physics settings had to be played with quite a bit. If I made the chimes too heavy, the string would get stretched or broken, but if I made the chimes too light, they didn’t move realistically. I ended up going with light-medium chimes, heavy strings connecting them to the mask, and very strong gravity. It isn’t perfect, and I feel like there is probably a better way to do what I’m doing, but I tried several things and couldn’t find the best one.

In any case, the interaction is still fun and satisfying, and I actually really like the metallic aesthetic I have here. I did put thought into what kind of performance I wanted to do (it would have been easier if I had made a mask you could play like an instrument), but I couldn’t think of anything too interesting. So, here I am, showing off my mask in a performative way (sound on to get the full chime effect):

Overall, I am pretty satisfied with Windchime Mask. It’s not one of my favorites. A few things I would ideally change about it are:

  • The sound of the chimes isn’t quite right. I’d like them to resonate a bit more, but I couldn’t find the right audio file. At least there are some windchimes that sound like this, so I can say I was trying to emulate them?
  • More problematically, the sound of the windchimes seriously degrades after about 45 seconds with the mask. It gets all garbled, buzzy, and demonic. I have no idea why this is. I tried searching for the problem and debugging it myself, to no avail.
  • The physics of the chimes has the same problem my PuppetBot did; it’s all a little too light and bouncy. Maybe that’s the nature of Box2D? But more likely, I’m missing something I need to change besides just the density/gravity parameters.

All of that said, I have fun playing with this mask. I like the look of it, and I especially like the idea of this mask in real life (though it probably would not turn out so well with the chimes slapping against my face). I’ll call this a success!

sjang-mask

Reunion/Re-enactment – Face Mask Experiments

Beginnings 

The work was initially sparked by the idea of mapping my own history onto my face. The core concept was to express and encapsulate in these masks a multitude of selves from different points in time, which would allow me to travel back and access what has been filed away or buried within. I began creating masks from photographs that represented specific pivotal times of my life and that brought back certain moods and emotions. Trying on masks of my own face from the past brought back a flood of mixed emotions and memories, which transported me to a strange mental place. While trying to create a naturally aged version of my face, I was inspired to overlap my mother’s face with mine. This led me to create a series of face masks that represented my mother at critical stages in her life, in tandem with mine. It only made sense, considering how much of an impact my difficult relationship with my mother has had in shaping who I am, in more or less all aspects of my life. My mother’s masks enabled me to express her voices – much of what I have internalized – which continue to be a source of internal strife and conflict. The process of creating the masks itself opened a way for me to untangle complex thoughts and feelings, and to give distinct faces and voices to contradictory aspects of my identity.

I created a face texture generator that converts images to base64 and creates uv maps based on the facial landmark vertices. A critical factor guiding my image selection was having a clear, high-res frontal view of the face. Some of the images were then manipulated for improved landmark detection.
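Roughly, the two steps look something like this (a minimal JavaScript sketch, assuming `landmarks` is an array of {x, y} pixel coordinates detected on the source photo; not the exact generator code):

```js
// Hypothetical sketch of the two steps described above.

// 1. Convert an image file to a base64 data URL so it can be embedded/stored.
function toBase64(file) {
  return new Promise(resolve => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result); // "data:image/jpeg;base64,..."
    reader.readAsDataURL(file);
  });
}

// 2. Turn landmark pixel positions into UV coordinates in [0, 1].
function landmarksToUVs(landmarks, imgWidth, imgHeight) {
  return landmarks.map(p => [p.x / imgWidth, 1 - p.y / imgHeight]); // flip V for GL-style UVs
}
```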

The Software

The mask software that evolved from these explorations allows me to rapidly switch between 15 face masks that act as portals to the past and future. The masks are carefully overlaid on my face, and the overlapping facial features provide an interesting, subtle tension, which is amplified by abrupt movement. The masks imbue my face with color and bring it to life. Depending on which mask I have on, I find myself naturally feeling and acting in a different way, and also engaging in private conversations with myself. I can choose to swap masks by speedily flipping through my choices, dramatically transitioning from one to another with visual effects, or simply selecting the one I want from a drop-down menu.

The various types of masks created for the performance

The act of putting on the masks is at once an unsettling self-interrogation and a cathartic form of self-expression. The software allows me to navigate and embody the myriad identities and the spectrum of emotional states bubbling beneath the surface. The merging of my mother’s face with my own as I age expresses my hope for a certain closure and embrace with time. An image of my grandmother was also used to create an older mask version of my mother. The faces that form the masks span three generations, reflecting a sense of continuity and a shared form of identity.

Experiments with different types of blend modes and mask transition effects that took advantage of technical glitches
Face flipping in action
Gradual flickering transition test
Blurring in and out between swaps

The Beyond Reality Face SDK v4.1.0 (BRFv4) was used for face detection/tracking and for the face texture overlay, which dynamically changes with eye blinks and mouth opening. Mario Klingemann’s Superfast Blur script/algorithm was used for the animated blur effects during mask transitions.

The Performance 

The mask performance that I had originally planned was scripted based on my recollections of interactions between me and my mother at different stages of our lives. I wanted the sparse words that make up the dialogues to capture the essence of events and moments in the past that continue to resonate, without being so detailed as to become overly personal and revealing. I wanted to allow enough room for viewers to relate and reflect on their own experiences. I had René Watz’s Out Of Space Image Pan playing in the background (a muffled, static-like noise) to create an eerie, alien atmosphere.

What I failed to realize until too late, however, was how wrong I was in thinking I could perform this, especially for a public audience. Hearing my own voice out loud made me hyper conscious of how disconcertingly different it was from my inner cacophony of voices, which further alienated me from my own internal experiences I wanted to re-enact. The performance was overall a pretty disastrous experience, despite my repeated attempts to overcome my cognitive dissonance and deficiencies.

The first version of the script roughly reflected the five stages of grief: denial, anger, bargaining, depression, and acceptance (reconciliation). Realizing later that the script was too long for a two-minute performance, I condensed it to a much shorter version, which no longer stuck to a linear temporal narrative. I played with scrambling bits and pieces of the script in a random order to reflect an inner chaos. My plans to convey this sense of disorientation through my script and acting, however, were effectively squashed by my clumsy attempts at performing it.

Excerpts from a rough draft of the script

Ideally, I would like to have the script read by someone with a deep, expressive voice, and have a recording of it play like a voice-over during the performance. Specific parts of the script would be associated with specific masks and dynamically played when the mask is put on. Without the dissonance of hearing my own voice, I imagine I would be able to channel my thoughts and feelings through my body language and facial expressions.

It would also be interesting to explore more meaningful ways of selecting and transitioning between masks. I thought of invoking certain masks depending on the similarity of facial expression and emotions, although the library I am currently using is not readily capable of distinguishing such subtle differences and nuances in expression. I also thought of using songs or pieces of music that I associate with a specific time and space to trigger the mask transitions.

Process/Notes 

I went through many different ideas and designs before I embarked on the work I described above. Below are some excerpts from my ideation notes that show some of my thought process:

ya-mask

For this assignment, I created a musical instrument using my face. While not strictly a visual mask, I liked the idea of an aural mask augmenting audio input, transforming the human voice into something unrecognizable.

The final result is a performance in which I use the movements of my mouth in order to control the audio effects that are manipulating the sounds from my mouth. An overall feedback delay is tied to the openness of my mouth, while tilting my head or smiling distorts the audio in different ways. Also I mapped the orientation of my face to the stereo pan, making the audio mix move left and right.

One interesting characteristic about real-world instruments as compared to purely digital ones is the interdependence of the parameters. While an electronic performer can map individual features of sound to independent knobs, controlling them separately, a piano player is given overlapped control over the tonality of the notes: hitting the keys harder will result in a brighter tone, but also an overall louder sound. While it may seem like an unnecessary constraint, this often results in performances with more perceived expression, as musicians must take extreme care if they intend to play a note in a certain way. I wanted to mimic this interdependence in my performance, so I purposefully overlapped multiple controls of my audio parameters to the same inputs on my face. Furthermore, the muscles on my face often are affected by one another, so this constrained the space of control that I was given to manipulate the sound. The end result is me performing some rather odd contortions of my face to get the sounds that I wanted.
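Very roughly, “overlapping” the controls means something like this (a hypothetical sketch; the real mapping lives in the Max for Live devices, and the parameter names and weights here are made up):

```js
// Hypothetical sketch of overlapping controls: the same facial inputs feed
// several effect parameters at once, so nothing can be adjusted in isolation.
function faceToParams(face) {
  // face = { mouthOpen: 0..1, headTilt: -1..1, smile: 0..1, yaw: -1..1 }
  return {
    delayFeedback: 0.8 * face.mouthOpen,                      // mouth drives the delay...
    distortionDrive: 0.5 * face.mouthOpen + 0.5 * face.smile, // ...but also part of the distortion
    filterCutoff: 0.3 + 0.7 * Math.abs(face.headTilt),        // tilting shifts the filter
    pan: face.yaw,                                            // head orientation pans left/right
  };
}
```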

The performance setup uses a mix of software and hardware. First, I attached a contact microphone to my throat, using a choker to secure it in place. The sound input is then routed to Ableton Live, where I run my audio effects. A browser-based JavaScript environment is used to track and visualize the face from a webcam using handsfree.js, and to send parameters of the facial expression through OSC and WebSockets. Because Ableton Live can only receive UDP, however, a local server instance is used to pass the WebSocket OSC data over to a UDP connection, which Ableton can receive using custom Max for Live devices.
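A minimal sketch of such a relay in Node.js (not the exact server used here; the port numbers are assumptions, and the OSC packets are simply forwarded as raw bytes, since WebSocket and UDP carry the same binary payload):

```js
// Minimal sketch of a WebSocket -> UDP relay for OSC packets (ports are assumptions).
// An OSC packet is just bytes, so it can be forwarded to UDP unchanged.
const WebSocket = require('ws');   // npm install ws
const dgram = require('dgram');

const udp = dgram.createSocket('udp4');
const wss = new WebSocket.Server({ port: 8080 }); // the browser sends OSC here

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // forward the raw OSC bytes to the port the Max for Live device listens on
    udp.send(data, 9000, '127.0.0.1');
  });
});
```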

An Ableton Live session and a Node JS server running in the background.

For the visuals of the piece, I wanted something simple that showed the mouth abstracted away and isolated from the rest of the face, as the mouth itself was the main instrument. I ended up using paper.js to draw smooth paths of the contours of my tracked lips, colored white and centered on a black background. For reference, I also included the webcam stream in the top corner; in a live setting, however, I would probably not show the webcam stream as it is redundant.
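A rough paper.js sketch of that drawing step (assuming `lipPoints` comes from the tracker each frame; not the exact project code):

```js
// Rough paper.js sketch: draw the tracked lip contour as a smooth white shape
// on a black background. `lipPoints` ({x, y} array) is assumed to come from
// the face tracker each frame.
paper.setup(document.getElementById('canvas'));

const background = new paper.Path.Rectangle(paper.view.bounds);
background.fillColor = 'black';

let lipPath = null;
function drawLips(lipPoints) {
  if (lipPath) lipPath.remove();          // redraw each frame
  lipPath = new paper.Path({
    segments: lipPoints.map(p => new paper.Point(p.x, p.y)),
    closed: true,
    fillColor: 'white',
  });
  lipPath.smooth();                        // turn the polygon into a smooth curve
  lipPath.position = paper.view.center;    // keep the mouth centered
}
```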

 

yeen-mask

I made a “texting” app that reads your text out loud based on your facial expression. I was triggered by the thought that texting software allows people to be emotionally absent. But what if texting apps required users to be emotionally present all the time, by reading your text out loud, or even sending your text the way your face looks as you type behind the screen?

I started by exploring the iPhone X face tracking in ARKit. Then I combined facial features with the speech synthesizer by manipulating the pitch and rate of each sentence. Things about your face that change the sound include (a rough sketch of the mapping follows the list below):

  • Rounder eyes – slower speech
  • Squintier eyes – faster speech
  • More smiley – higher pitch
  • More frowny – lower pitch
  • Jaw wide open – insert “haha”
  • Tongue out – insert “hello what’s up”
  • Wink – insert “hello sexy”
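For illustration only, here is a rough sketch of the same mapping idea using the browser’s Web Speech API (the actual app uses ARKit and the iOS speech synthesizer, and the feature values here are hypothetical):

```js
// Illustrative sketch only (the real app uses ARKit + iOS speech synthesis):
// the same mapping idea with the Web Speech API. `face` values in [0, 1]
// are hypothetical blendshape-style features.
function speakWithFace(text, face) {
  if (face.jawOpen > 0.7) text += " haha";
  if (face.tongueOut > 0.5) text += " hello what's up";
  if (face.wink > 0.5) text += " hello sexy";

  const u = new SpeechSynthesisUtterance(text);
  // squintier eyes -> faster; rounder eyes -> slower (rate default is 1)
  u.rate = 0.6 + 1.2 * face.eyeSquint;
  // smiling raises pitch, frowning lowers it (pitch range is 0..2)
  u.pitch = 1 + 0.8 * face.smile - 0.8 * face.frown;
  speechSynthesis.speak(u);
}
```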

Process:

Through lots of trial and error, I changed many things. For example, I initially had rounder eyes mapped to faster speech (and vice versa), but during testing I found that the opposite mapping felt more natural…

My performance is a screen recording of me using the app.

 

In the final version, I added a correlation between the hue of the glasses and expression: the sum of pitch and rate changes the hue.

sketches

!credits to Gray Crawford who helped me extensively with Xcode visual elements!

ngdon-mask

age2death

age2death is a mirror with which you can watch yourself aging to death online. Try it out: https://age2death.glitch.me

GIFs

^ timelapse

^ teeth falling

^ decaying

^ subtle wrinkles

Process

Step 1: Facial Landmark Detection + Texture Mapping

I used brfv4 through handsfree.js to retrieve facial landmarks, and three.js to produce the 3D meshes.

I wanted to get the forehead too, which is not covered by the original 68 key points. However, the size and orientation of a person’s forehead are somewhat predictable from the rest of their face, so I calculated 13 new points for the forehead.
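One plausible way to do this kind of extrapolation (a sketch, not necessarily the exact formula used):

```js
// Rough sketch (not necessarily the exact formula used): estimate forehead
// points by pushing brow landmarks upward, away from the chin, by a fraction
// of the brow-to-chin distance.
function foreheadPoints(browPoints, chin, scale = 0.9) {
  return browPoints.map(p => ({
    x: p.x + (p.x - chin.x) * scale * 0.15,   // slight horizontal spread
    y: p.y + (p.y - chin.y) * scale,          // extend upward along the brow->chin axis
  }));
}
// With the 68-point model, browPoints could be landmarks 17..26 and chin
// could be landmark 8; sampling intermediate points could give the extra 13.
```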

To achieve the effect I wanted, I had two possible approaches in mind. The first was to duplicate the captured face mesh, texture the copies with the “filters” I wanted to apply, and composite the 3D rendering with the video feed.

The second was to “peel” the skin off the captured face and stretch it onto a standard set of key points. The filters are then applied to this 2D texture, which is in turn used to texture the mesh.

I went with the second approach because my intuition was that it would give me more control over the imagery. I never tried the first approach, but I’m happy with the one I chose.
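The core of the “peel” step is warping each landmark triangle of the live frame onto its counterpart in a fixed layout. Here is a hedged canvas 2D sketch of warping one textured triangle with an affine transform (the actual implementation may differ, e.g. by letting three.js handle it through UVs):

```js
// Hedged sketch of the core "peel" operation: copy one triangle of the live
// video frame onto its counterpart triangle in a standard texture layout,
// using an affine transform on a 2D canvas. Repeating this per landmark
// triangle flattens the face into a canonical texture.
function warpTriangle(ctx, img, src, dst) {
  // src and dst are arrays of three {x, y} points
  const [s0, s1, s2] = src, [d0, d1, d2] = dst;
  const ux = s1.x - s0.x, uy = s1.y - s0.y;
  const vx = s2.x - s0.x, vy = s2.y - s0.y;
  const px = d1.x - d0.x, py = d1.y - d0.y;
  const qx = d2.x - d0.x, qy = d2.y - d0.y;
  const det = ux * vy - vx * uy;

  // affine matrix mapping the source edge vectors onto the destination edge vectors
  const a = (px * vy - qx * uy) / det, c = (qx * ux - px * vx) / det;
  const b = (py * vy - qy * uy) / det, d = (qy * ux - py * vx) / det;
  const e = d0.x - (a * s0.x + c * s0.y);
  const f = d0.y - (b * s0.x + d * s0.y);

  ctx.save();
  ctx.beginPath();                       // clip to the destination triangle
  ctx.moveTo(d0.x, d0.y);
  ctx.lineTo(d1.x, d1.y);
  ctx.lineTo(d2.x, d2.y);
  ctx.closePath();
  ctx.clip();
  ctx.setTransform(a, b, c, d, e, f);    // draw the whole image through the transform
  ctx.drawImage(img, 0, 0);
  ctx.restore();
}
```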

Step 2: Photoshopping Filters

My next step was to augment the texture of the captured face so it looks as if it is aging.

I made some translucent images in Photoshop, featuring some wrinkles (of different severity), blotches, white eyebrows, an almost-dead-pale-blotchy face, a decaying corpse face, and a skull according to the standard set of key points I fixed.

I tried to make them contain as little information about gender, skin color, etc. as possible, so that the filters would ideally be applicable to everyone when blended using the “multiply” blend mode.

Moreover, I made the textures rather “modular”. For example, the white brow image contains only the brow, the blotch image contains only the blotch, so I can mix and match different effects in my code.
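A hypothetical sketch of how such modular layers can be stacked with the “multiply” blend mode (the layer structure and fade ranges are made up):

```js
// Hypothetical sketch: stack modular aging layers on top of the flattened face
// texture using the canvas "multiply" blend mode, with per-layer opacity
// controlled by the current "age".
function applyFilters(ctx, faceTexture, layers, age) {
  ctx.globalCompositeOperation = 'source-over';
  ctx.globalAlpha = 1;
  ctx.drawImage(faceTexture, 0, 0);
  ctx.globalCompositeOperation = 'multiply';
  for (const layer of layers) {
    // layer = { image, start, end } -- fade each layer in over its age range
    const t = (age - layer.start) / (layer.end - layer.start);
    ctx.globalAlpha = Math.max(0, Math.min(1, t));
    ctx.drawImage(layer.image, 0, 0);
  }
  ctx.globalAlpha = 1;
  ctx.globalCompositeOperation = 'source-over';
}
```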

Most of the textures were synthesized by stretching, warping, compositing, and applying other Photoshop magic to found images.

Currently I’m loading the textures directly in my code, but in the future procedurally generating the textures might suit my taste better.

^ Some early tests. Problem is that the skin and lips are too saturated for an old person.

After spending a lot of time testing the software, I feel so handsome IRL.

Step 3: Skewing the Geometry

Besides the change in skin condition, I also noticed that the shape of the face and its features changes in old age. Therefore, I made the chin sag a bit and the corners of the eyes move down a bit over time, among other subtle adjustments.

Step 4: Transitions

Once I had a couple of filters ready, I tested the effect by linearly interpolating between them in code. It worked pretty well, but the simple fade-in/fade-out effect looked a bit cheap. I wanted it to be more than just face-swapping.

One of the improvements I made is what I call the “bunch of blurry growing circles” transition. The idea is that some parts of the skin get blotchy or rotten before others, and the effect expands and grows from a few areas of the face to the whole face.

I achieved the effect with a mask containing a bunch of blurry growing circles (hence the name). As the circles grow, the new image (represented by black areas on the mask) is revealed.

My first thought on how to achieve this kind of effect was to sample Perlin noise, but I figured it would be less performant unless I wrote a shader for it. The circles turned out to look pretty good (and run fast).
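A hedged sketch of the circle-mask transition (the polarity is flipped here: the new texture is revealed wherever circles have been painted, which is the same idea):

```js
// Hedged sketch of the "bunch of blurry growing circles" transition.
function circleRevealMask(maskCtx, circles, t) {
  const { width, height } = maskCtx.canvas;
  maskCtx.clearRect(0, 0, width, height);
  maskCtx.filter = 'blur(12px)';                    // soft circle edges
  maskCtx.fillStyle = 'white';
  for (const c of circles) {                        // circles = [{x, y, maxR}, ...]
    maskCtx.beginPath();
    maskCtx.arc(c.x, c.y, c.maxR * t, 0, Math.PI * 2);
    maskCtx.fill();
  }
  maskCtx.filter = 'none';
}

function composite(outCtx, oldTex, newTex, maskCanvas, tmpCtx) {
  const { width, height } = tmpCtx.canvas;
  outCtx.drawImage(oldTex, 0, 0);                   // start from the old face
  tmpCtx.clearRect(0, 0, width, height);
  tmpCtx.globalCompositeOperation = 'source-over';
  tmpCtx.drawImage(newTex, 0, 0);
  tmpCtx.globalCompositeOperation = 'destination-in';
  tmpCtx.drawImage(maskCanvas, 0, 0);               // keep the new face only inside the circles
  outCtx.drawImage(tmpCtx.canvas, 0, 0);            // lay it over the old face
}
```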

Another problem I found was that the “post-mortem” effects (i.e. corpse skin, skull, etc.) were somewhat awkward. Since only the face is gross while the rest of the body is intact, the viewer tends to feel the effects are “just a mask”. I also didn’t want the effects to be scary in the sense of scary monsters in scary movies. My solution was to darken the screen gradually after death, so that when the face finally turns into a skull, the screen is all black. I think of it as hiding my incompetence by making it harder to see.

I also made heavy use of HTML canvas blend modes such as multiply, screen, and source-in to desaturate and darken parts of the skin.

^ Some blending tests

Step 5: Movable Parts

After I implemented the “baseline”, I thought I could make the experience more fun and enjoyable by adding little moving things such as bugs eating your corpse and teeth falling down when you’re too old.

The bugs are pretty straightforward: a bunch of little black dots whose movements are driven by Perlin noise. I think I need to improve them in the future, though, because when you look closely you can see that the bugs are nothing more than little black dots. Some (animated?) images of bugs would probably work better.
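A rough sketch of the noise-driven dots (assuming a 2D noise(x, y) helper returning values in [0, 1], such as p5.js’s noise(); not the exact project code):

```js
// Hedged sketch of noise-driven "bugs", assuming a noise(x, y) helper that
// returns values in [0, 1] (e.g. p5.js's noise()).
function drawBugs(ctx, bugs, time) {
  ctx.fillStyle = 'black';
  for (const bug of bugs) {              // bugs = [{seed, x, y}, ...]
    // each bug wanders according to its own slice of the noise field
    const angle = noise(bug.seed, time * 0.01) * Math.PI * 4;
    bug.x += Math.cos(angle) * 1.5;
    bug.y += Math.sin(angle) * 1.5;
    ctx.fillRect(bug.x, bug.y, 2, 2);    // currently just a little black dot
  }
}
```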

I did the falling teeth in two parts: a particle effect for each individual falling tooth, and a mask that hides the teeth that have already fallen.

I liked the visual effect of teeth coming off one by one, but sometimes the facial landmark detection is inaccurate around the mouth, and you can sort of see your real teeth behind the misaligned mask. I probably need to think of some more sophisticated algorithms.

^ left: teeth mask, right: teeth falling. Another thing I need to improve is the color of the teeth. Currently I assume a yellowish color, but it would probably be better to sample the user’s real teeth color, or, more easily, filter the real teeth toward the same yellowish color.

Step 6: Timing & Framing

I adjusted the speed and pace of the aging process so that at first viewers are almost looking into a mirror, with nothing strange happening. Only slowly and gradually do they realize the problem. Finally, when they are dead, their corpse decays quickly and everything fades into darkness.

I also wanted something more special than the default 640x480px window. I thought a round canvas might remind people of a mirror.

I made the camera follow the head, because the head is what really matters and what I would like people to focus on. It also looks nicer as an image when there isn’t a busy background.
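A hedged sketch of the framing: a circular clip for the mirror look, plus an eased translation that keeps the tracked head centered (the 0.1 smoothing factor is an assumption):

```js
// Hedged sketch of the framing: clip the canvas to a circle (mirror) and pan
// the view so the tracked head stays centered.
let viewX = 0, viewY = 0;
function drawMirror(ctx, frame, faceCenter) {
  const { width: w, height: h } = ctx.canvas;
  // ease the view toward the face so the "camera" follows the head smoothly
  viewX += (faceCenter.x - w / 2 - viewX) * 0.1;
  viewY += (faceCenter.y - h / 2 - viewY) * 0.1;

  ctx.save();
  ctx.beginPath();
  ctx.arc(w / 2, h / 2, Math.min(w, h) / 2, 0, Math.PI * 2);
  ctx.clip();                                 // round, mirror-like frame
  ctx.drawImage(frame, -viewX, -viewY);       // shift the frame opposite to the face offset
  ctx.restore();
}
```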

Step 7: Performance

I decided to read a poem for my performance. I thought my piece wouldn’t need many performative motions, and that some quiet poem-reading would best bring out the atmosphere.

I had several candidates, but the poem I finally picked is called “Dew on the Scallion (薤露)”. It was a mourning song written in 202 BC in ancient China.

薤上露,何易晞。

露晞明朝更复落,

人死一去何时归!

 

蒿里谁家地,

聚敛魂魄无贤愚。

鬼伯一何相催促,

人命不得少踟蹰。

I don’t think anyone has translated it to English yet, so here’s my own attempt:

The morning dew on the scallion,

how fast it dries!

it dries only to form again tomorrow,

but when will he come back, when a person dies?

 

Whose place is it, this mountain burial ground?

yet every soul, foolish or wise, rests there.

And with what haste Death drives us on,

with no chance of lingering anywhere.

Byproduct: face-paint

Try it out: https://face-paint.glitch.me

While making my project, I found that painting on top of your own face is also quite fun. So I made a little web app that lets you do that.

ulbrik-mask

My Two Other Faces

Glitch Demo

This performance is an investigation of the potential existence of multiple personalities within ourselves. It features an exploration of the two other people generated from the left or right sides of my face in a digital mirror. The program I created for the performance can take one half of my face and reflect it over the centerline to cover the other half and create a new face entirely from one side.

Website for doing something similar with static photos: http://www.pichacks.com

I was inspired by reading about occasional emotional asymmetry of the face and the differences between our right and left brains. I had also seen reflected versions of my face before as pictures and found it fascinating to imagine these other people trapped inside me. I was curious to meet a live action version of them and see if they embodied half of my personality.

Illustration of reflection over pose/orientation line

To create this project, I used the FaceOSC openFrameworks app along with an example Processing app as a jumping-off point to create a digital mirror with face tracking. I cut out half the face, then reflected, rotated, scaled, and positioned it back onto the other side of the face. I included controls to switch between the different possible faces and to make minor rotation and scale adjustments, for fine-tuning to the crookedness of your face. It functions best if you do not twist your face.
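A simplified sketch of the reflection step on a 2D canvas, mirroring across a vertical centerline (the actual Processing app reflects across the face’s pose line and adds the rotation/scale fine-tuning):

```js
// Simplified sketch: mirror the left half of the frame across a vertical
// centerline estimated from the tracked face.
function twoLeftFaces(ctx, video, centerX) {
  const { width: w, height: h } = ctx.canvas;
  ctx.drawImage(video, 0, 0, w, h);     // draw the normal frame

  ctx.save();
  ctx.beginPath();
  ctx.rect(centerX, 0, w - centerX, h); // only touch the right half
  ctx.clip();
  ctx.translate(2 * centerX, 0);        // reflect about x = centerX ...
  ctx.scale(-1, 1);                     // ... so x -> 2*centerX - x
  ctx.drawImage(video, 0, 0, w, h);     // the left half appears mirrored on the right
  ctx.restore();
}
```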

My face composed of two right faces

Jackalope-mask

Hmm, so the initial idea was to make a “mask” that would transform me into my Warrior cat-sona from my childhood (see here if you’re unfamiliar with Warriors). The relationship between the mask and the performance is that individual features of the mask are triggered by certain motions.

There’s a lot that I wish I could have done but couldn’t, as I realized while progressing through this project. It seems p5.js can’t load the .mtl files associated with .obj models, hence the mask is colorless. It also seems p5.js can’t load more than one model at a time (I think????), so the performance and implementation ended up involving switching between different features. You can see from the sketch below that it looks pretty different from the final result.

Overall, not a favorite project, since it’s not particularly polished or attractive, but I had fun haha. With more time I’d like to switch away from p5.js and also add the ability for the mask to rotate with my head.

Initial planning:

lumar-mask

A day in the life of Me, Myself, and I

concept: I wear a mask that replaces everyone’s faces with my own (pass-through VR in a modified Google Cardboard).

One personal driver for this assignment was that, yes, the prompt is a “digital mask” that we perform with, but how do we take it out of the screen? I wanted to push myself to take advantage of the realtime aspect of this medium (face tracking software) — perhaps something that couldn’t be done better through post-processing in video software, despite the deliverable being a video performance.

My personal critique, though, is that I got so caught up in these thoughts, and in how to make the experience of performing compelling, that I neglected how it would come off as a performance to others. The merit and alluring qualities of the end idea (an egocentric world where you see yourself in everyone) get lost when the viewer is watching someone else immersed in their own “bubble”. What the performer sees (everyone with their face) is appealing to them personally, but visually uninteresting to anyone else.

Where to begin?

Brain dump onto the sketch book. Go through the wonderful rabbit hole of links Golan provides. And start assessing tools/techniques available.

Some light-projection ideas (light projection has an allure to it, a quality of light that isn’t as easily recreated in post-processing; projecting a face made of light onto a real face feels surreal, blurring the digital with the physical).

And in case a performance doesn’t have time to be coordinated:

Pick an idea —

I’ll spend a day wearing the Google Cardboard with my phone running pass-through VR, masking everyone I see by replacing their face with my own. A world of me, myself, and I. Talk about mirror neurons!

Some of the resources gathered:

openFrameworks ARKit add-on, to go with a sample project using ARKit face tracking (which doesn’t work, or at least I couldn’t figure out a null-vertex issue)

openFrameworks iOS face tracking without the ARKit add-on, by Kyle McDonald

^ the above needs an alternative approach to OpenCV, the openFrameworks CV add-on

OK, so ideally it’d be nice to track multiple faces at once — and Tatyana found a beautiful SDK that does that

CreateJS is a nice suite of libraries that handles animation, canvas rendering, and preloading in HTML5

Start —

— with a to do list.

  1. Make/get cardboard headset to wear
  2. Brainstorm technical execution
  3. Audit software/libraries to pick out how to execute
  4. Get cracking:
    1. Face detection on the REAR-facing camera (not the TrueDepth front-facing one), ideally in iOS
    2. Get landmark detection for rigging the mask to facial features
    3. Ideally, get estimated lighting ….see if the TrueDepth front-facing face mask can be estimated from the rear
    4. Either just replace the 2D texture of the estimated 3D mask from #3 with my face, OR….shit….if only #2 is achieved…make a 3D model of my face with texture and then try to place it in AR, attached to the pose estimation of the detected face
    5. Make sure to render in the Cardboard stereo view (figure out how to do that)
get some straps
And of course googly eyes

 

…sad. What? NooooooooooooooooooooooooooooooooooooooOOoo…….

Re-evaluate life because the to do list was wrong and Murphy’s law strikes again —

Lol. Maybe. Hopefully it doesn’t come to this part. But I have a bad feeling…that I’ll have to shift back to javascript

Converting jpg to base 64 and using it as a texture…but only the eyebrows came in….lol….how to do life

Ok, so it turns out my conversion to base 64 was off. I should just stick with using an online converter like so —

This method of UV-wrapping a 2D texture isn’t particularly satisfying, despite lots of trial and error with incremental tweaks and changes. The craft is still low enough that it distracts from the experience, so either I get it high enough that it doesn’t detract, or I do something else entirely….

Two roads diverge…

Option 1: Bring the craft up higher…how?

  • Sample the video feed for ‘brightness’, namely the whites of the eyes in a detected face, and use that as a normalization point to adapt the brightness of the overlaid image texture
  • Replace the entire face with a 3D model of my own face? (where does one go to get a quick and dirty but riggable model of one’s face?)
  • …set proportions? Maybe keep it from being quite so adaptable? The fluctuations very easily pull everything out of the illusion
  • make a smoother overlay by image cloning! Thank you again Kyle for introducing me to this technique — it’s about combining the high frequencies of the overlay with a low-frequency (blurred) shader fragment of the bottom layer (a rough per-pixel sketch of the idea follows below). He and his friend Arturo made
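A rough per-pixel sketch of that frequency-split idea (a real version would live in a fragment shader; the 8px blur radius is arbitrary):

```js
// Rough per-pixel sketch of the frequency-split cloning idea: keep the
// overlay's high frequencies (detail) but take the low frequencies (color,
// lighting) from the underlying video.
function cloneBlend(overlay, base, width, height) {
  const blur = (img) => {
    const c = document.createElement('canvas');
    c.width = width; c.height = height;
    const g = c.getContext('2d');
    g.filter = 'blur(8px)';
    g.drawImage(img, 0, 0, width, height);
    return g.getImageData(0, 0, width, height).data;
  };
  const raw = (img) => {
    const c = document.createElement('canvas');
    c.width = width; c.height = height;
    const g = c.getContext('2d');
    g.drawImage(img, 0, 0, width, height);
    return g.getImageData(0, 0, width, height);
  };

  const over = raw(overlay), overLow = blur(overlay), baseLow = blur(base);
  const out = over.data;
  for (let i = 0; i < out.length; i += 4) {
    for (let k = 0; k < 3; k++) {
      // result = base lows + (overlay - overlay lows), i.e. the overlay's highs
      out[i + k] = Math.max(0, Math.min(255, baseLow[i + k] + out[i + k] - overLow[i + k]));
    }
    // alpha channel is left as the overlay's alpha
  }
  return over; // ImageData, ready for putImageData()
}
```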

Option 2: Something new:

 

 

Made another set of straps to hold the phone to the front of the face


I was inspired by Tatyana’s idea of making a ‘Skype’ add-on feature that would shift the eyes of the caller to always appear to be looking directly at the ‘camera’ (aka the other person on the end of the call)…only in this case, I’d have a “sleeping mask” strapped to my face to disguise my sleepy state as wide awake and attentive. The front-facing camera would run face detection in the camera view and move my displayed eyes to ‘look’ like I’m staring directly at the nearest face in my field of view. When there’s no face, I could use computer-vision-calculated “optical flow” (as inspired by Eugene’s use of it for last week’s assignment) to get the eyes to ‘track’ movement and ‘assumed’ points of interest. The interesting bit would be negotiating what I should be looking at — at what size of a face do I focus on it more than on movement? Or vice versa?

And the performance could be so much fun! I could just go and fall asleep in a public place, with my phone running the app to display the reactive eyes while the front-facing camera records people’s reactions. Bwahahhaha, there’s something really appealing about crowdsourcing the performance from everyone else’s reactions, making unintentional performers out of the people around me.

And I choose the original path

A Japanese researcher has already made a wonderful iteration of the ^ aforementioned sleeping mask eyes.

Thank you Kyle McDonald for telling me about this!

And then regret it —

So switching from previewing on localhost on desktop to a mobile browser brought some differences. An hour later, it turned out it was because I was serving the page over HTTP instead of HTTPS: no camera feed for the nonsecure!

Next step: switching over to the rear-facing camera instead of the default front-facing one!

Damn. Maybe I should have stuck to Native

Read up on how to access different cameras on different devices and best practices for how to stream camera content! In my case I was trying to get the rear-facing camera on mobile, which turned out to be rather problematic…often causing the page to reload “because a problem occurred”, which never got specified in the console….hmmm…
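For reference, a hedged sketch of requesting the rear camera with getUserMedia on a secure (HTTPS) origin; the exact-constraint-with-fallback pattern is one reasonable approach, not necessarily what finally worked here:

```js
// Hedged sketch of requesting the rear-facing camera in a mobile browser.
// This only works on a secure (HTTPS) origin.
async function startRearCamera(videoEl) {
  let stream;
  try {
    stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: { exact: 'environment' } }, // insist on the rear camera
      audio: false,
    });
  } catch (err) {
    // some devices/browsers reject "exact"; fall back to a soft preference
    stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'environment' },
      audio: false,
    });
  }
  videoEl.srcObject = stream;
  await videoEl.play();
}
```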

hmmm….

 

I swear it works ok on desktop. What am I doing wrong with mobile?

Anywhoooooooooo — eventually things sort themselves out, but next time? Definitely going to try and go native.