sjang-mask

Reunion/Re-enactment – Face Mask Experiments

Beginnings 

The work was initially sparked by the idea of mapping my own history onto my face. The core concept was to express and encapsulate in these masks the multitude of selves from different points in time, so that I could travel back and access what’s been filed away or buried within. I began creating masks from photographs that represented pivotal times in my life and brought back certain moods and emotions. Trying on masks of my own face from the past brought back a flood of mixed emotions and memories, which transported me to a strange mental place. While trying to create a naturally aged version of my face, I was inspired to overlap my mother’s face with mine. This led me to create a series of face masks that represented my mother at critical stages of her life, in tandem with mine. It only made sense, considering how much of an impact my difficult relationship with my mother had on shaping who I am, in more or less all aspects of my life. My mother’s masks enabled me to express her voices – much of which I have internalized, and which continue to be a source of internal strife and conflict. The process of creating the masks itself opened a way for me to untangle complex thoughts and feelings, and to give distinct faces and voices to contradictory aspects of my identity.

I created a face texture generator that converts images to base64 and creates UV maps based on the facial landmark vertices. A critical factor guiding my image selection was having a clear, high-res frontal view of the face. Some of the images were then manipulated for improved landmark detection.
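As a rough sketch of what such a generator might look like in browser JavaScript (not the actual code; the landmark format and file name are assumptions):

```js
// Minimal sketch: read an image back as a base64 data URL and normalize
// landmark pixel coordinates into 0..1 UV space. Assumes landmarks are
// [{x, y}, ...] pixel coordinates from a tracker such as BRFv4.
function imageToBase64(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  return canvas.toDataURL('image/jpeg'); // base64-encoded data URL
}

function landmarksToUV(landmarks, imgWidth, imgHeight) {
  // Flip V so the texture isn't upside down in typical GL conventions.
  return landmarks.map(p => ({ u: p.x / imgWidth, v: 1 - p.y / imgHeight }));
}

const img = new Image();
img.onload = () => {
  const texture = imageToBase64(img);
  // const uvs = landmarksToUV(detectedLandmarks, img.naturalWidth, img.naturalHeight);
  console.log(texture.slice(0, 40) + '…');
};
img.src = 'mask-photo.jpg'; // hypothetical file name
```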

The Software

The mask software that evolved from these explorations allows me to rapidly switch between 15 face masks that act as portals to the past and future. The masks are carefully overlaid on my face, and the overlapping facial features provide an interesting, subtle tension, which is amplified by abrupt movement. The masks imbue my face with color and bring it to life. Depending on which mask I have on, I find myself naturally feeling and acting in a different way, and engaging in private conversations with myself. I can choose to swap masks by speedily flipping through my choices, dramatically transitioning from one to another with visual effects, or simply selecting the one I want from a drop-down menu.

The various types of masks created for the performance

The act of putting on the masks is at once an unsettling self-interrogation and a cathartic form of self-expression. The software allows me to navigate and embody the myriad identities and spectrum of emotional states I have bubbling beneath the surface. The merging of my mother’s face with my own as I age expresses my hope for a certain closure and embrace with time. An image of my grandmother was also used to create an older mask version of my mother. The faces that form the masks span three generations, reflecting a sense of continuity and a shared form of identity.

Experiments with different types of blend modes and mask transition effects that took advantage of technical glitches
Face flipping in action
Gradual flickering transition test
Blurring in and out between swaps

The Beyond Reality Face SDK v4.1.0 (BRFv4) was used for face detection/tracking and for the face texture overlay, which dynamically changes with eye blinks and mouth openings. Mario Klingemann’s Superfast Blur script/algorithm was used for the animated blur effects during mask transitions.
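For a rough idea of what an animated blur transition between two mask textures could look like in code, here is a minimal sketch (it uses the built-in canvas 2D blur filter as a stand-in for the Superfast Blur script, and the image variables are hypothetical):

```js
// Cross-fade between two mask images while blurring out and back in.
// oldMask and newMask are hypothetical preloaded Image objects.
function blurTransition(ctx, oldMask, newMask, duration = 600) {
  const start = performance.now();
  function frame(now) {
    const t = Math.min((now - start) / duration, 1);   // 0..1 progress
    const blur = Math.sin(t * Math.PI) * 12;           // peak blur mid-transition
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.filter = `blur(${blur}px)`;                    // stand-in for Superfast Blur
    ctx.globalAlpha = 1 - t;
    ctx.drawImage(oldMask, 0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.globalAlpha = t;
    ctx.drawImage(newMask, 0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.globalAlpha = 1;
    ctx.filter = 'none';
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```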

The Performance 

The mask performance that I had originally planned was scripted based on my recollections of interactions between my mother and me at different stages of our lives. I wanted the sparse words that make up the dialogues to capture the essence of events and moments in the past that continue to resonate, while not being too detailed in an overly personal and revealing way. I wanted to allow enough room for viewers to be able to relate and reflect on their own experiences. I had René Watz’s Out Of Space Image Pan playing in the background (a muffled static-like noise) to create an eerie, alien atmosphere.

What I failed to realize until too late, however, was how wrong I was in thinking I could perform this, especially for a public audience. Hearing my own voice out loud made me hyper-conscious of how disconcertingly different it was from my inner cacophony of voices, which further alienated me from the internal experiences I wanted to re-enact. The performance was overall a pretty disastrous experience, despite my repeated attempts to overcome my cognitive dissonance and deficiencies.

The first version of the script roughly reflected the five stages of grief – denial, anger, bargaining, depression, and acceptance (reconciliation). Later realizing that the script was too long for a two-minute performance, I condensed it into a much shorter version, which no longer stuck to a linear temporal narrative. I played with scrambling bits and pieces of the script in random order, to reflect an inner chaos. My plans to convey this sense of disorientation through my script and acting, however, were effectively squashed by my clumsy attempts at performing it.

Excerpts from a rough draft of the script

Ideally I would like to have the script read by someone with a deep, expressive voice, and have a recording of it play like a voice-over during the performance. Specific parts of the script would be associated with specific masks and dynamically played when the mask is put on. Without the dissonance of hearing my own voice, I imagine I would be able to channel my thoughts and feelings through my body language and facial expressions.

It would also be interesting to explore more meaningful ways of selecting and transitioning between masks. I thought of invoking certain masks based on the similarity of my facial expression and emotional state, although the library I am currently using is not readily capable of distinguishing such subtle differences and nuances in expression. I also thought of using songs or pieces of music that I associate with a specific time and space to trigger the mask transitions.

Process/Notes 

I went through many different ideas and designs before I embarked on the work I described above. Below are some excerpts from my ideation notes that show some of my thought process:

ya-mask

For this assignment, I created a musical instrument using my face. While not strictly a visual mask, it plays with the idea of an aural mask: augmenting audio input and transforming the human voice into something unrecognizable.

The final result is a performance in which I use the movements of my mouth to control the audio effects that manipulate the sounds coming from my mouth. An overall feedback delay is tied to the openness of my mouth, while tilting my head or smiling distorts the audio in different ways. I also mapped the orientation of my face to the stereo pan, making the audio mix move left and right.

One interesting characteristic of real-world instruments, compared to purely digital ones, is the interdependence of their parameters. While an electronic performer can map individual features of the sound to independent knobs and control them separately, a piano player is given overlapping control over the tonality of the notes: hitting the keys harder will result in a brighter tone, but also an overall louder sound. While it may seem like an unnecessary constraint, this often results in performances with more perceived expression, as musicians must take extreme care if they intend to play a note in a certain way. I wanted to mimic this interdependence in my performance, so I purposefully overlapped multiple controls of my audio parameters onto the same inputs from my face. Furthermore, the muscles of my face often affect one another, which further constrained the space of control I had to manipulate the sound. The end result is me performing some rather odd contortions of my face to get the sounds that I wanted.
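A tiny sketch of what that deliberate overlap could look like in code (the parameter names and weights are hypothetical; the real mappings live in the Max for Live devices inside Ableton):

```js
// One facial input drives several audio parameters at once, so nothing
// can be adjusted in isolation. Values are illustrative only.
function mapFaceToAudio(face) {
  const { mouthOpen, headTilt, smile, yaw } = face; // assumed normalized 0..1
  return {
    delayFeedback: 0.2 + 0.7 * mouthOpen,        // mouth opening raises feedback...
    delayWetness:  0.1 + 0.5 * mouthOpen,        // ...and also the wet/dry mix
    distortion:    0.6 * headTilt + 0.4 * smile, // tilt and smile share one "knob"
    pan:           yaw * 2 - 1                   // face orientation sweeps left/right
  };
}
```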

The performance setup uses a mix of software and hardware. First, I attached a contact microphone to my throat, using a choker to secure it in place. The sound input is routed to Ableton Live, where I run my audio effects. A browser-based JavaScript environment is used to track and visualize the face from a webcam using handsfree.js, and to send facial expression parameters as OSC over WebSockets. Because Ableton Live can only receive UDP, however, a local server instance is used to pass the WebSocket OSC data over to a UDP connection, which Ableton can receive using custom Max for Live devices.

An Ableton Live session and a Node.js server running in the background.
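A minimal sketch of that bridging server in Node.js (assumptions: the browser sends already-encoded binary OSC packets over the WebSocket, and the Max for Live device listens for UDP on port 9000; the ports are placeholders, not the actual setup):

```js
// WebSocket -> UDP bridge: forwards binary OSC packets from the browser
// to a UDP port that a Max for Live device can listen on.
const WebSocket = require('ws');   // npm install ws
const dgram = require('dgram');

const udp = dgram.createSocket('udp4');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  ws.on('message', data => {
    // Assume `data` is a raw OSC packet; relay it unchanged to Ableton.
    udp.send(data, 9000, '127.0.0.1');
  });
});

console.log('Bridging ws://localhost:8080 -> udp://127.0.0.1:9000');
```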

For the visuals of the piece, I wanted something simple that showed the mouth abstracted away and isolated from the rest of the face, as the mouth itself was the main instrument. I ended up using paper.js to draw smooth paths of the contours of my tracked lips, colored white and centered on a black background. For reference, I also included the webcam stream in the top corner; in a live setting, however, I would probably not show the webcam stream as it is redundant.
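Something along these lines in paper.js (a sketch that assumes paper.setup() has already been called and that lipPoints is an array of {x, y} landmark positions for the outer lip contour):

```js
// Draw a smooth, closed white path through the tracked lip landmarks.
// `lipPoints` is assumed to come from the face tracker each frame.
let lipPath = null;

function drawLips(lipPoints) {
  if (lipPath) lipPath.remove();                 // clear the previous frame's path
  lipPath = new paper.Path({
    segments: lipPoints.map(p => new paper.Point(p.x, p.y)),
    closed: true,
    fillColor: 'white'
  });
  lipPath.smooth();                              // smooth the polygon into curves
  lipPath.position = paper.view.center;          // center the mouth on the black background
}
```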

 

yeen-mask

I made a “texting” app that reads your text out loud based on your facial expression. The idea was triggered by the thought that texting software allows people to be emotionally absent. But what if texting apps required users to be emotionally present all the time – by reading your text out loud, or even sending your text the way your face looks while you type behind the screen?

I started by exploring ARKit face tracking on the iPhone X. Then I combined facial features with the speech synthesizer by manipulating the pitch and rate of each sentence. Things about your face that change the sound include:

  • Rounder eyes – slower speech
  • Squintier eyes – faster speech
  • More smiley – higher pitch
  • More frowny – lower pitch
  • Jaw wide open – insert “haha”
  • Tongue out – insert “hello what’s up”
  • Wink – insert “hello sexy”
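The actual app was built with ARKit in Xcode, but the mapping boils down to something like the following, shown in JavaScript purely for illustration (the input names, ranges, and thresholds are assumptions, not the app's real constants):

```js
// Map face measurements (assumed normalized 0..1) to speech parameters
// and inserted phrases.
function faceToSpeech(face, text) {
  const rate  = 0.5 + 0.4 * (face.eyeSquint - face.eyeRound); // squinty = faster, round = slower
  const pitch = 1.0 + 0.5 * (face.smile - face.frown);        // smiley = higher, frowny = lower
  let spoken = text;
  if (face.jawOpen > 0.7)   spoken += ' haha';
  if (face.tongueOut > 0.5) spoken += " hello what's up";
  if (face.wink > 0.5)      spoken += ' hello sexy';
  return { rate, pitch, spoken };
}
```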

Process:

Through lots of trial and error, I changed many things. For example, I initially had the opposite mapping – rounder eyes meant faster speech, and vice versa – but during testing I found that the reverse felt more natural…

My performance is a screen recording of me using the app.

 

In the final version, I added a correlation between the hue of the glasses and expression: the sum of pitch and rate changes the hue.

sketches

Credits to Gray Crawford, who helped me extensively with the Xcode visual elements!

ngdon-mask

age2death

age2death is a mirror with which you can watch yourself aging to death online. Try it out: https://age2death.glitch.me

GIFs

^ timelapse

^ teeth falling

^ decaying

^ subtle wrinkles

Process

Step 1: Facial Landmark Detection + Texture Mapping

I used brfv4 through handsfree.js to retrieve facial landmarks, and three.js to produce the 3D meshes.

I wanted to include the forehead too, which is not covered by the original 68 key points. However, the size and orientation of a person’s forehead are somewhat predictable from the rest of the face, so I calculated 13 new points for the forehead.
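One way to do this kind of extrapolation (a guess at the idea, not the actual math used here) is to mirror part of the jawline across the line through the eyes, so the forehead arc roughly echoes the jaw’s curve:

```js
// Estimate forehead points by reflecting part of the jawline across the
// line through the two outer eye corners (landmarks 36 and 45 in the
// standard 68-point layout; `landmarks` is an array of {x, y}).
function estimateForehead(landmarks, count = 13) {
  const a = landmarks[36], b = landmarks[45];      // outer eye corners define the axis
  const dx = b.x - a.x, dy = b.y - a.y;
  const len2 = dx * dx + dy * dy;
  const reflect = p => {
    const t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;  // projection onto the eye line
    const fx = a.x + t * dx, fy = a.y + t * dy;              // foot of the perpendicular
    return { x: 2 * fx - p.x, y: 2 * fy - p.y };             // mirror across the line
  };
  // Reflect `count` points from the middle of the jawline (points 2..14).
  return landmarks.slice(2, 2 + count).map(reflect);
}
```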

To achieve the effect I wanted, I had two possible approaches in mind. The first was to duplicate the captured face mesh, texture the copies with the “filters” I want to apply, and composite the 3D rendering with the video feed.

The second was to “peel” the skin off the captured face and stretch it onto a standard set of key points. The filters are then applied to this 2D texture, which is used to texture the mesh.

I went with the second approach because my intuition was that it would give me more control over the imagery. I never tried the first approach, but I’m happy with the one I chose.
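In three.js terms, the second approach boils down to something like this simplified sketch (the canonical UVs, triangulation, and filter canvas are assumed to be prepared elsewhere; this is not the actual implementation): the mesh’s vertices follow the live landmarks while its UVs stay fixed to the standard key-point layout, and the texture is the filtered 2D canvas.

```js
import * as THREE from 'three';

// `canonicalUVs`: flat array of u,v pairs for the standard key points.
// `faceIndices`: triangulation of the landmarks. `filterCanvas`: the
// offscreen canvas holding the filtered ("aged") face texture.
function buildFaceMesh(canonicalUVs, faceIndices, filterCanvas) {
  const geometry = new THREE.BufferGeometry();
  geometry.setIndex(faceIndices);
  geometry.setAttribute('uv', new THREE.Float32BufferAttribute(canonicalUVs, 2));
  geometry.setAttribute('position',
    new THREE.Float32BufferAttribute(new Float32Array((canonicalUVs.length / 2) * 3), 3));

  const texture = new THREE.CanvasTexture(filterCanvas);
  const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

  mesh.userData.update = landmarks => {        // call this every tracking frame
    const pos = geometry.attributes.position;
    landmarks.forEach((p, i) => pos.setXYZ(i, p.x, p.y, 0));
    pos.needsUpdate = true;                    // vertices follow the face...
    texture.needsUpdate = true;                // ...while the UVs stay canonical
  };
  return mesh;
}
```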

Step 2: Photoshopping Filters

My next step was to augment the texture of the captured face so it looks as if it is aging.

I made some translucent images in Photoshop, featuring wrinkles (of different severity), blotches, white eyebrows, an almost-dead pale blotchy face, a decaying corpse face, and a skull, all aligned to the standard set of key points I fixed.

I tried to make them contain as little information about gender, skin color, etc. as possible, so that these filters would ideally be applicable to everyone when blended using the “multiply” blend mode.

Moreover, I made the textures rather “modular”: the white brow image contains only the brows, the blotch image contains only the blotches, and so on, so I can mix and match different effects in my code.
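Mixing and matching in code is then mostly a matter of compositing the chosen layers onto the face texture with the multiply blend mode, roughly like this (a sketch; the image variables are hypothetical):

```js
// Composite a selection of modular "aging" layers onto the face texture
// using the multiply blend mode, each with its own strength.
function applyFilters(ctx, faceTexture, layers) {
  ctx.globalCompositeOperation = 'source-over';
  ctx.drawImage(faceTexture, 0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalCompositeOperation = 'multiply';
  for (const { image, opacity } of layers) {   // e.g. wrinkles, blotches, white brows
    ctx.globalAlpha = opacity;
    ctx.drawImage(image, 0, 0, ctx.canvas.width, ctx.canvas.height);
  }
  ctx.globalAlpha = 1;
  ctx.globalCompositeOperation = 'source-over';
}

// Usage (hypothetical images):
// applyFilters(ctx, faceCanvas, [{ image: wrinkles, opacity: 0.8 }, { image: blotches, opacity: 0.4 }]);
```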

Most of the textures were synthesized by stretching, warping, compositing, and applying other Photoshop magic to found images.

Currently I’m loading the textures directly in my code, but in the future, procedurally generating the textures might fit my taste better.

^ Some early tests. Problem is that the skin and lips are too saturated for an old person.

After spending a lot of time testing the software, I feel so handsome IRL.

Step 3: Skewing the Geometry

Besides the change in skin condition, I also noticed that in real life the shape of an older person’s face and facial features changes as well. Therefore, I made the chin sag a bit and the corners of the eyes move down a bit over time, among other subtle adjustments.

Step 4: Transitions

After I had a couple of filters ready, I tested the effect by linearly interpolating between them in code. It worked pretty well, but I thought the simple fade-in/fade-out effect looked a bit cheap. I wanted it to be more than just face-swapping.

One of the improvements I made is what I call the “bunch of blurry growing circles” transition. The idea is that some parts of the skin get blotchy/rotten before others, and the effects sort of expand/grow from a few areas of the face to the whole face.

I achieved the effect with a mask containing a bunch of blurry growing circles (hence the name). As the circles grow, the new image (represented by the black areas of the mask) is revealed.

My first thought was to sample Perlin noise, but I figured that would be less performant unless I wrote a shader for it. The circles turned out to look pretty good (and run fast).
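A stripped-down version of that mask could look like this (a sketch of the idea, not the actual implementation): draw blurred circles that grow with the transition progress into an offscreen mask, then use it to composite the new face over the old one.

```js
// Reveal `newFace` over `oldFace` through a mask of blurry, growing circles.
// `seeds` are random {x, y} centers chosen once per transition; t goes 0..1.
function growingCirclesTransition(ctx, oldFace, newFace, seeds, t) {
  const { width, height } = ctx.canvas;
  const mask = document.createElement('canvas');
  mask.width = width; mask.height = height;
  const mctx = mask.getContext('2d');

  mctx.filter = 'blur(20px)';                        // soft circle edges
  mctx.fillStyle = 'black';
  for (const s of seeds) {
    mctx.beginPath();
    mctx.arc(s.x, s.y, t * width * 0.4, 0, Math.PI * 2);  // circles grow with t
    mctx.fill();
  }

  // Keep only the parts of the new face where the circles have been painted.
  mctx.globalCompositeOperation = 'source-in';
  mctx.filter = 'none';
  mctx.drawImage(newFace, 0, 0, width, height);

  ctx.drawImage(oldFace, 0, 0, width, height);       // old face underneath
  ctx.drawImage(mask, 0, 0);                         // masked new face on top
}
```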

Another problem I found was that the “post-mortem” effects (i.e. corpse skin, skull, etc.) are somewhat awkward. Since only the face is gross while other parts of the body are intact, I think the viewer tends to feel the effects are “just a mask”. I also don’t want the effects to be scary in the sense of scary monsters in scary movies. Therefore my solution was to darken the screen gradually after death, and when the face finally turns into a skull, the screen is all black. I think of it as hiding my incompetence by making it harder to see.

I also made heavy use of HTML canvas blend modes such as multiply, screen, source-in, etc. I desaturate and darken parts of the skin.

^ Some blending tests

Step 5: Movable Parts

After I implemented the “baseline”, I thought I could make the experience more fun and enjoyable by adding little moving things such as bugs eating your corpse and teeth falling down when you’re too old.

The bugs are pretty straightforward: a bunch of little black dots whose movements are driven by Perlin noise. However, I think I need to improve them in the future, because when you look closely, you can see that these bugs are nothing more than little black dots. Maybe some (animated?) images of bugs would work better.
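For reference, a bug like that can be as simple as a dot whose heading is read from a noise field (a sketch; it assumes a Perlin noise(x, y, t) function in the 0..1 range, such as the one p5.js provides):

```js
// A "bug" is just a dot that wanders along a Perlin-noise flow field.
class Bug {
  constructor(x, y) { this.x = x; this.y = y; }
  update(t) {
    const angle = noise(this.x * 0.01, this.y * 0.01, t) * Math.PI * 4;
    this.x += Math.cos(angle) * 1.5;
    this.y += Math.sin(angle) * 1.5;
  }
  draw(ctx) {
    ctx.fillStyle = 'black';
    ctx.fillRect(this.x, this.y, 3, 3);   // ...which is why they read as little black dots
  }
}
```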

The falling teeth have two parts: the first is the particle effect of an individual tooth falling, and the second masks out the teeth that have already fallen.

I liked the visual effect of teeth coming off one by one, but sometimes the facial landmark detection is inaccurate around the mouth, and you can sort of see your real teeth behind the misaligned mask. I probably need to think of some more sophisticated algorithms.

^ left: teeth mask, right: teeth falling. Another thing I need to improve is the color of the teeth. Currently I assume some yellowish color, but a better way would probably be to sample the user’s real teeth color, or, more easily, to filter all the remaining teeth toward this yellowish color.

Step 6: Timing & Framing

I adjusted the speed and pace of the aging process, so at first the viewers are almost looking into a mirror, with nothing strange happening. Only slowly and gradually do they realize something is wrong. Finally, when they’re dead, the corpse decays quickly and everything fades into darkness.

I also wanted something more special than the default 640x480px window. I thought maybe a round canvas would remind people of a mirror.

I made the camera follow the head, because it is what really matters and what I would like people to focus on. It also looks nicer as a picture, when there isn’t a busy background.
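The head-following framing can be done by translating the drawing context so that a smoothed face center stays in the middle of the round canvas (a sketch with assumed variable names; the face center is assumed to come from the tracker in video pixel coordinates):

```js
// Keep the (smoothed) face center in the middle of the canvas.
let camX = 0, camY = 0;

function drawFrame(ctx, video, faceCenter) {
  camX += (faceCenter.x - camX) * 0.1;     // low-pass filter so the "camera"
  camY += (faceCenter.y - camY) * 0.1;     // glides rather than jitters
  ctx.save();
  ctx.translate(ctx.canvas.width / 2 - camX, ctx.canvas.height / 2 - camY);
  ctx.drawImage(video, 0, 0);              // the face lands in the center of the mirror
  ctx.restore();
}
```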

Step 7: Performance

I decided to read a poem for my performance. I thought my piece wouldn’t need too many performative kinds of motions, and that some quiet poem reading would best bring out the atmosphere.

I had several candidates, but the poem I finally picked is called “Dew on the Scallion (薤露)”. It was a mourning song written in 202 BC in ancient China.

薤上露,何易晞。

露晞明朝更复落,

人死一去何时归!

 

蒿里谁家地,

聚敛魂魄无贤愚。

鬼伯一何相催促,

人命不得少踟蹰。

I don’t think anyone has translated it to English yet, so here’s my own attempt:

The morning dew on the scallion,

how fast it dries!

it dries only to form again tomorrow,

but when will he come back, when a person dies?

 

Whose place is it, this mountain burial ground?

yet every soul, foolish or wise, rests there.

And with what haste Death drives us on,

with no chance of lingering anywhere.

Byproduct: face-paint

Try it out: https://face-paint.glitch.me

While making my project, I found that painting on top of your own face is also quite fun. So I made a little web app that lets you do that.

ulbrik-mask

My Two Other Faces

Glitch Demo

This performance is an investigation of the potential existence of multiple personalities within ourselves. It features an exploration of the two other people generated from the left or right sides of my face in a digital mirror. The program I created for the performance can take one half of my face and reflect it over the centerline to cover the other half and create a new face entirely from one side.

Website for doing something similar with static photos: http://www.pichacks.com

I was inspired by reading about occasional emotional asymmetry of the face and the differences between our right and left brains. I had also seen reflected versions of my face before as pictures and found it fascinating to imagine these other people trapped inside me. I was curious to meet a live action version of them and see if they embodied half of my personality.

Illustration of reflection over pose/orientation line

To create this project, I used the FaceOSC openFrameworks app along with an example Processing app as a jumping-off point to create a digital mirror with face tracking. I cut out half the face, then reflected, rotated, scaled, and positioned it back onto the other side of the face. I included controls to switch between the different possible faces and to make minor rotation and scale adjustments for fine-tuning to the crookedness of your face. It functions best if you do not twist your face.
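The core of the mirroring can be sketched like this (shown in JavaScript canvas purely for illustration; the actual project used FaceOSC with Processing, and faceCenterX / faceAngle would come from the tracker):

```js
// Build a face from two copies of its right half: draw the right half,
// then draw it again flipped horizontally about the face's centerline.
// Assumes the video and canvas share the same dimensions.
function mirrorRightHalf(ctx, video, faceCenterX, faceAngle) {
  const w = ctx.canvas.width, h = ctx.canvas.height;
  ctx.save();
  ctx.translate(faceCenterX, h / 2);
  ctx.rotate(faceAngle);                    // align the axis with the head's roll
  // Right half, drawn in place.
  ctx.drawImage(video, faceCenterX, 0, w - faceCenterX, h, 0, -h / 2, w - faceCenterX, h);
  // The same half again, flipped across the centerline.
  ctx.scale(-1, 1);
  ctx.drawImage(video, faceCenterX, 0, w - faceCenterX, h, 0, -h / 2, w - faceCenterX, h);
  ctx.restore();
}
```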

My face composed of two right faces

Jackalope-mask

Hmm, so the initial idea was to make a “mask” that would transform me into the Warrior cat-sona of my childhood (see here if you’re unfamiliar with Warriors). The relationship between the mask and the performance is that individual features of the mask are triggered by certain motions.

There’s a lot I wish I could have done but couldn’t, as I realized while progressing through this project. It seems p5.js can’t load the .mtl files associated with .obj models, hence the mask is colorless. It also seemed like p5.js couldn’t load more than one model at a time (I think????), so the performance and implementation ended up revolving around switching between different features. You can see from the sketch below that it looks pretty different from the final result.

Overall not a favorite project, since it’s not particularly polished or attractive, but I had fun haha. With more time I’d like to switch away from p5.js and also add the ability for the mask to rotate with my head.

Initial planning:

lumar-mask

A day in the life of Me, Myself, and I

Concept: I wear a mask that replaces everyone’s face with my own (pass-through VR in a modified Cardboard).

Some personal drivers for this assignment: yes, the prompt is a “digital mask” that we perform with, but how do we take it out of the screen? I wanted to push myself to take advantage of the real-time aspect of this medium (face-tracking software) – perhaps something that couldn’t be done better through post-processing in video software, despite the deliverable being a video performance.

My personal critique, though, is that I got so caught up in these thoughts, and in how to make the experience of performing compelling, that I neglected how it would come off as a performance to others. It feels like the merit/allure of the end idea (an egocentric world where you see yourself in everyone) gets lost when the viewer is watching someone else immersed in their own “bubble”. What the performer sees (everyone with their own face) is appealing to them personally, but visually uninteresting to anyone else.

Where to begin?

Brain dump onto the sketch book. Go through the wonderful rabbit hole of links Golan provides. And start assessing tools/techniques available.

Some light-projection ideas (light projection has an allure in the quality of light that isn’t as easily recreated in post-processing; projecting a face of light onto a face has a surreal quality, blurring the digital with the physical).

And in case a performance doesn’t have time to be coordinated:

Pick an idea —

I’ll spend a day wearing a Google Cardboard with my phone running pass-through VR, masking everyone I see by replacing their face with my own. A world of me, myself, and I. Talk about mirror neurons!

Some of the resources gathered:

An openFrameworks ARKit add-on to go with a sample project using ARKit face tracking (which doesn’t work, or at least I couldn’t figure out a null-vertex issue)

openFrameworks iOS face tracking without the ARKit add-on, by Kyle McDonald

^ the above needs an alternative approach to OpenCV / the openFrameworks CV add-on

OK, so ideally it’d be nice to track multiple faces at once – and Tatyana found a beautiful SDK that does that

CreateJS is a nice suite of libraries that handles animation, canvas rendering, and preloading in HTML5

Start —

— with a to do list.

  1. Make/get a cardboard headset to wear
  2. Brainstorm technical execution
  3. Audit software/libraries to pick out how to execute
  4. Get cracking:
    1. Face detection on the REAR-facing camera (not the TrueDepth front-facing one), ideally in iOS
    2. Get landmark detection for rigging the mask to features
    3. Ideally, get estimated lighting… see if the TrueDepth front-facing face mask can be estimated from the rear
    4. Either just replace the 2D texture of the estimated 3D mask from #3 with my face, OR… shit… if only #2 is achieved… make a 3D model of my face with texture and then try to put it in AR attached to the pose estimation of the detected face
    5. Make sure to render in the Cardboard bifocal view (figure out how to do that)
get some straps
And of course googley eyes

 

…sad. What? NooooooooooooooooooooooooooooooooooooooOOoo…….

Re-evaluate life because the to do list was wrong and Murphy’s law strikes again —

Lol. Maybe. Hopefully it doesn’t come to this part. But I have a bad feeling…that I’ll have to shift back to javascript

Converting a JPG to base64 and using it as a texture… but only the eyebrows came in… lol… how to do life

OK, so it turns out my conversion to base64 was off. I should just stick with using an online converter, like so –

This method of UV-wrapping a 2D texture isn’t particularly satisfying, though, despite lots of trial and error with incremental tweaks and changes. The craft is still low enough that it distracts from the experience, so either I get it high enough that it doesn’t detract, or I do something else entirely…

Two roads diverge…

Option 1: Bring the craft up higher…how?

  • Sample the video feed for ‘brightness’, namely the whites of the eyes in a detected face, and use that as a normalizing point to adapt the brightness of the overlaid image texture
  • Replace the entire face with a 3D model of my own face? (Where does one go to get a quick-and-dirty but riggable model of one’s face?)
  • …Set proportions? Maybe keep it from being quite so adaptable? The fluctuations very easily break the illusion
  • Make a smoother overlay by image cloning! Thank you again Kyle for introducing me to this technique – it’s about combining the high frequencies of the overlay with a low-frequency (blurred) fragment of the bottom layer in a shader. He and his friend Arturo made

Option 2: Something new:

 

 

Made another set of straps to hold the phone to the front of the face


I was inspired by Tatyana’s idea of making a ‘Skype’ add-on feature that would shift the eyes of the caller to always appear to be looking directly at the ‘camera’ (i.e., the other person on the end of the call). Only in this case, I’d have a “sleeping mask” strapped to my face to disguise my sleepy state as wide awake and attentive. The front-facing camera would run face detection on the camera view and move my eyes to ‘look’ like I’m staring directly at the nearest face in my field of view. When there’s no face, I could use computer-vision-calculated optical flow (as inspired by Eugene’s use of it for last week’s assignment) to get the eyes to ‘track’ movement and ‘assumed’ points of interest. The interesting bit would be negotiating what I should be looking at: at what size of a face do I focus on it more than on movement, or vice versa?

And the performance could be so much fun! I could just go and fall asleep in a public place, with my phone running the app to display the reactive eyes and the front-facing camera recording people’s reactions. Bwahahhaha, there’s something really appealing about crowdsourcing the performance from everyone else’s reactions, making unintentional performers out of the people around me.

And I chose the original path

A Japanese researcher has already made a wonderful iteration of the aforementioned sleeping-mask eyes.

Thank you Kyle McDonald for telling me about this!

And then regret it —

So switching from previewing on localhost on the desktop to a mobile browser came with some differences. An hour later, it turned out it was because I was serving the page over HTTP instead of over an encrypted connection, HTTPS. No camera feed for the nonsecure!

Next step: switching over to the rear-facing camera instead of the default front-facing one!

Damn. Maybe I should have stuck to Native

Read up on how to access different cameras on different devices and best practices for streaming camera content! In my case, I was trying to get the rear-facing camera on mobile, which turned out to be rather problematic, often causing the page to reload “because a problem occurred” that never got specified in the console… hmmm…

hmmm….

 

I swear it works ok on desktop. What am I doing wrong with mobile?
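For the record, the standard way to ask the browser for the rear camera is the facingMode constraint (this still has to be served over HTTPS, and, as noted above, behavior varies by device):

```js
// Request the rear ("environment") camera; fall back to any camera if
// the exact constraint isn't available on this device.
async function startRearCamera(videoElement) {
  let stream;
  try {
    stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: { exact: 'environment' } },
      audio: false
    });
  } catch (err) {
    console.warn('Rear camera unavailable, falling back:', err);
    stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  }
  videoElement.srcObject = stream;
  await videoElement.play();
}
```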

Anywhoooooooooo — eventually things sort themselves out, but next time? Definitely going to try and go native.

aahdee – 2D Physics

The inspiration for this project was my love for the rain and the snow. I find them to be rather calming and I just enjoy experiencing that weather. Thus, my goal for this project was to create a window to that experience.

Drawing on Dan Shiffman’s The Nature of Code, I used the Box2D for Processing library to create this. It was a bit technically challenging, since it had been a while since I last used Processing, and the trigonometry of Box2D’s world was hard to get right. Nevertheless, I feel that my goals for this project were met, though I think it would be much stronger if I created or found some ambient music to pair with it.

I wasn’t sure if I wanted this project to be horizontal or vertical. I like both orientations, but if I were to display this on a wall I would use a very long horizontal orientation.

conye – 2D Physics

Dinfinite Dog

A dog hanging from a chain of hands, where the in-game gravity is controlled by the gyroscope data of the mobile device. On impact with the walls, small creatures pop out!

Although I had originally wanted to build a dog-builder game (depicted below), I ended up with something rather different. I’m still happy with how it turned out, because I learned a lot about Unity’s physics engine and still made something that I find novel to interact with. At first, I wasn’t fond of how the dog’s body parts could move out of place relative to each other and distort the body, but now I’m happy that it adds an element of surprise and makes the dog more dynamic.

My rough final draft of the project didn’t include the fluffy, springy walls and instead had rigid, unmoving walls. That iteration of the app looked very violent because it was essentially just a dog smashing against the walls. However, I received a lot of good feedback during crit (such as fluffy walls + springs on the wall colliders + a dog face blinking animation) that made the game a lot more dog-friendly.

Thank you to Lukas for help with the dog’s torso shader, to Tat for her help on the visual look, and to Grey and Aman for their advice on the swinging dog!

Documentation pictures

I had originally hoped to make a dog-builder application, as depicted below. Because it was physics-based, I thought the idea of using the real mobile device’s orientation to influence gravity would be a neat interaction, so I built most of my program around the idea of a blob of flesh swinging around on a rope. Users would then be able to add dog parts to the blob of flesh, making many permutations of a dog.

Early prototype of the app; I built the dog’s body from colliders in Unity.

Some assets I drew for this app.

jamodei-2Dphysics

Interpellation Mouth Tracker

My goals for this project were to work with a live video feed, find an amusing way to interact with the matter.js physics system, and introduce myself to working with p5.js/JavaScript.

 

“Interpellation is a process, a process in which we encounter our culture’s values and internalize them. Interpellation expresses the idea that an idea is not simply yours alone (such as ‘I like blue, I always have’) but rather an idea that has been presented to you for you to accept. ”

 

What I ended up making is a face-tracking interaction (using clmtrackr.js) where every time the viewer opens their mouth, the word ‘interpellation’ spills out and bounces around until the mouth closes. I thought this would be a humorous way to engage with a relatively simple physics interaction. The words have a floor and ceiling they interact with, and I was able to get moderate control over the manner/speed/animation style in which they bounced around and interacted with the other word rectangles. Overall, I wish I had been able to get further into physics systems, but I am happy with the progress I made in this (to me) new creative coding environment. I am also interested in the future possibilities of exploring face tracking.
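The core of that interaction in matter.js looks roughly like this (a sketch under the assumption that mouth openness and position come from clmtrackr each frame; the body sizes, positions, and thresholds are made up):

```js
// Assumes matter.js is loaded (e.g. via a <script> tag exposing `Matter`).
const { Engine, Bodies, Composite } = Matter;

const engine = Engine.create();
const floor   = Bodies.rectangle(320, 480, 640, 20, { isStatic: true });
const ceiling = Bodies.rectangle(320, 0,   640, 20, { isStatic: true });
Composite.add(engine.world, [floor, ceiling]);

const words = [];

// Called every frame with the mouth openness from the face tracker
// (assumed normalized 0..1) and the current mouth position in pixels.
function update(mouthOpenness, mouthX, mouthY) {
  if (mouthOpenness > 0.4) {
    // Spill another "interpellation" rectangle out of the open mouth.
    const word = Bodies.rectangle(mouthX, mouthY, 120, 24, { restitution: 0.6 });
    words.push(word);
    Composite.add(engine.world, word);
  }
  Engine.update(engine, 1000 / 60);
  // ...then draw each body in `words` as a rectangle with the text on it.
}
```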

 

GIF documentation: