yeen-mask

I made a “texting” app that reads your text out loud based on your facial expression. The idea was triggered by the thought that texting software allows people to be emotionally absent. But what if a texting app required users to be emotionally present all the time, by reading your text out loud, or even sending your text, the way your face looks when you type behind the screen?

I started by exploring ARKit face tracking on the iPhone X. Then I combined facial features with the speech synthesizer, manipulating the pitch and rate of each sentence. Things about your face that change the sound include (a rough code sketch follows the list):

rounder eyes – slower speech

squintier eyes – faster speech

more smiley – higher pitch

more frowny – lower pitch

jaw wide open – inserts “haha”

tongue out – inserts “hello what’s up”

wink – inserts “hello sexy”
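Here is a minimal sketch of how a mapping like this can be wired up with ARKit blend shapes and AVSpeechSynthesizer. The thresholds, scaling constants, and the `speak(_:with:)` helper are my assumptions for illustration, not the app’s actual code:

```swift
import ARKit
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Speak `text`, shaping the voice with the current face anchor.
func speak(_ text: String, with anchor: ARFaceAnchor) {
    let shapes = anchor.blendShapes

    // Average a left/right blend-shape pair into a single 0...1 value.
    func pair(_ left: ARFaceAnchor.BlendShapeLocation,
              _ right: ARFaceAnchor.BlendShapeLocation) -> Float {
        ((shapes[left]?.floatValue ?? 0) + (shapes[right]?.floatValue ?? 0)) / 2
    }

    let eyeWide = pair(.eyeWideLeft, .eyeWideRight)     // rounder eyes
    let squint  = pair(.eyeSquintLeft, .eyeSquintRight) // squintier eyes
    let smile   = pair(.mouthSmileLeft, .mouthSmileRight)
    let frown   = pair(.mouthFrownLeft, .mouthFrownRight)

    var text = text
    if (shapes[.jawOpen]?.floatValue ?? 0) > 0.7   { text += " haha" }
    if (shapes[.tongueOut]?.floatValue ?? 0) > 0.5 { text += " hello what's up" }
    // A wink: one eye closed while the other stays open.
    let blinkL = shapes[.eyeBlinkLeft]?.floatValue ?? 0
    let blinkR = shapes[.eyeBlinkRight]?.floatValue ?? 0
    if abs(blinkL - blinkR) > 0.5 { text += " hello sexy" }

    let utterance = AVSpeechUtterance(string: text)
    // Rounder eyes slow the voice down, squinting speeds it up.
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate + 0.3 * (squint - eyeWide)
    // Smiling raises the pitch, frowning lowers it (pitchMultiplier is 0.5...2.0).
    utterance.pitchMultiplier = 1.0 + 0.8 * smile - 0.4 * frown
    synthesizer.speak(utterance)
}
```

In practice this would be called from the `ARSessionDelegate` (or `ARSCNViewDelegate`) update callback, where the latest `ARFaceAnchor` is available when the user sends a message.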

Process:

Through lots of trial and error, I changed many things. For example, I initially had the opposite mapping: rounder eyes made it speak faster, squintier eyes slower. But during testing I found that the other way around feels more natural…

My performance is a screen recording of me using the app.


In the final version, I added a correlation between the hue of the virtual glasses and the expression: the sum of the pitch and rate values drives the hue.
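A minimal sketch of that color mapping, assuming the glasses are tinted with a `UIColor`; the normalization range is my guess based on the voice parameter ranges in the sketch above:

```swift
import UIKit

// Map the current voice parameters to a hue for the glasses.
// The 0.4...2.8 range is an assumption: rate is roughly 0.2...0.8
// and pitchMultiplier roughly 0.5...2.0 in the earlier sketch.
func glassesColor(rate: Float, pitch: Float) -> UIColor {
    let sum = rate + pitch
    let hue = CGFloat((sum - 0.4) / (2.8 - 0.4)) // normalize to 0...1
    return UIColor(hue: min(max(hue, 0), 1),
                   saturation: 0.9, brightness: 1.0, alpha: 1.0)
}
```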

Sketches:

Credits to Gray Crawford, who helped me extensively with the visual elements in Xcode!