I used Kyle McDonald’s syphonFaceOSC app to create a Processing sketch that fills the viewer’s mouth with text that dynamically resizes to fit their mouth. Every time the mouth closes, the word changes to the next one in the sequence. This piece resides in an interesting juncture between kinetic type, subtitles, and lip reading. Now that I have built this tool, I intend to brainstorm ideas for how I could use it to make a finished piece or performance. I am interested in juxtaposing the spoken and written word. I am also interested in finding out whether this has any applications to assist deaf or hard-of-hearing viewers.
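The word-advance behavior above amounts to edge detection on the mouth state: the word changes only on the frame the mouth transitions from open to closed. A minimal sketch in Java, assuming a `mouthHeight` value arriving from FaceOSC; the class name and threshold constant are mine, not the original code:

```java
// Hypothetical sketch: advance through a word list each time the mouth closes.
public class MouthWords {
    static final float CLOSED_THRESHOLD = 2.0f; // assumed FaceOSC mouthHeight units
    private boolean wasOpen = false;
    private int index = 0;
    private final String[] words;

    public MouthWords(String[] words) {
        this.words = words;
    }

    // Call once per frame with the latest mouthHeight; returns the word to show.
    public String update(float mouthHeight) {
        boolean open = mouthHeight > CLOSED_THRESHOLD;
        if (wasOpen && !open) {                  // falling edge: mouth just closed
            index = (index + 1) % words.length;  // advance to the next word
        }
        wasOpen = open;
        return words[index];
    }
}
```

Because the advance fires only on the open-to-closed transition, holding the mouth closed for many frames still changes the word just once.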
FaceOSC head orientation -> Processing -> Java Robot class -> types commands rapidly while using Rhino
I like Rhino 3D. It is a very powerful NURBS modeler. There are certain commands, specifically join/explode, group/ungroup, and trim/split, which are used all the time. To execute these commands one has to either click a button or type the command and press enter. Both take too long (or I’m lazy).
So I made this thingy that detects various head motions and triggers Rhino commands. Processing takes in data about the orientation of the head about the x, y, and z axes. Each signal has a running average, a relative threshold above and below that average, and a time zone (a minimum and maximum duration) within which a signal pattern can be considered a trigger. The required pattern is simple: the signal must cross the threshold and then return, and the time it takes to do so must fall within the time zone. In the video there are three graphs on the right side of the screen; from the top, they show x, y, and z. The light blue horizontal lines represent the relative threshold (+ and -). The thin orange line is the running average. The signal is dark blue when in bounds, light blue when below, and purple when above. The gray rectangles approximate the time zone, with the vertical black line at zero (it really should be at the right edge of each graph, but that seemed too cluttered).
Sometimes it’s rather glitchy, especially in the video: the screen grab makes things run slowly. The x- and y-axis triggers are also often confused with each other, so I have to hold my head pretty still; more effective signal processing would help. It would be awesome to combine various triggers to get more commands, but this would be rather difficult. I did set up the structure so that combinations of triggers on different channels (like eyebrows, mouth, and jaw) could code for specific commands.
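The cross-and-return pattern described above can be sketched as a small state machine: track a running average of the signal, note the time when it leaves the threshold band, and fire only if it returns within the time window. A hedged Java sketch; the smoothing factor, the names, and the choice to freeze the average during an excursion are my assumptions, not the original code:

```java
// Hypothetical gesture trigger: fire when the signal crosses a threshold
// relative to its running average and returns within a time window.
public class GestureTrigger {
    private final float threshold;     // offset above/below the running average
    private final long minMs, maxMs;   // time window for a valid excursion
    private final float alpha = 0.05f; // assumed smoothing factor for the average
    private float average = Float.NaN;
    private long crossedAt = -1;       // timestamp when the signal left bounds

    public GestureTrigger(float threshold, long minMs, long maxMs) {
        this.threshold = threshold;
        this.minMs = minMs;
        this.maxMs = maxMs;
    }

    // Feed one sample per frame; returns true when a cross-and-return
    // excursion fits inside the [minMs, maxMs] window.
    public boolean update(float value, long nowMs) {
        if (Float.isNaN(average)) average = value;   // seed on first sample
        boolean outOfBounds = Math.abs(value - average) > threshold;
        boolean fired = false;
        if (outOfBounds && crossedAt < 0) {
            crossedAt = nowMs;                       // excursion starts
        } else if (!outOfBounds && crossedAt >= 0) {
            long elapsed = nowMs - crossedAt;        // excursion ends
            fired = elapsed >= minMs && elapsed <= maxMs;
            crossedAt = -1;
        }
        // Update the average only while in bounds, so a gesture
        // does not drag its own baseline along with it.
        if (!outOfBounds) average += alpha * (value - average);
        return fired;
    }
}
```

One trigger instance per channel (x, y, z, eyebrows, mouth, jaw) would give the per-channel structure described above; combining simultaneous fires is where it gets harder.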
For my FaceOSC project I created a bubble wand that you control with your face. The center of the wand is mapped to the center of your face, so it follows your face as you move. To blow a bubble, you move your mouth the same way you would to blow a bubble through a physical bubble wand. The longer you blow, the bigger the bubble gets. When you relax your mouth, the bubble is released from the wand and floats freely. There are three wands, each with a different shape (circle, flower, and star); you can switch between them by raising your eyebrows.
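The grow-and-release behavior could be sketched as follows, assuming the "blowing" pose is detected as a narrow but open mouth; the thresholds and growth rate here are illustrative guesses, not the original values:

```java
// Hypothetical bubble-wand logic: the bubble grows while the user holds
// a blowing pose, and detaches when the mouth relaxes.
public class BubbleWand {
    static final float NARROW = 12f;     // assumed mouthWidth below this = pucker
    static final float OPEN = 2f;        // assumed mouthHeight above this = blowing
    static final float GROWTH = 0.5f;    // assumed radius gain per frame

    private float radius = 0f;           // bubble currently on the wand
    private float released = 0f;         // radius of the last bubble that floated off

    // Call once per frame with FaceOSC mouth values.
    public void update(float mouthWidth, float mouthHeight) {
        boolean blowing = mouthWidth < NARROW && mouthHeight > OPEN;
        if (blowing) {
            radius += GROWTH;            // the longer you blow, the bigger it gets
        } else if (radius > 0f) {
            released = radius;           // relax: bubble detaches and floats away
            radius = 0f;
        }
    }

    public float wandRadius() { return radius; }
    public float lastReleased() { return released; }
}
```

A released bubble would then be handed off to a simple float-upward particle, and the eyebrow-raise wand switch could reuse the same edge-detection idea as the other sketches on this page.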
Below is a video demonstrating and explaining my project. You can download the source code here.
My FaceOSC project uses Open Sound Control to transmit the parameters of my face over the network to my Processing sketch. The sketch uses the eyebrow and mouth data to determine the size of the alpha mask over an image of Spongebob Squarepants. Only when I am closest to the camera and my eyebrows and mouth are ridiculously wide open can I see the image in its entirety, at which point Spongebob’s laughter is triggered. Is he laughing at you, or with you?
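The mask sizing could be sketched like this, assuming the mask radius shrinks as the eyebrows and mouth open and that the laugh fires only when the mask reaches zero; the ranges, the combined-openness measure, and all names are my guesses, not the original code:

```java
// Hypothetical alpha-mask sizing: more facial openness reveals more image.
public class LaughMask {
    // Re-implementation of Processing-style map() for a standalone example.
    static float map(float v, float a, float b, float c, float d) {
        return c + (d - c) * (v - a) / (b - a);
    }

    // Larger openness -> smaller opaque mask -> more of the image visible.
    public static float maskRadius(float eyebrowHeight, float mouthHeight) {
        float openness = eyebrowHeight + mouthHeight;  // crude combined measure
        float r = map(openness, 0f, 20f, 300f, 0f);    // assumed ranges, pixels
        return Math.max(0f, Math.min(300f, r));        // clamp to valid radii
    }

    // The laugh sound fires only once the image is fully revealed.
    public static boolean laughTriggered(float eyebrowHeight, float mouthHeight) {
        return maskRadius(eyebrowHeight, mouthHeight) == 0f;
    }
}
```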
This is an interesting interface that lets you change your face like a Sichuan Opera pro. You can also learn more about the story and history behind each face: just scan the QR code below with your cell phone.
The QR code in the bottom left links to a crowdsourced knowledge bank where you can learn about, or contribute knowledge about, a specific role in the opera. The idea is to provide simple and instant access to that knowledge, but currently it is just my Facebook photo album. : P
A hamburger eating contest using FaceOSC and Processing. The sketch tracks the user’s mouth height until it passes a threshold; when you close your mouth again, you take a bite out of the hamburger. The image sequence is stored in an array and indexed by a bite counter.
Future work would be to track head position and show each bite at the corresponding location on the hamburger through image masking.
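The bite counter described above is another open-then-close edge detector: the mouth must first pass the open threshold, and the bite lands when it closes again. A minimal Java sketch, assuming a `mouthHeight` value from FaceOSC; the threshold and names are illustrative:

```java
// Hypothetical bite counter: each open-then-close cycle advances one frame
// in the burger image sequence.
public class BurgerBites {
    static final float OPEN_THRESHOLD = 4.0f; // assumed mouthHeight units
    private boolean openedWide = false;
    private int bites = 0;
    private final int totalFrames;            // length of the image sequence

    public BurgerBites(int totalFrames) {
        this.totalFrames = totalFrames;
    }

    // Call once per frame; returns the index into the burger image sequence.
    public int update(float mouthHeight) {
        if (mouthHeight > OPEN_THRESHOLD) {
            openedWide = true;                // armed: mouth opened past threshold
        } else if (openedWide) {
            openedWide = false;               // mouth closed again: take a bite
            bites = Math.min(bites + 1, totalFrames - 1);
        }
        return bites;                         // frame to draw, e.g. burger[bites]
    }
}
```

Clamping at `totalFrames - 1` keeps the counter on the last (fully eaten) image once the sequence runs out.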
What I did was map the mouthWidth and mouthHeight values received from FaceOSC to RGB values in Processing and draw on the canvas. The brush shape matches your mouth exactly, and the color changes with your mouth’s open-close motion. So basically I turned the mouth into a paintbrush, and the drawings don’t look too bad. :)
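The mapping could be sketched as below, assuming mouthWidth drives the red channel and mouthHeight the blue; the input ranges and channel choices are my guesses, not values from the original sketch:

```java
// Hypothetical mouth-to-color mapping for the mouth paintbrush.
public class MouthBrush {
    // Re-implementations of Processing's map() and constrain()
    // so the example stands alone outside a sketch.
    static float map(float v, float a, float b, float c, float d) {
        return c + (d - c) * (v - a) / (b - a);
    }

    static float constrain(float v, float lo, float hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Map mouthWidth (assumed range 10..18) to red and mouthHeight
    // (assumed range 0..8) to blue; green is fixed for a stable hue.
    public static int[] mouthColor(float mouthWidth, float mouthHeight) {
        int r = (int) constrain(map(mouthWidth, 10f, 18f, 0f, 255f), 0f, 255f);
        int b = (int) constrain(map(mouthHeight, 0f, 8f, 0f, 255f), 0f, 255f);
        return new int[] { r, 100, b };
    }
}
```

In the sketch itself this would feed straight into `fill(r, g, b)` before drawing the mouth-shaped brush mark.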
It can be tough sometimes choosing what you want to eat. Fret no more, I’ve found a solution. I took several recipes and photos of the food from the web and used FaceOSC to create a program that randomly chooses a meal and shows how to make it. I programmed it so that the food is randomized while your eyebrows are raised and stops when they return to their normal position. However, I (unintentionally) found out later that blinking also works. Once you are filled with joy because you are satisfied with the food the program has chosen for you, you smile, and the recipe for that specific food appears. You have to keep smiling, though, in order to read the recipe. (How else would the program know how happy you are with the choice being made for ya!?)
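The two controls above (re-roll while the brows are raised, show the recipe only while smiling) could be sketched like this, assuming eyebrow-height and mouth-width values from FaceOSC; all thresholds and names are made up for illustration:

```java
import java.util.Random;

// Hypothetical food randomizer: eyebrows re-roll the meal, a smile
// (wide mouth) keeps the recipe on screen.
public class FoodPicker {
    static final float BROW_RAISED = 8.5f;  // assumed eyebrow height threshold
    static final float SMILE_WIDTH = 16f;   // assumed mouthWidth threshold
    private final String[] meals;
    private final Random rng;
    private int choice = 0;

    public FoodPicker(String[] meals, long seed) {
        this.meals = meals;
        this.rng = new Random(seed);
    }

    // Call once per frame; while the brows are raised the meal keeps
    // re-rolling, and it settles when they return to normal.
    public String update(float eyebrowHeight, float mouthWidth) {
        if (eyebrowHeight > BROW_RAISED) {
            choice = rng.nextInt(meals.length);
        }
        boolean showRecipe = mouthWidth > SMILE_WIDTH;
        return showRecipe ? meals[choice] + " (recipe shown)" : meals[choice];
    }
}
```

Because a blink briefly moves the tracked eyebrow points, it can cross `BROW_RAISED` too, which matches the accidental blink behavior described above.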
Here’s the code: https://github.com/pattvira/faceOSC_food
For my FaceOSC implementation I used Processing and Dan Wilcox’s Face class. My sketch is a 3D robot that can be controlled by rotating your face about the x, y, and z axes and by scaling along the z-axis. The robot’s eyes are semi-independently articulated by the eyebrow properties of the face object. Additionally, the robot’s eyes glow red when the user opens their mouth fully. See the video below for a working demo.