Category Archives: face-osc

Caroline

29 Jan 2013


I used Kyle McDonald's syphonFaceOSC app to create a Processing sketch that fills the viewer's mouth with text that dynamically resizes to fit their mouth. Every time the mouth closes, the word changes to the next one in the sequence. This piece sits at an interesting juncture between kinetic type, subtitles, and lip reading. Now that I have built this tool, I intend to brainstorm ideas for how I could use it to make a finished piece or performance. I am interested in juxtaposing the spoken and the written word, and in finding out whether this has any applications for assisting the deaf.
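As a rough illustration of the mechanic (not Caroline's actual code, which is in the repo below), here is a minimal Processing sketch along these lines, assuming FaceOSC's default port and OSC addresses; the word list, thresholds, and scale factor are placeholders:

import oscP5.*;

OscP5 oscP5;
String[] words = { "spoken", "versus", "written", "word" };  // placeholder sequence
int wordIndex = 0;
float mouthWidth, mouthHeight;
PVector facePos = new PVector(320, 240);  // face center stands in for the mouth position
boolean wasOpen = false;

void setup() {
  size(640, 480);
  textAlign(CENTER, CENTER);
  oscP5 = new OscP5(this, 8338);  // FaceOSC's default port
}

void draw() {
  background(0);
  boolean open = mouthHeight > 3;  // open/closed threshold, tunable
  if (wasOpen && !open) {
    wordIndex = (wordIndex + 1) % words.length;  // advance the word on mouth close
  }
  wasOpen = open;

  // scale the text so its width roughly tracks the mouth width
  String word = words[wordIndex];
  textSize(48);
  float scaleFactor = (mouthWidth * 10) / max(textWidth(word), 1);  // factor of 10 is a guess
  textSize(48 * scaleFactor);
  fill(255);
  text(word, facePos.x, facePos.y);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/width"))  mouthWidth  = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
  if (m.checkAddrPattern("/pose/position")) {
    facePos.set(m.get(0).floatValue(), m.get(1).floatValue());
  }
}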

Code on GitHub: https://github.com/crecord/faceOSC

Joshua

29 Jan 2013

FaceOSC head orientation -> Processing -> Java Robot class -> types commands rapidly while using Rhino

I like Rhino 3D. It is a very powerful NURBS modeler. Certain commands, specifically Join/Explode, Group/Ungroup, and Trim/Split, are used all the time. To execute them one has to either click a button or type the command and press enter. Both take too long, and I'm lazy.

So I made this thingy that detects various head motions and triggers Rhino commands. Processing takes in data about the orientation of the head about the x, y, and z axes. Each signal has a running average, a relative threshold above and below that average, and a time window (min and max time) within which a signal pattern counts as a trigger. The required pattern is simple: the signal must cross the threshold and then return, and the time this takes must fall within the window. In the video there are three graphs on the right side of the screen; from the top they show x, y, and z. The light blue horizontal lines represent the relative threshold (+ and -). The thin orange line is the running average. The signal is dark blue when in bounds, light blue when below, and purple when above. The gray rectangles approximate the time window, with the vertical black line as zero (it really should be at the right edge of each graph, but that seemed too cluttered).
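The trigger logic described here can be sketched as a small Processing class, one instance per orientation axis. The smoothing factor, threshold, and time window below are assumptions, not Joshua's tuned values:

// one detector per signal channel (x, y, or z head orientation)
class TriggerChannel {
  float avg = 0;                     // running average of the signal
  float smoothing = 0.98;            // how slowly the average follows
  float threshold = 2.0;             // relative band above/below the average
  int minTime = 100, maxTime = 600;  // time window (ms) for a valid gesture
  int crossedAt = -1;                // millis() when the signal left the band

  // feed one sample per frame; returns true when the signal crosses
  // out of the band and returns within [minTime, maxTime]
  boolean update(float signal) {
    boolean outside = abs(signal - avg) > threshold;
    boolean fired = false;
    if (outside && crossedAt < 0) {
      crossedAt = millis();              // signal just left the band
    } else if (!outside && crossedAt >= 0) {
      int dt = millis() - crossedAt;     // time spent outside the band
      fired = (dt >= minTime && dt <= maxTime);
      crossedAt = -1;
    }
    // only adapt the average while in bounds, so a deliberate
    // nod or shake doesn't drag the baseline along with it
    if (!outside) avg = smoothing * avg + (1 - smoothing) * signal;
    return fired;
  }
}

Each axis would get its own instance, fed once per frame from FaceOSC's /pose/orientation values; a fired trigger is then handed to java.awt.Robot, which types the Rhino command and presses enter.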


Sometimes it's rather glitchy, especially in the video: the screen grab makes things run slowly. Also, the x and y axis triggers are often confused, so I have to hold my head pretty still; more effective signal processing would help. It would be awesome to combine various triggers into more commands, though that would be rather difficult. I did set up the structure so that combinations of triggers from different channels (like eyebrows, mouth, and jaw) could code for specific commands.

GitHub repo: https://github.com/jlopezbi/faceOSC-Rhino

Erica

28 Jan 2013

For my FaceOSC project I created a bubble wand that you control with your face. The center of the wand is mapped to the center of your face, so it will follow the path your face moves in. To blow a bubble, you move your mouth the same way you would to blow a bubble with a physical bubble wand. The longer you blow, the bigger the bubble gets. When you relax your mouth, the bubble is released from the wand and floats freely. There are three wands, each with a different shape (circle, flower, and star). You can switch between the wands by raising your eyebrows.
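A rough sketch of the blow-and-release mechanic, assuming FaceOSC's default addresses; the thresholds and growth rate are guesses, and the three wand shapes and eyebrow switching are left out:

import oscP5.*;

OscP5 oscP5;
float mouthWidth, mouthHeight;
PVector facePos = new PVector(320, 240);
float bubbleSize = 0;
ArrayList<PVector> bubbles = new ArrayList<PVector>();  // x, y, diameter

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  background(200, 230, 255);
  // a narrow, open mouth reads as "blowing" (both thresholds tunable)
  boolean blowing = mouthHeight > 2 && mouthWidth < 13;
  if (blowing) {
    bubbleSize += 0.5;  // the longer you blow, the bigger it gets
  } else if (bubbleSize > 0) {
    bubbles.add(new PVector(facePos.x, facePos.y, bubbleSize));  // release
    bubbleSize = 0;
  }
  noFill();
  stroke(0);
  ellipse(facePos.x, facePos.y, 40, 40);                  // the wand follows the face
  ellipse(facePos.x, facePos.y, bubbleSize, bubbleSize);  // bubble being blown
  for (PVector b : bubbles) {
    b.y -= 1;                    // released bubbles drift upward
    ellipse(b.x, b.y, b.z, b.z);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/mouth/width"))  mouthWidth  = m.get(0).floatValue();
  if (m.checkAddrPattern("/pose/position")) facePos.set(m.get(0).floatValue(), m.get(1).floatValue());
}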
Below is a video demonstrating and explaining my project. You can download the source code here.

Bubbles from Erica Lazrus on Vimeo.

Alan

28 Jan 2013


The GitHub repo: https://github.com/chinesecold/FaceOSCTwitter

The original idea of this project was to use FaceOSC to capture emotions and let them control your web experience. It combines the ofxOSC and ofxJSON addons to change the way you tweet.

In the video, if you hold your mouth open wide enough, you will see the current top 10 trends from Twitter.
Tweeting itself is currently running into problems with OAuth; I am working on this.
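Alan's app is built in openFrameworks with ofxOSC and ofxJSON, but the gating idea translates to a few lines of Processing. This sketch uses placeholder trend data, since the real version needs an OAuth-signed Twitter request (ofxJSON's job in his app):

import oscP5.*;

OscP5 oscP5;
float mouthHeight;
String[] trends;

void setup() {
  size(640, 480);
  textSize(24);
  oscP5 = new OscP5(this, 8338);
  // placeholder data; the real app fetches and parses the Twitter trends JSON
  trends = new String[] { "#trend1", "#trend2", "#trend3" };
}

void draw() {
  background(0);
  fill(255);
  if (mouthHeight > 4) {  // mouth held open wide enough
    for (int i = 0; i < trends.length; i++) {
      text((i + 1) + ". " + trends[i], 40, 60 + i * 36);
    }
  } else {
    text("Open wide to see the trends", 40, 60);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
}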


Andy

28 Jan 2013

GitHub: https://github.com/andybiar/SpongeFaceOSC

My FaceOSC project uses Open Sound Control to transmit the parameters of my face over the network to my Processing sketch. The sketch uses the eyebrow and mouth data to determine the size of the alpha mask over an image of SpongeBob SquarePants. Only when I am closest to the camera and my face is ridiculously wide open can I see the image in its entirety, at which point SpongeBob's laughter is triggered. Is he laughing at you, or with you?
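A guess at how the mask mechanic could look in Processing: the image is revealed through a circular alpha mask whose radius grows with mouth openness and proximity (pose scale). The file name, mapping range, and sound hookup are placeholders:

import oscP5.*;

OscP5 oscP5;
PImage spongebob;
float mouthHeight, poseScale;

void setup() {
  size(640, 480);
  spongebob = loadImage("spongebob.png");  // placeholder file name
  spongebob.resize(width, height);
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  background(0);
  // mask radius grows with mouth openness and with closeness to the camera
  float r = map(mouthHeight * poseScale, 0, 60, 0, width * 2);
  PGraphics maskG = createGraphics(width, height);
  maskG.beginDraw();
  maskG.background(0);                      // black = hidden
  maskG.noStroke();
  maskG.fill(255);                          // white = visible
  maskG.ellipse(width/2, height/2, r, r);
  maskG.endDraw();
  PImage revealed = spongebob.get();        // copy, so masking doesn't accumulate
  revealed.mask(maskG);
  image(revealed, 0, 0);
  if (r >= width * 2) {
    // image fully revealed: trigger the laughter sample here
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
  if (m.checkAddrPattern("/pose/scale"))           poseScale   = m.get(0).floatValue();
}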


P1C – FaceOSC

This is an interesting interface that enables you to change your face like a Sichuan Opera pro. You can also learn more about the story and history behind each face: just scan the QR code below with your cell phone.

[Screenshot: the interface, with the QR code at the bottom left]

The QR code at the bottom left links to a crowdsourced knowledge bank where you can learn about, or contribute knowledge about, a specific role in the opera. The idea is to provide a simple and instant path to that knowledge, though currently it just points to my Facebook photo album. :P
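One plausible way to build the face-changing overlay in Processing (a sketch under assumptions, not the author's code): transparent PNG masks drawn at the tracked face position, scaled with the pose, and swapped on a keypress:

import oscP5.*;

OscP5 oscP5;
PImage[] masks;
int maskIndex = 0;
PVector facePos = new PVector(320, 240);
float poseScale = 1;
int found = 0;

void setup() {
  size(640, 480);
  masks = new PImage[3];
  for (int i = 0; i < masks.length; i++) {
    masks[i] = loadImage("mask-" + i + ".png");  // placeholder file names
  }
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  background(0);
  if (found > 0) {
    PImage mk = masks[maskIndex];
    float w = mk.width * poseScale * 0.2;   // scale the mask with the face
    float h = mk.height * poseScale * 0.2;  // (0.2 is an arbitrary fudge factor)
    image(mk, facePos.x - w/2, facePos.y - h/2, w, h);
  }
}

void keyPressed() {
  maskIndex = (maskIndex + 1) % masks.length;  // swap to the next opera face
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/found"))         found     = m.get(0).intValue();
  if (m.checkAddrPattern("/pose/scale"))    poseScale = m.get(0).floatValue();
  if (m.checkAddrPattern("/pose/position")) facePos.set(m.get(0).floatValue(), m.get(1).floatValue());
}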

Elwin

28 Jan 2013

A hamburger-eating contest using FaceOSC and Processing. It tracks the user's mouth height until it reaches a threshold; when you close your mouth again, you take a bite out of the hamburger. The image sequence is stored in an array and advanced by a counter.

Future work would be to use the head position to show each bite at the corresponding location on the hamburger through image masking.

GitHub link: https://github.com/BlueSpiritbox/p1_faceosc-hamburgers

import oscP5.*;
OscP5 oscP5;

//our FaceOSC tracked face data
Face face = new Face();

PFont f;
PImage bg, img, icon, startBtn;

String imgString;
int imgIndex = 0;
int imgLast = 8;
int burgerNumber = 3;  //number of hamburgers
float bgRotate = 0;

int startTime;
int currentTime;  
int maxTime = 60;    //time limit
String counter;

boolean playing = false;
boolean eat = false;
boolean readyToEat = false;
boolean finished = false;


void setup () {
  size(600, 480);
  frameRate(60);

  imgString = "hamburger-"+str(imgIndex)+".png";
  img = loadImage(imgString);
  bg = loadImage("bg.png");
  icon = loadImage("icon.png");
  startBtn = loadImage("start-button.png");

  f = createFont("Arial", 24, true);
  currentTime = maxTime;
  oscP5 = new OscP5(this, 8338);
}

void draw() {

  //background + rotation
  pushMatrix(); 
  translate(width/2, height/2);
  rotate(bgRotate*TWO_PI/360);
  bgRotate += 0.5;
  image(bg, -bg.width/2, -bg.height/2);
  popMatrix();


  img.resize(400, 400);  //resizes image
  image(img, (width/2)-(img.width/2), (height/2)-(img.height/2));  //puts hamburger image in the center

  //draw hamburger icons
  for ( int i = 0; i < burgerNumber; i++ ) {
    image(icon, 10 + i*(icon.width + 5), 10);
  }

  if ( !playing ) {
    startButton();  //show the start screen until it's clicked
  }
  else if ( finished ) {
    score();  //all burgers eaten: show the score
  }
  else {
    currentTime = maxTime - (millis() - startTime)/1000;  //seconds remaining

    if ( face.found > 0) {
      if ( face.mouthHeight > 3 && !readyToEat) {
        println("Mouth Open");
        readyToEat = true;
      }
      if ( face.mouthHeight < 2 && readyToEat) {
        readyToEat = false;
        eatBurger();
      }
    }
  }
}

void eatBurger() {
  imgIndex += 1;  //next hamburger image
  if ( imgIndex == imgLast ) {
    checkBurgers();  //checks how many burgers are left
  }

  imgString = "hamburger-"+str(imgIndex)+".png";
  img = loadImage(imgString);
  println("Nom nom nom!");
}
void checkBurgers() {
  if ( burgerNumber != 1) {  //if 1 or more burgers left
    imgIndex = 0;          //reset to full burger
    burgerNumber--;        //minus 1 burger
  } 
  else {
    imgIndex = imgLast;      //blank image
    burgerNumber--;    //no more burgers left
    finished = true;
  }
}

void startButton() {
  cursor(HAND);
  fill(0, 170);
  rect(0, 0, width, height);
  image(startBtn, (width/2)-(startBtn.width/2), (height/2)-(startBtn.height/2));

  if ( mousePressed == true ) { 
    startTime = millis();
    cursor(ARROW);
    playing = true;
  }
}

void score() {
  fill(0);
  textFont(f, 64);
  textAlign(CENTER);
  text("Your score: "+currentTime+"!!", width/2, height/2+12);
}

void keyPressed() {    //hotkeys for testing purposes

  if ( playing ) {
    switch (key) {    //press 'a' to take a bite
    case 'a':
      if ( burgerNumber != 0 ) {
        eatBurger();
      }
      break;  
    default:  
      break;
    }
  }
}


// OSC CALLBACK FUNCTIONS
void oscEvent(OscMessage m) {
  face.parseOSC(m);
}

Ziyun

28 Jan 2013


FaceOSC+Processing from kaikai on Vimeo.

This is my faceOSC+Processing experiment.

I mapped the mouthWidth and mouthHeight values received from FaceOSC to RGB values in Processing and used them to draw on the canvas. The shape is exactly the shape of your mouth, and the color changes with your mouth's open-close motion. So basically I turned the mouth into a paintbrush, and the drawing doesn't look too bad. :)
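The mapping described here fits in a few lines; this sketch is an approximation with guessed value ranges, not her code (which is on GitHub below). Skipping background() in draw() is what lets the strokes accumulate into a painting:

import oscP5.*;

OscP5 oscP5;
float mouthWidth, mouthHeight;
PVector facePos = new PVector(320, 240);

void setup() {
  size(640, 480);
  background(255);  // cleared once, so brush strokes accumulate
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  // map mouth dimensions to color; ranges are rough FaceOSC values
  float r = map(mouthWidth, 10, 16, 0, 255);
  float g = map(mouthHeight, 0, 7, 0, 255);
  noStroke();
  fill(r, g, 150, 120);
  // a mouth-shaped ellipse as the brush tip, following the face
  ellipse(facePos.x, facePos.y, mouthWidth * 5, mouthHeight * 5);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/width"))  mouthWidth  = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
  if (m.checkAddrPattern("/pose/position")) facePos.set(m.get(0).floatValue(), m.get(1).floatValue());
}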

See the code on GitHub.


Patt

28 Jan 2013


It can be tough sometimes choosing what you want to eat. Fret no more; I've found a solution. I took several recipes and photos of the food from the web and used FaceOSC to create a program that randomly chooses a meal and shows you how to make it. I programmed it so that the food is randomized while the eyebrows are raised and stops when they return to their normal position. However, I (unintentionally) found out later that blinking works as well. Once you are filled with joy because you are satisfied with the food the program has chosen for you, you smile, and the recipe for that specific food appears. You have to keep smiling, though, in order to read the recipe. (How else would the program know how happy you are with the choice being made for ya!?)
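A sketch of the two gestures, assuming FaceOSC's default eyebrow and mouth addresses; the thresholds and food list are placeholders, not Patt's values:

import oscP5.*;

OscP5 oscP5;
String[] foods = { "pancakes", "ramen", "salad" };  // placeholder list
int choice = 0;
float eyebrowLeft, eyebrowRight, mouthWidth;

void setup() {
  size(640, 480);
  textSize(32);
  textAlign(CENTER, CENTER);
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  background(255);
  fill(0);
  boolean browsRaised = (eyebrowLeft + eyebrowRight) / 2 > 8.5;  // tunable
  boolean smiling = mouthWidth > 15;                             // tunable
  if (browsRaised) {
    choice = int(random(foods.length));  // keep shuffling while brows are up
  }
  text(foods[choice], width/2, height/3);
  if (smiling) {
    text("(recipe for " + foods[choice] + " goes here)", width/2, 2*height/3);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/eyebrow/left"))  eyebrowLeft  = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/eyebrow/right")) eyebrowRight = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/mouth/width"))   mouthWidth   = m.get(0).floatValue();
}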

Here’s the code: https://github.com/pattvira/faceOSC_food

Sources for recipes: www.smittenkitchen.com

John

28 Jan 2013


For my FaceOSC implementation I used Processing and Dan Wilcox's Face class. My sketch is a 3D robot that can be controlled by facial rotation about the x, y, and z axes and by scaling along the z-axis. The robot's eyes are semi-independently articulated by the eyebrow properties of the face object. Additionally, the robot's eyes glow red when the user opens their mouth fully. See the video below for a working demo.
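The pose-to-rotation wiring can be sketched like this in Processing, with a box and two spheres standing in for his robot model; the mouth threshold and scale factors are guesses:

import oscP5.*;

OscP5 oscP5;
PVector orientation = new PVector();
float poseScale = 1;
float mouthHeight;

void setup() {
  size(640, 480, P3D);
  oscP5 = new OscP5(this, 8338);
}

void draw() {
  background(30);
  lights();
  noStroke();
  translate(width/2, height/2);
  scale(poseScale * 0.5);   // nearer face = bigger robot (0.5 is arbitrary)
  rotateX(orientation.x);   // head pitch
  rotateY(orientation.y);   // head yaw
  rotateZ(orientation.z);   // head roll
  fill(180);
  box(120);                 // stand-in robot head
  // eyes glow red when the mouth is wide open
  fill(mouthHeight > 5 ? color(255, 0, 0) : color(255));
  translate(-30, -20, 61);
  sphere(12);
  translate(60, 0, 0);
  sphere(12);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/orientation")) {
    orientation.set(m.get(0).floatValue(), m.get(1).floatValue(), m.get(2).floatValue());
  }
  if (m.checkAddrPattern("/pose/scale"))           poseScale   = m.get(0).floatValue();
  if (m.checkAddrPattern("/gesture/mouth/height")) mouthHeight = m.get(0).floatValue();
}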

GitHub Repo: https://github.com/johngruen/roberto

fosc from john gruen on Vimeo.