Category Archives: project-1

Bueno

28 Jan 2013

rainbowVomit from Andrew Bueno on Vimeo.

After talking about my old waterboarding clock Processing app in class the other day, I got a bit nostalgic for that old, lo-fi aesthetic. Thus, the crapsaccharine rainbow vomit app. I hooked up the FaceOSCSyphon app to Dan’s FaceOSC template for Processing. I kept track mainly of the mouth: when its tracking points spread wide enough apart, the vomit begins. It took quite a bit of figuring out – Caroline Record was a huge help. Another source of help was a fellow on openProcessing – he is properly credited in my code. Without him, the app wouldn’t have that special brand of ridiculous.

https://github.com/buenoIsHere/RainbowVomit

 
//
// a template for receiving face tracking osc messages from
// Kyle McDonald's FaceOSC https://github.com/kylemcdonald/ofxFaceTracker
//
// 2012 Dan Wilcox danomatika.com
// for the IACD Spring 2012 class at the CMU School of Art
//
// adapted from from Greg Borenstein's 2011 example
// http://www.gregborenstein.com/
// https://gist.github.com/1603230
//
// Addendum from Bueno: Thank you florian on openFrameworks for the code!
import codeanticode.syphon.*;
import oscP5.*;

SyphonClient client;
OscP5 oscP5;
float[] rawPoints = new float[131];
float cWidth; 
float cHeight; 
PImage img;
Wave wav;

// num faces found
int found;

// pose
float poseScale;
PVector posePosition = new PVector();
PVector poseOrientation = new PVector();

// gesture
float mouthHeight;
float mouthWidth;
float eyeLeft;
float eyeRight;
float eyebrowLeft;
float eyebrowRight;
float jaw;
float nostrils;

void setup() {
  size(640, 480, P3D);
  background (random (256), random (256), 255, random (256));
  smooth();
  frameRate(30);

  oscP5 = new OscP5(this, 8338);
  oscP5.plug(this, "found", "/found");
  oscP5.plug(this, "rawDataReceived", "/raw");

  client = new SyphonClient(this, "FaceOSC");

  colorMode(HSB);
  wav = new Wave(width/2, height/2, 25, PI/18, 0);
}

void draw() {  
  if (client.available()) 
  {
    // The first time getImage() is called with 
    // a null argument, it will initialize the PImage
    // object with the correct size.
    //img = client.getImage(img); // load the pixels array with the updated image info (slow)
    img = client.getImage(img, false); // does not load the pixels array (faster)
    image(img,0,0);
    //background(255,0,0,0);
    if(found > 0) {
      float lastNum = 0; 
      int Idx = 0; 

      noFill();
      noStroke();
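      // when the mouth tracking points are far enough apart, start painting the rainbow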
      if(dist(rawPoints[122], rawPoints[123], rawPoints[128], rawPoints[129]) > 25)
      {
        wav.paint = true;
      }
      else
      {
        wav.paint = false;  
      }

      wav.offsetX = (rawPoints[120] + rawPoints[124])/2;
      wav.offsetY = (rawPoints[121] + rawPoints[125])/2;
      wav.amplitude = rawPoints[124] - rawPoints[120] - 40;
      wav.display();
      wav.update();
   // ellipse (rawPoints[130],rawPoints[131], 2,2);  

    }
    else
    {
      println ("not found");
    }
  }
}

void mousePressed() {
  background (random (256), random (256), 255, random (256));
}

// OSC CALLBACK FUNCTIONS

public void found(int i) {
  println("found: " + i);
  found = i;
}

public void rawDataReceived(float [] f){
    // copy the raw tracking points, guarding against a size mismatch with our buffer
    for (int i = 0; i < f.length && i < rawPoints.length; i++)
    {
      rawPoints[i] = f[i];
    }
}

// all other OSC messages end up here
void oscEvent(OscMessage m) {
  if (m.isPlugged() == false) {
    println("UNPLUGGED: " + m);
  }
}
 
/* OpenProcessing Tweak of *@*http://www.openprocessing.org/sketch/46795*@* */
/* !do not delete the line above, required for linking your tweak if you re-upload */
/* Some fun with the sin function
 once you press the left button of the mouse, you can modulate
 the scale and the frequency of a  sine wave*/

// I tried to keep things clean, using OOP
//I'm not 100% sure about the grammar and the vocabulary in the comments

//this script is inspired by the awesome book "PROCESSING a Programming Handbook for Visual Designers and Artists"
// written by  Casey Reas and Ben Fry

class Wave {

  float offsetX; // this will position our wave on the X axis
  float offsetY;
  float amplitude;// this is for the wave height
  float frequency;//this will set the frequency(from batshit fast to insanely zoombastic)
  float angle;// it's an angle, everybody knows that!
  boolean paint;

  //constructor
  Wave(float offX, float offY, float amp, float fre, float ang) {
    offsetX = offX;
    offsetY = offY;
    amplitude = amp;
    frequency = fre;
    angle = ang;
    paint = false;
  }

  void display() {

    // step down the y axis from the wave origin to the bottom of the screen
    for (float y = offsetY; y <= height; y += .2) {   

      /*So this is when it's getting all maths and stuff
       it's a neat formula that updates the y position of the ellipse you'll draw
       so, after some movement, it will look like a wave*/
      float posx = offsetX + (sin(angle)*(amplitude + map(y, offsetY, height,20,100)));

      /*you draw an ellipse
       the y position increases every time you loop
       the x position is going up and down and up and down, and...
       this ellipse is 10 pixels wide and 10 pixels high. this is how I like my ellipse*/
      ellipse(posx, y, 10, 10);

      /* a nice and easy way to get RAINBOW colours
       (don't forget to set the colorMode to HSB in the setup function)
       h will set the hue
       the h value is calculated with a sin function and is mapped in order to get a value between 0 and 130*/
      float h = map(sin(angle), -1, 1, 0, 130);
      //here is our h again, for hue. saturation and brightness are at 255

      if(paint)
      {
        fill(h, 255, 255);
      }
      else
      {
        noFill();  
      }

      //don't forget to increase the angle by adding frequency
      // or your wave will be flat; you don't want that
      angle += frequency/2;
    }
  }

  /*in the original sketch this was called each frame to modify amplitude and
   frequency from the mouse position; here it is left as a stub, since the
   amplitude and offsets are driven by the FaceOSC mouth points in draw()*/
  void update() {
  }
}

Alan

28 Jan 2013


The Github Repo: https://github.com/chinesecold/Rock-Scissor-Paper

 

This project implements the Rock-Paper-Scissors game on Sifteo cubes. With each player using one cube, it lets players enter their choice by tilting and tapping, and challenge other players by neighboring their cubes.
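A rough, Processing-style sketch of just the win/lose comparison (the constants and function name here are illustrative, not taken from the repo, where the input handling is done against Sifteo’s C++ SDK):

// Sketch of the rock-paper-scissors comparison only; names are illustrative.
final int ROCK = 0, PAPER = 1, SCISSORS = 2;

// Returns 0 for a tie, 1 if bidA wins, 2 if bidB wins.
int judge(int bidA, int bidB) {
  if (bidA == bidB) return 0;
  // bidA wins when it sits exactly one step above bidB modulo 3:
  // paper beats rock, scissors beats paper, rock beats scissors
  if ((bidA - bidB + 3) % 3 == 1) return 1;
  return 2;
}

// e.g. judge(PAPER, ROCK) == 1, judge(ROCK, PAPER) == 2, judge(ROCK, ROCK) == 0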


Erica

28 Jan 2013

For my FaceOSC project I created a bubble wand that you control using your face. The center of the wand is mapped to the center of your face, so it will follow the path your face moves in. To blow a bubble, you move your mouth the same way you would to blow a bubble with a physical bubble wand. The longer you blow, the bigger the bubble gets. When you relax your mouth, the bubble is released from the wand and floats freely. There are three wands, each with a different shape (circle, flower, and star). You can switch between the wands by raising your eyebrows.
Below is a video demonstrating and explaining my project. You can download the source code here.
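Erica’s actual source is in the download above. As a rough, hypothetical sketch of the FaceOSC side of the idea – grow a bubble while the mouth is blowing, drop it when the mouth relaxes – using the same oscP5 /gesture/mouth messages as Dan’s template; the threshold, sizes, and the “blowing” test are made up for illustration and are not Erica’s actual gesture detection:

import oscP5.*;

OscP5 oscP5;
float mouthHeight = 0;   // updated by FaceOSC
float bubbleSize = 0;

void setup() {
  size(640, 480);
  oscP5 = new OscP5(this, 8338);                                    // FaceOSC's default port
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
}

void draw() {
  background(0);
  boolean blowing = mouthHeight > 3;   // illustrative threshold
  if (blowing) {
    bubbleSize += 0.5;                 // the longer you blow, the bigger the bubble
  } else if (bubbleSize > 0) {
    bubbleSize = 0;                    // a real version would release a free-floating bubble here
  }
  noFill();
  stroke(255);
  ellipse(width/2, height/2, bubbleSize, bubbleSize);   // stand-in for the wand at the face center
}

public void mouthHeightReceived(float h) {
  mouthHeight = h;
}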

Bubbles from Erica Lazrus on Vimeo.

Alan

28 Jan 2013

 

The Github Repo: https://github.com/chinesecold/FaceOSCTwitter

The original idea of this project is to use FaceOSC to capture emotions and control your web experience. It combines the ofxOSC and ofxJSON addons to change the way you tweet.

In the video, if you keep your mouth open wide enough, you will see the current top 10 trends from Twitter.
Tweeting itself is running into problems with OAuth right now; I am working on this.
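As a rough sketch of just the trigger logic – in Processing rather than the openFrameworks/ofxJSON setup the repo actually uses, and with a placeholder where the OAuth’d Twitter request would go:

import oscP5.*;

OscP5 oscP5;
float mouthHeight = 0;
boolean shown = false;

void setup() {
  size(400, 200);
  oscP5 = new OscP5(this, 8338);                                    // FaceOSC's default port
  oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
}

void draw() {
  background(0);
  if (mouthHeight > 4 && !shown) {   // illustrative "mouth open enough" threshold
    fetchTrends();
    shown = true;
  } else if (mouthHeight <= 4) {
    shown = false;                   // re-arm once the mouth closes
  }
}

void fetchTrends() {
  // placeholder: the real project requests Twitter's trends endpoint with OAuth,
  // which is exactly the part still being debugged
  println("would fetch and display the top 10 Twitter trends here");
}

public void mouthHeightReceived(float h) {
  mouthHeight = h;
}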


Bueno

28 Jan 2013

Screen Recording 3 from Andrew Bueno on Vimeo.

I happen to take my webcam wiggling very seriously, thank you very much. I managed to get a pretty decent psychedelic effect with extremely minimal effort – basically I took the ofxOpticalFlowFarneback example and mixed it with the ofxBlur example. The result is cool, though I wish the blur wouldn’t gray out as much as it seems to. My original plan was to use ofxAsciiArt as a filter for the colorful output of ofxOpticalFlowFarneback, but that addon had a rather strange set of dependencies going on.

https://github.com/buenoIsHere/ofxAddonsExample

 
#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup(){

    vidGrabber.initGrabber(640, 480);
    flowSolver.setup(vidGrabber.getWidth()/2, vidGrabber.getHeight()/2, 0.5, 3, 10, 1, 7, 1.5, false, false);
    ofEnableAlphaBlending();
    blur.setup(vidGrabber.getWidth(), vidGrabber.getHeight(), 4, .2, 4);
}

//--------------------------------------------------------------
void testApp::update(){
    vidGrabber.update();
    if(vidGrabber.isFrameNew()){
        flowSolver.update(vidGrabber);
    }

    blur.setScale(ofMap(mouseX, 0, ofGetWidth(), 1, 10));
	blur.setRotation(ofMap(mouseY, 0, ofGetHeight(), -PI, PI));
}

//--------------------------------------------------------------
void testApp::draw(){

    ofSetColor(255, 255, 255);

    blur.begin();
    vidGrabber.draw(0, 0);
    flowSolver.drawColored(vidGrabber.getWidth(), vidGrabber.getHeight(), 10, 3);
    ofSetCircleResolution(64);
	ofCircle(mouseX, mouseY, 32);
	blur.end();

    blur.draw();

}

//--------------------------------------------------------------
void testApp::keyPressed(int key){
    if(key == 'p') { flowSolver.setPyramidScale(ofClamp(flowSolver.getPyramidScale() - 0.01,0.0,1.0)); }
    else if(key == 'P') { flowSolver.setPyramidScale(ofClamp(flowSolver.getPyramidScale() + 0.01,0.0,1.0)); }
    else if(key == 'l') { flowSolver.setPyramidLevels(MAX(flowSolver.getPyramidLevels() - 1,1)); }
    else if(key == 'L') { flowSolver.setPyramidLevels(flowSolver.getPyramidLevels() + 1); }
    else if(key == 'w') { flowSolver.setWindowSize(MAX(flowSolver.getWindowSize() - 1,1)); }
    else if(key == 'W') { flowSolver.setWindowSize(flowSolver.getWindowSize() + 1); }
    else if(key == 'i') { flowSolver.setIterationsPerLevel(MAX(flowSolver.getIterationsPerLevel() - 1,1)); }
    else if(key == 'I') { flowSolver.setIterationsPerLevel(flowSolver.getIterationsPerLevel() + 1); }
    else if(key == 'a') { flowSolver.setExpansionArea(MAX(flowSolver.getExpansionArea() - 2,1)); }
    else if(key == 'A') { flowSolver.setExpansionArea(flowSolver.getExpansionArea() + 2); }
    else if(key == 's') { flowSolver.setExpansionSigma(ofClamp(flowSolver.getExpansionSigma() - 0.01,0.0,10.0)); }
    else if(key == 'S') { flowSolver.setExpansionSigma(ofClamp(flowSolver.getExpansionSigma() + 0.01,0.0,10.0)); }
    else if(key == 'f') { flowSolver.setFlowFeedback(false); }
    else if(key == 'F') { flowSolver.setFlowFeedback(true); }
    else if(key == 'g') { flowSolver.setGaussianFiltering(false); }
    else if(key == 'G') { flowSolver.setGaussianFiltering(true); }
}

//--------------------------------------------------------------
void testApp::keyReleased(int key){

}

//--------------------------------------------------------------
void testApp::mouseMoved(int x, int y ){

}

//--------------------------------------------------------------
void testApp::mouseDragged(int x, int y, int button){

}

//--------------------------------------------------------------
void testApp::mousePressed(int x, int y, int button){

}

//--------------------------------------------------------------
void testApp::mouseReleased(int x, int y, int button){

}

//--------------------------------------------------------------
void testApp::windowResized(int w, int h){

}

//--------------------------------------------------------------
void testApp::gotMessage(ofMessage msg){

}

//--------------------------------------------------------------
void testApp::dragEvent(ofDragInfo dragInfo){ 

}

Alan

28 Jan 2013

The solution uses Prof. Levin’s approach of measuring the brightness of each pixel on the screen canvas. If the pixel at a letter’s position is darker than the lower threshold, the letter climbs back up to where it is bright enough instead of continuing to fall.

The text is read from a ‘captain.txt’ file and falls repeatedly from the top of the canvas. When a letter reaches the bottom, I reset its y value.

 

 
// Text Rain (Processing Re-Code "cover version")
// Original by Camille Utterback and Romy Achituv (1999):
// http://camilleutterback.com/projects/text-rain/
// Implemented in Processing 2.0b7 by Han Hua, January 2013
// 
// This assumes that the participant is in front of a light-colored background. 

//===================================================================
// The live video camera Capture object:
import processing.video.*;
Capture video;

float letterGravity = 2;
int brightnessThreshold = 110;
float initialLetterYPosition = 10;
TextRainLetter poemLetters[];
int nLetters;

String poemLines[];
int line_index = 0;
BufferedReader reader;
int m_sec;
int nextAddTime = 5000;

//-----------------------------------
void setup() {
  frameRate(30);

  size(640,480); 
  video = new Capture (this, width, height);
  video.start();  

  poemLines = new String[30];
  reader = createReader("captain.txt");
  readText();

  String poemString = poemLines[line_index];
  nLetters = poemString.length();
  poemLetters = new TextRainLetter[nLetters];
  for (int i=0; i<nLetters; i++) {
    char c = poemString.charAt(i);
    float x = random(width * ((float)i/(nLetters+1)) + 1, width * ((float)(i+1)/(nLetters+1)));
    float y = random(initialLetterYPosition, initialLetterYPosition+10);
    poemLetters[i] = new TextRainLetter(c,x,y);
  }
}

//-----------------------------------
void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();

    // this translate & scale flips the video left/right. 
    pushMatrix();

    // mirror the video
    translate (width,0); 
    scale (-1,1); 
    image (video, 0, 0, width, height); // refresh
    popMatrix();

    for (int i=0; i<nLetters; i++) {
      poemLetters[i].update();
      poemLetters[i].draw();
    }
  }

  m_sec = millis();
  if (m_sec > nextAddTime) {
    println("1");
    nextAddTime += 3000;
    line_index++;
    if(line_index > poemLines.length-1){
      line_index = 0;
    }
    addNewLetters();
  }
}

void readText() {
  String line = null;
   try{
       int i = 0;
       while ((line = reader.readLine()) != null) {
          poemLines[i] = line;
          i++;
          //println(line);
       }

   }catch(Exception e)
  {
   e.printStackTrace();
    line = null;
  }
}

void addNewLetters(){
  TextRainLetter tempLetters[] = new TextRainLetter[poemLetters.length + poemLines[line_index].length()];

  int i=0;
  for(; i < poemLetters.length; i++){
    tempLetters[i] = poemLetters[i];
  }

  String s = poemLines[line_index];
  int n = s.length();
  for (int j = 0; j < n; j++, i++) {
    char c = s.charAt(j);
    float x = random(1, width-1);
    float y = random(initialLetterYPosition, initialLetterYPosition+30);
    tempLetters[i] = new TextRainLetter(c, x, y);
  }

  poemLetters = tempLetters;
  nLetters = poemLetters.length;
}

//-----------------------------------
void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      brightnessThreshold = min(255, brightnessThreshold+5);
      println("brightnessThreshold = " + brightnessThreshold);
    } else if (keyCode == DOWN) {
      brightnessThreshold = max(0, brightnessThreshold-5);
      println("brightnessThreshold = " + brightnessThreshold);
    }
  }
}

//===================================================================
class TextRainLetter {

  char  c;
  float x;
  float y;

  TextRainLetter (char cc, float xx, float yy) {
    c = cc;
    x = xx;
    y = yy;
  }

  //-----------------------------------
  void update() {
    // Update the position of a TextRainLetter.

    // 1. Compute the pixel index corresponding to the (x,y) location of the TextRainLetter.
    int flippedX = (int)(width-1-x); // because we have flipped the video left/right.
    int index = width*(int)y + flippedX;
    index = constrain (index, 0, width*height-1);

    // establish a range around the threshold, within which motion is not required.
    int thresholdTolerance = 5;
    int thresholdLo = brightnessThreshold - thresholdTolerance;
    int thresholdHi = brightnessThreshold + thresholdTolerance;

    // 2. Fetch the color of the pixel there, and compute its brightness.
    float pixelBrightness = brightness(video.pixels[index]);

    // 3. If the TextRainLetter is in a bright area, move downwards.
    //    If it's in a dark area, move up until we're in a light area.
    if (pixelBrightness > thresholdHi) {
      y += letterGravity; // move downward

    } else {
      while ((y > initialLetterYPosition) && (pixelBrightness < thresholdLo)) {
        y -= letterGravity; // travel upwards until it's bright again
        index = width*(int)y + flippedX;
        index = constrain (index, 0, width*height-1);
        pixelBrightness = brightness(video.pixels[index]);
      }
    }

    if ((y >= height-1) || (y < initialLetterYPosition)) {
      y = initialLetterYPosition;
    }
  }

  //-----------------------------------
  void draw() {
    // Draw the letter. Use a simple black "drop shadow"
    // to achieve improved contrast for the typography.
    if (y > height-20) {
      y = random(initialLetterYPosition, initialLetterYPosition+30);
      x = random(1, width-1);
    }

    fill(random(256),random(256),random(256));
    text (""+c, x+1,y+1); 
    text (""+c, x-1,y+1); 
    text (""+c, x+1,y-1); 
    text (""+c, x-1,y-1); 
    fill(255,255,255);
    text (""+c, x,y);
  }
}

The Github Repo: https://github.com/chinesecold/TextRain

Andy

28 Jan 2013

Github: https://github.com/andybiar/WhoLetTheDogsOut

With Dan’s permission, I am going to delay shooting a video until a later time. The entire whimsy of Who Let the Dogs Out is watching the dogs run around, but they won’t be awake until maybe noon Eastern time.

Who Let the Dogs Out is perhaps the cutest and most whimsical of my projects. I use the IpVideoGrabber addon, along with XMLSettings, OpenCV, and SoundPlayer, to stream video footage from the webcams at the Sniff Dog Hotel in Portland, Oregon; whenever the user presses a key they can start or stop the classic song “Who Let the Dogs Out” by the Baha Men. I wanted to do more with this originally, but the ofMotiontracker addon was very naughty and vexed me enough that I discontinued its use. Further pursuits may include stripping the background subtraction algorithm from the OpenCVExample and then using the blob detection in a generative work.


Bueno

28 Jan 2013

Screen Recording 2 from Andrew Bueno on Vimeo.

Here it is, my own little tribute to Text Rain. My method was to create a LetterDrop object containing coordinates, a velocity, a color, and a character. These were stored in an array list and spawned about 500 pixels apart. Each line of rain is from an e e cummings poem – they can eventually catch up to each other if there is a collision. Each letter stops in place if it hits a pixel that is dark enough. I decided to highlight the brightness difference by recoloring pixels into one of two colors, depending on where each pixel fell relative to the brightness threshold.

https://github.com/buenoIsHere/textRain

 


import processing.video.*;

Capture cam;
float brightnessThresh;
String [] quotes;
ArrayList<LetterDrop> letters;  // typed so get() returns a LetterDrop without a cast
int letterGap;

//Our raindrops!
public class LetterDrop
{ 
  public float velocity;
  public float lx;
  public float ly;
  public char letter;
  public color clr;

  public LetterDrop(float xcoord, float ycoord,  char l, color c)
  {
    lx = xcoord;
    ly = ycoord;
    velocity = .6 + random(0.0, 0.07);  
    letter = l;
    clr = c;
  }
}

void setup() {
  size(640, 480);

  quotes = new String [2];
  quotes[0] = "Humanity i love you because you are perpetually putting the";
  quotes[1] = "secret of life in your pants and forgetting it’s there";

  //SETTING UP WEBCAM CAPTURE
  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } 
  else {

    // The camera can be initialized directly using an 
    // element from the array returned by list():
    cam = new Capture(this, cameras[0]);
    cam.start();
  }

  noStroke();
  brightnessThresh = 120;

  // CREATE THE FONT
  textFont(createFont("Georgia", 22));
  textAlign(CENTER, BOTTOM);

  letters = new ArrayList<LetterDrop>();
  spawnLetters();
}

//Helper function for setup. Populates our array of letters.
//Note that I was inspired by ~Kamen and I give credit to him for
//the idea to have the next text lines "waiting" offscreen.
void spawnLetters()
{

   //GRAB A LINE FROM THE POEM
   for (int l = 0; l < quotes.length; l++) 
   {

    String phrase = quotes[l];
    int len = phrase.length();

    //FOR EACH NONSPACE CHAR IN PHRASE, MAKE A NEW LETTER OBJECT
    for (int i=0; i < len; i++) 
    {
      char ch = phrase.charAt(i);

      if (ch != ' ')
      {
        letters.add(new LetterDrop(10 * i + 30, -l * 500 + 20, ch, color(255, 255, 255)));
      }
    }
  } 
}

void draw() {

  update();

  image(cam, 0, 0);
  // The following does the same, and is faster when just drawing the image
  // without any additional resizing, transformations, or tint.
  //set(0, 0, cam);  

  for (int k = 0; k < letters.size(); k++) {
    LetterDrop drop = letters.get(k);
    if (drop.ly >= 0)
    {
      fill(drop.clr);   // text() is colored by fill, not stroke
      text(drop.letter, drop.lx, drop.ly);
    }
  }

}

void update()
{

  // DRAW CAPTURED WEBCAM IMAGE TO WINDOW  
  if (cam.available() == true) {
    cam.read();
  } 

  loadPixels();

  //RECOLOR IMAGE BASED ON PIXEL BRIGHTNESS
  for(int i = 0; i < cam.height; i++)
  {
    for(int j = 0; j < cam.width; j++)
    {
        int index = (i * cam.width) + j;

        if(brightness(cam.pixels[index]) > brightnessThresh)
        {
          cam.pixels[index] = 0x5F7485;  

        }
        else
        {
          cam.pixels[index] = 0x4B556C;
        }
    }
  }

  updatePixels();

  //CHECK EACH LETTER FOR COLLISION
  for(int k = 0; k < letters.size(); k++)
  {
    LetterDrop drop = letters.get(k);

    if(!collision(drop))
    {
      drop.ly += drop.velocity;
    }
    else if(collision(drop) && (drop.ly > 15))
    {
      int aboveIndex = floor(drop.lx) + floor(drop.ly-1) * width;
      if(brightness(pixels[aboveIndex]) < brightnessThresh)
      {
        drop.ly -= 5;
      }
    }

    if(drop.ly > height)
    {
      drop.ly -= height + 500; 
    }   
  }
}

boolean collision(LetterDrop drop)
{

    if(drop.ly > 0)
    {
      int index = floor(drop.lx) + floor(drop.ly) * width;
      color pC = pixels[index];

      if(brightness(pC) < brightnessThresh)
      {
        return true;  
      }
      else
      {
        return false;  
      }
    }
    else
    {
      return false;  
    }
}

Andy

28 Jan 2013

Github: https://github.com/andybiar/SifteoMusicBlocks

My Sifteo project ended up being the one that I am most proud of out of the four. I created a small step sequencer and composition system for Sifteo. The starting pitch and tempo are set at compile time, and from then on there is a master cube that will always play the starting pitch on the downbeat. From there you can add up to 7 other cubes into the composition. Adding cubes on top of other cubes will create chords, whereas adding cubes to the sides of other cubes will create melodies. The notes on the other cubes are dependent on the pitch of the cube you are connecting to (the base pitch), the side of the cube you are placing (which determines whether to make a Major 2nd, Minor 3rd, Major 3rd, or Perfect 4th using Pythagorean tuning ratios on sine waves), and whether you are adding or subtracting that interval from the current pitch. It’s pretty fun. Unfortunately all the Sifteo cubes in the STUDIO died, so I couldn’t test on the hardware for this round of documentation.
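As a worked example of the interval arithmetic only – the Pythagorean ratios are the standard ones, but the function name and the idea of a side-to-interval mapping below are illustrative, not taken from Andy’s code:

// Pythagorean tuning ratios for the four intervals named above.
final float MAJOR_SECOND   = 9.0/8.0;
final float MINOR_THIRD    = 32.0/27.0;
final float MAJOR_THIRD    = 81.0/64.0;
final float PERFECT_FOURTH = 4.0/3.0;

// Derive a newly attached cube's pitch from the pitch of the cube it neighbors:
// multiply to go up by the interval, divide to go down.
float neighborPitch(float basePitchHz, float ratio, boolean ascending) {
  return ascending ? basePitchHz * ratio : basePitchHz / ratio;
}

// e.g. attaching a cube a perfect fourth above a 220 Hz cube:
// neighborPitch(220, PERFECT_FOURTH, true) -> about 293.3 Hz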


Meng

28 Jan 2013

Dude, my nose is bleeding!!!
A bleeding nose, made with ofxOsc + ofxBox2d.
This is a simple sketch with ofxOsc + ofxBox2d.
I tried to use other addons, such as a combination of ofxCV and ofxVector, but could not get them to build successfully. So I decided to do it the easy way with these two addons. Both are popular and have very good documentation, which lets a coding newbie work with less frustration. From this experience I learned the importance of good documentation, such as code comments and readme files.

The current technical problem is that I cannot update the circles behind the dude’s face…


code is here: https://github.com/mengs/ofxoscOfxBox2d