Daily Archives: 27 Jan 2013

Nathan

27 Jan 2013

“Text Rain” is originally by Camille Utterback and Romy Achituv; I was able to re-create it in Processing. It was my first time back in Processing in a while, but I was able to figure it out after some help and practice. I simply indexed the camera’s pixels and compared their brightness() values against a threshold. It is fairly reliable, and it was quite a lot of fun getting it going. I think a performance of me lip-syncing my favorite song is in order for a finished piece.

Text Rain – Re Do from Nathan Trevino on Vimeo.

My code is here.
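The heart of the approach is converting a letter’s (x, y) position into an index into the camera’s flat pixel array and comparing that pixel’s brightness to a threshold. A minimal sketch of just that logic in plain Java, with a hypothetical grayscale buffer standing in for brightness(camera.pixels[i]):

```java
// Sketch of the Text Rain pixel test. The gray[] buffer is a stand-in
// for per-pixel brightness values (0-255) from the camera frame.
public class PixelIndexDemo {
    static final int WIDTH = 640, HEIGHT = 480;

    // Convert 2D coordinates to an index into the flat pixel array,
    // clamped so letters at the edges never read out of bounds.
    static int pixelIndex(int x, int y) {
        int index = y * WIDTH + x;
        return Math.max(0, Math.min(index, WIDTH * HEIGHT - 1));
    }

    // A letter falls only when the pixel under it is brighter than the
    // threshold, i.e. no dark body is blocking it.
    static boolean shouldFall(int[] gray, int x, int y, int threshold) {
        return gray[pixelIndex(x, y)] > threshold;
    }

    public static void main(String[] args) {
        int[] gray = new int[WIDTH * HEIGHT];
        java.util.Arrays.fill(gray, 255);   // all white: letters fall freely
        gray[pixelIndex(100, 50)] = 20;     // one dark pixel "catches" a letter
        System.out.println(shouldFall(gray, 100, 49, 100)); // true
        System.out.println(shouldFall(gray, 100, 50, 100)); // false
    }
}
```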


//Nathan Trevino 2013
//Text Rain re-do. Original by Camille Utterback and Romy Achituv 1999
//Processing 2.0b7 by Nathan Trevino
//Special thanks to the processing example codes (website) as well as Golan Levin

//=============================================
import processing.video.*;
Capture camera;

float fallGravity = 1;
float fallStart = 0;

int threshold = 100;
Rain WordLetters[];
int myLetters;


//==============================================

void setup() {

  //going with a larger size but am giving up speed.
  size(640, 480);



  camera = new Capture(this, width, height);
  camera.start();     



  String wordString = "For all the things he could lose he lost them all";

  myLetters = wordString.length();
  WordLetters = new Rain[myLetters];
  for (int i = 0; i < myLetters; i++) {
    char a = wordString.charAt(i);
    float x = width * ((float)(i+1)/(myLetters+1));
    float y = fallStart;
    WordLetters[i] = new Rain(a, x, y);
  }

}

//==============================================


void draw() {
  if (camera.available() == true) {
    camera.read();
    camera.loadPixels();

    //Puts the video where it should be... top left corner beginning.
    image(camera, 0, 0);

    for (int i = 0; i < myLetters; i++) {
      WordLetters[i].update();
      WordLetters[i].draw();
    }
  }
}

//===================================
//simple keyPressed function to restart the Rain

void keyPressed()
{
  // The spacebar is not a CODED key, so test key directly.
  if (key == ' ') {
    for (int i = 0; i < myLetters; i++) {
      WordLetters[i].reset();
    }
  }
}


//=============================================
class Rain {
  // This contains a single letter of the entire string poem
  // Letters fall as "individuals", each with its own position (x,y) and character (char)

  char a;
  float x;
  float y;

  Rain (char aa, float xx, float yy)
  {
    a = aa;
    x = xx;
    y = yy;
  }

  //=============================================
  void update() {
    
    //IMPORTANT NOTE!
    // THE TEXT RAIN WORKS WITH A WHITE BACKGROUND AND THE DARK AREAS 
    // MOVE THE TEXT

    // Updates the parameters of Rain
    // had some problems here with the pixel index, but a peek at Golan's code helped

    int index = width*(int)y + (int)x;
    index = constrain (index, 0, width*height-1);

    // Grayscale starts here. Range is defined here.
    int thresholdGive = 4;
    int thresholdUpper = threshold + thresholdGive;
    int thresholdBottom = threshold - thresholdGive;


    //find the pixel's color and convert it to brightness (much like alpha channeling
    // video or images in Adobe Photoshop or AE)

    float pBright = brightness(camera.pixels[index]);

    if (pBright > thresholdUpper) {
      y += fallGravity;
    } 
    else {
      while ( (y > fallStart) && (pBright < thresholdBottom)) {
        y -= fallGravity;
        index = width*(int)y + (int)x;
        index = constrain (index, 0, width*height-1);
        pBright = brightness(camera.pixels[index]);
      }
    }

    if ((y >= height) || (y < fallStart)) {
      y = fallStart;
    }
  }

  //============================
  void reset() {
    y = fallStart;
  }

  //=======================================

  void draw() {

    // Here I also couldn't really see my letters that well so I
    // used Golan's "drop shadow" idea and some crazy random colors for funzies

    fill (random(255), random(255), random(255));
    text (""+a, x+1, y+1);
    text (""+a, x-1, y+1); 
    text (""+a, x+1, y-1); 
    text (""+a, x-1, y-1); 
    fill(255, 255, 255);
    text (""+a, x, y);
  }
}

Nathan

27 Jan 2013

This was one of the most exciting things I have ever done. I was able to use openFrameworks successfully. More than just opening up examples, I really wanted to make something work that I had seen and that I felt would be wonderful together. I chose to use ofxOpenCV, ofxXmlSettings, and ofxIpVideoGrabber because I found out through Andy that there is an antcam. Yes: a live stream of ants, all day and all night. I have been fascinated with ants and have made prints and etchings of these biological wonders. I want to use my combination of addons to create a video mash-up of different antcams from across the internet and cross-embed cams of our own civilizations. This piece will probably culminate in an After Effects project where I draw comparisons between the intelligent lives of ants and our own.

Also this picture is my favorite thing of all time.


My code is here.

Nathan

27 Jan 2013

Oh Sifteo… oh C++. I believe that this was by far the most difficult thing I have ever done. I learned so much over this single weekend. That being said, I love writing poems, and this is one I have been writing over the past month(ish). It is a draw-along story that tells you to keep a sketchbook journal along with the poem. I plan on taking the drawings and compiling individual books out of the Sifteo-aided drawing assignment. The application is based on the text example (I almost didn’t change anything, only what C++ I could understand).

Sifteo Novel from Nathan Trevino on Vimeo.

My code is here.

Nathan

27 Jan 2013

This is so freaking fun. I was able to use the FaceOSC application along with a simple sketch-pad-style Processing sketch. I really believe in an interface that is so intuitive it is almost unnecessary. This project was the most beautiful of my Intensive, using colors coordinated in triples by mapping eyebrow height and mouth width. Smiling and eyebrow-raising are natural things that humans do, and I wanted to make them the mainstay of the project. Watching yourself make a drawing is quite beautiful, and I think that makes the project very successful.
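The color-triple idea can be illustrated with a small mapping sketch. The input ranges below (eyebrow height roughly 7–9, mouth width roughly 10–16 in FaceOSC units) are hypothetical stand-ins, not the values from my sketch:

```java
// Hedged sketch: map FaceOSC-style eyebrow and mouth measurements to an
// RGB triple, the way the drawing sketch picks its colors. All ranges
// here are assumptions for illustration.
public class FaceColorDemo {
    // Linear map with the same contract as Processing's map().
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    static float clamp(float v) {
        return Math.max(0, Math.min(255, v));
    }

    // Eyebrow raise drives red, mouth width drives green, and their sum
    // drives blue, so facial expression selects the color triple.
    static float[] faceColor(float eyebrow, float mouthWidth) {
        float r = clamp(map(eyebrow, 7, 9, 0, 255));
        float g = clamp(map(mouthWidth, 10, 16, 0, 255));
        float b = clamp(map(eyebrow + mouthWidth, 17, 25, 255, 0));
        return new float[] { r, g, b };
    }

    public static void main(String[] args) {
        float[] c = faceColor(8, 13); // a mid-range expression
        System.out.printf("%.1f %.1f %.1f%n", c[0], c[1], c[2]);
    }
}
```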

My code is here.

Kyna

27 Jan 2013

My sifteo app is an adaptation of Simon Says and a memory game. There are two players, each with their own cube. The first player starts, and performs one of six available actions (touching the screen, shaking their cube, or touching any of the four sides of the middleman cube with their own cube). The second player then must repeat this action, and add an action of their own. Then the first player repeats both actions and adds a new one, and so on, forming a chain of actions that must be remembered and repeated. The first player to mess up loses, unless the sequence reaches 20 steps, at which point the game ends in a draw. If I was cool I would have added audio cues to each action, so you could be sure that it was completed and to aid in memorization, but I’m not so I didn’t.
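The turn structure described above can be modeled without any Sifteo hardware. A hedged sketch of the sequence bookkeeping, with action codes 0–5 standing in for the six gestures and the 20-step draw rule from the post:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged model of the Simon Says chain: on each turn a player must
// replay the whole sequence and append one new action (0-5 stand in
// for the six gestures: touch, shake, and the four neighbor sides).
public class SimonChain {
    private final List<Integer> sequence = new ArrayList<>();
    static final int MAX_STEPS = 20; // the game is declared a draw here

    // Returns true if the attempt replays the chain exactly and adds
    // one action; on success the new action is appended to the chain.
    boolean playTurn(List<Integer> attempt) {
        if (attempt.size() != sequence.size() + 1) return false;
        for (int i = 0; i < sequence.size(); i++) {
            if (!attempt.get(i).equals(sequence.get(i))) return false;
        }
        sequence.add(attempt.get(attempt.size() - 1));
        return true;
    }

    boolean isDraw() { return sequence.size() >= MAX_STEPS; }
    int length() { return sequence.size(); }

    public static void main(String[] args) {
        SimonChain game = new SimonChain();
        System.out.println(game.playTurn(List.of(3)));       // first action: true
        System.out.println(game.playTurn(List.of(3, 5)));    // replay + add: true
        System.out.println(game.playTurn(List.of(4, 5, 1))); // wrong replay: false
    }
}
```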

Git -> i swear i’ll get this to work eventually

Kyna

27 Jan 2013

ofxAddons! This was the hardest and most frustrating project for me. Being totally unfamiliar with the openFrameworks environment as well as both the Code::Blocks and Visual Studio 2010 environments, it took me a good long while to figure out how to do anything in these projects. As it turns out, most of the pre-built project examples for ofxAddons are for Xcode. Of the 15 or so addons I tried, I eventually got three to work, and this is the most interesting combination I found. This is a combination of underdoeg’s openSteer flocking example and toruurakawa’s FakeMotionBlur. While maybe not particularly interesting or ‘lazy like a fox’, I think the result is actually pretty graceful.

Git -> maybe someday, when github and I reconcile our differences

Kyna

27 Jan 2013

This project uses FaceOSC to track the position of your face, and maps it to a face in the Processing environment. It uses the orientation of your face to steer up, down, left, and right on the screen. It also maps your mouth, and whether it is open or closed. By looking in the direction you want the face to move and opening your mouth, it is possible to eat the small glowing sprites that wander around the frame. They leave a splatter where they were eaten, which fades with time. The bugs utilize a modified version of Daniel Shiffman’s boid class.

Git –> Soon, github hates me

Code?

The simple Splatter class:

class Splatter {
  PVector loc;
  int life;
  PImage splat;

  Splatter(PVector l, PImage s) {
    loc = l;
    life = 255;
    splat = s;
  }

  void run() {
    pushMatrix();
    translate(loc.x, loc.y);
    tint(255, 255, 255, life);
    image(splat, 0, 0);
    if (life > 0) life--;
    popMatrix();
  }
}

Function used to determine if the bug is within range of the mouth:

void eaten(Bug bug, float mouthWidth, float mouthHeight, PVector posePosition) {
  if ((posePosition.x-(mouthWidth*4)-20 <= bug.loc.x) && 
    (bug.loc.x <= (3*(mouthWidth*2)+posePosition.x)+10)) {
    if ((posePosition.y+75 <= bug.loc.y) &&
      (bug.loc.y <= posePosition.y+(mouthHeight*20)+75)) {
      Splatter temp;
      PVector tempLoc = new PVector(bug.loc.x, bug.loc.y);  

      bug.loc.x = random(-100, 0);
      bug.loc.y = random(-100, 0);
      score++;

      if (bug.bug == lpic) {
        temp = new Splatter(tempLoc, (loadImage("l" + (int)random(1, 3) + ".png")));
        splatters.add(temp);
      }
      else if (bug.bug == fpic) {
        temp = new Splatter(tempLoc, (loadImage("f" + (int)random(1, 3) + ".png")));
        splatters.add(temp);
      }
      else if (bug.bug == spic) {
        temp = new Splatter(tempLoc, (loadImage("s" + (int)random(1, 3) + ".png")));
        splatters.add(temp);
      }
    }
  }
}

Kyna

27 Jan 2013

For this implementation of Textrain I made a Letter class and kept track of an array of Letter objects. Each character from the string is stored in this array and updated via a for loop in the draw function. They detect the brightness of the pixel immediately beneath them and only drop if the pixel is bright. There is also a function that fades the letters from teal to navy blue over time. The string is the first line of the e.e. cummings poem ‘anyone lived in a pretty how town,’ which really doesn’t have any significance other than the fact that it’s great.

GitHub -> soon, having trouble with it currently…

Code!

import processing.video.*;
Capture cam;

// Note: these shadow PApplet's built-in width and height fields;
// they match the size() call below, so the sketch still behaves.
int height = 480;
int width = 640;

int g = 0;
int b = 100;
boolean flipG = false;
boolean flipB = true;

class Letter {
  int y;
  int x;
  int speed;
  char c;

  Letter(char character, int xPos, int yPos) {
    c = character;
    x = xPos;
    y = yPos;
    speed = (int) random(1, 2);
  }
}

ArrayList<Letter> currentLetters;
Letter testLetter = new Letter('a', 50, 50);

void setup() {
  size(width, height);
  background(255);
  frameRate(10);

  textSize(18);

  cam = new Capture(this, width, height);
  cam.start();

  smooth();

  currentLetters = new ArrayList<Letter>();
  String poem = "anyone lived in a pretty how town (with up so floating many bells down)";

  int xInit = 0;
  int yInit = 0;

  for (int i=0; i < 71; i++) {
    if (i<33) yInit = (int)random(-10, 10);
    else yInit = (int)random(-15, -25);
    currentLetters.add(new Letter(poem.charAt(i), xInit, yInit));
    xInit += (9 + (int)random(-3, 3));
    if (xInit > width) xInit = 9;
  }
}

void draw() {
  cam.read();
  cam.loadPixels();

  pushMatrix();
  translate (width, 0); 
  scale(-1, 1);
  image (cam, 0, 0, width, height);
  popMatrix();

  fill(0, g, 100);

  for (int i=0; i < 71; i++) {
    Letter curr = currentLetters.get(i);

    int index = width*curr.y + (width-curr.x-1);
    index = constrain (index, 0, width*height-1);

    if ((brightness(cam.pixels[index])) > 105) curr.y += curr.speed;
    else {
      while ((curr.y > 0.0) && ((brightness(cam.pixels[index])) < 95)) {
        curr.y -= 1; // integer position, so step up one pixel at a time
        index = width*curr.y + (width-curr.x-1);
        index = constrain (index, 0, width*height-1);
      }
    }
    text(curr.c, curr.x, curr.y);

    if (curr.y >= height) {
      curr.y = (int)random(-15, 15);
      curr.speed = (int)random(1, 2);
    }
  }

  if(!flipG) {
    if (g<100) g++;
    else flipG=true;
  }
  else if(flipG) {
    if (g>0) g--;
    else flipG=false;
  }

  if(!flipB) {   
    if (b<150) b++;
    else flipB=true;
  }
  else if(flipB) {
    if (b>0) b--;
    else flipB=false;
  }
}

Yvonne

27 Jan 2013

Originally I wanted to do a cat that you could have fall through the cubes, similar to an App in the Sifteo video with water. I quickly abandoned that once I realized I could barely get a cat to animate staying still, let alone fall through from one cube to the next. With that said, I focused primarily on animating sprites. I used the Sensors example as my basis with bits of the Connection and Stars examples tossed in. Music was from the Connection example and the kitten sprites were from the Sprites Database; I wish I could have made my own, but unfortunately time did not permit.

Github Repository: https://github.com/yvonnehidle/Sifteo_myKittens
Original Blog Post @ Arealess: http://www.arealess.com/kittens-and-sifteo/
Link to Better Quality Video: http://www.arealess.com/wp-content/uploads/2013/01/myKittens.mp4

Sorry for the crappy video. Vimeo wouldn’t load it and YouTube made it illegible.

Michael

27 Jan 2013

Sifteo GigaViewer from Mike Taylor on Vimeo.

This is an application for viewing tiled images using Sifteo cubes.  I took inspiration for this app from the GigaPan project, which creates massive panoramas using multiple stitched camera images.  The GigaViewer takes a different approach by splitting a single image into many tiles.  The Sifteo cubes can be used to explore the minute details of the image by exploring it tile-by-tile.  In a sense, this deliberately prohibits “seeing the forest for the trees.”  For images like the watch mechanism shown in the demo, this prevents the user from being overwhelmed by the complexity and instead invites them to explore each component separately.
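The tile math behind this kind of viewer is simple arithmetic. A hedged sketch, where a 128-pixel tile size (matching the Sifteo cube screen) is an assumption:

```java
// Hedged sketch of GigaViewer-style tiling: split a large image into
// fixed-size tiles and find the pixel rectangle for a given tile.
public class TileGrid {
    static final int TILE = 128; // Sifteo cube screen size (assumed)

    // Number of tiles needed to cover the image in one dimension,
    // rounding up so edge pixels are not dropped.
    static int tilesAcross(int imageSize) {
        return (imageSize + TILE - 1) / TILE;
    }

    // Top-left pixel of tile (col, row) in the source image.
    static int[] tileOrigin(int col, int row) {
        return new int[] { col * TILE, row * TILE };
    }

    public static void main(String[] args) {
        int w = 1000, h = 700; // hypothetical source image dimensions
        System.out.println(tilesAcross(w) + " x " + tilesAcross(h) + " tiles");
        int[] o = tileOrigin(2, 3);
        System.out.println("tile (2,3) starts at " + o[0] + "," + o[1]);
    }
}
```

Each cube then displays one tile, and neighboring or tilting a cube swaps in the adjacent tile’s rectangle from the source image.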

Sample image by Michel Villeneuve can be found here: commons.wikimedia.org/wiki/File:Innards_of_an_AI-139a_mechanical_watch.jpg

The Sifteo code can be found here.
