Varvara Toulkeridou – ButterflySorter

by varvara @ 7:43 am 9 February 2012

In how many different ways can you sort a collection? From how many different perspectives can you view an assembly of things?

A collection is a group of things related to each other in some identifiable way. The relationship might be through a topic, a place, a person, a type of object, etc. Even though there is a specific motivation for gathering in the first place, what makes a collection dynamic is that one can reorganize the same data in alternative ways and make different sense of it.

 

The motivation for this project came from the ‘Pink Project’ by Portia Munson. The Pink Project comprised a series of still-life installations created out of the artist’s collection of discarded objects, all of which are varying shades of pink and are objects used by women. The discarded items assume new meaning when seen through the shared color and its connotations with gender, as well as through the way they are organized in space.

Portia Munson, Pink Project, 1994

 

The data set I am using for the project comes from a collection of images of butterflies provided by an iPhone app made by Hunter Research and Technology. The collection is composed of 240 images. Each butterfly is shown in plan view on a white background; the images I extracted are 260×340 pixels in size. The only data accompanying the images is the name of each butterfly.

Butterfly Collection, by Hunter Research and Technology
Data processing

 

The images have been processed in Matlab to extract a series of values that would enable different ways of sorting.
The following data was extracted for each image:
  1. perimeter of the butterfly outline 
  2. area of the overall shape 
  3. number of detected boundaries on the surface of the wings
  4. the image’s average value 
  5. the image variance 
  6. color histogram

 

For 1, 2, 3: I used the Image Processing Toolbox for Matlab. The algorithms were run on the grayscale representation of the image after thresholding; more specifically, the regionprops and bwboundaries functions were used.
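A rough sketch of the same idea in Processing (not the Matlab pipeline used here): after thresholding, the area is just the count of foreground pixels, and a crude perimeter is the count of foreground pixels that touch the background. The file name and the brightness threshold below are assumptions.

// Illustration only (the measurements in the post come from Matlab's regionprops
// and bwboundaries): a crude area/perimeter estimate from a thresholded image.
PImage img;

void setup() {
  size(260, 340);
  img = loadImage("butterfly.png");   // hypothetical image from the collection
  if (img == null) return;
  img.loadPixels();
  int area = 0, perimeter = 0;
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      if (!isForeground(x, y)) continue;
      area++;
      // a foreground pixel with any background neighbour lies on the outline
      if (!isForeground(x-1, y) || !isForeground(x+1, y) ||
          !isForeground(x, y-1) || !isForeground(x, y+1)) {
        perimeter++;
      }
    }
  }
  println("area: " + area + "  perimeter: " + perimeter);
}

boolean isForeground(int x, int y) {
  if (x < 0 || y < 0 || x >= img.width || y >= img.height) return false;
  return brightness(img.pixels[y * img.width + x]) < 200;   // simple threshold against the white background
}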

 

 

For 4, 5: to compute the image statistics I also worked on grayscale images. For each image’s average value I computed the arithmetic mean of the pixel intensities, and for the variance I computed the square of their standard deviation.
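In code terms this amounts to one pass over the grayscale pixels for the mean and a second pass for the squared deviations. A minimal Processing sketch of the computation (the actual work was done in Matlab; the image name is a placeholder):

// Sketch of the grayscale mean / variance computation (illustration, not the Matlab code).
void setup() {
  PImage img = loadImage("butterfly.png");   // placeholder image name
  if (img != null) {
    float[] s = grayStats(img);
    println("mean: " + s[0] + "  variance: " + s[1]);
  }
}

float[] grayStats(PImage img) {
  img.loadPixels();
  int n = img.pixels.length;
  float sum = 0;
  for (int i = 0; i < n; i++) sum += brightness(img.pixels[i]);
  float mean = sum / n;
  float sq = 0;
  for (int i = 0; i < n; i++) {
    float d = brightness(img.pixels[i]) - mean;
    sq += d * d;
  }
  float variance = sq / n;   // the square of the standard deviation
  return new float[] { mean, variance };
}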

 

For 1–5 I got a range of numbers according to which I sorted the images linearly. Below is a video capture of a Processing applet that shows the sorted images as a slide show, progressing from the image with the smallest value to the image with the largest value for a given sort. Through keyboard input the user can change the sorting mode, change the slide show speed, pause the slide show, and step forward and backward manually.
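The applet code is not reproduced here, but its control scheme is roughly the following skeleton (a sketch only; the image loading, the precomputed sort index arrays, and the key bindings are assumptions):

// Skeleton of the slide-show applet.
PImage[] imgs;        // the 240 butterfly images, loaded in setup()
int[][] sortOrders;   // one index ordering per sort (perimeter, area, boundaries, mean, variance)
int mode = 0, pos = 0, frameDelay = 15;
boolean paused = false;

void setup() {
  size(260, 340);
  // imgs = ...load the collection...;  sortOrders = ...load or compute the sorts...;
}

void draw() {
  background(255);
  if (imgs == null || imgs.length == 0) return;   // nothing loaded in this skeleton
  image(imgs[sortOrders[mode][pos]], 0, 0);
  if (!paused && frameCount % frameDelay == 0) pos = (pos + 1) % imgs.length;
}

void keyPressed() {
  if (imgs == null || imgs.length == 0) return;
  if (key == 'm') mode = (mode + 1) % sortOrders.length;   // next sorting mode
  if (key == '+') frameDelay = max(1, frameDelay - 5);     // speed up
  if (key == '-') frameDelay += 5;                         // slow down
  if (key == ' ') paused = !paused;                        // pause / resume
  if (paused && keyCode == RIGHT) pos = (pos + 1) % imgs.length;
  if (paused && keyCode == LEFT)  pos = (pos - 1 + imgs.length) % imgs.length;
}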

From the results I noticed that the values do not vary significantly. This, I believe, is also reflected in the slide show: in most cases the reason for transitioning from one butterfly to the next is not observable. My impression is that for the collection under consideration (a family of similar things with similar characteristics), a linear ordering may not produce very meaningful results.

Also, it might have been wrong in the first place to compute the image statistics without taking into account that a significant portion of the image pixels belong to the background. So I re-ran the histogram analysis with a different approach:

(1) I considered all three color channels

(2) I masked the image in order to compute a histogram only on the butterfly shape

(3) I computed the similarity among all pairs of butterflies and got the corresponding sorts.
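As an illustration of steps (1)–(3) (not the actual Matlab analysis): a masked per-channel histogram can be compared with, for example, histogram intersection, and a sort then falls out of ordering those similarity scores against a chosen reference butterfly. The bin count, the near-white background test, and the file names below are assumptions.

// Masked RGB histogram + histogram-intersection similarity (illustration only).
void setup() {
  PImage a = loadImage("butterfly_001.png");   // hypothetical pair from the collection
  PImage b = loadImage("butterfly_002.png");
  if (a != null && b != null) {
    println("similarity: " + intersection(maskedHistogram(a, 16), maskedHistogram(b, 16)));
  }
}

float[] maskedHistogram(PImage img, int bins) {
  img.loadPixels();
  float[] h = new float[3 * bins];
  int counted = 0;
  for (int i = 0; i < img.pixels.length; i++) {
    color c = img.pixels[i];
    if (brightness(c) > 245) continue;              // mask out near-white background pixels
    h[int(red(c))   * bins / 256]            += 1;
    h[int(green(c)) * bins / 256 + bins]     += 1;
    h[int(blue(c))  * bins / 256 + 2 * bins] += 1;
    counted++;
  }
  for (int i = 0; i < h.length; i++) h[i] /= max(counted, 1);   // normalize
  return h;
}

float intersection(float[] a, float[] b) {
  float s = 0;
  for (int i = 0; i < a.length; i++) s += min(a[i], b[i]);
  return s;   // higher means more similar; ordering these scores gives a sort
}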

The results seem more reasonable. I think I should rerun all the previous tests under these new considerations! I am also looking forward to trying a spatiogram (a histogram that represents pixels belonging to edges) in order to sort the collection according to shape variation.
As a further step, I tried to see whether there were any interrelations across the different linear sorts. The Processing applet in the following video shows the butterflies positioned on a circle, each represented by a dot, arranged according to a given sort. The size of each dot is scaled according to the butterfly’s remapped value in that sort. The user can select another sort and observe a line that traces the sequence of the newly selected sort over the arrangement of the previous one. Curved lines were chosen to link the nodes because they give a better visual result for points on the circle that are close to each other.
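A minimal sketch of that circular layout (the values are random stand-ins, and the dot sizes and curve control points are assumptions): dots are spaced evenly on a circle, scaled by the remapped value, and consecutive items of the selected sort are linked with curves pulled toward the centre.

// Illustration of the circular comparison view.
int n = 60;
float[] values = new float[n];
int[] order = new int[n];

void setup() {
  size(600, 600);
  smooth();
  noLoop();
  for (int i = 0; i < n; i++) { values[i] = random(1); order[i] = i; }
  // sort the indices by value (simple selection sort)
  for (int i = 0; i < n - 1; i++) {
    int m = i;
    for (int j = i + 1; j < n; j++) if (values[order[j]] < values[order[m]]) m = j;
    int t = order[i]; order[i] = order[m]; order[m] = t;
  }
}

void draw() {
  background(255);
  float r = 240, cx = width/2, cy = height/2;
  noFill();
  stroke(0, 60);
  for (int k = 0; k < n - 1; k++) {               // link consecutive items of the sort
    PVector a = onCircle(order[k], r, cx, cy);
    PVector b = onCircle(order[k + 1], r, cx, cy);
    bezier(a.x, a.y, cx, cy, cx, cy, b.x, b.y);   // both control points at the centre
  }
  fill(0);
  noStroke();
  for (int i = 0; i < n; i++) {
    PVector p = onCircle(i, r, cx, cy);
    float d = map(values[i], 0, 1, 3, 14);        // dot size from the remapped value
    ellipse(p.x, p.y, d, d);
  }
}

PVector onCircle(int i, float r, float cx, float cy) {
  float ang = TWO_PI * i / n;
  return new PVector(cx + r * cos(ang), cy + r * sin(ang));
}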

Deren Guler_Project1_Float PM

by deren @ 12:11 am

Originally, I wanted to create a visualization using air pollution data from Beijing that my friend had gathered for her project, Float Beijing. As I researched the air pollution reports in China, I came across pages and pages of blog posts and articles about the “fake weather reports” that the government broadcasts. The issue “exploded” a few weeks ago, when the US Embassy and several external groups confronted the government about this and demanded that it report the real information. The Embassy now posts the actual air pollution index hourly on its Twitter feed: http://twitter.com/beijingair

It was more difficult than I thought it would be to find the old data, now that there has been this intervention. Several sites linked to the Air Quality Monitoring and Forecasting in China site, which has a pretty extensive archive, but there is a temporary error when accessing it; suspicious. I was able to find a monthly report from the Ministry of Environmental Protection of China that reported daily averages for the past month, and I used this data set in comparison with the US Embassy feed for the same month.

I wanted to get away from the “weather map” look, because as I looked at those maps I felt they were just pretty colors over a map and I wasn’t learning very much from them. I wanted to make something that illustrated what air pollution was doing to the city, according to both the government reports and the actual data.

 

I started with the flow-field sketch from The Nature of Code to create a flow field of random particles flying across the city. The boids (the flying circles pictured above) are of varying size and color. The program cycles through the data and creates a set of circles for each new index reading. The data from the MEP is PM 10, or particulate matter 10, which is what they were reporting. These particles are larger and do not really settle; they mostly float around in the air, can be filtered fairly easily, and do not lead to extremely serious health problems. I represented them with the larger circles.
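The flow field itself follows the Nature of Code example and is not reproduced here; the per-reading spawning idea looks roughly like this sketch (particle counts, sizes, and the lazy random drift that stands in for the real flow field are all assumptions):

// Sketch of the per-day spawning idea only.
ArrayList<PVector> particles = new ArrayList<PVector>();
ArrayList<Float> sizes = new ArrayList<Float>();
int[] readings = { 84, 52, 60, 149, 269 };   // a few sample PM10 values from the MEP data
int day = 0;

void setup() {
  size(640, 240);
  smooth();
}

void draw() {
  background(235);
  // every second, take the next reading and spawn a batch of circles for it
  if (frameCount % 60 == 0 && day < readings.length) {
    int count = int(map(readings[day], 0, 300, 5, 60));   // more particles on dirtier days
    for (int i = 0; i < count; i++) {
      particles.add(new PVector(random(width), random(height * 0.5)));  // sky portion only
      sizes.add(random(8, 16));                                         // PM10: larger circles
    }
    day++;
  }
  noStroke();
  fill(120, 120, 160, 120);
  for (int i = 0; i < particles.size(); i++) {
    PVector p = particles.get(i);
    p.x += random(-1, 1);   // drift; the real sketch steers these with a flow field
    p.y += random(-1, 1);
    ellipse(p.x, p.y, sizes.get(i), sizes.get(i));
  }
}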

The data from the US Embassy is PM 2.5, which is the really bad stuff that the government was not reporting at all. These are the smaller particles that settle throughout the city, creep into your body, and can lead to cancer and other health problems. The PM2.5 circles are 1/4 the size and flow around the entire image of the city, while the PM10 circles float around the sky portion.

In both cases the colors follow the respective Chinese and US API color codes. For example, the US uses light green to signify that the air is healthy (PM 2.5 < 50), while the Chinese scale uses light blue for PM10 < 100. The articles about the controversy explained that not only does the Chinese government use its own API scale and color code, its standards are six times lower than those of the WHO. Additionally, the weather station that the MEP reports from has been moved three times in the past two years, while the station the Embassy uses is right downtown.
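For reference, the US-side color lookup might look roughly like the following (the breakpoints and RGB values follow the standard US EPA AQI categories; the exact palette used in the piece may differ):

// Approximate US EPA AQI category colors for the PM2.5 circles (assumption, not the sketch's actual palette).
color usAqiColor(float aqi) {
  if (aqi <= 50)  return color(0, 228, 0);      // green: good
  if (aqi <= 100) return color(255, 255, 0);    // yellow: moderate
  if (aqi <= 150) return color(255, 126, 0);    // orange: unhealthy for sensitive groups
  if (aqi <= 200) return color(255, 0, 0);      // red: unhealthy
  if (aqi <= 300) return color(143, 63, 151);   // purple: very unhealthy
  return color(126, 0, 35);                     // maroon: hazardous
}

void setup() {
  println(hex(usAqiColor(30)));   // prints the "good" green for a clean day
}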

I then decided I wanted to show what was happening to the city over time, so I created a blur function that blurs the image by a factor of the daily pollution. This seemed to look better visually, so I created a version of the image blurring without any flying colored circles.

After a while it became hard to decide whether the visualization was effective, because I had been reading so much about air pollution index formats and what they are supposed to mean that I became a bad test subject. My goal was to create something that you can understand without knowing very much, so after some feedback from my labmates I decided to keep it simple:

 

And here is a short video, showing the different versions of the program cycling through the month.

 

Beijing Air Pollution Visualization from Deren Guler on Vimeo.

code for version 3:

 
 
PImage bg;
PImage bgblur;
float blurval=0;
float blurval2 =0;
int days;
//data from embassy
int realpoll [] = {
  302, 171, 125, 161, 30, 29, 152, 163, 214, 184, 242, 206, 155, 169, 211, 42, 57, 500, 464, 398, 94, 94, 94, 
  171, 232, 184, 55, 385, 325, 241
}; 
int daynumber [] = {
  1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 , 24, 25, 26, 27, 28, 29, 30
}; 
 
//data from MEP
float liepoll [] = { 
  84, 52, 60, 26, 30, 92, 66, 62, 74, 67, 78, 45, 49, 149, 23, 21, 131, 269, 193, 111, 106, 60, 70, 
  79, 86, 25, 161, 102, 89, 65
};
 
void setup() {
  size(1280, 337);
  bg = loadImage("skylinecropped.jpg");
  bgblur = loadImage("skylinecropped.jpg");
  smooth();
}
 
void draw() {
  delay(500);

  background(bg);
  days = frameCount % 30;

  if (realpoll[days] < 100) {    // this is a clear day
    blurval = 2;
  }
  else {
    blurval = map(realpoll[days], 100, 500, 0, 10); // blurrier the higher the pollution value
  }
  if (liepoll[days] < 50) {      // this is a clear day
    blurval2 = 2;
  }
  else {
    blurval2 = map(liepoll[days], 50, 300, 0, 10);
  }

  println(blurval);
  println(blurval2);
  image(bgblur, 0, 0, width/2, height);
  filter(BLUR, blurval);
  image(bgblur, 0, 0, width/2, height);
  filter(BLUR, blurval2);

  textSize(14);
  text(" Air pollution index (PM 10) report from Ministry of Environmental Protection of China", 30, 270);
  text(" Air pollution quality (PM 2.5) report from the US Embassy", 800, 270);

  textSize(30);
  text("DAY " + daynumber[days], 600, 320);
  //text("DAY ___", 630, 330);

  //  println(blurval);
  delay(500);
}
void mousePressed() {
  noLoop();
}
 
void mouseReleased() {
  loop();
}

VarvaraToulkeridou-Sort.a.fly-Sketch

by varvara @ 8:24 am 2 February 2012

In how many different ways can you sort a collection? From how many different perspectives can you view an assembly of things?

A collection is a group of things related to each other in some identifiable way. The relationship might be through a topic, a place, a person, a type of object, etc. Even though there is a specific motivation for gathering in the first place, what makes a collection dynamic is that one can reorganize the same data in alternative ways and make different sense of it. In my data visualization project I would like to explore some of those alternative ways, in the context of a butterfly collection.

The motivation for this project came from the ‘Pink Project’ by Portia Munson. The Pink Project comprised a series of still-life installations created out of the artist’s collection of discarded objects, all of which are varying shades of pink and are objects used by women. The discarded items assume new meaning when seen through the shared color and its connotations with gender, as well as through the way they are organized in space.

Portia Munson, Pink Project, 1994

 

The data set I am using for the project comes from a collection of images of butterflies provided by an iPhone app made by Hunter Research and Technology. The collection is composed of 240 images. Each butterfly is shown in plan view on a white background; the images I extracted are 260×340 pixels in size. The only data accompanying the images is the name of each butterfly.

Butterfly Collection, by Hunter Research and Technology
My objective here is to figure out different ways to sort the collection of butterflies. To do that I am experimenting with image processing algorithms as a way to extract data. Some first thoughts are to extract information about the size, the outline, and the color of each butterfly. The outline can give me the length of each butterfly’s perimeter as well as information about its curvature. Identifying the blobs on the butterfly wings provides a way to sort them by the number of markings on their wings. Experimenting with color can give me the mean color of each butterfly, the color variation, etc. Extracting data via image processing for each butterfly will let me build a database of numeric values according to which I can sort the collection in different ways.
I would also like to find a way to visualize the whole collection and enable the user to pick the sorting mode, as well as to select two butterflies and get all the intermediate images according to that ordering.
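Mechanically, each sort is just an index ordering over one column of that database, and picking two butterflies reduces to slicing the ordered list between their positions. A minimal sketch of that mechanism (the values below are placeholders for a real column of extracted measures):

// Sort indices by one extracted value, then take everything that falls
// between two chosen butterflies in that ordering.
void setup() {
  float[] vals = { 0.8, 0.1, 0.5, 0.3, 0.9 };   // placeholder values
  int[] order = sortedIndices(vals);
  println(between(order, 1, 4));   // all butterflies between #1 and #4 in this sort
}

int[] sortedIndices(float[] vals) {
  int n = vals.length;
  int[] order = new int[n];
  for (int i = 0; i < n; i++) order[i] = i;
  for (int i = 0; i < n - 1; i++) {             // simple exchange sort of indices
    for (int j = i + 1; j < n; j++) {
      if (vals[order[j]] < vals[order[i]]) { int t = order[i]; order[i] = order[j]; order[j] = t; }
    }
  }
  return order;
}

ArrayList<Integer> between(int[] order, int a, int b) {
  ArrayList<Integer> out = new ArrayList<Integer>();
  int pa = 0, pb = 0;
  for (int k = 0; k < order.length; k++) {      // assumes both ids are present
    if (order[k] == a) pa = k;
    if (order[k] == b) pb = k;
  }
  for (int k = min(pa, pb); k <= max(pa, pb); k++) out.add(order[k]);
  return out;
}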
Below are some first attempts in Matlab using the ‘regionprops’ and ‘bwboundaries’ functions to extract the outline and the blobs on the wing surfaces.

Alex Wolfe | Gauntlet | Face OSC Monster Mash

by a.wolfe @ 8:30 am 26 January 2012

 

For my project I wrote a monster-mash face generator. It substitutes different images in quick succession to create a not-so-subtle animation that corresponds to the user’s facial movements. Images to fuel the animation can be scraped nicely by using Face OSC to parse out different facial features from any image that even vaguely resembles a face. For this particular incarnation, I gathered the images by hand from my old sketches and DeviantArt.
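FaceOSC talks to Processing over OSC; a minimal receiving sketch using the oscP5 library (assuming FaceOSC’s default port 8338 and its /gesture/mouth/height message, with placeholder image names and an assumed threshold) swaps frames as the mouth value changes:

// Minimal FaceOSC listener; the actual piece cycles through many scraped face parts.
import oscP5.*;

OscP5 osc;
PImage closed, open;
float mouthHeight = 0;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 8338);                          // FaceOSC's default output port
  osc.plug(this, "mouth", "/gesture/mouth/height");     // route mouth-height messages here
  closed = loadImage("face_closed.png");                // placeholder frames
  open = loadImage("face_open.png");
}

public void mouth(float h) {
  mouthHeight = h;
}

void draw() {
  background(255);
  if (closed == null || open == null) return;           // placeholder images not found
  image(mouthHeight > 3 ? open : closed, 0, 0);          // crude open-mouth threshold (assumed)
}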

 

Alex Wolfe | Gauntlet | Frieder Nake

by a.wolfe @ 4:38 am

import processing.opengl.*;

/*Alex Wolfe | Gauntlet*/

//Globals, adjust to edit nitpicky params
//Circles
int numCirc = 4;
int circMax = 100;
int circMin = 10;
//GridLines
int minRowSpace = 50;
int maxRowSpace = 100;
int minColSpace = 30;
int maxColSpace;
int minCols = 8;
int maxCols = 20;
int numrows, numcols;
int linewaver = 8;
int[] colSpace;
boolean[][] gridCheck;

//Lines
int minLines = 10;
int maxLines = 20;

RowLine[] grid;

void setup() {
  size(600, 600, P2D);
  background(255);
  smooth();
  strokeWeight(0.5);
  strokeCap(ROUND);
  noFill();
  stroke(0);

  //init colSpace grid
  int j =0;
  numcols = int(random(minCols, maxCols));
  maxColSpace = width/numcols;
  colSpace = new int[numcols];
  for (int i=0; i<numcols-1; i++) {
    colSpace[i] = j;
    if (j<width) {
      j = j+int(random(minColSpace, maxColSpace));
    }
    else {
      j = j+int(random(min(minColSpace, width-j), max(minColSpace, width-j)));
    }
  }
  colSpace[numcols-1] = width;

  /*********init***************/
  initGrid(); 
  drawCircles();

  for (int i=0; i<grid.length; i++) {
    grid[i].drawLine();
  }

  gridCheck = new boolean[numrows][numcols];
  drawLines();
  /************DeBug************/
  println("numrows = " + numrows + " numcols =" + numcols);
  println("colSpace array");
  for (int k = 0; k<colSpace.length; k++)
    print(colSpace[k] + " ");
  println();
  println();
  /* for(int i=0; i<grid.length; i++){
   for( j=0; j<numcols; j++){
   print("(" + grid[i].segPoints[j].x + ", " + grid[i].segPoints[j].y + " ), ");
   }
   println(" /" + i);*/
  /****************************/
}

void mouseClicked() {
  background(255);
  initGrid(); 
  numCirc = int(random(3, 7));
  drawCircles();

  for (int i=0; i<grid.length; i++) {
    grid[i].drawLine();
  }
  drawLines();
}

void draw() {
}

void initGrid() {
  int y = int(random(minRowSpace, maxRowSpace));
  numrows = int(random(8, 12));
  grid = new RowLine[numrows];

  grid[0] = new RowLine(0);
  for (int i=1; i<numrows-1; i++) {
    grid[i] = new RowLine(y);
    if (y<height) {
      y = y+int(random(minRowSpace, maxRowSpace));
    }
    else {
      y = y+int(random(min(minRowSpace, height-y), max(maxColSpace, height-y)));
    }
  }
  grid[numrows-1] = new RowLine(height);
}

void drawLines() {
  float flip;
  for (int col=0; col<numcols-1; col++) {
    for (int row=0; row<numrows-1; row++) {
      flip = random(14);
      if (flip < 4)
        drawVertLines(row, col);
      else if (flip < 8)
        drawCrazyLines(row, col);
    }
  }
}

void drawCrazyLines(int row, int col) {
  int numLines = int(random(minLines, maxLines));
  float p1x = grid[row].segPoints[col].x;
  float p1y = grid[row].segPoints[col].y;
  float p2x = grid[row].segPoints[col+1].x;
  float p2y = grid[row].segPoints[col+1].y;
  float p3x = grid[row+1].segPoints[col].x;
  float p3y = grid[row+1].segPoints[col].y;
  float p4x = grid[row+1].segPoints[col+1].x;
  float p4y = grid[row+1].segPoints[col+1].y;

  float slope1 = (p2y-p1y)/(p2x-p1x);
  float slope2 = (p3y-p4y)/(p3x-p4x);

  for (int i=0; i<numLines; i++) {
    float x1 = random(p1x, p2x);
    float x2 = random(p1x, p2x);
    stroke(0);
    line(x1, (slope1*(x1-p1x)+p1y), x2, slope2*(x2-p3x)+p3y);
  }
}
void drawVertLines(int row, int col) {
  int numLines = int(random(minLines, maxLines));

  float p1x = grid[row].segPoints[col].x;
  float p1y = grid[row].segPoints[col].y;
  float p2x = grid[row].segPoints[col+1].x;
  float p2y = grid[row].segPoints[col+1].y;
  float p3x = grid[row+1].segPoints[col].x;
  float p3y = grid[row+1].segPoints[col].y;
  float p4x = grid[row+1].segPoints[col+1].x;
  float p4y = grid[row+1].segPoints[col+1].y;

  float slope1 = (p2y-p1y)/(p2x-p1x);
  float slope2 = (p3y-p4y)/(p3x-p4x);

  for (int i=0; i<numLines; i++) {
    float x= random(p1x, p2x);
    stroke(0);
    line(x, (slope1*(x-p1x)+p1y), x, slope2*(x-p3x)+p3y);
  }
}

Circle[] circles;
boolean good = false;

void drawCircles() {
  circles = new Circle[numCirc];
  for (int i=0; i< numCirc; i++) {
    good = false;
    while (circles[i] == null) {
      circles[i] = createCircle(i);
    }
  }    
  for (int j=0; j<circles.length; j++)
    ellipse(circles[j].x, circles[j].y, circles[j].rad, circles[j].rad);
}
Circle createCircle(int i) {
  boolean fail = false;
  float dim = random(circMin, circMax);
  Circle test = new Circle(random(circMax, width-circMax), random(circMax, height-circMax), dim);
  //circles[i] = test;
  for (int j=0; j<i; j++) {
    if (circleHitTest(test, circles[j])) {
      return null;
    }
  }
  return test;
}

boolean circleHitTest(Circle c1, Circle c2) {
  float hitRad = c1.rad + c2.rad;
  if ( dist(c1.x, c1.y, c2.x, c2.y) < hitRad)
    return true;
  else
    return false;
}


class RowLine{
  PVector[] segPoints;
  int rowStart;
 // RowLine prev, next;
  
  public RowLine(int rowStart1){
    segPoints = new PVector[numcols];
    rowStart=rowStart1;
    
    if( (rowStart == 0) || (rowStart == height))
      initEdgeLine();
    else
      initLine();
  }
  
  void initLine(){
    int x,y;
    y= rowStart;
    for(int i=0; i<numcols; i++){
      segPoints[i] = new PVector(colSpace[i], y);
      y= y + int(random(-linewaver, linewaver));
    }
  }
  
  void initEdgeLine(){
    int x,y;
    y= rowStart;
    for(int i=0; i<numcols; i++){
      segPoints[i] = new PVector(colSpace[i], y);
    }
  }
  
  void drawLine(){
    stroke(0);
    //smooth();
    strokeWeight(0.5);
    strokeJoin(MITER);
    beginShape();
    for(int i=0; i<segPoints.length; i++)
      vertex(segPoints[i].x, segPoints[i].y);
    endShape();
   // for(int i=0; i<segPoints.length-1; i++){
      //line(segPoints[i].x, segPoints[i].y, segPoints[i+1].x, segPoints[i+1].y);
   // }
  }
}

Alex Wolfe | Looking Outwards | Gauntlet

by a.wolfe @ 4:37 am

Doggleganger

Doggleganger is an app developed by the Pedigree Adoption Drive and NEC. It uses a simple face recognition system to match dogs in need of a home to the humans who use it, building on the idea that people tend to look like their dogs. It’s a ridiculously fun and clever way to help dogs that would otherwise die in shelters. Unfortunately, you currently have to take a quick jaunt to New Zealand to pick up your new soul mate, but hey, that’s kind of a win-win.

The Artist is Present

Developed by pippinbar games, The Artist Is Present is an old-school Sierra-style recreation of the famous performance piece of the same name by Marina Abramovic. The user can relive the experience, including the incredibly long and frustrating line into the museum.

“Have the experience only a lucky few have ever had! Stare into Marina Abramovic’s eyes! Make of it what you will! Just like art!”

 

Andreas Schlegel | Procedural Drawings

Andreas uses a CNC machine to produce these very simple drawings, which are quite beautiful.

 

Luke Loeffler – 13/9/65

by luke @ 3:32 pm 24 January 2012

Luci Laffitte – 13-9-65

by luci @ 10:25 am

Below is a jpg captured from my interpretation of the 13-9-65 artwork.

 

This is the program running in real-time:

 

While working on this piece I went through many different methods of creating horizontal lines until I found one that was accurate and appealing. Then I proceeded to add in the vertical lines and ellipses. Looking back, I wish I had considered the overall artwork earlier, because I found it difficult to work the vertical lines into the method I had chosen.

Overall, this project was a good jump back into coding.

 

 

Here is the code


//vectors and globals

////////////////////////////////////////////////////////

PVector v1, v2, v3, v4, v5, v6, v7, v8;
int sizex=600;
int sizey=600;
float t;

 

//setup
////////////////////////////////////////////////////////
void setup() {
  size(sizex, sizey);
  smooth();
  background(255);
  stroke(0);
  strokeWeight(1);
  frameRate(.5);
}

//draw
////////////////////////////////////////////////////////
void draw() {
  drawhorizontals();
}

 

 

void drawhorizontals() {

  //draw horizontal lines
  ////////////////////////////////////////////////////////
  fill(255);
  noStroke();
  rect(0, 0, width, height);

  for (float i=0; i<height; i = i + random(50, 120)) {

    float a = 0;
    float var1 = 0;
    float var2 = 40;

    v1 = new PVector(0, a + random(var1, var2));
    v2 = new PVector(.2*width, a + random(var1, var2));
    v3 = new PVector(.4*width, a + random(var1, var2));
    v4 = new PVector(.6*width, a + random(var1, var2));
    v5 = new PVector(.8*width, a + random(var1, var2));
    v6 = new PVector(width, a + random(var1, var2));

    stroke(1);
    line(v1.x, v1.y+i, v2.x, v2.y+i);
    line(v2.x, v2.y+i, v3.x, v3.y+i);
    line(v3.x, v3.y+i, v4.x, v4.y+i);
    line(v4.x, v4.y+i, v5.x, v5.y+i);
    line(v5.x, v5.y+i, v6.x, v6.y+i);

    //draw circles
    ////////////////////////////////////////////////////////
    ellipseMode(CENTER);
    noFill();

    int numCircles = int(random(4, 12));

    for (float k=0; k < numCircles; k = k + random(0, 100)) {
      int radius = int(random(10, 70));
      ellipse(random(radius, width-radius), random(radius, width-radius), radius, radius);

      float xc = random(v1.x + radius, (v1.x + width - radius));
    }

    //draw crossings
    ////////////////////////////////////////////////////////
    int vary = int(random(-10, 10));
    int vary2 = int(random(0, 10));
    int var3 = int(random(0, 20));

    line(v2.x + vary, v2.y+i + var3, v2.x + vary2, v3.y+i + vary2);
    line(v1.x + vary, v1.y+i + var3, v1.x + vary2, v2.y+i + vary2);
    line(v3.x + vary, v3.y+i + var3, v3.x + vary2, v4.y+i + vary2);
    line(v4.x + vary, v4.y+i + var3, v4.x + vary2, v5.y+i + vary2);
    line(v5.x + vary, v5.y+i + var3, v5.x + vary2, v6.y+i + vary2);
  }
}

void mousePressed() {
  saveFrame("luci13-9-65.jpg");
}

Luci Laffitte – FaceOSC

by luci @ 9:40 am

The Muppet-Puppet
http://www.youtube.com/watch?v=o5BdglycvZ4

For my FaceOSC project I made a digital Kermit puppet. The head movement of the puppet is controlled by the head of the user, as are the facial features. Furthermore, the puppet will “sing” once the user opens their mouth.
If I were to go further with this project I would want to enhance the puppet (3D modeling, better movement of parts?) or maybe even make it karaoke style with rolling text on the side.

 

13/9/65 Rules

by eli @ 8:57 am

http://dada.compart-bremen.de/node/2875
