Category Archives: Uncategorized

dantasse

28 Apr 2015

OK: pick an address, and this app will show you all the roads, green space, and buildings nearby, and what percent of the area each one covers.
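The arithmetic behind those percentages can be sketched in a few lines (my own illustration, not the app's actual code): given the footprint polygons for one category already clipped to the query box, the category's share is just summed polygon area over box area.

```python
# Illustrative sketch, not the app's code: estimate what percent of a query
# box is covered by one category (roads, green space, or buildings), given
# that category's footprint polygons already clipped to the box.

def polygon_area(pts):
    """Shoelace formula; pts is a list of (x, y) vertices."""
    total = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def percent_of_box(polygons, box_w, box_h):
    covered = sum(polygon_area(p) for p in polygons)
    return 100.0 * covered / (box_w * box_h)

# A unit square inside a 2x2 box covers 25% of it.
print(percent_of_box([[(0, 0), (1, 0), (1, 1), (0, 1)]], 2, 2))  # → 25.0
```

Note the simplification: overlapping polygons would be double-counted, so real footprints would need to be merged first.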

[screenshot]

What is neat: I’ve lived in two places in Pittsburgh. I had no idea that my current place in Shadyside has way more buildings-per-square-foot than my old place in Bloomfield. In a sense, my place in Seattle was just as dense as Shadyside. And my parents’ house in Cleveland is, now quantifiably, very green.

Hey, I’m moving to San Francisco for the summer. Where should I live?

[screenshot]

Neat. All the neighborhoods I’m looking at are pretty similar… except it looks like I hit a park in the random square of the Western Addition that I was looking at, and some kind of big roads in Hayes Valley. But that’d be a thing to look for: is the Western Addition a little bit less dense than some others? Is Hayes Valley plagued by roads?

[screenshot]

Huh. Panning around Hayes Valley, I keep hitting road densities in the high 20s or low 30s, while other neighborhoods are ~10 points lower. It seems there are more big roads there. That’s a good thing to keep in mind.

Anyway, there’s a ton of stuff I’d like to do with this in the future:

  • let you share and compare different places
  • pre-populate with a lot of places from around the world
  • show you where the closest place that has the same proportions as yours is
  • let you draw an arbitrary polygon instead of just a box
  • tweak the green/road/building finding algorithms so they work better, especially in other countries
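The "closest place with the same proportions" idea could be as simple as a nearest-neighbor search over (road%, green%, building%) vectors. A sketch of that, with made-up place names and numbers:

```python
import math

# Sketch of the "closest place with the same proportions" feature: treat each
# place as a (road%, green%, building%) vector and pick the nearest one by
# Euclidean distance. The percentages below are invented for illustration.

def closest_match(mine, places):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(places, key=lambda name: dist(mine, places[name]))

places = {
    "Shadyside":    (22.0, 18.0, 35.0),
    "Bloomfield":   (20.0, 15.0, 28.0),
    "Hayes Valley": (30.0, 10.0, 33.0),
}
print(closest_match((21.0, 17.0, 34.0), places))  # → Shadyside
```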

Thomas Langerak – Capstone Final

GitHub: https://github.com/tlangerak/Chess_Encryption

[screenshot]

I have made a password alternative for an online bitcoin wallet (though it could be used for anything). By making the correct chess moves in the correct order, one unlocks the characters of the password one by one (and the password is automatically copied to the clipboard). When a wrong move is made, one notices nothing, but a wrong character enters the password, making it unusable.
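The scheme can be sketched in a few lines (a simplification for illustration, not the Processing implementation; the alphabet matches the one in the listing below):

```python
import random

# Sketch of the unlock logic: each correct move in sequence reveals the real
# password character; a wrong move silently substitutes a random decoy, so an
# attacker gets no feedback but ends up with an unusable password.
ALPHABET = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def build_password(moves_played, secret_moves, secret_chars):
    out = []
    for played, expected, ch in zip(moves_played, secret_moves, secret_chars):
        out.append(ch if played == expected else random.choice(ALPHABET))
    return "".join(out)

print(build_password(["b1c3", "g1f3"], ["b1c3", "g1f3"], ["h", "i"]))  # → hi
```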

There are some bugs:

  • moves that result in a check usually give an error
  • more than 30 moves gives errors
  • very rarely, the board swaps pieces

I started out trying to make a tangible interface for this with a Raspberry Pi and camera. Unfortunately I did not have enough time to achieve this and had to cancel those plans. Maybe I will continue it in the future.

See GitHub for the rest:

import processing.net.*; 
import java.awt.datatransfer.*;
import java.awt.Toolkit;
import java.awt.event.KeyEvent;

ClipHelper cp = new ClipHelper(); //to copy to clipboard
boolean[] keys = new boolean[526];
boolean checkKey(String k)
{
  for (int i = 0; i < keys.length; i++)
    if (KeyEvent.getKeyText(i).toLowerCase().equals(k.toLowerCase())) return keys[i];  
  return false;
}

Client myClient; //socket with python

PImage R; //all images for pieces
PImage N;
PImage B;
PImage K;
PImage Q;
PImage P;
PImage r;
PImage n;
PImage b;
PImage k;
PImage q;
PImage p;
PImage img;

int t=0; //to draw rows/columns
int ro=-1;//^^
int co=0;//^^

String dataIn; // Data received from engine
String position[]=new String[64];//String to keep track of positions
int imageWidth; //for positioning pieces
int imageHeight;//^^

int c1; //to translate mousepress to column/row
int c2;
int r1;
int r2;
String cs1="";
String cs2="";
String rs1="";
String rs2="";//^^
int toggle=0;//switch between first and second mousepress
String saved = "";//result of move eg. e2e3
String poem[]; //string of letters for password
String result[]; //actual password given (this can differ from the real password)
String password[]= {
  "b1c3", "g1f3", "e2e3", "f1d3", "e1g1", "d2c3", "f3e5", "h2h3", "c2d3", "d3d4", "f2f3", "f1e1" //the moves that are needed for correct password
};
String alphabet= "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; //to generate random letter
String pass ="";//the correct password
int moveTracker = 0;//to track position in array for moves

void setup() {
  open("C:/Users/Administrator/Desktop/Chess_Encryption_v3/sunfish.py");//start python file with engine
  size (800, 800); //set GUI
  background (0);
  noStroke();
  smooth();
  poem = split(pass, " "); //create string of letters from password
  result=new String[poem.length];//setup string with correct length
  myClient = new Client(this, "", 8080);//start connection with chess engine
  myClient.write("connection established");//check connection
}

void draw() { 
  drawBoard();
  dataReceive();
  drawPieces();
}

void drawBoard() { // draw the board(Doh)
  for (int x=0; x<= (width); x+=width/8) { //8 steps width
    t++;
    for (int y=0; y<= (height); y+=height/4) {//4 step height
      if (t%2==1) { //alternate blackwhite with whiteblack drawing depending on row
        fill(255);
        rect (x, y, width/8, height/8);
        fill(150);
        rect (x, y+height/8, width/8, height/8);
      } else {
        fill(150);
        rect (x, y, width/8, height/8);
        fill(255);
        rect (x, y+height/8, width/8, height/8);
      }
    }
  } 
  t=0;
}

void drawPieces() { //draw pieces (doh)
  co=0;
  ro=-1;
  for (int i=0; i<position.length; i++) { //walk all 64 squares
    if (i%8==0) { ro++; co=0; } //start a new row every 8 squares
    img = pieceImage(position[i]); //look up the image for this square's letter
    if (img != null) image(img, co*width/8, ro*height/8, imageWidth, imageHeight);
    co++;
  }
}

PImage pieceImage(String s) { //map a board letter to the matching piece image
  if (s == null) return null;
  if (s.equals("R")) return R;
  if (s.equals("N")) return N;
  if (s.equals("B")) return B;
  if (s.equals("K")) return K;
  if (s.equals("Q")) return Q;
  if (s.equals("P")) return P;
  if (s.equals("r")) return r;
  if (s.equals("n")) return n;
  if (s.equals("b")) return b;
  if (s.equals("k")) return k;
  if (s.equals("q")) return q;
  if (s.equals("p")) return p;
  return null; //empty square
}

void dataReceive() {
  if (myClient.available() > 0) { //if available
    dataIn = myClient.readString(); //read
    if (dataIn.length()>75) { //if it is of correct length (not the "Not a Valid input" thingy)
      for (int i=0; i<position.length; i++) {
        position[i] = str(dataIn.charAt(i)); //store each square's piece letter
      }
    }
  }
}

void mousePressed() { //translate a mousepress to a column/row pair
  for (int i=0; i<8; i++) {
    if (toggle==0) { //first click: origin square
      if (mouseX>=width/8*i && mouseX<width/8*(i+1)) c1=i;
      if (mouseY>=height/8*i && mouseY<height/8*(i+1)) r1=i;
    } else { //second click: destination square
      if (mouseX>=width/8*i && mouseX<width/8*(i+1)) c2=i;
      if (mouseY>=height/8*i && mouseY<height/8*(i+1)) r2=i;
    }
  }
  //building the move string (e.g. "e2e3"), checking it against password[moveTracker],
  //and sending it to the engine are in the full listing on GitHub
}

rlciavar

21 Apr 2015

Since my initial proposal, I have made a lot of progress on my project. The actual project goal has not changed much, however I have scaled back some of the technical components to be more realistic given the time frame.

A quick overview of what I’m doing: I’m making an animatronic avatar for Skype conversations. It uses the video feed from your Skype conversation and computer vision to let your friend control a robotic avatar. A Skype bot?

Initially I had hoped to control the eye position, eyebrows, and mouth width and height. I learned that the eye tracking required to map eye movements would be very difficult and would require specific conditions (lighting) in the room to work well. Therefore I have scaled the movements down to the eyebrows and mouth. To increase emotiveness I have also added ears that are controlled by the servos in the eyebrows. This gives me more motion and expressive quality without the cost of powering and controlling more servos.

So far I have mostly completed the underlying mechanisms and skeleton for both robots. This is what they currently look like. They will eventually be covered in a more friendly material to hide the messy circuits inside.

[photo]

Here are some sketches of what they might look like with the friendly covering. I purposely made them nonhuman to avoid any uncanny-valley effects, which some people find off-putting.

[photo]

Right now I’m working on getting the hardware and software to work. Here’s a video of me testing the example code on my servo shield. (The thresholds were not set up for my mechanisms :/) (Sorry it’s in portrait, not landscape.)

My plan on the software side involves gluing a lot of different applications together. It’ll go something like this:

SKYPE >> PIXEL GRABBER SOFTWARE >> OF FACEOSC >> OSC >> ARDUINO >> SERVOS
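The last hop of that chain is mostly range mapping: scale a raw tracker value into a servo angle. A sketch of that step (the 1–9 input range is my assumed ballpark for FaceOSC's mouth-height output, not a measured value):

```python
# Sketch of the FaceOSC -> servo step: clamp a raw tracker value and rescale
# it to a 0-180 degree servo angle. The 1-9 input range is an assumption
# about typical FaceOSC mouth-height values, not a measured threshold.
def to_servo_angle(value, in_lo=1.0, in_hi=9.0, out_lo=0, out_hi=180):
    value = max(in_lo, min(in_hi, value))  # clamp noisy tracker output
    t = (value - in_lo) / (in_hi - in_lo)
    return int(round(out_lo + t * (out_hi - out_lo)))

print(to_servo_angle(5.0))  # → 90
```

In practice the in_lo/in_hi thresholds would be tuned per mechanism, which is exactly what the servo-shield video above ran into.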

I’ve found a 3rd party pixel grabber software that successfully sends video to OF.

[screenshot]

To help with debugging in OSC, I’ve created two OF apps that simulate the FaceOSC output (an OSC sender) using GUI sliders, and the Arduino response (an OSC receiver) by moving a little face I drew. So far everything works well.

[screenshot]

Now I’m working on sending OSC messages to an Arduino using OSCuino (thanks David Newbury!) and the OSC library on Arduino. So far I’ve successfully run example code written by David and am moving on to scaling up the Arduino output to match the outputs on the face.

Once that’s working I can move onto swapping out the slider output for real FaceOSC output.

jackkoo

21 Apr 2015

I’m finished with most of the technical work required to make my parametric dress. I have finished modeling my character, but I still need to rig it. Once I rig my character I will start trying out different things with the dress. For different shapes, I’ll be blending between models; that way I can sculpt many different shapes, see what works well, and iterate quickly.
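Blending between sculpted shapes is, at bottom, per-vertex linear interpolation. A hypothetical sketch of the idea outside any 3D package:

```python
# Hypothetical blendshape sketch (not any DCC tool's implementation): a
# blended vertex is the base vertex plus each target shape's weighted
# offset from the base.
def blend(base, targets, weights):
    """base/targets: lists of (x, y, z) vertices; weights: one float per target."""
    out = []
    for i, (bx, by, bz) in enumerate(base):
        x, y, z = bx, by, bz
        for target, w in zip(targets, weights):
            tx, ty, tz = target[i]
            x += w * (tx - bx)  # accumulate each target's offset
            y += w * (ty - by)
            z += w * (tz - bz)
        out.append((x, y, z))
    return out

print(blend([(0, 0, 0)], [[(2, 0, 0)]], [0.5]))  # → [(1.0, 0.0, 0.0)]
```

Sweeping the weights between 0 and 1 is what lets you iterate through many dress shapes from a handful of sculpted extremes.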

Finished tasks

Extract Animation Data.
Cloth Simulation for Dress.
Material Creation.
Model Character.
Blendshapes.
Parametric polka dots.

Remaining tasks

Rig Character.
Animate Character.
Design Dress.
Iterate Dress Designs.
Iterate Dress Designs.
Iterate Dress Designs.

demo from Jack Koo on Vimeo.

mileshiroo

21 Apr 2015

I’ve discovered that my personalized dictionary files are specific to Android 4.4 KitKat, and can only be decoded according to the specs of the source code for that version. Confusingly, there are multiple versions/encodings of binary dictionaries. I’ve determined that the one I took from my phone is a “version 4” which is separated into a series of files.

Words and frequencies are stored as nodes in an array of bytes. Each node contains the address of its child in the trie, the address of its parent, the characters of the word, and the word’s frequency. The difficult work that remains is to write the code that loops through the byte array and parses node data.
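The shape of that remaining work can be illustrated with a toy format (one I invented for this sketch; it is NOT the real Android v4 layout): walk node records in a byte array, following child addresses and accumulating characters until a node with a frequency marks a complete word.

```python
import struct

# Toy illustration only -- NOT the real Android v4 dictionary layout. Each
# fake 4-byte node packs a 2-byte child offset (0 = no child), a 1-byte
# character, and a 1-byte frequency (0 = not a complete word).
def parse_words(buf):
    words = []
    def walk(offset, prefix):
        child, ch, freq = struct.unpack_from(">HcB", buf, offset)
        prefix += ch.decode("ascii")
        if freq:
            words.append((prefix, freq))  # terminal node: record word + frequency
        if child:
            walk(child, prefix)
    walk(0, "")
    return words

# Two chained nodes spelling "hi" with frequency 7.
buf = struct.pack(">HcB", 4, b"h", 0) + struct.pack(">HcB", 0, b"i", 7)
print(parse_words(buf))  # → [('hi', 7)]
```

The real format additionally has sibling links (a trie node can have many children) and variable-length records, which is what makes the parser nontrivial.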

I’m working in a sandbox folder which has all the Android source files I need to decode the binary dictionary.