
mmontenegro

19 Apr 2015

New Capstone Idea

After working on my original idea and having a working prototype, I realized it was a little too simple/boring. Even though changing people's clothes was a challenging computer vision problem, once it was done there was no real surprise aspect.

With this in mind I changed my original project to a more interesting one. I am creating a game with the Leap Motion that will live in your hand. The game will be projected onto your hand, which will serve as both the main display and the input device.

It will be done using OpenFrameworks for calibration and Unity3D for the main game mechanics.
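The calibration step will essentially map Leap coordinates into projector space. Below is a minimal Python/OpenCV sketch of that kind of mapping (the project itself will use openFrameworks, so this is not the actual code); the four-corner tap calibration and all coordinate values are my assumptions, not the project's design.

# Minimal sketch (not the project's openFrameworks code): mapping Leap Motion
# fingertip coordinates into projector pixel space with a homography, assuming
# the user has tapped four known calibration targets projected onto a surface.
import numpy as np
import cv2

# Hypothetical calibration data: where the four targets were projected (pixels)
projector_targets = np.array([[100, 100], [1180, 100], [1180, 620], [100, 620]],
                             dtype=np.float32)
# ...and where the Leap reported the fingertip when each target was touched (mm)
leap_touch_points = np.array([[-120.0, 95.0], [118.0, 96.0],
                              [121.0, 248.0], [-119.0, 250.0]],
                             dtype=np.float32)

# Homography from Leap (x, z) plane coordinates to projector pixels
H, _ = cv2.findHomography(leap_touch_points, projector_targets)

def leap_to_projector(point_mm):
    """Map a Leap-space point (mm) to projector pixel coordinates."""
    p = np.array([[point_mm]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

print(leap_to_projector((0.0, 170.0)))   # roughly the centre of the play area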

These are some initial game ideas I have. I will start with one, and if I have time I will make one more.


The first game I am designing is a maze in which the user needs to move their hand and fingers to get a ball to its final destination. As the user moves their fingers, some walls will appear or disappear to help the user guide the ball to the goal.


Ron

09 Apr 2015

Final Project Update

My final project takes 10,000 Dilbert comic strips and slices each of them into individual panels. It then performs optical character recognition on each panel to extract the dialogue, which is then associated with that panel. Natural language processing on the dialogue can determine its subject and context, so that a new comic strip can be generated from panels drawn from different strips.

I had previously scraped the text from all of the comic strips published to date. The text is not associated with individual panels; it is just a set of lines that applies to the strip as a whole.

So far, I’ve

Cleaned up the original transcripts, which contain a lot of inconsistencies in how the dialogue is captured. Many transcripts contain additional text that is not part of the dialogue, so I've had to write some code to separate out only the relevant dialogue.

Developed code that looks for the borders of each of the three panels of a strip so that each panel can be cleanly cropped.

Written code to perform OCR on the individual panels. Because of the variation in text placement within a strip, the OCR is not perfect, so I'm using a Levenshtein algorithm to compare the OCR'ed text with the transcript for a particular strip and then deduce which text belongs to which panel.
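As a rough sketch of how these pieces fit together: crop the panels, OCR each one, and give every transcript line to the panel whose OCR output it best matches. The library choices here (OpenCV, pytesseract, python-Levenshtein) are my assumptions, the post does not say which tools are used, and the real code detects panel borders instead of slicing at fixed widths.

# Sketch of the panel-slicing + OCR + transcript-matching pipeline described
# above; the filename and fixed-width slicing are placeholders.
import cv2
import pytesseract
import Levenshtein

def slice_panels(strip_path, n_panels=3):
    """Crop a daily strip into equal-width panels (ignores border detection)."""
    img = cv2.imread(strip_path)
    h, w = img.shape[:2]
    step = w // n_panels
    return [img[:, i * step:(i + 1) * step] for i in range(n_panels)]

def ocr_panel(panel):
    """Run Tesseract on one panel and return lower-cased text."""
    gray = cv2.cvtColor(panel, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray).lower()

def assign_lines_to_panels(transcript_lines, panel_texts):
    """Give each transcript line to the panel whose OCR output it best matches."""
    assignment = {i: [] for i in range(len(panel_texts))}
    for line in transcript_lines:
        scores = [Levenshtein.ratio(line.lower(), text) for text in panel_texts]
        assignment[scores.index(max(scores))].append(line)
    return assignment

panels = slice_panels("strip.png")                    # hypothetical file
panel_texts = [ocr_panel(p) for p in panels]
print(assign_lines_to_panels(["I have a plan.", "It involves a fax machine."],
                             panel_texts))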

What’s left

I need to refine the code to compare the OCR’ed text with the original transcript. There are still many cases where the OCR’ed text does not match up with the original transcript.

I need to write code to look through the panel-specific dialogue and determine the dialogue context.

Then, based on the dialogue content of a particular panel, I need to develop code to select related panels from different strips (a rough sketch follows this list).

I would then need to create a web page that allows the user to create new panels based on specific criteria.
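One hedged way the two remaining NLP steps could work: characterize each panel's dialogue with TF-IDF and treat cosine similarity as "related." Using scikit-learn and TF-IDF at all is my assumption, not the project's stated plan, and the dialogue lines below are toy data.

# Rough sketch: describe each panel's dialogue with TF-IDF, then pick
# related panels by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# panel_dialogue: one entry per panel, gathered across many strips (toy data)
panel_dialogue = [
    "I need the budget report by friday.",
    "Our new strategy is synergy with the cloud.",
    "The budget was cut again, so the report is fiction.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(panel_dialogue)

def related_panels(panel_index, top_k=2):
    """Return indices of the panels most similar to the given one."""
    sims = cosine_similarity(tfidf[panel_index], tfidf).ravel()
    ranked = sims.argsort()[::-1]
    return [i for i in ranked if i != panel_index][:top_k]

print(related_panels(0))   # panels that also talk about budgets/reports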

Bryce Summers

09 Apr 2015

Logic Tracks


Screenshot depicting intersecting track and a utilitarian GUI so far.

Progress Thus Far

  • I have begun the behind-the-scenes groundwork for the GUI.
  • I have implemented the GUI-triggered functionality that allows the user to create and delete sections of track. In the future, I should make it possible for the user to click and drag large sections of track to speed up track creation.
  • I have made major strides towards implementing the representation of train cars and their ability to traverse the track network (a rough sketch of this representation follows the list). There are still a few bugs in the transitions between tracks when intersection curves are present, but nothing unmanageable.
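For concreteness, here is a minimal Python sketch (mine, not the project's actual code) of one way track sections and a traversing car could be represented; every class and field name is hypothetical.

# Track sections as graph nodes with outgoing connections; a car advances
# along its current section and transitions onto the next one at the end.
class TrackSection:
    def __init__(self, name, length, next_sections=None):
        self.name = name
        self.length = length                      # arbitrary distance units
        self.next_sections = next_sections or []  # outgoing connections

class TrainCar:
    def __init__(self, section, switch_choice=0):
        self.section = section
        self.position = 0.0                   # distance along current section
        self.switch_choice = switch_choice    # which branch to take at a fork

    def advance(self, distance):
        """Move forward, transitioning onto the next section at the end."""
        self.position += distance
        while self.position >= self.section.length and self.section.next_sections:
            self.position -= self.section.length
            branches = self.section.next_sections
            self.section = branches[min(self.switch_choice, len(branches) - 1)]

# Tiny example: a straight piece feeding a fork with two branches.
curve = TrackSection("curve", 5.0)
straight_out = TrackSection("straight-out", 5.0)
fork = TrackSection("fork", 3.0, [curve, straight_out])
start = TrackSection("start", 4.0, [fork])

car = TrainCar(start, switch_choice=1)
car.advance(9.0)
print(car.section.name, round(car.position, 1))   # straight-out 2.0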

Major Milestones to Come

  • I need to implement the logic for input locations and output locations.
  • I need to implement the user designed logic system that allows the user to interactively map inputs to the various blocks.
  • I need to implement the handling of collisions between cars and the interaction of the logical components with the cars, in addition to the representation of the car’s load state.
  • I need to fine tune the GUI to make the game as user friendly as possible.
  • I need to implement a distinction between level creation mode and level playing mode.
  • It would be nice if I could functionally compose levels, just like actual programs.

Videos

Zack Aman

08 Apr 2015

Serendipiwiki is a Chrome extension that tracks browsing through wikis and allows the user to save and cluster their browsing history. It then uses this information to surface articles that fill holes in your knowledge map or push you into new territory. Educate yourself by reading a single Wikipedia article per day.

Existing visualizations analyze the entire structure of Wikipedia rather than thinking about the role of visualization and mapping for an end user. The goal of this project is to use user habits to create a navigable, personalized metastructure for Wikipedia.

Where I Am Now

I have the Chrome extension skeleton figured out and have history automatically recorded, as well as the ability to save things and go to random pages.

I’ve also downloaded Wikipedia in preparation for calculating a priori degree of interest for articles.
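A hedged sketch of what the a priori degree-of-interest pass might look like: count in-links and out-links per article from a link dump. The tab-separated "source&lt;TAB&gt;target" file format, the filename, and the weighting below are all assumptions of mine, not the actual dump format.

# Degree-of-interest from link structure alone: in-link and out-link counts.
from collections import Counter

def degree_of_interest(link_file_path):
    in_links, out_links = Counter(), Counter()
    with open(link_file_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 2:
                continue
            source, target = parts
            out_links[source] += 1
            in_links[target] += 1
    # Simple score: articles that are heavily linked *to* rank highest,
    # with a small bonus for linking out a lot themselves.
    titles = set(in_links) | set(out_links)
    return {t: in_links[t] + 0.1 * out_links[t] for t in titles}

scores = degree_of_interest("wikipedia_links.tsv")   # hypothetical file
for title, score in sorted(scores.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{score:8.1f}  {title}")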

Where I’m Going

Features that I’m working on:

– degree of interest: calculate which articles (for all of Wikipedia) are most interesting based on linkages in and out

– retroactive clustering: cluster your saved items as they make sense to you

– display map of connections: requires degree of interest calculation (from entire Wikipedia) as well as user clusters

– predictive surfacing along a spectrum: combine current saved articles with a priori degree of interest to provide articles along a spectrum, from relevant to current clusters to generally interesting (a rough sketch follows this list)

– streak tracking: after providing articles, give the user a way to check off that they have read an article today and then keep track of their daily streak

– snippet highlighting: allow snippets from the page to be saved
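For the "spectrum" item above, one possible sketch: blend how close a candidate article is to the user's saved clusters with its global degree of interest, controlled by a single slider weight. The field names, titles, and scores below are made up for illustration.

# Surface candidate articles along a relevance/novelty spectrum.
def surface(candidates, slider):
    """slider = 0.0 -> closest to current clusters, 1.0 -> most generally interesting."""
    ranked = sorted(
        candidates,
        key=lambda a: (1 - slider) * a["cluster_relevance"] + slider * a["global_doi"],
        reverse=True,
    )
    return [a["title"] for a in ranked]

candidates = [
    {"title": "Information foraging", "cluster_relevance": 0.9, "global_doi": 0.3},
    {"title": "World War II",         "cluster_relevance": 0.1, "global_doi": 0.95},
    {"title": "Small-world network",  "cluster_relevance": 0.6, "global_doi": 0.6},
]
print(surface(candidates, slider=0.2))   # leans toward the user's clusters
print(surface(candidates, slider=0.9))   # leans toward generally interesting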

Related Works

Chris Harrison’s WikiViz

Six Degrees of Wikipedia

Degree of Interest Paper

dantasse

08 Apr 2015

This project shows you how much of a given land mass is taken up by each type of thing.


Work in progress, go easy! The total is at the bottom, and the interesting thing is the “percent covered” – 54% of this chunk of downtown is used by buildings. (Data from OpenStreetMap.)

Now, this also assumes that each place is a “good” place that we want. I hope to add the ability to click on each thing and reclassify some as “good”, “bad”, or “neutral.” Then we could see “percent of this map that is covered by good things.” (where “good” means useful buildings, parks, etc; not parking garages or roads.)
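For reference, the percent-covered number itself is straightforward once the footprints are polygons. Here is a minimal shapely sketch with made-up footprints standing in for the OpenStreetMap data; the study-area box and coordinates are not real.

# Percent of a study area covered by (hypothetical) building footprints.
from shapely.geometry import Polygon, box
from shapely.ops import unary_union

# Study area (e.g., one downtown block), in arbitrary projected units
study_area = box(0, 0, 100, 100)

# Hypothetical building footprints pulled from OSM and already projected
buildings = [
    Polygon([(5, 5), (40, 5), (40, 45), (5, 45)]),
    Polygon([(50, 10), (95, 10), (95, 60), (50, 60)]),
    Polygon([(10, 60), (45, 60), (45, 95), (10, 95)]),
]

# Union first so overlapping footprints aren't double-counted, then clip
covered = unary_union(buildings).intersection(study_area)
percent_covered = 100.0 * covered.area / study_area.area
print(f"{percent_covered:.1f}% of this block is covered by buildings")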

Other directions: I've been looking into satellite images, and then thresholding, blurring, etc., to figure out where buildings are or whatever else.

I mean, as is, this is such garbage, but maybe there’s something here? I thought it’d be cool to, say, find the cars in the image, but after trying it I think that is actually very hard.
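For completeness, here is a sketch of the blur-and-threshold pass mentioned above, using OpenCV. The Otsu threshold and the area cutoff are guesses; results like this need a lot of tuning on real imagery.

# Blur, threshold, and keep large bright blobs as candidate buildings.
import cv2

img = cv2.imread("satellite_tile.png")            # hypothetical tile
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (9, 9), 0)

# Bright rooftops vs. darker streets/vegetation; Otsu picks the split point
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidate_buildings = [c for c in contours if cv2.contourArea(c) > 500]

out = img.copy()
cv2.drawContours(out, candidate_buildings, -1, (0, 0, 255), 2)
cv2.imwrite("candidate_buildings.png", out)
print(f"{len(candidate_buildings)} blob(s) above the area cutoff")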

Things I would like to know:

– is this interesting at all? do you care what portion of your city is used by what? if so, why?

– do any of these rorschach blobs make you think of some interesting thing you'd like to see out of a block of city info? maybe there is something more stylistic and less utilitarian I should be doing, like these Mapsburgh cut-paper maps?

– if this is interesting, would you take the effort to customize this? like you see in the top map, at the top right, there’s a qdoba/mcdonalds that’s not listed on OpenStreetMap so it’s not drawn as a box here. Would you draw that in yourself, in the interest of this map?