Category Archives: CapstoneProposal

Alex Sciuto

20 Apr 2015

My capstone project has changed significantly since the initial proposal for a State of the Union Text Visualization. I decided instead to make something less serious and more whimsical.

The Self-Help Help Line is an introspective service designed to help the user pinpoint exactly what is bothering them today. Using hierarchical data from the WordNet database, this phone service answers a call and, through a series of menus, guides the user from broad concepts down to the specific concept that may be bothering them.

In practice, this traversal of the huge hierarchy is pretty random and hopefully a little entertaining. Few people have been able to find the “problem” they’re having, instead finding random subtrees like musical genres or types of public speeches.
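The post doesn’t name a WordNet library for Node.js; as a minimal sketch, one menu step could be built on the natural npm package (with its companion wordnet-db data files), looking up a concept and following its hyponym pointers one level down:

```js
// Sketch only: assumes the "natural" npm package and its WordNet data files.
const natural = require('natural');
const wordnet = new natural.WordNet();

// Look up a broad concept and follow its hyponym ("~") pointers one level
// down -- roughly one step of the phone menu.
wordnet.lookup('feeling', (results) => {
  results.forEach((synset) => {
    synset.ptrs
      .filter((ptr) => ptr.pointerSymbol === '~') // '~' marks hyponyms (narrower concepts)
      .forEach((ptr) => {
        wordnet.get(ptr.synsetOffset, ptr.pos, (child) => {
          console.log(child.lemma + ': ' + child.gloss);
        });
      });
  });
});
```

Each hyponym found this way would become one option in the next phone menu.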

To-do List

1. Make WordNet data usable in a Node.js environment.
2. Create logic for selecting trees through a series of menus.
3. Optimize database calls by caching the most expensive queries.
4. Connect to the Twilio phone service (see the sketch after this list).
5. Create website to show the different paths people have taken.
6. Create short promo video.
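For items 2 and 4, here is a minimal sketch of the call flow, assuming Express and the official twilio npm package; the route name and the tiny in-memory tree are hypothetical stand-ins for the real WordNet data:

```js
const express = require('express');
const { twiml } = require('twilio');

const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical stand-in for the WordNet hierarchy: each node has a spoken
// prompt and numbered children.
const root = {
  prompt: 'Press 1 if an abstract entity is bothering you, 2 for a state, 3 for an event.',
  children: {}
};

// Twilio POSTs here when a call comes in (and again after each keypress);
// we read the current node's prompt and gather one digit to descend a level.
app.post('/voice', (req, res) => {
  const response = new twiml.VoiceResponse();
  const gather = response.gather({ numDigits: 1, action: '/voice', method: 'POST' });
  gather.say(root.prompt); // the real service would track each caller's position in the tree
  res.type('text/xml').send(response.toString());
});

app.listen(3000);
```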

Example Trees

start->
abstract entity->
communication->
expressive style,style->
genre,music genre,musical genre,musical style->
african-american music,black music->
hip-hop or raps

start->
event->
act,deed,human action,human activity->
speech act->
address,speech->
lecture or public lectures

start->
state->
feeling->
sadness,unhappiness->
melancholy->
brooding or pensivenesses

 

mmontenegro

19 Apr 2015

New Capstone Idea

After working on my original idea and building a working prototype, I realized it was a little too simple and boring. Even though changing people’s clothes was a challenging computer vision problem, once it was done there was no real surprise to it.

With this in mind, I changed my original project to a more interesting one. I am creating a game with the Leap Motion that will live in your hand. The game will be projected onto your hand, and you will use your hand as both the main display and the input device.

It will be done using OpenFrameworks for calibration and Unity3D for the main game mechanics.

These are some initial game ideas I have. I will start with one, and then if I have time I will make one more.


The first game I am designing is a maze with several levels, in which the user needs to move their hand and fingers to get a ball to its final destination. As the user moves their fingers, some walls will appear and disappear to help the user guide the ball there.
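The Unity3D version will handle the real rendering, but the wall-toggling logic itself is small. As a rough illustration using the Leap Motion JavaScript SDK (leapjs) rather than the Unity3D setup described above, with a hypothetical maze state, each extended finger could open the wall it controls:

```js
const Leap = require('leapjs');

// Hypothetical maze state: one wall per finger (0 = thumb ... 4 = pinky).
const wallsVisible = [true, true, true, true, true];

Leap.loop((frame) => {
  const hand = frame.hands[0];
  if (!hand) return;
  hand.fingers.forEach((finger) => {
    // Extending a finger opens (hides) its wall; curling it closes the wall again.
    wallsVisible[finger.type] = !finger.extended;
  });
});
```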


Amy Friedman

13 Apr 2015

Original Timeline

March 31st: Have entire experiment written out, and know what will be tested

Start using the Eye Tracker, work with the API and begin to program

Looked into current eye tracking research

 

April 7th: Have eye tracking data recording program finished

Look into how to analyze/compare the different information from each test

Recruit people to pilot out the system

Pilot eye tracking system, and fix any bugs

Recruit full participants to do the study

 

April 14th: Test out eye tracking system on several participants

Create analysis program for the data

 

April 21st: Continue to test participants

Analyze the data to extract more information

Look into visualization of the information and where it will be housed

 

April 28th: Project Completed

Finish everything

 

Revised Timeline

April 14th week – test out 30-40 participants on the eye tracker, create survey

April 21st week – analyze data with the created system; if there is not enough information, test more participants.

John Choi

10 Apr 2015

Here’s all the work done so far as of April 9, 2015:

CAD Design in Rhino (about 5 hours)

3D Printing (about 10 hours)

Hardware Assembly (about 1 hour)

Electronics Soldering (about 2 hours)

Final Assembly (about 1 hour)

And Done with the Hardware! Now for the software…

Ron

09 Apr 2015

Final Project Update

My final project takes 10,000 Dilbert comic strips and slices each of them into individual panels. It then performs optical character recognition (OCR) on each panel to extract its dialogue, so that the dialogue is associated with a specific panel rather than the whole strip. Performing natural language processing on the dialogue can determine its subject and context, so that a new comic strip can be generated from panels drawn from different strips.

I had previously scraped the text from all of the comic strips published to date. That text is not associated with individual panels; it is a bunch of lines that apply only to the overall strip.

So far, I’ve:

Cleaned up the original transcript, which contains a lot of inconsistencies in how the dialogue is captured. A lot of the transcripts contain additional text that is not part of the dialogue, so I’ve had to write some code to extract only the relevant dialogue.

Developed code that looks for the borders of each of the three panels of a strip so that each panel can be cleanly cropped.
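The post doesn’t say which image library does the cropping; as a sketch of the border-finding idea, assuming the jimp package and near-white vertical gutters between panels, one could scan pixel columns like this:

```js
const Jimp = require('jimp');

async function findGutterColumns(stripPath) {
  const image = await Jimp.read(stripPath);
  const { width, height } = image.bitmap;
  const isGutter = [];

  // A column that is almost entirely near-white is treated as part of a gutter;
  // runs of such columns mark the borders between the three panels.
  for (let x = 0; x < width; x++) {
    let whitePixels = 0;
    for (let y = 0; y < height; y++) {
      const { r, g, b } = Jimp.intToRGBA(image.getPixelColor(x, y));
      if (r > 240 && g > 240 && b > 240) whitePixels++;
    }
    isGutter.push(whitePixels / height > 0.98);
  }
  return isGutter; // panel crops then come from image.clone().crop(x, 0, w, height)
}
```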

Written code to perform OCR on the individual panels. Because of the variation in text placement across strips, the OCR is not perfect, so I’m using a Levenshtein algorithm to compare the OCR’ed text with the transcript for a particular strip and then deduce which text belongs to which panel.
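The function names below are illustrative rather than taken from the actual code, but the matching step amounts to computing an edit distance between each panel’s OCR output and every transcript line, then keeping the closest line:

```js
// Plain Levenshtein distance between two strings.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// For each OCR'ed panel, pick the transcript line with the smallest distance.
function matchPanelToTranscript(ocrText, transcriptLines) {
  return transcriptLines.reduce((best, line) =>
    levenshtein(ocrText, line) < levenshtein(ocrText, best) ? line : best
  );
}
```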

What’s left

I need to refine the code to compare the OCR’ed text with the original transcript. There are still many cases where the OCR’ed text does not match up with the original transcript.

I need to write code to look through the panel-specific dialogue and determine the dialogue context.

I need to then, based on the dialogue content of a particular panel, develop code to select panels from different strips that are related.
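One possible stand-in for that selection, until the real NLP is in place, is to score panels by the keywords their dialogue shares and keep the closest match from a different strip; everything below is hypothetical:

```js
const STOPWORDS = new Set(['the', 'a', 'an', 'and', 'to', 'of', 'is', 'i', 'you']);

// Lowercased content words from a panel's dialogue.
function keywords(dialogue) {
  return new Set(
    dialogue.toLowerCase().split(/\W+/).filter((w) => w && !STOPWORDS.has(w))
  );
}

// Number of keywords two panels have in common.
function overlap(a, b) {
  let shared = 0;
  for (const w of a) if (b.has(w)) shared++;
  return shared;
}

// panels: [{ strip: '1998-04-20', panel: 2, dialogue: '...' }, ...]
// Assumes at least one candidate panel comes from a different strip.
function mostRelatedPanel(source, panels) {
  const sourceWords = keywords(source.dialogue);
  return panels
    .filter((p) => p.strip !== source.strip) // only panels from other strips
    .reduce((best, p) =>
      overlap(sourceWords, keywords(p.dialogue)) >
      overlap(sourceWords, keywords(best.dialogue)) ? p : best
    );
}
```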

I would then need to create a web page that allows the user to generate new strips from panels based on specific criteria.