Category Archives: Capstone

Capstone Projects from IACD-2015

John Mars — ibldi

ibldi by @john_a_mars is a web app that creates customizable 3D printed models of urban areas.

Using 3D tiles (buildings, textures, and terrain) extracted from Here.com, ibldi lets users select custom areas within cities (or anywhere 3D data is available) and have them printed via Shapeways.

It drew inspiration from two similar apps, Terrafab and The Terrainator; where both of those focus on terrain, mine focuses on buildings.

The 3D prints created with Shapeways are absolutely gorgeous. Printed in full-color sandstone, their bases are 5cm x 5cm (heights obviously vary), and prices range from $10 to $80 depending on the height and density of the buildings.

The application is currently limited to a single zoom-level in New York City, but more areas and zooms will be available soon. Custom selection bounds are also in the works, as well as options to customize the models.

BEHIND THE SCENES:

Many challenges arose when creating the application. The overall quality of the city tile meshes was poor, and their edges were uneven. An algorithm had to be developed to automatically and accurately trace the edges of the models and use that information to produce a closed solid. Creating a file type that Shapeways would accept for color printing, while keeping the model accurate and the textures aligned, was also quite a process. Finally, the logistics of putting all the parts together and having them run smoothly on the web were quite a learning experience.
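As a rough illustration of the edge-tracing step (a sketch of my own, not the actual ibldi algorithm, which lives in the linked repository): for a triangle mesh stored as vertex-index triples, the open boundary that needs to be traced and closed consists of exactly those edges that belong to a single triangle.

```typescript
// Find the boundary edges of a triangle mesh: an edge shared by two
// triangles is interior; an edge used by exactly one triangle lies on
// the open boundary that must be traced to close the solid.
type Tri = [number, number, number];

function boundaryEdges(tris: Tri[]): [number, number][] {
  const useCount = new Map<string, number>();
  const edgeByKey = new Map<string, [number, number]>();
  for (const [a, b, c] of tris) {
    for (const [u, v] of [[a, b], [b, c], [c, a]] as [number, number][]) {
      // Undirected edge key, so (u, v) and (v, u) match.
      const key = u < v ? `${u}-${v}` : `${v}-${u}`;
      useCount.set(key, (useCount.get(key) ?? 0) + 1);
      edgeByKey.set(key, [u, v]);
    }
  }
  const edges: [number, number][] = [];
  edgeByKey.forEach((edge, key) => {
    if (useCount.get(key) === 1) edges.push(edge);
  });
  return edges;
}
```

Chaining these boundary edges end-to-end yields the outline that can then be extruded down to a base plane to make the model watertight.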

Code available at https://github.com/marsman12019/ibldi

Website available at http://ibldi.xyz/

Amy Friedman

15 May 2015

webfocus

Focus – Eye Tracker Visualization Interface

Focus by Amy Friedman from Amy Friedman on Vimeo.

My project uses the EyeTribe to collect eye-tracking data and visualize it. Eye-tracking companies offer some programs for visualizing collected data, but Focus lets users change which images are looked at and better understand the differences without those limitations. Using clustering algorithms, heat maps, and personalized comparisons, Focus enables an at-home user experience. Focus presents images from different basketball views to explore where beginners gaze first versus more advanced players, as a start toward small comparisons. Importing your own images and changing the focal cluster points lets you customize Focus's results.

There are two components to my project, which let people create their own data and visually compare it on their own. The first program, FOCUSCollector, displays imported images on a timer and saves the viewer's gaze points, image number, and time spent per gaze to a .tsv file; at the end of the sequence, it displays the images with the collected data shown as scatter plots, heat maps, and a clustering map. The second program, FOCUSVisualizer, is a visual interface that takes all of the .tsv files created by FOCUSCollector and lets users change which image is viewed with their desired visual map. FOCUSVisualizer lets users learn what they want from their data by giving them a choice of visualizations. Both programs were written in Processing using Java.
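To illustrate the data flow (the originals are Processing/Java; this is a TypeScript sketch, and the .tsv field order and names are my assumptions, not the actual FOCUS format): a collector row of gaze x, gaze y, image number, and dwell time can be parsed and binned into a dwell-weighted grid, which is the essence of a heat map.

```typescript
// Parse FOCUSCollector-style gaze rows and bin them into a coarse grid
// for a heat map, weighting each cell by dwell time rather than by a
// simple hit count. Field order (x, y, image, dwellMs) is an assumption.
interface Gaze { x: number; y: number; img: number; dwell: number }

function parseTsv(text: string): Gaze[] {
  return text.trim().split("\n").map(line => {
    const [x, y, img, dwell] = line.split("\t").map(Number);
    return { x, y, img, dwell };
  });
}

function heatGrid(points: Gaze[], w: number, h: number, cell: number): number[][] {
  const cols = Math.ceil(w / cell);
  const rows = Math.ceil(h / cell);
  const grid: number[][] = Array.from({ length: rows }, () =>
    Array(cols).fill(0) as number[]
  );
  for (const p of points) {
    // Clamp so gaze samples on the far edge still land in the last cell.
    const cx = Math.min(cols - 1, Math.floor(p.x / cell));
    const cy = Math.min(rows - 1, Math.floor(p.y / cell));
    grid[cy][cx] += p.dwell;
  }
  return grid;
}
```

Rendering then reduces to mapping each cell's accumulated dwell to a color ramp over the underlying image.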

FOCUSVisualizer Interface

FEATURES

Scatter Plot
Scatter Plot Lines
Heat Map
Reveal Map
Cluster
Clickable Clustering
Advanced vs. Beginner Comparison [coming soon]
Animate Over Time [coming soon]
4 screen comparison [coming soon]

Inspiration – Current Market Eyetracker

I wanted to give people a more readily available interface for visualizing eye-tracking data. A lot of research has gone into eye trackers. In my past posts I have mentioned the work done by the UCD School of Psychology using Tobii eye trackers to distinguish between beginner and advanced golf and tennis players. Tobii has also published considerable research on analyzing eye-tracking data. Most eye trackers used to run around $3,000, but current trackers have become more affordable. The EyeTribe and its developer kit sell for $99, while the Tobii EyeX Controller Dev Kit sells for $164.00 including shipping. Both can be used on Windows-compatible devices, but only the EyeTribe can be used with a Mac. The EyeTribe also has a cloud-based eye-tracking analytics beta program called Eyeproof. This system uses the EyeTracker Recorder to record gaze points, which is only compatible with Windows. Eyeproof lets users create heat maps, traffic maps, and burn-through visualizations of where different participants gazed.

You choose the image or website to upload and have participants look at it. The programs I created offer similar features, but also let you cluster data relative to different points on the screen and compare the differences between someone who considers themselves advanced at basketball and those who are beginners or lack knowledge of the game.

Reflection

The images vary in the amount of visual information they contain. With more visual information, people tended to look at different components; when an image had less information, participants focused more on its main attributes. There were many similarities in where people looked. In a game situation one would be more focused on the man you're guarding or how to help optimize a play; when you're watching basketball, this focus differs.

I think FOCUSVisualizer lets people easily view and understand the gaze points collected with FOCUSCollector. At the exhibition, people enjoyed seeing their own data visualized. I think FOCUSVisualizer could improve if FOCUSCollector were integrated into the program. This would be an issue because not everyone has an EyeTribe, but people enjoy manipulating their own data; it personalizes the experience and lets them learn about themselves. Another issue is that most of the maps treat the collected data as a whole rather than per participant; integrating individual views would make it easier to compare specific people.

I learned a lot while creating the two programs, including how to sort through large amounts of data and how to create the gaze-point files in the first place. I didn't find connecting to the EyeTribe hard, as there was already a Processing library by Jorge Cardoso that let me easily collect the gaze points. It was interesting to learn about where one looks compared to where our eyes are actually pointed; our eyes move more than one thinks, and I didn't realize this until afterward. I also learned to work with ArrayLists and HashMaps. Deciding how to store the data and then use it was the hardest hurdle, because it determined what information I could use. I focused more on the data as a whole rather than on individual people, but as I said before, that could be improved. It was a great experience to create visualizations of people's data.

Future

My next steps involve letting users compare four different screens at once and animating the gaze points over time. The Heat Map and Reveal Map also need some adjusting to allow for smoother visualizations, which will be done soon.

Program 1 FOCUSCollector: Data collection Java Program can be found here.

Program 2 FOCUSVisualizer: Eyetracking Data Visualization Interface Java Program can be found here.

Images Used in Project come from:
http://hoopshabit.com/2014/12/30/fantasy-basketball-rudy-gobert-shining-utah-jazz/
https://badgerherald.com/sports/2015/03/26/mens-basketball-dekker-koenig-see-familiar-face-in-uncs-j-p-tokoto/
http://www.hawaiiarmyweekly.com/2011/04/28/94th-aamdc-captures-basketball-championship-69-64/

dsrusso

15 May 2015

Click Above for Animation


This project is an open-source, accessible hardware platform for advanced cinematics. Existing camera-movement technologies are complicated and expensive, but this arrangement gives the same level of dynamic capability to anyone with a laptop and a DSLR camera.



The hardware uses the Kinect to intelligently locate subjects in real space. In its most basic functionality, the camera follows the most prominent subject in the scene (the largest contour). Later iterations will provide scripted choreography that adjusts itself to the subject's position. This is a tool for those who want high-quality, dynamic shots without the major cost of industrial systems.
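The follow-the-largest-contour behavior can be sketched roughly as follows (a hypothetical illustration, not the project's actual code): pick the contour with the greatest area, then compute a normalized pan correction that would center its centroid in the frame.

```typescript
// Pick the most prominent subject (largest contour by area) and compute
// a normalized horizontal pan correction toward its centroid.
// `area` and `cx` would come from a blob-detection pass on the Kinect image.
interface Contour { area: number; cx: number }

function panCorrection(contours: Contour[], frameWidth: number): number {
  if (contours.length === 0) return 0; // nothing to track, hold position
  const target = contours.reduce((a, b) => (b.area > a.area ? b : a));
  // Offset in [-1, 1]; positive means the subject is right of center,
  // so the rig should pan right. A motor controller would scale this.
  return (target.cx - frameWidth / 2) / (frameWidth / 2);
}
```

Running this per frame and feeding the offset into a stepper driver gives a simple proportional follow behavior; damping would be needed for smooth footage.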



All fabrication files and code assets will soon be available for download with instructions.


sejalpopat

15 May 2015

D3 Inspector

Summary:
This is an interface (and editor) for D3 visualizations that provides additional features to explain the code. The project explores interfaces augmented to help users understand their code. I decided there were three views into this process that would be helpful: the editor, the visualization itself, and the explanation (or intermediary) panel.

Editor
The left panel is an editor that users can paste their D3 code into. The editor responds to user interaction with both the editor itself and the visualization panel. For example, if you mouse over a D3 library function, the middle panel provides more details on that function and its use.

Explanation
The middle panel starts out with a description of the data array that is driving the features of the visualization. Additional annotations appear as the user mouses over different functions in the editor panel.

Visualization
The visualization panel displays the results of the code in the editor (left panel), but with additional features. For example, if you mouse over the SVG container (the "background"), the code that appended it is highlighted in the editor. In the circle-packing example, mousing over a circle element highlights the code responsible for appending circles.
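One plausible way such cross-highlighting could work (an illustrative sketch, not necessarily how D3 Inspector is implemented): scan the pasted source for the `.append(...)` call matching the tag of the element under the cursor, and highlight those editor lines.

```typescript
// Given the pasted D3 source and the tag name of a hovered element,
// return the 1-based editor line numbers containing the .append() call
// that created elements of that tag. A real implementation would use an
// AST rather than string matching, but this shows the mapping idea.
function appendLines(source: string, tag: string): number[] {
  const needle = `.append("${tag}")`;
  return source
    .split("\n")
    .map((line, i) => (line.includes(needle) ? i + 1 : -1))
    .filter(i => i !== -1);
}
```

Mousing over a circle would then call something like `appendLines(editorText, "circle")` and pass the result to the editor's line-highlighting API.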

Video:
Vimeo Link 


ypag

15 May 2015

My project is called Re-frame.

Inspiration:
The project evolved from the idea of using patterns in urban spaces, such as manholes, to create virtual worlds. A custom frame was created by adding simple binary patterns at its edges. The frame can be used as a window to frame objects in the world; virtual content can then be rendered inside or around it, aligned with the real-world object the frame is pointed at.
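A minimal sketch of how such a binary edge pattern might be decoded (the actual Re-frame detection pipeline is not shown here; the sampling scheme and threshold are my assumptions): sample pixel intensities at the pattern cells along the border, threshold each sample into a bit, and pack the bits into a marker ID.

```typescript
// Decode a frame's edge pattern: given intensity samples taken at the
// border cells (ordered around the frame), threshold each into a bit
// and pack the bits into an integer marker ID. The 0-255 intensity
// scale and the 128 threshold are assumptions for illustration.
function decodeEdgePattern(samples: number[], threshold = 128): number {
  return samples.reduce((id, s) => (id << 1) | (s > threshold ? 1 : 0), 0);
}
```

In practice the decoder would also try all four rotations of the sample order, since the frame can be held in any orientation.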


Re-frame is essentially a step toward image targets that are not fully known to the computer in advance, so that virtual content can be created on the fly.

The aesthetic of the frame evolved from the parallel-universe window in the TV series Fringe

and this clip of the Road Runner.

Capstone Review

The frame can be used in multiple scenarios (video upcoming):
1. Time travel:
The history of a place can be rendered inside the frame and seen in context through it.
For example, different seasons can be visualized using the frame.

(Please click on the image to see the animation)

2. Space travel:
The same place can be seen from different perspectives using the frame.
For example, the frame can show what is 200 m ahead of where you are looking.

Perspectives: the same object can be seen from different vantage points.
For example, the frame can show a satellite view of the place you are looking at.

3. Data:
The frame can dynamically look at objects and reveal more about the world.
For example, look at a lab at CMU through the frame to see the NSF grant the lab makes use of.