Monthly Archives: April 2014

Spencer Barton

29 Apr 2014

Looking Glass: Little Owl Lost

Young readers bring storybook characters to life through the Looking Glass.

I wanted to explore augmented storytelling, so I created a device that adds content to a book. The reader guides this ‘magnifying glass’ device over the pages of a picture book, and animations appear on its display based on where the magnifying glass is on the page. These animations add to the content of the story and let the reader explore new interactions.

I used the book Little Owl Lost by Chris Haughton.

[Image: the book Little Owl Lost]

I used the main character, Owl, as the focal point for the animations.

[Image: Owl sleeping animation frame]

One of the animations is triggered as the magnifying glass device is brought to the correct position:

[Animation: the display triggering as the Looking Glass reaches the correct position]

Here is another example with before and after:

[Images: before and after an animation]

These are some of the animations for Little Owl Lost.

[Images: Owl, Owl crying, Owl falling]

How it works

Hardware Specs

This project is composed of only a few parts. An Arduino controls the interface. The OLED screen has its own processor and an SD card that stores all of the animations; the two processors communicate over serial. There are also three Hall effect sensors, an on/off switch, and batteries.

Prior tests of the OLED are in this previous post.

[Image: the Arduino and internal hardware]

OLED Display

The OLED (organic LED) display comes from 4D Systems. I used their uTOLED_20_G2 display, which is no longer in production. Animations were loaded as GIFs onto an SD card on the display and triggered via its serial interface.

Hall Effect Sensors and Magnet Tags

I use Hall effect sensors and magnets to detect where the Looking Glass is in the book. I created a series of tags, each consisting of three magnets arranged in an L. Each magnet can have either its positive or negative polarity facing upward, which gives a total of eight (2 × 2 × 2) unique tag combinations. The L shape of the tag also lets me determine orientation.

[Image: magnet tags]

I then placed the tags inside the front and back cover of a book. The magnetic field can be detected through multiple pages.

[Image: tags placed inside the book covers]

I use Hall effect sensors to measure magnetic polarity, with three sensors corresponding to the three magnets in each tag. The sensors are precise and only register a magnet when positioned directly over it.

[Image: Hall effect sensors]
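For illustration, here is roughly how three sensor readings map to a tag ID (a Python sketch of the scheme described above, not the actual Arduino code; the GIF mapping is hypothetical):

```python
# Illustrative sketch: each Hall effect sensor reports the polarity of the magnet
# beneath it. Treating one polarity as 1 and the other as 0 packs the three
# readings into a 3-bit tag ID, giving the eight combinations described above.

def decode_tag(polarities):
    """polarities: three booleans, one per sensor (True = positive pole facing up)."""
    if len(polarities) != 3:
        raise ValueError("expected one reading per sensor")
    tag_id = 0
    for reading in polarities:
        tag_id = (tag_id << 1) | int(reading)
    return tag_id  # 0..7

# Hypothetical mapping from tag ID to an animation GIF on the display's SD card
TAG_TO_ANIMATION = {0: "owlSleep.gif", 1: "owlCry.gif", 2: "owlFall.gif"}

print(decode_tag((True, False, True)))  # -> 5
```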

Placement Magnet

To help the reader find the animations and trigger them at the correct location, I added larger placement magnets to both the book and the magnifying glass. These magnets hold the display in place while an animation plays.

[Image: placement magnet and sensor]

 

Flaws

The most glaring flaw of the current design is that the Looking Glass never actually knows which page it is on: the magnetic field passes through all of the pages, so a tag reads the same no matter which page is open.

A solution would involve additional sensors. For example, color sensors could sample the colors on the current page and make an educated guess about which page the Looking Glass is over. I did test basic color sensing, but did not get far enough with this project to add that feature.
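As a rough sketch of how that educated guess could work (an assumption for illustration, not something implemented in the project): sample an RGB reading over the open page and pick the page whose stored reference color is closest. All numbers below are made up.

```python
# Hypothetical page-guessing sketch: nearest-neighbor match of a sampled RGB
# reading against a reference color recorded for each page during setup.

REFERENCE_COLORS = {      # page number -> average RGB sampled during setup
    1: (220, 180, 140),
    2: (90, 130, 200),
    3: (40, 160, 70),
}

def guess_page(sample):
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda page: sq_distance(sample, REFERENCE_COLORS[page]))

print(guess_page((95, 125, 190)))  # -> 2
```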

Software

All of my code lives on GitHub. The main Arduino file is StoryBoard.ino. Connecting to the Arduino via USB enables a calibration mode for the sensors.

Feedback

  • Comments
    • Form factor
      • Ideally the box would be smaller
      • Hallmark cards may be a good model
    • Leah Buechley at MIT has good examples of similar work
    • Jie Qi – http://technolojie.com/circuit_sketchbook/
    • Story Clip – http://highlowtech.org/?p=2923
    • Living Wall – http://highlowtech.org/?p=27
  • Alternate idea
    • Choosing where you go?
    • What about board and card games?
    • What if it had a wireless transmitter so you don’t know what it will show?
    • What about surprise?

Based on this feedback, the project has many possible future directions. As a first prototype it is fine, but further iterations will need to be smaller. This is well within the realm of possibility, especially if I create a custom circuit board. I would also like to add audio. If further prototypes can be made more robust, I hope to make the Looking Glass available to the Carnegie Library of Pittsburgh.

Chanamon Ratanalert

24 Apr 2014

What is my project? – An interactive children’s book designed for the iPad and accessed through Chrome (using JavaScript/HTML)

Why did I choose this project?
– Illustration
– Pop-up Books

What is the point of my project? – To incorporate the reader into the story by letting them unfold it themselves. I want the interaction on each page to create a stronger connection with the reader than simply flipping through a picture book would.

Why do I have so little done?
– I spent longer attempting responsiveness for multiple device sizes than I’d like to admit
– I’m not super speedy at illustrating all the images and animations

Concerns for the rest of work I have:
– Overall experience won’t be good enough
–> I don’t have time to compose and record music
–> It might be pretty easy just to tilt and shake to push through the story and never read it

<<< eh >>>
– using a phone and computer together (like Roll It) isn’t as “pop-up-book-esque” as my goal
– tilt too sensitive on mobile device
– calibration to starting position for what is considered “level”

Shan Huang

24 Apr 2014

Generating a time-lapse of sky contours transitioning into shapes from Google Street View images.

 

http://maps.googleapis.com/maps/api/streetview?size=1200x1200&location=40.720032,-73.988354&fov=180&heading=0&pitch=90&sensor=false

[Image: Street View sky sample]
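For reference, the request above can be scripted to pull many sky views at once. A minimal Python sketch (coordinates and filenames are illustrative; this is not the project’s actual scraping code):

```python
# Minimal sketch for fetching sky views with the Street View image API shown above
# (pitch=90 points the camera straight up). Coordinates and filenames are illustrative.

import urllib.request
from urllib.parse import urlencode

BASE = "http://maps.googleapis.com/maps/api/streetview"

def fetch_sky(lat, lng, out_path, size="1200x1200"):
    params = urlencode({
        "size": size,
        "location": "%f,%f" % (lat, lng),
        "fov": 180,
        "heading": 0,
        "pitch": 90,
        "sensor": "false",
    })
    urllib.request.urlretrieve(BASE + "?" + params, out_path)

# sweep a (tiny) grid of locations and save one sky image per point
for i, (lat, lng) in enumerate([(40.720032, -73.988354), (40.721032, -73.987354)]):
    fetch_sky(lat, lng, "sky_%d.jpg" % i)
```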

Identifying the contour of the sky (using ofxCv color contour recognition):

[Image: detected sky contour]
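The project does this with ofxCv inside openFrameworks; the same idea sketched with OpenCV in Python looks roughly like this (the HSV thresholds are guesses, not the values used in the project):

```python
# Threshold the image around sky-like colors and keep the largest contour.
# The HSV bounds below are rough placeholders and would need tuning per image set.

import cv2
import numpy as np

def sky_contour(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([90, 0, 120]), np.array([130, 255, 255]))
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]  # OpenCV 4 vs 3 return shape
    return max(contours, key=cv2.contourArea) if contours else None

contour = sky_contour("sky_0.jpg")
```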

 

A storyboard of a simple contour animation

[Image: storyboard sketch]

 

How to compute difference between contours:

[Image: sketch of the contour difference computation]
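The actual metric is worked out in the sketch above; as an illustrative stand-in (an assumption, not necessarily the method used here), one simple measure resamples both contours to the same number of points and averages the pointwise distance, which is also convenient for nearest-contour search:

```python
# Stand-in contour difference: resample both contours by arc length and take the
# mean pointwise distance. Smaller values mean more similar sky shapes.

import numpy as np

def resample(contour, n=128):
    """contour: (m, 2) or OpenCV-style (m, 1, 2) array; returns n points evenly spaced by arc length."""
    pts = np.asarray(contour, dtype=float).reshape(-1, 2)
    seg = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = np.linspace(0.0, seg[-1], n)
    return np.column_stack([np.interp(t, seg, pts[:, 0]), np.interp(t, seg, pts[:, 1])])

def contour_distance(a, b, n=128):
    return float(np.mean(np.linalg.norm(resample(a, n) - resample(b, n), axis=1)))
```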

A few views of sky in Hong Kong:

 

 

 

[Images: four Street View sky samples from Hong Kong]

Questions:

How to scrape sky views of the whole world?

MapReduce?

A better nearest-contour search algorithm?

Earlier experiments:

[Images: earlier depth experiments]

Austin McCasland

24 Apr 2014

Overview

I am working with CMU’s own Richard Pell and his Center for Postnatural History to create an interactive exhibit that helps visitors understand how a set of model organisms creates the gene pool for all genetically modified organisms.

What is the Center for Postnatural History, you ask? Professor Pell says it best…

[Video: The Center for Postnatural History]

 

But wait, what about the model organisms?  What are those?

Model organisms are the building blocks for every genetically modified organism you can think of. Their gene pool is thought to cover every possible combination of attributes. If you want to create a goat with spider silk in its milk, or a cactus that glows, you are likely going to be combining genes from one or more of these model organisms.

 

What They Already Had:

A database

The Center for Postnatural History maintains a manually curated database of genetically modified organisms.

A Processing sketch.


Wrapping my head around the problem

Getting the model organism tree to display – playing around:

Developing the App:

First Tree reading from database:


Creating a tree using verlet springs from Memo Akten’s ofPhysics addon.

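For reference, the Verlet-spring idea underneath the addon looks roughly like this (a simplified Python sketch, not the addon’s C++ API):

```python
# Each node stores its current and previous position; springs nudge connected
# nodes toward a rest length, and the Verlet step carries damped velocity forward.

class Node:
    def __init__(self, x, y):
        self.pos = [x, y]
        self.prev = [x, y]

def verlet_step(node, damping=0.98):
    x, y = node.pos
    vx = (x - node.prev[0]) * damping
    vy = (y - node.prev[1]) * damping
    node.prev = [x, y]
    node.pos = [x + vx, y + vy]

def apply_spring(a, b, rest_length, stiffness=0.1):
    dx, dy = b.pos[0] - a.pos[0], b.pos[1] - a.pos[1]
    dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    push = (dist - rest_length) / dist * stiffness * 0.5
    a.pos[0] += dx * push; a.pos[1] += dy * push
    b.pos[0] -= dx * push; b.pos[1] -= dy * push

# one spring per parent-child edge in the organism tree
root, child = Node(0.0, 0.0), Node(0.0, 40.0)
for _ in range(100):
    apply_spring(root, child, rest_length=30.0)
    root.pos, root.prev = [0.0, 0.0], [0.0, 0.0]   # keep the root pinned in place
    verlet_step(child)
print(child.pos)   # oscillates toward roughly 30 units from the pinned root
```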

Linking the UI to read database information, plus visual improvements to the tree. Nodes are now draggable.

 

 

Next Steps

Tree Improvements – better branch width, a more “tree-like” spread, intelligent initial placement of leaves, and organism path highlighting.

Ambient Background – a smooth background with some sort of cellular movement instead of the plain gradient. Possibly play with depth of field (though I’ll have to learn some basic shader work, so time could be an issue).

Get pictures out from behind the password wall – right now the database API doesn’t contain links to the photos; they only exist as part of the WordPress site. How do I get these photos given an organism ID?

Compile on iPad – compile OF for the iPad so the exhibit can live on that device in the physical Center for Postnatural History.

 

Andrew Sweet

24 Apr 2014

For our final project, Emily Danchik and I are collaborating on a song generation tool that uses TEDTalks as its source for vocals. The primary tools we are using are Python (with NLTK and various other libraries) and Praat.

We’ve chosen to divide the work into two parts:

Emily is working on audio processing. In order to make the TED speakers seem like they’re rapping to a beat, we first need to know where the syllables are in each sentence. Unfortunately, accurate syllable detection is still an open research topic, so we are exploring ways to approximate boundaries between syllables.

While determining syllable boundaries is a challenge, it is possible to detect the center of a syllable with relative accuracy. We have used a script written for Praat, a speech analysis tool often used in linguistics research, to identify these spots.

So far, we have approximated syllable boundaries by finding the midpoint between adjacent syllable nuclei and calling it a boundary. This seems to work relatively well, but could use some improvement: for sibilants (like ‘ssss’) and fricatives (like ‘ffff’), this method is not accurate.
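A minimal sketch of that midpoint heuristic (the nucleus timestamps below are made up; the real ones come from the Praat script):

```python
# Place a boundary halfway between each pair of neighboring syllable nuclei.

def boundaries_from_nuclei(nuclei):
    """nuclei: sorted list of nucleus times in seconds."""
    return [(a + b) / 2.0 for a, b in zip(nuclei, nuclei[1:])]

print(boundaries_from_nuclei([0.12, 0.35, 0.61, 0.90]))  # roughly [0.235, 0.48, 0.755]
```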

This is how we detect the syllable nuclei. It's far from perfect, but works well enough for our purposes.

We have been lucky to meet with Professor Alan Black in the CMU Language Technologies Institute, to determine ways of improving our process. As we move forward, we will document the changes here.

Once we have each syllable in isolation, we stretch (or squash) it by a given ratio so that it lasts for exactly one beat of the rap song. We find this ratio by comparing the length of the syllable to the duration of one beat, derived from the beats per minute of the rap. To form a phrase, we simply string these syllables together over a beat.
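The ratio itself is simple: one beat lasts 60/BPM seconds, so each syllable is stretched by beat duration over syllable duration (a small illustrative sketch, not our actual processing code):

```python
# Stretch factor that makes one syllable fill exactly one beat.

def stretch_ratio(syllable_duration, bpm):
    beat_duration = 60.0 / bpm
    return beat_duration / syllable_duration

print(stretch_ratio(0.23, 90))  # ~2.9: a 0.23 s syllable is stretched to fill a 0.67 s beat
```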

Here are some of our initial tests:


I am working on the lyric generation. We’ve scraped 100 GB of TEDTalk videos and their corresponding transcripts (about 1100 TEDTalks). Each transcript file contains what was said between a given start time and end time – usually 5-15 words – for hundreds or thousands of time periods in each of the 2-36 minute videos. Using NLTK, we’re able to analyze each line of text for how many syllables it contains, as well as what it would rhyme with. We’ve created a series of functions that let us query for given terms in a line, and for lines that rhyme with a given line under syllable-count constraints. Combined with some n-gram analysis of common TEDTalk phrases, a set of swear words, or other pointed queries, this gives us some creative control over what we want our TEDTalkers to say.
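As an example of the kind of query helpers this involves (illustrative code; the function names and details differ from what is in our repository), syllables can be counted with the CMU Pronouncing Dictionary that ships with NLTK, and two words can be treated as rhyming when their phones match from the last stressed vowel onward:

```python
# Illustrative NLTK helpers: syllable counting and a simple rhyme test.

import nltk
from nltk.corpus import cmudict

nltk.download("cmudict", quiet=True)
PRON = cmudict.dict()

def syllable_count(word):
    phones = PRON.get(word.lower())
    if not phones:
        return None                                       # out-of-vocabulary word
    return sum(ph[-1].isdigit() for ph in phones[0])      # vowel phones carry stress digits

def rhyme_tail(word):
    phones = PRON.get(word.lower())
    if not phones:
        return None
    pron = phones[0]
    stressed = [i for i, ph in enumerate(pron) if ph[-1] in "12"]
    return tuple(pron[stressed[-1]:]) if stressed else tuple(pron)

def rhymes(a, b):
    return rhyme_tail(a) is not None and rhyme_tail(a) == rhyme_tail(b)

print(syllable_count("vaccination"))          # 4
print(rhymes("vaccination", "combination"))   # True
```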

Here we see the filler speech that TEDTalkers use. Using TF-IDF and a corpus of common phrases, we could instead find even more TED-specific phrases. Here you can see that even on the elevated stage, we have many swearers. And here we can see what TEDTalkers are.

Using this idea, we will generate a chorus based on a set of constraints that define what the song is about. We will then use the chorus as a seed for the verses to ensure some thematic thread is maintained, even if it’s minimal and the song ends up being grammatically incorrect.

A sample chorus:

and my heart rate 

and my heart rate

If you buy a two by four and it’s not straight

like smoking or vaccination

is that it’s a combination

and you work out if you make the pie rate

no one asked me for a donation

soap and water vaccination

like smoking or vaccination

The end product is expected to be a music video that jump-cuts between multiple TEDTalks, where the video is time-manipulated to match the augmented audio clips.

 

We’re using the TEDTalk series for multiple reasons. Some reasons include:

  • We believed the single speaker, enunciated speech, and microphone-assisted audio would be helpful in audio processing.
  • People know TED, or at least some of the source speakers.
  • There’s also a lot to say by combining powerful people into a rap.
  • It’s fun to poke fun at.

 

Kevan Loney

03 Apr 2014

Work in progress… Description coming soon.
