Category Archives: project-3

Alan

02 Apr 2013

Abstraction: Browser and iPhone Interaction Experiment

Background: Many interactive systems rely on sensors like the Kinect, Leap Motion, or Wii. All of these require additional hardware, with limitations of context and cost: they are either scalable but expensive, or cheap but limited in sensing range. An approach that uses ubiquitous devices like smartphones or iPads as the sensors is therefore on the right track. Using the browser as the interaction medium also frees the system from the heterogeneity of different mobile operating systems.

I was inspired by Boundary Functions (1998) by Scott Snibbe: every person in the real world has a physical relationship with others, and I am thinking about the relationships between people in virtual space. This interactive installation will be set in public spaces like airports and train stations. People get onto the Internet through their devices, so a person's identity is represented by the identity of a digital device. Through interactions between their mobile devices, people get a sense of their relationships to others (anonymous or not) in the same public space.


Verification & Communication

I have not yet implemented verification for the system. A user can access it by visiting the server's IP address in the browser, which opens a WebSocket connection. Once a device is connected, the server generates a new ball for that user, and the server uses the device ID to identify the user.
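A minimal sketch of this connection flow, written against the current socket.io API (the actual project also uses now.js); the Ball shape and event names are my own illustration, not taken from the xBall repo:

```typescript
// server.ts -- one ball per connected device, keyed by its socket id.
import { Server } from "socket.io";

interface Ball {
  id: string; // the connecting device's socket id
  x: number;  // normalized position on the shared page
  y: number;
}

const io = new Server(8080);
const balls = new Map<string, Ball>();

io.on("connection", (socket) => {
  // A new device connected: give it a ball and broadcast the state.
  balls.set(socket.id, { id: socket.id, x: 0.5, y: 0.5 });
  io.emit("balls", [...balls.values()]);

  socket.on("disconnect", () => {
    balls.delete(socket.id);
    io.emit("balls", [...balls.values()]);
  });
});
```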


Interaction

The browser on the mobile device reads the device's gravity and accelerometer values and sends the data to the server. The server normalizes the velocity and orientation values on the x- and y-axes and maps them to a position on the browser page.
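A sketch of the client side, using the standard deviceorientation browser event; the normalization constants and the "tilt" event name are assumptions:

```typescript
// client.ts -- read device tilt and stream it to the server.
declare const socket: { emit(event: string, data: unknown): void }; // socket.io client

window.addEventListener("deviceorientation", (e) => {
  // beta: front-back tilt in degrees, gamma: left-right tilt in degrees.
  const beta = e.beta ?? 0;   // roughly -180..180
  const gamma = e.gamma ?? 0; // roughly -90..90

  // Clamp and normalize each axis to [0, 1] so the server can map it
  // directly to a position on the page.
  const x = Math.min(Math.max((gamma + 90) / 180, 0), 1);
  const y = Math.min(Math.max((beta + 90) / 180, 0), 1);

  socket.emit("tilt", { x, y });
});
```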

[image: context]


Tech

I use Node.js as the server, with now.js and socket.io for live interaction. The Apple accelerometer is the sensor on the client device.

GitHub repo: https://github.com/hhua/xBall

Joshua

06 Mar 2013

I am interested in using the Kinect to manipulate 3D meshes or surfaces in a program like Rhinoceros or Blender. There are plenty of motion capture projects out there, but I am more interested in taking hand gestures and mapping them to various 3D modeling commands. The Leap Motion might be much better suited to this, but because I have never done anything involving hand tracking, perhaps a good first step would be using the Kinect.

I think it would be interesting to track fingertips in 3D space and connect them with a curve in the modeling environment. As the hand moved around, it would sweep out a surface. I want to computationally alter that surface according to the velocity or even acceleration of the points: the surface could become more convoluted, spiky, or maybe perforated. A sculpture made from this 3D model would therefore contain not only visual information about the path of the fingertips, but also about their motion (velocity and acceleration).
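A small sketch of how the motion data could be recovered from the tracked points; the sample format is an assumption, and a real version would smooth the raw tracking data first:

```typescript
// Estimate fingertip speed from consecutive tracked samples by finite differences.
interface Sample { x: number; y: number; z: number; t: number } // t = timestamp in seconds

function speeds(samples: Sample[]): number[] {
  const out: number[] = [0]; // no velocity estimate for the first sample
  for (let i = 1; i < samples.length; i++) {
    const a = samples[i - 1], b = samples[i];
    const dt = b.t - a.t || 1e-6; // guard against duplicate timestamps
    out.push(Math.hypot(b.x - a.x, b.y - a.y, b.z - a.z) / dt);
  }
  return out;
}
// Each curve point could then be displaced along the surface normal in
// proportion to its speed, making the sweep spikier where motion was rapid.
```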

kind of like this (but hopefully smoother, and computationally altered):

Of course I also just want to continue working on the truss genetic algorithm.

Elwin

06 Mar 2013

I have a couple of random ideas for this project. Not sure which one I should do yet.

Face-away

This idea doesn't really have a purpose; it's more experimental and artsy, I guess. Imagine a panel sticking out of the wall that can rotate on the x- and y-axes. The panel reacts to a person and rotates away from the person's head, always facing away from the user. For example, if the user moves to the right, the panel rotates to the left on the y-axis, and the other way around if the user moves to the left. The same inverse movement occurs when the user tries to look at the panel from above or below.

**sketch image coming very very soon**

Possible implementation:
– I could either use a webcam or a Kinect to track the user. I think the most important part is the ability to track the location of a person's head. If I use the Kinect, I should be able to get the head position from the skeleton (I've never worked with the Kinect). I could also use blob detection with the webcam from a top-down view or something, but I don't think it would be accurate enough. Perhaps a better and easier method would be to use FaceOSC to track the head. I would have to place the camera in such a way that it could see and capture the face from all angles.
– For rotating the panel I could use two servo or stepper motors, one for each axis. These shouldn't be hard to implement; a sketch of the mapping follows this list.
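A minimal sketch of the inverse head-to-panel mapping, assuming the tracker reports the head position normalized to [-1, 1] on each axis; the servo center and range values are assumptions:

```typescript
// Map a tracked head position to inverted pan/tilt servo angles.
const CENTER = 90; // servo neutral angle in degrees
const RANGE = 60;  // maximum deflection to either side

function panelAngles(headX: number, headY: number): { pan: number; tilt: number } {
  // Negate the head position: head moves right, panel rotates left, and so on.
  return {
    pan:  CENTER - headX * RANGE, // rotation on the y-axis
    tilt: CENTER - headY * RANGE, // rotation on the x-axis
  };
}

// e.g. head at the far right (headX = 1) gives pan = 30 degrees,
// turning the panel away to the left.
```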

???
For now there's nothing to see on the panel. It could be just a plain piece of material: wood, acrylic, or something. But I'm not sure if that's interesting enough, or if I should come up with something to display on the panel.

 

Michael

06 Mar 2013

How often?

We often hear statistics about how frequently certain events occur.  One child dies from hunger every five seconds.  Someone buys an iPad every 1.5 seconds.  Someone dies from poor indoor air quality every 15 seconds.  A baby is born every quarter second.  These numbers only let us understand these phenomena on a very cerebral level, though.  Even well-designed infographics only engage the user visually.  I would like to make an installation that cycles through a database of these statistics and allows the user to experience each through a combination of touch, light, or sound.  For example, a light could blink with a period of 1.5 seconds to indicate the frantic pace at which the world is buying up iPads while a gentle burst of compressed air to the back of the hand every five seconds reminds the user how often the world lets a child starve to death.  Approximately five children starved in the time it took to read this paragraph.
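A minimal sketch of how the installation could cycle its database of statistics, with one timer per statistic; the actuator hooks are hypothetical stand-ins for the real hardware:

```typescript
// Drive one output per statistic at the real-world rate of the event.
interface Stat { label: string; periodSeconds: number; pulse: () => void }

// Hypothetical actuator hooks; in the installation these would drive an LED
// and a compressed-air valve through a microcontroller.
const blinkLight = () => console.log("blink");
const puffAir = () => console.log("puff");

const stats: Stat[] = [
  { label: "iPad purchased",       periodSeconds: 1.5, pulse: blinkLight },
  { label: "child dies of hunger", periodSeconds: 5,   pulse: puffAir },
];

for (const s of stats) {
  setInterval(s.pulse, s.periodSeconds * 1000); // fire at the event's actual rate
}
```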

 

QR Code Infobombs

People love to scan QR codes, even if they don’t know what they lead to.  I might like to pepper sidewalks with QR codes made with chalk and stencils that lead to a website that presents highly localized and continuously-updated information on smog and air pollution.  If people scan them while walking along a busy road, I hope I can make the presentation compelling enough to make the link between air quality and traffic stick in their minds.

 

Secret Keeper

I imagine a tiny black cube with a phone number and instructions on the side, to be placed on a pedestal in some public location.  If you text it a secret (and the text checks out in terms of length and variability to weed out messages like “butts butts butts”), it will store it and reply with an anonymized secret that it has heard before and is most similar to yours.  Each secret gets sent to only one other person after a suitable number have accumulated, so you know that when you tell it a secret, only one other person will receive it.  In a sense, it’s a bit like Post Secret, except for the strange sensation that exactly one stranger will know something deeply personal about you. (Also, the cube may emit a faint red glow when it receives the secret, to indicate some link between the physical object and the process).
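One way the matching could work, with word-overlap (Jaccard) similarity standing in for whatever "most similar" metric the cube actually uses; the length and variability thresholds are assumptions:

```typescript
// Tokenize a secret into its set of distinct words.
function words(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z']+/g) ?? []);
}

// Weed out short or repetitive messages like "butts butts butts".
function isValidSecret(text: string): boolean {
  const tokens = text.trim().split(/\s+/);
  return text.length >= 20 && words(text).size / tokens.length > 0.5;
}

function jaccard(a: Set<string>, b: Set<string>): number {
  const overlap = [...a].filter((w) => b.has(w)).length;
  return overlap / (a.size + b.size - overlap || 1);
}

// Pick the stored secret most similar to the incoming one.
function mostSimilar(incoming: string, stored: string[]): string | undefined {
  const target = words(incoming);
  let best: string | undefined;
  let bestScore = -1;
  for (const s of stored) {
    const score = jaccard(target, words(s));
    if (score > bestScore) { bestScore = score; best = s; }
  }
  return best;
}
```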

 

Kyna

06 Mar 2013

Interactivity Project ->

For this project I’m really hoping to make a game for Android tablets/phones that utilizes the touch screen. I’m not sure if that’s too ambitious for the time we’re given but I feel like it’s an area I’m going to need to explore eventually.

My current idea, which I think is definitely too big for this assignment, is to make a wave-based (think Tower Defense / Plants vs Zombies) game wherein you play as a goblin warlock’s apprentice, and your job is to go clear out an old fort that’s infested with humans. Levels would be different rooms, and the waves would consist of different types of people (knights, knaves, whatever). As a warlock apprentice, you know some spells that you can cast onto the oncoming waves by drawing different symbols.
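A rough sketch of how drawn symbols could be matched to spells, in the style of the $1 Unistroke Recognizer (my assumption; nothing in the proposal commits to a particular recognizer):

```typescript
interface Pt { x: number; y: number }

// Naive index-based resampling to n points (the full $1 algorithm
// resamples by arc length, which handles uneven drawing speed better).
function resample(stroke: Pt[], n = 32): Pt[] {
  const out: Pt[] = [];
  for (let i = 0; i < n; i++) {
    out.push(stroke[Math.floor((i * (stroke.length - 1)) / (n - 1))]);
  }
  return out;
}

// Scale the stroke into the unit square so size and position don't matter.
function normalize(pts: Pt[]): Pt[] {
  const xs = pts.map((p) => p.x), ys = pts.map((p) => p.y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  const w = Math.max(...xs) - minX || 1, h = Math.max(...ys) - minY || 1;
  return pts.map((p) => ({ x: (p.x - minX) / w, y: (p.y - minY) / h }));
}

// Average point-to-point distance against each spell template; lowest wins.
function recognize(stroke: Pt[], templates: Map<string, Pt[]>): string {
  const probe = normalize(resample(stroke));
  let best = "", bestDist = Infinity;
  for (const [spell, tpl] of templates) {
    const t = normalize(resample(tpl));
    const d = probe.reduce((sum, p, i) => sum + Math.hypot(p.x - t[i].x, p.y - t[i].y), 0) / probe.length;
    if (d < bestDist) { bestDist = d; best = spell; }
  }
  return best;
}
```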


I have some other less time-consuming ideas that I might fall back on in the event I can’t get the barebones version of this running by the due date.

SamGruber::Interactive::Sketch

I began thinking about this project with a question: why is code text? Almost all programming must be accomplished by writing out long stretches of symbols into a text box, with the only “graphical” component being (often incomplete) syntax highlighting. Back when all computers could display was text and the primary input device was a keyboard, this was perfectly reasonable.

But now even a high school calculator draws color graphics, and more and more we use phones and tablets which are meant to be touch-driven. And yet, programming remains chained to the clunky old keyboard. Producing programs on a tablet or phone is all but impossible. But there’s no reason it should be. Creating programs should be as easy as drawing a picture.

[image: lambda_graphical]

I draw from the computational framework of Lambda Calculus, in which all computation is represented through anonymous function-objects. Naturally, this mode of thinking about programs lends itself to a graphical interpretation.

Lambda Calculus needs only a few metaphors defined. A line charts the passage of a function-object through the space of the program. Helix squiggles denote passing the squiggled function-object to the other function-object. Double bars indicate an object which dead-ends inside of an abstraction. Large circles enclose “Lambda abstractions” which are ways to reference a set of operations as a unit with inputs and an output.
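As a sketch of the structure such drawings would compile down to, here is a standard lambda-calculus AST with one step of beta reduction; the main graphical metaphors above each map onto a node kind (the TypeScript encoding is my own, and capture-avoiding substitution is omitted for brevity):

```typescript
type Term =
  | { kind: "var"; name: string }              // a line: an object moving through the program
  | { kind: "abs"; param: string; body: Term } // a large circle: a lambda abstraction
  | { kind: "app"; fn: Term; arg: Term };      // a helix squiggle: passing one object to another

// Substitute value for every free occurrence of name (not capture-avoiding).
function substitute(t: Term, name: string, value: Term): Term {
  switch (t.kind) {
    case "var": return t.name === name ? value : t;
    case "abs": return t.param === name ? t : { ...t, body: substitute(t.body, name, value) };
    case "app": return { kind: "app", fn: substitute(t.fn, name, value), arg: substitute(t.arg, name, value) };
  }
}

// One step of beta reduction: ((λx. body) arg) becomes body[x := arg].
function step(t: Term): Term {
  return t.kind === "app" && t.fn.kind === "abs"
    ? substitute(t.fn.body, t.fn.param, t.arg)
    : t;
}

// Example: the identity applied to y, ((λx. x) y), reduces to y.
const id: Term = { kind: "abs", param: "x", body: { kind: "var", name: "x" } };
const reduced = step({ kind: "app", fn: id, arg: { kind: "var", name: "y" } });
```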

The goal of this project is to develop a drawing-based editor for Lambda Calculus programs expressed in this manner, one that automatically converts the user's sketches into programs.

Erica

06 Mar 2013

I have a couple of ideas that I am trying to decide between for my interactivity project. I am interested in doing something that is both screen- and touch-based, using either a phone/tablet, Sifteo, the AR toolkit, or Reactivision. I'm not really sure what I would do with the latter two tools, as I was just introduced to them in class on Monday, but I'm keeping them in mind.

My first idea is to continue the Sifteo project that Caroline, Bueno and I worked on for project 1.  I think that we had a really neat idea and I would like to find a way to optimize the clock to alleviate the memory issues we were having as well as create an interface that would allow users to design their own “puzzles” for turning off the alarm clock.

Another idea I have is to use As-Rigid-As-Possible Shape Manipulation (which makes it possible to manipulate and deform 2D shapes without using a skeleton) to create a tool for real-time, interactive storytelling. I plan to implement this algorithm in C++ for my final project for Technical Animation, and I thought that I could extend it to let users draw the characters to be manipulated on a tablet and then, by connecting to a monitor or a projector, tell stories by manipulating the characters. I see two possible applications of this: 1) as a storytelling tool to create a sort of digital puppetry, and 2) as more of an interactive exhibit where visitors could add to the story by either creating new characters or manipulating the characters that are already there. I'd also be interested to hear other suggestions for applications of this.

I'm also really interested in the idea of educational software. For my BCSA capstone project I'm working on an educational game, and I really appreciated the iPad app we saw Monday that counts your fingers. I would like to maybe apply the shape manipulation I discussed above to an educational context, but I don't have one definitely in mind yet, so I'd also like to hear ideas for such applications, or ideas for interactive educational software in general.

 

Anna

03 Mar 2013

The Kinect presentation last week by James George and Jonathan Minard made me start thinking about all the old-school sci-fi novels I've read, so the idea for my interactivity project is, unsurprisingly, inspired by one of my favorite books, The Demolished Man by Alfred Bester (1951). I've always found the book extremely clever, both in its ideas and in its execution, particularly when it comes to Bester trying to depict what communication via telepathy would look like in textual representation. Take this passage, for example.

[image: passage from The Demolished Man]

I completely loved the idea that people talking with their minds would somehow translate differently in space and time compared to normal speech. It not only made the book more engaging, since every page felt like a puzzle, but it also made me wonder about different ways you could represent normal party conversation so as to better capture its overlapping chaos, serendipitous convergences, and trending topics.

So, with that said, enter my idea: The Esper Room. (‘Esper’ is the term for a telepath in the novel…)

[image: esper_room]

I’d like to create a room where everybody entering is given a pin-on microphone adjusted to pick up their voice only. All the microphones would feed into a computer, where openFrameworks or Processing would convert the speech to text and visualize the words according to some pre-selected pattern (“Basket-weave? Math? Curves? Music? Architectural Design?”). Recurring words and phrases would be used as the backbone of the pattern, and the whole visualization could be projected in real-time onto the walls or ceiling of the room.
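A minimal sketch of the recurring-word backbone, assuming the speech-to-text step has already produced a transcript per microphone; the names and the speaker threshold are my own assumptions:

```typescript
type SpeakerId = string;

// word -> the set of speakers who have said it
const saidBy = new Map<string, Set<SpeakerId>>();

function addUtterance(speaker: SpeakerId, text: string): void {
  for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
    if (!saidBy.has(word)) saidBy.set(word, new Set());
    saidBy.get(word)!.add(speaker);
  }
}

// Words used by several different speakers become the backbone of the
// projected pattern, revealing conversations converging across the room.
function trendingWords(minSpeakers = 3): string[] {
  return [...saidBy.entries()]
    .filter(([, speakers]) => speakers.size >= minSpeakers)
    .map(([word]) => word);
}
```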

Aside from being a nifty homage to 1950s sci-fi, I think this could be an interesting way to realize that people on opposite sides of a party are actually talking about the same topic. Maybe it could bring together the wallflowers of the world. Maybe it could cause political arguments, or deter people from gossiping. In a way, the installation would be like pseudo-telepathy, because you could read the thoughts of people whom you normally wouldn't be able to hear. I'm interested in seeing whether that fact would have a substantial impact on people's behavior.

John

03 Mar 2013

[image: project_proposal]

 

For my third (and possibly fourth) project, I'd like to create a room with an overhead tracking camera and a front-facing recording camera. Users will walk into the room and receive a series of commands. User actions and responses will be monitored as the commands become increasingly antagonistic and/or incoherent. Video and photographic recordings will be made of all participants and streamed on the web.