Category Archives: looking-outwards

Michael

01 Apr 2013

1) Meng’s Pepper’s Ghost project

Pepper's Ghost

Meng’s interaction project was what originally got me thinking about my capstone.  Her work involved visualizing different maps in three dimensions using a small Pepper’s Ghost apparatus.  The Pepper’s Ghost illusion works by projecting an image onto an angled piece of glass, which causes it to appear to float in space.  Meng’s map layers and images could be viewed simultaneously by changing one’s viewing angle.  I like the idea of being able to peer around layers and data to see what lies beneath or behind it, and I would like to incorporate this idea into my capstone.  I think that the Pepper’s Ghost apparatus may not be the best suited tool for this, though, and I hope to find a way of interacting with layered data on a two-dimensional screen.

 

2) GigaPan Time Machine and Solar Dynamics Observatory

SDO image

Time Machine is a project out of the CREATE Lab that allows users to view and create zoomable time-lapse videos. Similar to the GigaPan project, very high resolution interaction is possible even on modest internet connections because of smart fetching of tiled data. The project has created time-lapse videos of everything from photos of plants growing to data from the Landsat and MODIS satellites and images of the sun from the Solar Dynamics Observatory. I'm particularly entranced by the time lapse of the sun. It's amazing to see solar flares that dance across the surface of the sun and arc over spaces large enough to fit several Earths inside. It's also possible to explore images of the sun taken at different wavelengths, allowing for visualizations of not only visible light but also the infrared and ultraviolet spectra and the magnetosphere. Unfortunately, switching between these fixed spectra is the only digital interaction in Time Machine, so only one frequency can be viewed at a time, which makes it hard to draw connections between activity at one frequency versus another even though the behavior varies wildly. I would like to create a way of viewing multiple layers at once, either combined or displayed separately, but in a way which allows a viewer to see events in at least two different frequencies at the same time.
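
As a rough sketch of the kind of layer viewing I have in mind (the file names and weighting below are placeholders of my own, assuming two already-aligned frames saved at the same timestamp), the simplest option is to alpha-blend them:

```python
# A rough sketch of viewing two wavelength layers at once by alpha-blending
# already-aligned, equally sized frames. The file names are hypothetical placeholders.
import numpy as np
from PIL import Image

def blend_layers(path_a, path_b, alpha=0.5):
    """Blend two equally sized frames; alpha weights the first layer."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    mixed = alpha * a + (1.0 - alpha) * b
    return Image.fromarray(mixed.astype(np.uint8))

if __name__ == "__main__":
    # e.g. a 171 Å frame and a 304 Å frame captured at the same moment
    blend_layers("sun_171.png", "sun_304.png", alpha=0.6).save("sun_blend.png")
```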

 

3) Panamap


This company uses a patented process to create physical maps that change the information they show depending on the angle at which they are viewed.  Three different layers can be seen on a single map.  My understanding of the technology is that it is very similar to the little plastic animated cards that show simple movements (such as a dinosaur walking) when the card is tilted back and forth.  What’s interesting to me about this project is that it suggests one method of interaction to view a multi-layered map (tilting up and down).  I want to think about similar ways to manipulate and view layers, especially digital ones on either a fixed screen or a tablet.  Again, there might be an idea here which could help me visualize multiple frequencies of sun imagery.

 

4) Minecraft Dynamic Map


Of course I had to post at least one thing about Minecraft. This site shows a dynamically updated view of a fairly well-populated Minecraft server map, including day/night lighting and player positions. The map is zoomable to a fairly extreme degree, which lets the user discover all of the various buildings and settlements that different groups of players have formed over time. Some of them seem abandoned, while others are almost always occupied. Maps of Minecraft worlds aren't particularly unique, but the fluid way in which this one can be explored and the fact that it updates itself constantly are new to me. I would like to incorporate the ability to stream fresh data into my final project in some capacity.
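
As a minimal sketch of what streaming fresh data could look like (the URL and JSON fields below are purely hypothetical), a visualization could simply poll an endpoint on a timer and redraw:

```python
# A minimal sketch of streaming fresh data into a visualization by polling an
# endpoint on a timer. The URL and JSON fields are hypothetical placeholders.
import json
import time
import urllib.request

def poll(url, interval=10.0):
    """Fetch updated JSON every `interval` seconds and hand it to a redraw step."""
    while True:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        print("players online:", len(data.get("players", [])))  # redraw would go here
        time.sleep(interval)

# poll("http://example.com/map/update.json")
```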

Sam

01 Apr 2013

For my capstone project, I intend to continue with my Interactive project on visualizing lambda calculus and actually make an interactive lambda calculus editor. To that end, I have researched a few approaches to code visualization.

To Dissect a Mockingbird – David C. Keenan


Keenan is one of the few people who have also undertaken the project of visualizing lambda calculus; however, the representations he has created become overly complex very quickly. Because of his adherence to the strictest definition of lambda calculus syntax (in which every abstraction takes only one argument), the graphics become cluttered with enclosing lines. Additionally, the structure of his diagrams does not differ significantly from textual representations: they evaluate horizontally and largely sequentially, when in fact the strength of a graphical representation is that it frees us to engage with the lambda calculus in a nonlinear manner.
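
To make the single-argument point concrete, here is a plain-Python analogy (my own illustration, not Keenan's notation): a "two-argument" function in the strict lambda calculus is really two nested one-argument abstractions, and it is exactly that nesting which multiplies the enclosing lines in his diagrams.

```python
# Plain-Python analogy: in the strict lambda calculus, a "two-argument"
# function is really two nested one-argument abstractions.
add_curried = lambda x: (lambda y: x + y)   # λx.λy.(x + y)
add_sugared = lambda x, y: x + y            # the shorthand most people write

assert add_curried(2)(3) == add_sugared(2, 3) == 5
```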

Blockly – Google


Blockly is a tool developed by Google to make programming similar to fitting together pieces of a puzzle. This seems to be a common approach to visual programming, and it frequently has drawbacks. The usefulness of the tool is constrained by the fact that all of the blocks have to be defined in advance, which requires writing text-based code and induces a jarring switch between the visual programming environment and the textual one. Because of these limitations, the plug-together style is one that I intend to avoid for the most part. However, these projects do strike at the necessity of abstraction in a visual programming language. It will be important to be able to collapse elements within the lambda calculus editor, to avoid being overwhelmed in a complex program. Block-based visual editors are often designed from this standpoint, and incorporating that use pattern should improve the quality of my user experience.

Visual Lambda – Viktor Massalogin


Visual Lambda is another take on the problem of creating a graphical lambda calculus environment. Like Keenan, Massalogin focuses more on the evaluation of lambda expressions and less on their composition. Visual Lambda also appears to prefer a left-to-right orientation and does not compress abstractions for simplicity. It is also not clear how the user should interpret the diagrams in order to understand the lambda expressions they represent. From this I can see that presenting information clearly, particularly to those who have never worked with lambda calculus before, will likely be a challenge in future work with graphical lambda calculus.

Kyna

31 Mar 2013

Solar2 –

This game was recommended to us by Nathan during our critique. It’s a very interesting game with a really unique mechanic. You play as an asteroid that gains mass over time, although no specific instructions are ever given to you. The idea of learning through playing and acting really appeals to me and I hope that it’s something we can pull off in Small Bones. While our game will certainly not be as elegant as Solar2, I hope that we can emulate the same learning techniques that it employs so well.

LIMBO –

In the same vein of instructionless games, LIMBO is a very unique puzzle game with simple mechanics used in interesting ways. The game only allows for movement and an all-purpose interaction button. Done entirely in monochrome silhouettes, LIMBO is a very dark game that does a very good job of making you feel alone. While I don't necessarily want Small Bones to reflect the same darkness that LIMBO does, I'd very much like to emulate its wordless teaching and ingenious use of a simple mechanic for extremely varied gameplay.

Canabalt –

This was one of the games mentioned in the comments during our critique. While it's similar to Small Bones in that it's a runner, it's quite fast-paced and much more reaction-based than I think Small Bones will ever be. However, I really liked the two-player aspect of this game, and how it became somewhat of a race between the players. It seems like the terrain is randomly generated so that the track is infinite, and I wonder if someday we could implement two-player in Small Bones. We'd have to work on the mechanic a bit, since right now it definitely wouldn't work for two players, but I think having to race/cooperate with only a finite number of skeletons could be really interesting.

Yvonne

31 Mar 2013

Sketch your Game: Ideas/Research
“Sketch the levels for your game.”
My capstone project will be a continuation of my second project on interactivity. I am dropping the floor switches and wall projection in order to scale my project down to a size more appropriate to the act of sketching. My current goal is to create a game rig where an individual can sit, place a piece of paper on the table, sketch, and then have the characters projected directly onto the drawn image. I’m still trying to figure out a good method of character control, but at this point I am leaning toward a generic game controller.

I'm also tossing around different ideas. Perhaps certain symbols can be drawn that a character or the AI can interact with in special ways, e.g. death traps and portals. Also, I need to work on my AI. Being very new to programming (second semester of Processing), I'm not exactly sure where to start. The ghosts were programmed, admittedly, in a very stupid way. It would be nice to give them a bit more intelligence; I would prefer that their movements become more unexpected and challenging to interact with.
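
As a very rough sketch of what smarter ghost steering could look like (written in Python rather than Processing, and all the names and numbers are placeholders of my own), mixing random wandering with chasing the player is one simple option:

```python
# A rough sketch of ghost steering that mixes random wandering with chasing
# the player, so movement is less predictable. All names and numbers here
# are placeholders, not from my actual project.
import math
import random

def ghost_step(ghost_x, ghost_y, player_x, player_y, speed=2.0, aggression=0.6):
    """Return the ghost's next position: chase with probability `aggression`."""
    if random.random() < aggression:
        # steer toward the player
        angle = math.atan2(player_y - ghost_y, player_x - ghost_x)
    else:
        # wander in a random direction
        angle = random.uniform(0.0, 2.0 * math.pi)
    return ghost_x + speed * math.cos(angle), ghost_y + speed * math.sin(angle)
```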

[edited April 02, 2013]

SketchSynth

This is very similar to what I want to do. Basically you sketch something (in this case, a GUI), the computer recognizes the sketched forms, a projector maps additional information, and a camera reads your motions for interactivity and feedback.
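
Just to sketch how the "recognize the sketched forms" step might start (this is my own guess in Python with OpenCV, not SketchSynth's actual pipeline, and it assumes the OpenCV 4.x API), dark pen strokes on light paper can be pulled out as contours:

```python
# A hedged sketch of detecting drawn forms: threshold the ink and find contours.
# This is a guess at a starting point, not SketchSynth's actual pipeline.
import cv2

def find_drawn_shapes(frame):
    """Return bounding boxes of dark pen strokes on light paper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # invert-threshold so ink becomes white blobs on a black background
    _, ink = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```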

Scratch
http://scratch.mit.edu/projects/archmage/9397
An interesting project. You basically sketch your own game: walls, death traps, goals, etc. Then you play it. It doesn't have the physical sketching part, but the general idea is similar to what I am going for. Maybe I'll have different icons you can sketch that perform different things, kind of like the buttons/checkboxes/scrollbars in SketchSynth. … Death traps!

Augmented Reality Project using BuildAR and Sketchup


In this project the computer recognizes certain symbols and letters. It responds by generating objects which you see on the computer screen, pretty much like Reactable, except the physical tokens have an obvious meaning (the letters C, A, R generate a car). This goes back to my idea of being able to sketch certain icons and have the computer respond.

John

30 Mar 2013

For my second-to-last project, I built a somewhat crude drawing application using Synapse, a Kinect and OpenFrameworks. For my final project, I intend to expand on this tool to make it (a) better and (b) more kick-ass. I'm still piecing together the interaction and implementation details, but for now, here's some nice prior art.

Alan

27 Feb 2013

#Listen To Color

Artist Neil Harbisson was born completely color blind, but this device attached to his head turns color into audible frequencies. Instead of seeing a world in grayscale, Harbisson can hear a symphony of color, and can even listen to faces and paintings. It is a great idea to transform visual signals into a different kind of wavelength, since it extends the human ability to sense. It is still disappointing that the device can only generate sound right now. If it could recognize color, shape, and even the depth and spatial relationships of objects, and convert them into visual signals sent directly to the human brain, it would be revolutionary.
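
As a toy illustration of the underlying idea (this is not Harbisson's actual mapping, just a sketch of scaling a color's hue onto an audible range):

```python
# A toy illustration, not Harbisson's actual mapping: take the hue of a color
# and scale it linearly onto an audible frequency range.
import colorsys

def hue_to_frequency(r, g, b, low_hz=120.0, high_hz=1200.0):
    """Map a color's hue (0..1) linearly onto [low_hz, high_hz]."""
    hue, _, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return low_hz + hue * (high_hz - low_hz)

print(hue_to_frequency(255, 0, 0))   # pure red -> low end of the range
print(hue_to_frequency(0, 0, 255))   # pure blue -> a higher pitch
```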

#Google Driverless Cars

The driverless car is not a new concept at all, even in terms of real implementations; you can trace it back to the automated car experiments by Mercedes in 1995. Thrun was criticized for this talk because he understated the value of other pioneers in this field. The video below is a keynote speech by Ernst Dickmanns introducing the automated car.

One reason Google gives for introducing the driverless car is that it is safer than cars driven by humans. This is doubtful, and even controversial among people who insist on the freedom of driving. The car of the future must be designed for autonomy from the start, rather than being a car designed to be driven that happens to drive itself automatically.

#Photo-real Digital Face

This project is good, since it generates a highly realistic face that is very close to a human face. However, there are still several deficiencies. Because of the uncanny valley, you can still tell that it is a digital face, and the movement in the real-time demo still feels static.

TED Talks on Computer Vision

Michael

27 Feb 2013

I hope you'll pardon me for writing about work that I'm already familiar with and that may not be particularly categorizable as art… I've been in New York since Monday morning, and unfortunately Megabus's promise of even infrequently functional internet is a pack of lies. My bus gets in at midnight, so to avoid having to stay up too late and risk sleeping through my alarms, I'm doing the draft mostly from memory and correcting later.

Alexei Efros’s Research

Visual Memex

Professor Efros is tackling a variety of research projects that address the fact that we have an immense amount of data at our fingertips in the form of publicly-available images on flickr and other sites, but relatively few ways of powerfully using it.  What I find unique about the main thrust of the research is that it acknowledges that categorization (which is very common in computer vision) is not a goal in and of itself, but is just one simple method for knowledge transfer.  Thus, instead of asking the question “what is this,” we may wish to ask “what is it like or associated with.”  For example, it is very easy to detect “coffee mugs” if you assume a toy world where every mug is identical in shape and color. It is somewhat more difficult to identify coffee mugs if the images contain both the gallon-size vessels you can get from 7-11 and the little ones they use in Taza d’Oro and the weird handmade thing your kid brings home from pottery class. It is more difficult still to actually associate a coffee mug with coffee itself.  In general, I’m attracted to Professor Efros’s work because it gets its power from using the massive amount of data available in publicly-sourced images, and is built upon a variety of well-known image processing techniques.

GigaPan

This may not be a fair example, but I want to share it for those who aren't familiar. GigaPan is a project out of the CREATE Lab that consists of a robotic pan-tilt camera mount and intelligent stitching software that allows anyone to capture panoramic images with multiple-gigapixel resolution. The camera mount itself is relatively low-cost and will work with practically any small digital camera with a shutter button. The stitching software is advanced, but the user interface is basic enough that almost anyone is capable of using it. We send these all over the globe and are constantly surprised by the new and unique ways that people find to use them, from teachers in Uganda capturing images of their classroom to paleontologists in Pittsburgh capturing high-resolution macro-panoramas of fossils from the Museum of Natural History. I appreciate this project because at its core, the software is simply a very effective and efficient stitching algorithm packaged with a clever piece of hardware, but it gets its magic from the way in which it is applied to allow people to share their environment and culture.
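
GigaPan's own stitcher is far more capable, but as a hedged sketch of the general idea, OpenCV's built-in stitcher can merge a handful of overlapping photos (the file names below are placeholders):

```python
# Not GigaPan's software, just a minimal sketch of panorama stitching using
# OpenCV's built-in stitcher on a few overlapping photos (hypothetical files).
import cv2

images = [cv2.imread(p) for p in ["pan_01.jpg", "pan_02.jpg", "pan_03.jpg"]]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed with status", status)
```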

Google Goggles

Google Goggles is an Android app that allows the user to take a picture of an object in order to perform a Google search. It is unclear to me how much of the computer vision is performed on the phone and how much is performed on Google's servers, but my impression is that most of it is done on the phone itself. At the very least, some of the techniques employed involve feature classification and OCR for text analysis. The app does not seem to have found widespread use, but I still find it an interesting direction for the future because it could make QR codes obsolete. Part of me hates to rag on QR codes, because I've seen them used cleverly, but I feel like most of the time they simply serve as bait: whenever people see QR codes, they want to scan them regardless of the content, because people love to flex their technology for technology's sake. I think Google Goggles might be a case where people will use it more naturally than QR codes, since in some instances it is simply easier to search a short piece of text than to take a picture.

SamGruber::LookingOutwards::ComputerVision

Pinokio – Shanshan Zhou, Adam Ben-Dror and Joss Doggett

Pinokio is an animatronic lamp which gazes around and responds to the presence of humans in a surprisingly lifelike way. It will even respond to sudden sounds in its environment and resists being turned off. Based upon the video, it would appear that the lamp is a very straightforward application of face detection, which is surprising given the quality of the character shown. The project is of course reminiscent of the Pixar lamp, though one core element it lacks relative to that earlier work is a partner. Being able to watch two lamps clumsily interact with each other gave the Pixar piece much more character than a single lamp alone can manage.
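
As a hedged guess at what such a straightforward setup might look like (my own sketch, not the artists' code), OpenCV's stock Haar cascade can find face rectangles in each webcam frame for the lamp to aim at:

```python
# A hedged guess, not Pinokio's actual code: detect faces in each webcam frame
# with OpenCV's stock Haar cascade, yielding rectangles a lamp could aim at.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```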

Virtual Ground – Andrew Hieronymi

Virtual Ground is a projected game that is played by two people who try to steer a bouncing ball to light up the floor. I find this project interesting because it seems to run counter to our typical rivalrous impression of a game. However, in terms of computer vision, it is a fairly basic exercise of tracking moving objects that does not seem to push the bounds of the possible. I am immediately curious to see how the game could evolve if more participants entered the playing area, though the presentation of this project seems to suggest that there would not be a meaningful result.
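
Purely as an illustration of that kind of tracking (a generic technique, not Hieronymi's implementation), simple background subtraction can pull out the moving players:

```python
# A rough sketch of tracking moving players with background subtraction;
# a generic technique, not Virtual Ground's actual implementation.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def moving_blobs(frame, min_area=500):
    """Return bounding boxes of regions that differ from the learned background."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```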

Can

27 Feb 2013

Reactable

Reactable, the well-known, futuristic-looking instrument, uses computer vision to detect fiducial markers and generates/controls sounds with them. It's mostly used by electronic music lovers. (Perhaps the most prominent musician who uses Reactable is Björk.)
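
Reactable's own tracking framework is actually reacTIVision; just to illustrate the fiducial idea, here is a hedged sketch using OpenCV's ArUco markers instead (it assumes opencv-contrib-python 4.7+ for the ArucoDetector API):

```python
# Not Reactable's tracker (that is reacTIVision); a sketch of the same idea
# using OpenCV's ArUco fiducials. Requires opencv-contrib-python 4.7+.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def markers_in(frame):
    """Return {marker_id: center (x, y)} for every fiducial seen in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return {}
    return {int(i): c[0].mean(axis=0) for i, c in zip(ids.flatten(), corners)}
```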

 

Nosaj Thing – Eclipse / Blue

An amazing modern dance performance by Nosaj Thing, and a visualization of an amazing song. The way they blend the dance moves with the visuals, and the way they visualize the sound, are pretty mind-blowing.

 

Orbitone

A life-size version of Reactable, or an ambient music-making tool. What I can't understand about this project is the timing/positioning. It doesn't seem too precise to me, and although I think it can be useful for making ambient music, it could be more useful as a toy for kids.

Elwin

27 Feb 2013

IllumiRoom: Peripheral Projected Illusions for Interactive Experiences // Microsoft Research


Wow! IllumiRoom is a proof-of-concept system from Microsoft Research. It augments the area surrounding a television screen with projected visualizations to enhance the traditional living room entertainment experience. I think this is an excellent and smart implementation using the Kinect and projection. They have taken the confined experience of a TV screen and extended the virtual world into the physical world. I love that they didn't do a literal extension of the virtual environment, but decided to depict the surroundings with a variety of visual styles (otherwise they could have just used a projector). I can definitely see this system making gaming more engaging and immersive.
 

PhobiAR // HITLabNZ


An advanced interactive exposure therapy system to treat specific phobias, such as the fear of spiders. This system will be based on AR technology, allowing patients to see virtual fear stimuli overlaid on the real world and to interact with the stimuli in real time. I think this is a very interesting usage of Augmented Reality. Even though I know the spider is fake, it still gives me goosebumps seeing how it walks up that person’s hand. I would love to see how effective this treatment is for people who actually have arachnophobia.
 

Leap Motion


Leap Motion represents an entirely new way to interact with your computer. It's more accurate than a mouse, as reliable as a keyboard, and more sensitive than a touchscreen. For the first time, you can control a computer in three dimensions with your natural hand and finger movements. Just look at the accuracy! It's hard to believe how precise the finger tracking is and how fast it responds. I'm very curious and excited to get my hands on a Leap Motion and test it out for myself. I think this will be the next big thing for designers, similar to what the Kinect was, but this time it's for your hands/fingers!