Daily Archives: 08 Apr 2013

Erica

08 Apr 2013

1. ExR3 by Kyle McDonald and Elliot Woods

ExR3 is an installation involving a number of mirrors that reflect and refract geometric shapes found on the various walls of the room.  As the user moves through the space, they explore and discover the shapes and the interrelations between them.  This project is really interesting because the placement of the mirrors and shapes was carefully calculated using computer vision, reminding me of the way an architect would plan out the experience of a space.

2. IllumiRoom by Microsoft

This is a really interesting project that allows the experience of a movie or video game to expand beyond the boundaries of a television screen and interact with the other objects in the room in a number of ways.  The system uses a Kinect to gauge the room and find the location and outlines of objects within it.  It definitely changes the experience of both the space and the game/movie being enjoyed, but I would like to see some more interesting augmentations.  The ones shown in the videos are somewhat predictable, and I think that there is much more potential in this new system.

3. Smart Light by Google Creative Lab

This project is a series of explorations involving projecting (literally projecting) digital functionality onto analog objects in the real world.  It is inspired by the idea of extending the knowledge base of the web outside of the computer and into the everyday world.  I find this idea intriguing, but it raises a few questions that I hope will be explored in the future.  Firstly, in the documentation there is no indication of what is making the projection, and I’m wondering how plausible it is to take this technology out of the lab and into the everyday world.  If it is not, it is somewhat limiting and not addressing the question as well as it could.  Secondly, in the second documentation video, they are using objects that seem to have been built for the sole purpose of these experiments.  I prefer the ideas presented in the first video that suggest using this technology in conjunction with everyday objects.

Erica

08 Apr 2013

For our Capstone Project, Kyna and I are continuing to collaborate on our mobile game Small Bones.  As such, for this Looking Outwards I tried out some currently popular mobile runner games:

1) Jetpack Joyride by Halfbrick

As the name implies, this game is an infinite runner where the character you control is wearing a jetpack.  The player can make the jetpack hover in mid-air by tapping and holding a finger on the screen.  The purpose of hovering is three-fold: 1) to avoid obstacles, 2) to collect coins, and 3) to gain power-ups.  Each power-up is a different vehicle with different capabilities, though each is still controlled by tapping or tapping and holding.  On the plus side, this game has a simple but clear premise based on a single mechanic that is intuitive and cohesive with the theme.  The game is very easy to learn, even without a tutorial.  I think the reason for this is the simplicity of the mechanic and the player’s intuitive notion that to make a jetpack fly you press and hold a button.  I also think the graphics are very well done, though I’m not as into the “cutesy” style that seems to dominate mobile games, in particular when depicting humans.

In terms of negatives, the storyline is very unclear.  If I had not seen the above video, I would not understand that the character is a typical 9-to-5 American worker who is unhappy with his life and decides to steal a jetpack and go on a joyride.  In addition, although the different power-up vehicles are creative and give gameplay more character, they seem to take away from the storyline and the main premise of stealing a jetpack.  Also, like a lot of mobile runner games, there’s this idea of collecting coins to buy items outside of the actual gameplay for use in gameplay, which breaks the player’s suspension of disbelief; I’m not such a fan of that.

2) MegaRun by getset

MegaRun is also a runner, but it is broken up into levels (the direction Small Bones is currently heading).  Again, there is a simple jump mechanic of tapping, and holding to jump higher.  This time, if you collect a power-up it is automatically activated and stays active until another power-up is collected, an enemy is run into, or the power-up’s timer runs out.  This game also uses “cutesy” graphics, but I think it works better here because the characters and the world are meant to be cartoonish and not resemble the “real” world.  Furthermore, the power-ups in this game make more sense than those in Jetpack Joyride because they actually make finishing the level easier; as I said, in Jetpack Joyride the main purpose of the power-ups seems to be just making the gameplay more interesting.  Also, the cohesiveness of the game’s narrative extends to those coins I hate so much.  For one, the storyline is based on the character trying to regain his riches, as seen in the above trailer (though, again, without the trailer this would not be immediately obvious), and, secondly, collecting different types of coins helps the character run faster, thereby helping the player complete the level.  On the negative side, the use of levels is purely to separate out difficulty; I wish there were more storyline reveals in the different levels.

3) Temple Run and Temple Run 2 by Imangi Studios

(sorry for such long videos)

This is actually one of my favorite runner games.  It uses a few variations on one mechanic, the swipe, to perform different movements.  Sliding your finger up makes the character jump, down makes the character slide, and from one side to the other makes the character turn.  It also uses the accelerometer to tilt the character’s run pattern to one side or the other.  All of these mechanics are simple yet intuitive and add to the sense of depth in the 3D world.  Although I am partial to 2D games, I happen to really like the aesthetics of Temple Run, and even more those of Temple Run 2, and I think they really enhance the storyline of the game.  Like with the first two games, the storyline is somewhat vague and implicit, but unlike with the first two games, it is less bothersome in Temple Run.  The beginning sequence of the scary gorilla-like monsters chasing you, along with the graphics, implies that you need to run as fast as you can to safety, and that gives the player enough agency to feel engaged with the premise.  One of the best features of this game is the tutorial.  It does a good job of teaching you the mechanics of the game one at a time with a combination of world obstacles and text overlays.  It shows you different obstacles where you want to use different mechanics and lets you die if you make a mistake, resetting you to the part of the tutorial at which you died.  I also like that Temple Run incorporates coin collection more into gameplay: although there is no storyline reason to collect coins, the placement of the coin paths suggests to the player the path and mechanics they may want to use at that particular moment.

The two versions of the game are pretty similar but have a couple of key differences.  Firstly, the first version stays at a constant height and the world has a purely orthogonal layout, while the second version allows for variation in height and in the curvature of the path.  Although I like the variation in height, the curvature distracts from the mechanics in my opinion, because it makes it less clear when you need to swipe to turn.  The second difference is that the second version adds a double-tap to enable a power-up.  One potential issue we were running into with Small Bones was differentiating between drawing a path and enabling a power-up, the first of which was to be a tap-and-drag and the second a tap-and-release.  We could use a double-tap to better distinguish the two (see the sketch below).
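
As a rough illustration, here is a minimal sketch of how a touch handler could keep tap-and-drag, tap-and-release, and double-tap from colliding. It assumes a generic touch-event loop; the thresholds, callbacks, and class name are hypothetical placeholders rather than anything from Small Bones.

```python
import time

# Hypothetical gesture classifier: distinguishes tap-and-drag (draw a
# path) from double-tap (trigger a power-up). Thresholds are guesses
# that would need tuning on a real device.
DRAG_THRESHOLD_PX = 10      # movement beyond this means "drag"
DOUBLE_TAP_WINDOW_S = 0.3   # two taps within this window = double-tap

class GestureClassifier:
    def __init__(self):
        self.down_pos = None
        self.dragging = False
        self.last_tap_time = 0.0

    def on_touch_down(self, x, y):
        self.down_pos = (x, y)
        self.dragging = False

    def on_touch_move(self, x, y):
        dx = x - self.down_pos[0]
        dy = y - self.down_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > DRAG_THRESHOLD_PX:
            self.dragging = True
            self.extend_path(x, y)          # drawing a path

    def on_touch_up(self, x, y):
        if self.dragging:
            self.finish_path()
            return
        now = time.time()
        if now - self.last_tap_time < DOUBLE_TAP_WINDOW_S:
            self.activate_power_up()        # second tap arrived in time
            self.last_tap_time = 0.0
        else:
            self.last_tap_time = now        # first tap: wait and see

    def extend_path(self, x, y): pass       # game-specific stubs
    def finish_path(self): pass
    def activate_power_up(self): pass
```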

The Big Knob

[ Project with Ziyun ]

In music production studios, artists tend to get overly excited and assume the engineer can fix even the largest musical mistakes.

Every single time this happens (and it happens a lot, in every studio), engineers deliver the same line with passion: “Don’t worry, it’ll sound better in mastering.”

In mastering studios, when musicians are presented with multiple mastered options, they tend to pick the loudest and ugliest master, which in reality has peaks and awful pops and crackles, so mastering engineers feel the need to say: “Don’t worry, it’ll sound better when printed to CD / converted to MP3.”

In TV and cinema, this is usually an issue with color. Colorists calm their clients down with the standardized sentence “Don’t worry, it’ll look better after color,” or sometimes even add: “Don’t worry, it’ll look better in broadcast/projection.”

So essentially, it’s all about artist psychology. Almost everyone in the music/videography industry jokes about having a big red button that fixes the mix/master/color/final. There are even products named after this (this one is called the Big Knob, by Mackie).

And sadly, for the most part, mastering or color is done using presets. In these cases, it really is a matter of a couple of buttons being pushed to make the artist feel better. We think we can fix this by actually making that magic button.

One fader, one knob, and one button that fixes everything.

It’ll be a big music tech / videography joke, which partly works. The fader and the knob (pot) will control most of the parameters (at the same time), and the button will switch it on/off.
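
For the software half, the joke is mostly a mapping problem: one fader fanned out across many parameters at once. Here is a tiny sketch of that mapping, assuming the controls arrive as normalized 0..1 values (e.g. read from an Arduino over serial); every parameter name and range below is invented for illustration.

```python
# Sketch of the one-fader-controls-everything joke. The button toggles
# the whole effect; the fader and knob each drive several (made-up)
# processing parameters simultaneously.

def lerp(lo, hi, t):
    """Linear interpolation between lo and hi for t in [0, 1]."""
    return lo + (hi - lo) * t

def big_knob(fader, knob, button_on):
    if not button_on:
        return {}                          # bypass: fix nothing
    return {
        "eq_high_shelf_db":   lerp(-3.0, 6.0, fader),
        "compressor_ratio":   lerp(1.0, 8.0, fader),
        "limiter_ceiling_db": lerp(-6.0, -0.1, fader),
        "saturation_amount":  lerp(0.0, 1.0, knob),
        "stereo_width":       lerp(0.8, 1.4, knob),
    }

print(big_knob(fader=0.7, knob=0.4, button_on=True))
```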

Andy

08 Apr 2013

I have some papers that I will scan and put here, but here is some text as well. And here they are!

This project will build upon my previous work with RGBD and Augmented Reality, with a clear vision of a final product and some extra ambitious pieces to justify two people working together.

Our premise is to use RGBD and Augmented Reality to create a window into a fantasy world, or in the words of Golan, “a whimsical augmentation of a physical space.” Whimsical and personal graffiti sans vandalism? This piece that Golan showed me is definitely an inspiration:

Another possible source of inspiration from artist Mark Jenkins:

We both discussed what forms this augmentation might take, and realized that we both saw actors performing with a sort of unreal physicality as part of our vision. In order to make that augmentation the best it can be (and not have the Kinect depth mesh distract with stretched areas formed from lack of data), we are considering whether we can get the RGBDToolkit to work with three Kinects, creating one mesh using data from three angles.

An example of a project which uses this is here: http://vimeo.com/21676294

The major challenge is fitting DSLR data onto this newly created mesh. If we can get this to work, our augmented actors would fit better with their space. (A geometric sketch of both steps is below.)
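
Geometrically, both steps reduce to standard pinhole-camera math. Below is a minimal numpy sketch, assuming each Kinect’s extrinsic calibration (a 4x4 transform into a shared world frame) and the DSLR’s intrinsics and pose are already known, e.g. from OpenCV camera calibration. This is not RGBDToolkit’s actual API, just the underlying idea.

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """Merge point clouds from several Kinects into one world-frame cloud.

    clouds: list of (N, 3) point arrays, one per Kinect.
    extrinsics: list of 4x4 matrices taking each Kinect's coordinates
    into the shared world frame.
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 4)
        merged.append((homo @ T.T)[:, :3])                # apply transform
    return np.vstack(merged)

def project_to_dslr(point_world, K, T_world_to_cam):
    """Pixel (u, v) where a world point lands in the DSLR image."""
    p = T_world_to_cam @ np.append(point_world, 1.0)      # camera frame
    u, v = (K @ (p[:3] / p[2]))[:2]                       # perspective divide
    return u, v

# Tiny smoke test with identity transforms.
T = np.eye(4)
cloud = np.random.default_rng(0).random((10, 3))
print(merge_clouds([cloud, cloud], [T, T]).shape)         # -> (20, 3)
```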

So, we take the meshes of a 1-4 second video, load them into Qualcomm’s Vuforia library for augmented reality on a mobile phone, and strategically place them around the space we are augmenting, using either custom markers or (maybe) pictures of the architecture itself. We’ll see how far we can push the library. It should be cool!

Marlena

08 Apr 2013

[Sketch]

Capstone Check-in 1

Your avatar is on a boat floating through the air. The boat is slowly heading for a lighthouse light spinning far in the distance. If you look above, you see clouds and sky; if you look below, you see lightning. Schools of fish swim next to the boat and flit in and out of the clouds. You are free to walk around your ship, though there is not much to explore. However, if you leap over the side of the boat, you yourself become a fish. You can swim and get back to the boat as long as you control the character. Left to its own devices, though, your character will join a school of fish and begin to travel far away from your ship. You may encounter other ships, but you may never find your own again.

TO-DO LIST
Modeling
-Ship
-Fish
-Fish state change
-Main character
-Lighthouse

Animation
-Fish swimming
-Other fish activities
-Main character walking
-Main character swimming
-Main character state change

Scripting
-Main character state change
-Main character movement
-Move and spawn clouds
-Clouds avoid ship
-Fish on ship
-Fish flocking (see the boids sketch below)
-Compelled flocking
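
For the flocking items, the classic starting point is Reynolds’ boids: cohesion, alignment, and separation. Below is a minimal numpy sketch of one update step; the weights and radii are placeholders to tune, and the “compelled” variant could simply add one more steering term toward the player-controlled fish.

```python
import numpy as np

N = 50
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (N, 2))   # fish positions
vel = rng.uniform(-1, 1, (N, 2))    # fish velocities

def step(pos, vel, dt=1.0, max_speed=2.0):
    # Cohesion: drift toward the center of the school.
    cohesion = (pos.mean(axis=0) - pos) * 0.005
    # Alignment: match the school's average heading.
    alignment = (vel.mean(axis=0) - vel) * 0.05
    # Separation: push away from any fish closer than 5 units.
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 2)
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    too_close = dist < 5.0
    separation = (diff / dist[..., None]
                  * too_close[..., None]).sum(axis=1) * 0.05
    vel = vel + cohesion + alignment + separation
    # Clamp speed so the fish don't accelerate forever.
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel * (max_speed / speed), vel)
    return pos + vel * dt, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print(pos[:3])
```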

Meng

08 Apr 2013

Increase Interactivity

figure from
http://interactiondesign.wordpress.com/2011/06/21/interface-design-positions-my-own/
http://librairie.immateriel.fr/fr/read_book/9780596518394/ch01s03#

Hardware:
Kinect to detect point of view
Arduino to add sensors, e.g. a hall-effect sensor
Connect user data to Cosm (see the sketch after these lists)

Software:
Another application for 3D map
A more meaningful/interesting story
Maybe – keep working on Upfolding map
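
For the Cosm connection: as far as I recall its docs, the Cosm/Pachube v2 REST API takes a PUT with an X-ApiKey header and a JSON list of datastreams. The sketch below shows the shape of one upload; the feed id, datastream id, and API key are placeholders, and the endpoint format should be treated as an assumption.

```python
import json
import urllib.request

FEED_ID = "12345"               # placeholder feed id
API_KEY = "YOUR_COSM_API_KEY"   # placeholder key

def push_reading(value):
    """PUT one sensor reading (e.g. from the hall-effect sensor) to Cosm."""
    body = json.dumps({
        "version": "1.0.0",
        "datastreams": [
            {"id": "hall_sensor", "current_value": str(value)},
        ],
    }).encode()
    req = urllib.request.Request(
        f"https://api.cosm.com/v2/feeds/{FEED_ID}",
        data=body,
        headers={"X-ApiKey": API_KEY, "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status      # 200 on success

print(push_reading(42))
```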

Some Maybes:
RGBD – version of average face
Point Cloud Library

Alan

08 Apr 2013

## Background ##

In spring this year, people in northern China experienced severe weather, with extremely dangerous sandstorms and air pollution. One of the most dangerous factors in air pollution is PM2.5, which at that time was 400 times over the most dangerous level defined by the WHO.

However, PM2.5 data was not open to the public in China until recently. The studio BestApp.us in Guangzhou, China collected and verified data from official sources and opened an API to the public. Therefore I decided to build a website with visualizations to help people easily find the danger level in their own cities.

## Design 1 ##

Map visualization:

I got PM2.5 data for 74 cities in China from 496 air detection stations. Since the data includes not only PM2.5 values but also SO2, NO2, PM10, and O3 levels, the website will allow users to choose which air pollutant they want to view.

Visualization Tools: TileMill, D3.js, Processing.js

## Design 2 ##

PM2.5 history data for each city in China. When users click on a label on the map, they not only get the current PM2.5 value but are also able to browse the PM2.5 history for that city (a sketch of the underlying query follows).
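
The site itself runs on Node.js/Express/MongoDB (see Tech below), but the shape of this history query is easy to sketch; here it is in Python/pymongo standing in for the Node code, with the collection and field names ("readings", "city", "pm2_5", "time_point") guessed for illustration.

```python
from pymongo import MongoClient

# Placeholder connection; the real database name and schema may differ.
db = MongoClient("mongodb://localhost:27017")["pm25"]

def city_history(city, limit=168):
    """Return the `limit` most recent hourly readings for one city."""
    cursor = (db.readings
              .find({"city": city}, {"_id": 0, "time_point": 1, "pm2_5": 1})
              .sort("time_point", -1)   # newest first
              .limit(limit))
    return list(cursor)

print(city_history("guangzhou", limit=5))
```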

## Tech ##

Server: Node.js, Express and MongoDB

Data: PM25.in

Github Repository: https://github.com/hhua/PM2.5
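
As a sketch of the data side: PM25.in exposes token-authenticated JSON endpoints, one record per monitoring station. The endpoint path, parameter names, and field names below are my best recollection of its public docs, so treat them as assumptions; a free API token from PM25.in is required.

```python
import requests

API_BASE = "http://www.pm25.in/api/querys"
TOKEN = "YOUR_TOKEN_HERE"   # placeholder token

def city_pm25(city):
    """Fetch current PM2.5 readings for every station in a city."""
    resp = requests.get(
        f"{API_BASE}/pm2_5.json",
        params={"city": city, "token": TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()      # expected: list of per-station dicts

for station in city_pm25("guangzhou"):
    print(station.get("position_name"), station.get("pm2_5"))
```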

Notes from comments:

  1. Cosm / Pachube open data for PM2.5
  2. whether a map should be the main (or ONLY) entrance to the data
  3. include some basic information about these pollutants and how to protect yourself (if possible) on the website. Accessible public service information would definitely be helpful

  4. how you visualize over wide areas
  5. Will you be able to zoom in/out, and thereby get a greater/lesser resolution of PM2.5 distribution
  6. What is the important part of the data? How do people want to see it visualized?

  7. Immediate concerns include interpolating data across geographic features that aren’t well-categorized, which is a proven hard problem.  There’s also the interesting ethical question of providing people with data that indicates imminent or constant danger without also providing them a means of acting on it.

Elwin

08 Apr 2013

I’ve decided to take my “shy mirror” idea from project 3 to the next level for my capstone project. The comments that I received from fellow students really helped me to think a bit deeper about the concept and how far I could take this.

Development & Improvements


– Embed the camera behind the mirror, in the center. This way the camera’s viewing angle will always rotate with the mirror and won’t be restricted the way a fixed camera with a fixed viewing angle is in my current design. Golan mentioned this in the comments, and I had the idea earlier too, but it kind of got lost during the building process. This time I definitely want to try out this method, and I will probably purchase some acrylic mirror instead of the mirror I bought from RiteAid.

– Golan also mentioned using the standard OpenCV face tracker. I wasn’t aware that the standard library had a face tracking option. This is definitely something I will try out, since ofxFaceTracker was lagging for some reason (a minimal sketch follows).
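
For reference, a minimal sketch of that stock approach: OpenCV’s bundled Haar cascade run per frame on a webcam feed. Strictly speaking it detects rather than tracks, but per-frame detection should be fast enough to drive a servo.

```python
import cv2

# Load the stock frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cx = x + w // 2   # face center x would drive the mirror's target angle
        print("face center x:", cx)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```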

– Trajectory planning for smoother movement. At the moment I’m just sending a rotational angle to the servo, hence the quick snap to a specific location (see the easing sketch below).
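
One cheap form of trajectory smoothing is to ease the commanded angle toward the target every tick instead of jumping straight to it. A small sketch, where the smoothing factor (and any added jitter) doubles as a personality parameter:

```python
# Exponential easing toward a target servo angle. Lower `smoothing`
# gives slower, "shyer" motion; a high value snaps like the current build.
def make_easer(smoothing=0.15, start_angle=90.0):
    state = {"angle": start_angle}
    def ease_toward(target):
        state["angle"] += (target - state["angle"]) * smoothing
        return state["angle"]   # send this value to the servo each tick
    return ease_toward

ease = make_easer(smoothing=0.08)   # a slow, timid character
for _ in range(10):
    print(round(ease(150.0), 1))    # angle creeps toward 150 degrees
```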


– I always had the idea that this would be a wall piece. For the capstone project, I think I can pull that off if I plan it in advance and arrange a space and materials to actually construct a wall for it. Also, the current mount is pretty makeshift and was built at the last minute. For the capstone version, I will try to hide the electronics and spend more time creating and polishing a casing for the piece. I’m probably going to do some more sketches, model it in Rhino, and then perhaps 3D print the shell.

Personality

This would be the major attraction. Apart from further developing the points above, I’ve received a lot of feedback about creating more personality for the mirror. I think this is a very interesting idea and something I would like to pursue for the capstone version.

In the realm of the “shy mirror”, I could create and showcase several personalities based on motion, speed, and timing. For example:
– Slow and smooth motion to create a shy and innocent character
– Quicker but smooth motion for a scared one (?)
– Quick and jerky motion to purposely neglect your presence, like giving you the cold shoulder
– Quick and slow motion to ignore
These are just quick ideas for now; I would need to define them more in depth. In order to do this, I’ve been diving into academic literature about expressing emotion through motion, Laban Movement Analysis, and robotics.

Also, Dev mentioned looking at Disney for inspiration, which is an awesome idea.

Someone also mentioned adding a behavior where the mirror roams around slowly in the absence of a face and becomes startled when it finds one. I think that’s a great idea, and it would really help in creating a character.

Anna

08 Apr 2013

My plan for my capstone project is to construct a working, tangible version of the interactive novel concept I prototyped for my Interactivity Project. That said, my sketch looks an awful lot like (read: the same as) the sketches I posted in my Project 3 deliverable post. For an overview of my intentions, please visit the IMISPHYX IV project page, and for a continually updating list of prior art that has inspired me, try this link.

I’ve iterated slightly on my goals for the project, based upon feedback from the class and also on my own daydreams, of which I tend to have a ton. Moving forward, I hope to accomplish the following things:

1. implement the reacTIVision prototype (see the sketch after this list).
2. ditch the objects/props concept from the last iteration, and focus instead on illuminating the contrast between the dialogue that occurs among characters and the thoughts characters hold within them.
3. really push the way I display text upon the table to make it as engaging as possible.
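
For the reacTIVision step: the tracker broadcasts TUIO (OSC over UDP, port 3333 by default), and fiducial markers arrive as /tuio/2Dobj “set” messages. Below is a hedged sketch of a listener using the python-osc package; the argument ordering follows the TUIO 1.1 spec as I recall it, so verify against the actual stream.

```python
from pythonosc import dispatcher, osc_server

def on_2dobj(address, *args):
    """Handle /tuio/2Dobj messages; only 'set' carries object state."""
    if args and args[0] == "set":
        _, session_id, fiducial_id, x, y, angle = args[:6]
        # e.g. map fiducial_id -> character object, x/y -> table position,
        # angle -> toggle between 'dialogue' and 'internal' mode
        print(f"object {fiducial_id} at ({x:.2f}, {y:.2f}) angle {angle:.2f}")

disp = dispatcher.Dispatcher()
disp.map("/tuio/2Dobj", on_2dobj)
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 3333), disp)
server.serve_forever()   # blocks; Ctrl-C to stop
```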

I’m also still torn between using my own personal story, Imisphyx, for this project, and proceeding with a story that has already been told, thereby at least allowing people to compare the interactive version to the original, static version. The latter would also eliminate my need to worry about perfecting the story at the same time I’m perfecting my interaction, although it *is* arguable that evolving both story and presentation simultaneously is the best way to go.

If I were to shy away from using ‘Imisphyx’, I would like to revisit the Alfred Bester novel I was toying with in my original Project 3 sketch. What’s nice about The Demolished Man is that it deals heavily with the exact themes I’m trying to explore in my interactive piece: the tension between what’s happening inside somebody’s head and what they’re actually saying out loud. I think this story could allow me to play with interesting visualizations of text in the characters’ ‘first person’, ‘internal’ mode.

For example, below is a passage from the book where the murderer, Ben Reich, is trying to get a very annoying song stuck in his head, so that the telepathic cop, Linc Powell, can’t pry beyond it into Reich’s mind to discover his guilt.

A tune of utter monotony filled the room with agonizing, unforgettable banality. It was the quintessence of every melodic cliche Reich had ever heard. No matter what melody you tried to remember, it invariably led down the path of familiarity to “Tenser, Said The Tensor.” Then Duffy began to sing:

Eight, sir; seven, sir;
Six, sir; five, sir;
Four, sir; three, sir;
Two, sir; one!
Tenser, said the Tensor.
Tenser, said the Tensor.
Tension, apprehension,
And dissension have begun.

“Oh my God!” Reich exclaimed.

“I’ve got some real gone tricks in that tune,” Duffy said, still playing. “Notice the beat after `one’? That’s a semicadence. Then you get another beat after `begun.’ That turns the end of the song into a semicadence, too, so you can’t ever end it. The beat keeps you running in circles, like: Tension, apprehension, and dissension have begun. RIFF. Tension, apprehension, and dissension have begun. RIFF. Tension, appre—”

“You little devil!” Reich started to his feet, pounding his palms on his ears. “I’m accursed. How long is this affliction going to last?”

“Not more than a month.”

The description Bester provides of the nature of the song, the patterns it possesses, and its cyclical nature lends itself to some really awesome interactive portrayals. On the table, one could envision Reich’s character object set to ‘internal’ mode, suddenly emitting endless spirals of the annoying earworm tune. Perhaps, every time Linc tries to pry into his head, his thoughts are physically deflected on the screen by the facade of Reich’s textual whirlpool. See below:

[Sketch: imisphyx_reactable-05]