Category Archives: Uncategorized

amwatson

12 May 2015

(Last week, I posted a lengthier “documentation” post outlining my process.  In light of updates, I thought I would also post a condensed version that is hopefully quicker to take in.)

Some screenshots from the experience

Overview

A Theatrical Device is an original play seen through the front-facing cameras of laptops, desktops, and cellphones held by the actors.  These devices provide multiple, simultaneous perspectives of the same timeline across multiple rooms, showcasing the progression of different characters.  Audiences find their own way through the play by exploring it in a 3D virtual reality world (via the Oculus Rift).

The Play

Inspiration


It wasn’t until very recently — probably within the last five years — that high-quality, front-facing cameras became standard on cell phones, laptops, tablets and other consumer electronics.  Suddenly, whenever we’re staring at a screen, we are also staring into a camera.

This is hardly an original observation.  Many of my friends live in true fear that their front-facing cameras might be accessed by hackers or the NSA or bored strangers on a website, and no amount of hardware-enabled indicator lights can reassure them.  Some cover these cameras with stickers, disable them entirely, or go so far as to destroy them.  I think these reactions speak a lot to just how complete a voyeur’s picture of us could be: how much he or she would be able to discern about our private lives by staring at us through our screens.  I started to wonder: what exactly would someone be able to discern through these perspectives?  How much intimacy could the audience have with their subject?  What kinds of stories might we be able to tell?

The advent of these cameras ends up being very interesting from a storytelling perspective.  While I’m by no means a film director, I do know that being able to shoot multiple, simultaneous perspectives of the same scene is sort of an impossible dream for many filmmakers.  For one, assuming you can even get ahold of multiple cameras, filming from multiple angles requires hiding the cameras from one another: no movement, no interior angles, and a very limited number of possible shots.  However, it’s 2015: there are plenty of contexts, conflicts and actions for which cell phones and laptops are natural, invisible parts of the landscape.  Using the cameras on devices allows us to seamlessly integrate any number of cameras into a scene, compelling us to ask: exactly what new kinds of experiences are we now able to create?

Story

 


Shots from A Theatrical Device

To me, writing the script became the most interesting part of the project.  After all, in order to justify a cool new platform, you need a story it can tell better than any other medium could.

A Theatrical Device centers around three college alums visiting their former classmate, still a student, two years after graduation.  The student, Hattie (who is played by me for no more meaningful a reason than the fact that the actor got sick on shooting day), suffers from uncontrollable bouts of emotional instability.  The reunion grinds to a halt after a casual comment triggers her to hole up in a bedroom and suffer a breakdown, leaving the rest of her guests struggling to figure out what to do next.  My hope is that the different perspectives manage to present the play as either a comedy or a tragedy: from the outside, you have friends trying to reconnect in spite of the awkwardness of their friend’s ill-timed “moment”.  Other perspectives, however, will reveal the interior struggles of Hattie and her friends, showing what it means to be unable to hide, cure, or explain one’s inner turmoil.

At its core, each character’s story revolves around the notion of sensitivity — to what extent must we go out of our way to accommodate others? How “real” can someone’s struggle be when it’s all in their head?  I’ll admit, the script was hastily written, and perhaps presents too much of a bleeding heart liberal perspective on this question to have particularly complex commentary.  In future iterations, though, I’d like to more strongly point each character to a different conclusion.

(for more about the process, please see the previous documentation post)

The Virtual Reality

Inspiration

Virtual reality is pretty new, and while there are a lot of very cool things it will one day be able to do well, many of those simply aren’t possible or passable with an Oculus DK2.  One thing the DK2 is very good at, I’m told, is providing realistic cinema experiences.  While movement and rendering are still iffy, VR cinemas showing 2D films are able to create surprisingly effective presence.

Of course, no one finds that particularly exciting.  However, it meant the DK2 could be very effective at displaying a movie: MY movie.  In VR I could have as many screens as I wanted, at any size I wanted.  The DK2 was thoroughly equipped to make the perfect 3D interface for navigating through films.

On that note, I’m very interested in what sort of new interfaces we can create with VR.  In VR, the two banes of any set designer’s existence — cost of construction and the laws of physics — can be ignored entirely.  How does this change the ways in which we can interface with data or explore an idea?  How might we be able to see things differently?

Implementation

Ideally, I would have come up with some kind of interface that is literally impossible in the physical world: something mindblowing and earth-shattering that can only be done in VR.  In all honesty, someone who was very rich and kind of clever could probably make a physical version of my VR world.  That falls short of my expectations, but I hope to rectify it in another iteration (hopefully by then, I will be more clever).

The world is a series of rooms, like a gallery: each room in the gallery is a scene in the play, with framed videos on the walls showing the many camera angles.  Walking into a room causes the scene to restart from the beginning, and an audience member can point a cursor at a video to bring it into full focus.  Admittedly, this is all a little hard to describe.

Each room provides a slightly different way to interface with the videos.  In one, there is a fake living room with actual cell phones and laptops representing each perspective.  In another, the audio perspective is chosen at random, and you watch all the videos at once on a single wall as one story is told.  I tried to make each room larger and more fantastical as the play progresses, to give each new scene a progressively larger sense of gravity and power.  By the end, the videos are literally towering over you.

Since each scene is in a different room, you can enter and re-enter a scene at will.  This gives the audience the option of exploring the same scene multiple times, and having the play progress in a vaguely nonlinear fashion.

Future Work

At the end of the day, so much about this platform and process was new to me that I saw it very much as a first draft.  Feeling like it all went pretty well, I’d love to try it all again with another iteration.  First, I’d love to revise my script, integrating what I discover about how the many narratives were communicated.  Second, I’d love to better organize my shooting process to avoid the deadly expense of phones running out of memory and battery, as well as the time spent synchronizing footage in post.  Finally, I’d love to create a less amateur VR world, and make use of the technology on hand to create truly novel ways to experience media.

I wasn’t sure when I began how interesting or effective this platform would be.  Now, I feel like it has a ton of power and potential.  I’ll upload the final version of this iteration soon, but I hope to have something even better in the future.

I will also post a video once I figure out how to do that!  Apparently streaming game footage is something everyone can do except me!

Find the source at github.com/amwatson/Device-Theatre

rlciavar

12 May 2015


A two-way robotic avatar communication system: Skype-controlled animatronic heads (chatbots).

The chatbots work by “gluing” together several software applications to communicate with and actuate the hardware animatronic heads. The application pipeline looks like this:

Skype > Pixel grabber > FaceOSC (OF) > Oscuino (OF) > Serial > Arduino > Servos
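To make the middle of that pipeline concrete, here is a minimal sketch of the OSC-to-serial step, written in Python with python-osc and pySerial rather than the Oscuino openFrameworks app actually used; the FaceOSC value ranges, the serial port, and the three-byte serial format are assumptions for illustration only.

```python
# Minimal sketch of the FaceOSC -> serial step, using python-osc and pySerial
# instead of the Oscuino openFrameworks app. Value ranges and the serial
# format are illustrative assumptions.
import serial
from pythonosc import dispatcher, osc_server

ser = serial.Serial("/dev/ttyUSB0", 9600)   # Arduino port (assumed)
state = {"mouth": 0, "brow_l": 0, "brow_r": 0}

def scale(value, lo, hi):
    """Clamp a FaceOSC value and map it onto a 0-180 servo angle."""
    value = max(lo, min(hi, value))
    return int((value - lo) / (hi - lo) * 180)

def on_mouth(addr, height):
    state["mouth"] = scale(height, 0.0, 7.0)    # assumed range
    send()

def on_brow_left(addr, amount):
    state["brow_l"] = scale(amount, 6.0, 10.0)  # assumed range
    send()

def on_brow_right(addr, amount):
    state["brow_r"] = scale(amount, 6.0, 10.0)  # assumed range
    send()

def send():
    # One byte per servo: mouth, left eyebrow, right eyebrow.
    ser.write(bytes([state["mouth"], state["brow_l"], state["brow_r"]]))

disp = dispatcher.Dispatcher()
disp.map("/gesture/mouth/height", on_mouth)
disp.map("/gesture/eyebrow/left", on_brow_left)
disp.map("/gesture/eyebrow/right", on_brow_right)

# FaceOSC sends OSC on port 8338 by default.
osc_server.BlockingOSCUDPServer(("127.0.0.1", 8338), disp).serve_forever()
```

On the Arduino side, the matching sketch would read three bytes from serial and write each value to a servo.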

Creating the bots began with designing their hardware and mechanisms for movement. This was also the most difficult and time consuming part of the process.


I spent most of my time iterating on new mechanisms and face designs (it was a lot of fun, though). My original plan included much more complicated movements.


However, I eventually realized animatronics are very hard and should be approached with baby steps first, so I scaled back my design to include only eyebrow and mouth movements. This also helped limit the number of servos I needed to drive.


lots of prototypes


I began prototyping the software by building openFrameworks debugging applications that simulated the animatronic face and the OSC output from FaceOSC.


I eventually settled on this design for the hardware. (Greg left, Rob right).

Here’s the whole application pipeline working for the first time.

And again at the final show.


Yeliz Karadayi

11 May 2015

Guided Hand will use the Haptic Phantom Touch’s ability to snap to virtual points in physical space and to the surface boundaries of virtual models. This snapping allows the hand to be physically guided. I will fix a 3D pen onto the Phantom so that the guided hand can 3D print with higher efficiency, allowing for drawing on virtual boundaries that do not physically exist. This project will explore the capabilities and limitations of the proposed system and conclude with a completed setup that allows for quick and accurate sculptures or prototypes.

This sort of ‘augmented virtual/physical’ process constrains the print by guiding the hand, allowing the designer to focus more on the design and worry less about how to be accurate. It takes advantage of digital accuracy but maintains enough freedom to make way for the valuable hand-craft process. This could potentially open up an entirely new method of working in the design phase. Additionally, it allows multiple designs to be output directly from the same virtual model, so that an artist can iterate through many ideas while maintaining consistency.
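As a rough illustration of the snapping idea (not the actual Phantom haptics code), the guidance can be thought of as a spring force pulling the pen tip onto the nearest point of a virtual boundary. The sphere target, snap distance, and stiffness below are made-up values.

```python
# Illustrative sketch of surface snapping as a spring force toward the
# nearest point on a virtual boundary (a sphere here). The sphere target,
# snap distance, and stiffness are assumptions; the real device is driven
# through the Phantom's haptic API.
import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, 0.0])
SPHERE_RADIUS = 50.0      # mm, assumed model
STIFFNESS = 0.5           # N/mm, assumed spring constant
SNAP_DISTANCE = 5.0       # only snap when within 5 mm of the surface

def closest_point_on_sphere(tip):
    """Project the pen tip onto the sphere surface."""
    direction = tip - SPHERE_CENTER
    norm = np.linalg.norm(direction)
    if norm == 0:
        return SPHERE_CENTER + np.array([SPHERE_RADIUS, 0.0, 0.0])
    return SPHERE_CENTER + direction / norm * SPHERE_RADIUS

def snap_force(tip):
    """Spring force guiding the hand onto the virtual boundary."""
    target = closest_point_on_sphere(tip)
    offset = target - tip
    if np.linalg.norm(offset) > SNAP_DISTANCE:
        return np.zeros(3)     # too far away: let the hand move freely
    return STIFFNESS * offset  # pull the tip onto the surface

# A tip hovering 3 mm outside the sphere gets pulled back onto it:
print(snap_force(np.array([53.0, 0.0, 0.0])))   # approximately [-1.5  0.  0.]
```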

Epic Jefferson

11 May 2015

Signal, a free-hand gesture-based instrument for sound manipulation and performance.


This project is an exploration of alternative ways to interact with sound editing and synthesis techniques, in this case granulation.

Gestures
Currently, two gestures are implemented: Selection and Triangulation.


For the Selection gesture (right hand), the distance between the thumb and index finger determines the size of the window, which in turn determines which area within the sample (the subsample) is processed by the granulation engine.


For the Triangulation gesture (left hand), the distance between the thumb and index finger alters the pitch, and the distance between the thumb and middle finger sets the grain duration. That’s it. This is already a very sensitive and expressive setup, and the key is in the mapping. Which aspect of the gestures should control which synthesis parameters? And which function should I apply to the Leap data to provide the most interesting results? I think this will prove to be the great challenge of this project.
I’m happy with where it’s headed. And since I’ll be in Pittsburgh during most of the summer, I’ll have some time to work on it before next semester starts. Oh, right, this is going to be my Thesis Project for the Tangible Interaction Design program.
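To make the mapping question concrete, here is a minimal Python sketch of the kind of mapping functions involved. The distance ranges and parameter ranges are placeholder guesses, and the actual synthesis runs in Max/Pd.

```python
# Sketch of mapping fingertip distances to synthesis parameters.
# Distance ranges (in mm) and output ranges are placeholder guesses that
# would be tuned by hand against real Leap data.
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp value to [in_lo, in_hi] and map it linearly to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def selection_window(thumb_index_mm):
    # Right hand: thumb-index distance -> window size, as a fraction of the sample.
    return scale(thumb_index_mm, 20, 120, 0.01, 0.5)

def pitch_shift(thumb_index_mm):
    # Left hand: thumb-index distance -> pitch shift, in semitones.
    return scale(thumb_index_mm, 20, 120, -12, 12)

def grain_duration(thumb_middle_mm):
    # Left hand: thumb-middle distance -> grain duration, in ms.
    return scale(thumb_middle_mm, 20, 150, 10, 250)

# Example: a half-open left hand.
print(pitch_shift(70), grain_duration(70))   # -> 0.0 semitones, ~102.3 ms
```

Swapping the linear scale() for an exponential or logistic curve is exactly the kind of mapping experiment in question.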

Sound Engine
Currently, I’m using Max for the audio engine, but it’s likely I’ll return to Pure Data to allow for embedding the engine within the application itself, rather than running anything separately. It seems that sending ALL of the Leap data over OSC is too much for Pd to handle (so far, this is only an issue on OS X). So the obvious fix is to send only the necessary data, the minimum. There’s still the possibility of using a C++ library like Maximilian. Here’s my previous post on the subject.
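As a sketch of the “send only the minimum” idea, the sender could push just the few derived parameters to Pd over OSC instead of streaming every Leap frame. The addresses and port below are assumptions, and the actual sender is an openFrameworks app rather than Python.

```python
# Sketch: send only the derived parameters to Pd over OSC, instead of
# streaming full Leap frames. Addresses and port are assumptions; the
# actual sender is an openFrameworks application.
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9000)   # assumed port Pd is listening on

def send_parameters(window, pitch, grain_ms):
    pd.send_message("/signal/window", float(window))
    pd.send_message("/signal/pitch", float(pitch))
    pd.send_message("/signal/grain", float(grain_ms))

# Called once per Leap frame with the already-derived values:
send_parameters(0.25, 0.0, 102.3)
```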

The Studio for Creative Inquiry interview.

Future work
The next thing to be implemented is the Gesture Recognition Toolkit (GRT) library, so people can teach the system their own gestures and possibly replace mine for specific tasks, like selection. Currently, the GRT library has a C++11-related conflict with openFrameworks 0.8.4 (which is what I’m using); here’s the forum post. This seems to have been resolved for OF version 0.9, which will be released in a few weeks, I hope. For now, it’s recommended to use an earlier version of GRT, from before C++11 support was added.

On the interface side, I’ll be incorporating a textured surface for the right hand, to regain some tangibility in the interaction and to give the hand somewhere to rest and reduce muscle fatigue. This should also help with repeatability in the selection gesture.

For anyone interested, I’ll post future updates on epicjefferson.com

Get the code: github.com/epicjefferson/signal

Final Documentation

Twitter:

Exploration into chess as a password alternative.

Concept:

For the final project I tried to explore the opportunities of tangible interfaces as a password alternative. I chose to use chess for this purpose for several reasons:

  • It is something I know quite well, so it would not be an obstacle in the project
  • My assumption was that chess would provide enough options to be a feasible password (the assumption being that the combinatorics would be in my favor; see the sketch after this list)
  • Worst case, I could quite easily move everything into a fully virtual interface.
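To put a rough number on the combinatorics assumption, a quick perft-style count with the python-chess library (not part of the prototype, just a sanity check) shows how quickly the space of legal move sequences grows:

```python
# Sanity check of the combinatorics: count legal move sequences of a given
# length from the starting position (a "perft" count), using python-chess.
# Not part of the prototype itself.
import chess

def count_sequences(board, depth):
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += count_sequences(board, depth - 1)
        board.pop()
    return total

board = chess.Board()
for plies in range(1, 5):
    print(plies, count_sequences(board, plies))
# 1 20
# 2 400
# 3 8902
# 4 197281   <- four half-moves already give roughly 200,000 possibilities
```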

Prototype:

The final prototype is a chess GUI that acts as a password alternative for a Bitcoin wallet. The prototype is fully working, with the exception of some bugs (regarding invalid moves while in check). After a certain number of moves, the website opens and the user has the chance to paste the password from the clipboard when prompted. If the correct moves were made, the password is accepted; otherwise it is not.

The prototype consists of three main parts:

  • Movement detection
    1. For the tangible interface I would have used a Raspberry Pi with the PiCamera and OpenCV. I got OpenCV working in Python, but the road there was too long and hard to leave enough time to develop this concept further. I ran into quite a few problems with the Raspberry Pi, mainly due to my inexperience with it. Though I could not take this to the stage I wanted, I will definitely use the Raspberry Pi in the future and will be able to set it up far more quickly now. The problems I ran into were:
       1. Connecting to the internet (computer services was not very helpful with registering my Raspberry Pi)
       2. The 4GB SD card I had bought proved too small to install everything I needed
       3. The PiCamera does not work with Processing, as I had initially intended
    2. In my final, GUI-based prototype I used the computer mouse as the input device: the user clicks on the square he/she wants to move from and clicks again on the square the piece should move to.
  • Visualization / final unlockable / password generation
    1. There must be something to unlock in order to use a password. In the final deliverable this was a Bitcoin wallet. Each move generates a character: a correct move generates the correct character for the password, while a wrong move generates a wrong character. The password is copied to the clipboard so it can be pasted on the Bitcoin wallet site. (A sketch of one possible move-to-character scheme is shown after this list.)

       There are several improvements to be made here:
       1. I am quite sure there is a more interesting and effective way to generate a password; I just do not know how yet.
       2. I should add some kind of nice background visualization that shows the progress, which is quite unclear now.
  • Chess engine
    1. The final part was the chess engine. Instead of writing my own (which would have been far too hard a challenge), I used the open-source Sunfish engine by Thomas Ahle. Sunfish is written in Python and is generally regarded as one of the easiest engines to understand; it was written in as few lines as possible. Since my final prototype was written in the Processing IDE, I created a localhost connection to communicate between the Python and Processing sketches (a minimal sketch of such a bridge is also shown below).
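Here is a minimal sketch of one possible move-to-character scheme, as an illustration of the idea rather than the exact code in the repository: hash the moves made so far and take one character of the digest, so that only the intended move sequence reproduces the stored password.

```python
# Illustrative move-to-character scheme: derive one password character per
# move by hashing the game so far. Only the intended move sequence
# reproduces the stored password. Not the exact scheme used in the repo.
import hashlib

def char_for_move(moves_so_far):
    """One character derived from the full move history."""
    digest = hashlib.sha256(" ".join(moves_so_far).encode()).hexdigest()
    return digest[0]

def password_for_game(moves):
    chars = []
    for i in range(len(moves)):
        chars.append(char_for_move(moves[: i + 1]))
    return "".join(chars)

# A wrong move changes that character and every character after it:
print(password_for_game(["e2e4", "e7e5", "g1f3"]))
print(password_for_game(["e2e4", "e7e5", "b1c3"]))
```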
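And a minimal sketch of the localhost bridge between the Processing GUI and the Python side; the port number and the validate_move() placeholder are illustrative, and the real prototype delegates validation to Sunfish.

```python
# Minimal sketch of the Processing <-> Python localhost bridge. The port and
# validate_move() are placeholders; the real prototype delegates validation
# to the Sunfish engine.
import socket

def validate_move(move):
    # Placeholder: in the prototype this asks the chess engine whether the move is legal.
    return move in ("e2e4", "e7e5")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5204))   # port is an arbitrary choice
server.listen(1)

# The Processing sketch connects with processing.net's Client, e.g.
# new Client(this, "127.0.0.1", 5204), and sends one move per line.
conn, _ = server.accept()
while True:
    data = conn.recv(64)
    if not data:
        break
    move = data.decode().strip()
    reply = "ok" if validate_move(move) else "bad"
    conn.sendall((reply + "\n").encode())
conn.close()
```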

 

Conclusion:

My conclusion for the complete assignment is that it was a nice exercise for me and I certainly learned a lot, yet the result is nowhere near what I was hoping to deliver. I think this could have been avoided by not underestimating the amount of time it takes to learn a new technology.

It is a shame that I did not get to explore tangible interfaces as a password alternative, only chess as one. That direction, at least to me, looked promising.

Github: https://github.com/tlangerak/Chess_Encryption

Video: