supercgeek-FinalProcess

While working on my final project (an update to my Place project), I spent my time on three main areas:

  1. Creating the portal effect from BioShock Infinite that originally inspired the project.
  2. Getting Vuforia to automatically pin the geometry to real space.
  3. Making the portal “look cool.”

01 // The first stage went well: I was able to get a portal-like effect working that allows the HoloLens wearer to see the 3D geometry only through a ‘tear in reality.’ [1]
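The app itself was built in Unity for the HoloLens, so the snippet below is not the project’s code; it’s a minimal openFrameworks/OpenGL sketch of the underlying masking technique: draw the tear shape into the stencil buffer without touching color or depth, then render the hidden geometry only where the stencil was marked. The ellipse and box are stand-in shapes.

```cpp
#include "ofMain.h"

// Stand-in sketch: pass 1 marks the "tear" in the stencil buffer,
// pass 2 draws the hidden scene only where that mark exists.
class PortalApp : public ofBaseApp {
public:
    ofEasyCam cam;

    void draw() override {
        ofBackground(30);
        cam.begin();

        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);

        // Pass 1: draw the tear shape into the stencil buffer only.
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
        glDepthMask(GL_FALSE);                               // no depth writes
        ofDrawEllipse(0, 0, 0, 200, 300); // hypothetical tear shape

        // Pass 2: draw the hidden geometry only where stencil == 1.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        ofSetColor(200, 160, 90);
        ofDrawBox(0, 0, -200, 150); // stand-in for the historical scene

        glDisable(GL_STENCIL_TEST);
        cam.end();
    }
};

int main() {
    ofGLFWWindowSettings settings;
    settings.setSize(1024, 768);
    settings.stencilBits = 8; // the default window has no stencil buffer
    ofCreateWindow(settings);
    ofRunApp(new PortalApp());
}
```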

02 // The Vuforia implementation went considerably less well. Though I was able to get some objects registering onto image targets, performance wasn’t robust enough to depend on, and the size of the Great Hall made all of these issues worse. In the end, I completely scrapped this part of the project and reverted to simple manual placement of the old CFA, but I think this significantly undermined the effect I was going for.

03 // The final stage of the project was squeezed for time because of all the hours spent searching for Vuforia solutions. That said, I did manage to experiment briefly with particle simulators and effects. [2]

supercgeek-Final

Augmented Reality Portals for Historical Photographs

Revealing the historical realities of environments through portals, juxtaposing the past against the present in a HoloLens AR application.


About a year ago, I saw a post on Overheard at CMU with photos of campus circa WWI, and at first I genuinely didn’t believe them. I even told a friend I thought they were photoshopped. To me, the idea that a place I’m so familiar with could at one point have looked so different was stunning. The idea I set out to capture was time.

 

I describe my process of visiting the Carnegie Mellon archives and ‘finding history’ in my Place process post.

To allow people to see these versions of time, I created a version of the historical reality (a 3D model) that could be “captured” in infinitely many ways by those looking at the experience in the HoloLens. The process of creating this 3D reality from a 2D photo scan was a challenge in itself – see the diagram below to understand this process – and in many ways it was the key technical breakthrough that enabled my project.
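The heart of that diagrammed process is camera mapping: sculpt rough proxy geometry to match the photo, then project the photograph back onto that geometry from a virtual camera placed where the original photographer stood. I did this by hand in Maya, but the projection step itself can be sketched in openFrameworks; the fragment below is illustrative only (the function name and parameters are my own), meant to be called from inside an OF app.

```cpp
#include "ofMain.h"

// Illustrative fragment: given proxy geometry sculpted to match a
// historical photo, assign each vertex a UV by projecting it through a
// virtual camera placed where the photographer stood. Bind the photo as
// a texture with normalized coordinates and draw the mesh.
void applyCameraMapping(ofMesh& mesh, const ofCamera& photoCam,
                        float viewW, float viewH) {
    mesh.clearTexCoords();
    for (const glm::vec3& v : mesh.getVertices()) {
        // World position -> screen position under the photo camera.
        glm::vec3 s = photoCam.worldToScreen(v, ofRectangle(0, 0, viewW, viewH));
        // Normalize to [0, 1]; depending on texture orientation the
        // y coordinate may need flipping.
        mesh.addTexCoord(glm::vec2(s.x / viewW, s.y / viewH));
    }
}
```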

In order to experience the recreated environment, I placed it into a mixed reality setting with the HoloLens. I believe the mixed reality nature of my project brings together two ideas that had previously not intersected directly:

1 // Juxtaposition of History

The Old & New Directly Compared
HistoryPin App

2 // User-Centered Perspective

The ability to interactively control one’s perspective of a virtual or augmented reality

Cardboard Sample App
I believe that by connecting these two concepts, a richer understanding of the historical juxtaposition can be gained, because the viewer is personally in control of the capture process – in a way, my project does not declare a produced capture result, but actually allows each participant to capture their own perspective on history.

Video & Experience


Process

The process of creating this project can be found across a number of blog posts; see below.

Personal Evaluation & Reflection

When I set out on this project, I had no idea how many technical challenges would have to be overcome. I assumed much of it would be easy, but found instead that each and every step between the original image and the mixed reality experience was its own significant undertaking. I had no idea of the steps that would be required or how I would tackle them; I can confidently say now that I know all of them intimately. That said, I can’t say I’m actually happy with how they’re all integrated in my project.

supercgeek-FinalProposal

I met with CARO yesterday (April 17th) to discuss ideas for a final project collaboration, and we generated some interesting ideas:

  1. Spherical Capture and Representation
  2. Small Cube “Object-Based Capture” with Physical Memory
  3. Sound of Things

Anyway, after a lot of time chatting, we decided it would be best to go in our own directions: I’m planning to revisit my Place project for the HoloLens with a number of main goals.

Revamp Goals:

  1. Nail the Craft & Kill (or integrate) the glitch (into the aesthetic)
  2. Use Vuforia Image Targets to create a ‘continuously pinned’ spatial environment that stays synchronized
  3. Realize the time-tear origins of the project with portals that open and close in a convincing and engaging way
  4. Create more scenes (hopefully some outdoors) & make a killer documentation video of the whole thing.

https://vimeo.com/209779985

Place Posts:

supercgeek-EventProcess

Idea Track 1

With regard to the scrolling project I proposed in my EventProposal, I did a few explorations around this:

Idea Track 2

Earlier this year I did a bunch of microscope explorations (1, 2) for a different class, and I was thinking about trying to revisit some of this material in slow motion. I was also inspired by Kim Pimmel’s work in this area:

Other References [1, 2]

supercgeek-EventProposal

Idea 1

Everything scrolls under subject: a day in the life of the modern human. I have an idea to tell a subject-based story of modern life by looking at someone’s day through a locked perspective on particular objects.

I’m particularly interested in exploring the comparison between a finger scrolling a phone and a person biking across a landscape.

supercgeek-PlaceProcess

Previous Blog Post: PlaceProposal

CONCEPT

historical photos of Carnegie Mellon X a dynamic (i.e. user-driven) augmented reality viewport that allows one to time travel by looking through visual tears in their perception of space

PIECES OF THE PUZZLE

TECHNOLOGY: Working through the HoloLens Academy tutorials (I tested my dev setup last night with a HoloLens in the studio and was able to place objects and demo live holograms)

MATERIAL: I’m going to the CMU Historical Archives tomorrow to find some interior photos that will jibe better with the HoloLens’ limited recognition range. Some of the original photos I was planning on using would have required standing outside, 30–40 feet away from buildings/markers, which seems to break the HoloLens’ ability to establish presence.

TECHNICAL ART: Adding depth data to flat 2D images by manually sculpting them around basic 3D geometry and generating 3D scenes from that process.

DESIGN: Creating location-based portal effects (see BioShock Infinite’s Tears) and designing affordances both for the general portal location and for inferring when one is about to enter/exit a portal by moving beyond the adequate viewing range.

TECHNOLOGY

I started by going through some of the HoloLens Academy lessons to get a general handle on the pipeline and on how to build custom apps and push them to a live HoloLens. Here’s a quick video capturing the results of that setup process:

[this area of work is going]

reference:

  • https://forums.hololens.com/discussion/1951/align-hologram-s-with-real-world-objects-and-or-room
  • https://www.youtube.com/watch?v=jy8XHQAFyU0
  • https://www.youtube.com/watch?v=iUmTi3_Ynus
  • https://developer.microsoft.com/en-us/windows/holographic/spatial_mapping#using_the_surface_observer
  • https://github.com/Microsoft/HoloToolkit-Unity
  • https://developer.microsoft.com/en-us/windows/holographic/spatial_mapping_in_unity
  • https://youtu.be/C7mLH_5QzvU
  • https://forums.hololens.com/discussion/1033/using-spatial-mapping-to-recognize-a-pre-scanned-space?

From Case Study on Looking Through Holes

 

MATERIAL

On March 6th, I visited the University Archives to begin searching through historical content for my primary material. It was an absolutely awe-inspiring experience that I won’t soon forget. Some highlights from my March 6th trip are below; I’ll be returning to the archives on March 7th for further review.

[this area of work is going]

TECHNICAL ART

As of March 6th, I’ve begun learning Maya to do 3D environment recreation from the 2D historical photo content.

[this area of work is going]

DESIGN

Talking to Austin Lee about my project for this class inspired me to take the design of the ‘temporal shifting’ interface seriously. He showed me two projects that particularly sparked my imagination: the Khronos Projector and ART+COM’s Timescope.

[this area of work is going]

Other References

* == from fellow student A

supercgeek-PlaceProposal

Virtual/Augmented/Mixed Reality Historical Exploration of Carnegie Mellon University

HistoryPin Android App

I’ve been itching to do something with mixed reality methods where I can really focus on execution and craft in either VR or AR/MR. I’m not sure yet if I’ll explore full presence with live video (i.e. HoloLens) or something where I pre-record a number of locations with 360 video, map them into VR (i.e. Rift/Vive), and then place the imagery on top. The mainstay of the idea is that I’ll mix historical photographs with VR/AR/MR exploration, with a time-rift effect to start/end viewing the images.

Historical Material

I’ve been really inspired by looking at plenty of the historical photos from around CMU; I definitely think I’ll have far more material than I actually have time to work with.

Reality Tear Effect

I’ve been looking into different ways to ‘tear’ into the historical layers of reality. I really liked the example from New York Times 360 on Malcolm X, but it doesn’t directly address the issue of having a dynamic viewpoint (i.e. the imagery is painted and placed around the camera).



 

Other References

 

* = thanks to Austin Lee for references

supercgeek-Portrait

For my portrait, I used a live-image slit-scanner, controlled via a custom OF app and the Griffin PowerMate, to capture images of fourth. This blog post describes my project in three steps:

  • What Happened: My Process
  • What Resulted: The Portraits
  • What (could be) Next: Areas for Future Research

Process

Near the beginning of my exploration into machine-based portrait capture, I read Golan’s overview of slit-scanning and started thinking a lot about how differently artists had approached the simple rudiment of assembling a series of captured slits. For my project, I didn’t want to create a new one of these approaches for my portrait subject, fourth, so much as create a machine that would allow an artist to create their own algorithm physically. To enable the creation of this portrait machine, I worked with a jog-wheel Human-Interface Device called the Griffin PowerMate (controlled via ofxPowerMate), which would paint slits to the right of the starting location when turned clockwise, and to the left when turned counter-clockwise.
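For reference, here is a minimal openFrameworks sketch of that painting mechanic. Since the exact ofxPowerMate callback API isn’t shown in this post, the arrow keys stand in for clockwise/counter-clockwise turns of the wheel; everything else (the always-at-width/2 capture column, the left/right painting cursor) follows the behavior described above.

```cpp
#include "ofMain.h"

// Minimal sketch of the slit-painting mechanic. Arrow keys stand in for
// the PowerMate's clockwise/counter-clockwise turns; the slit is always
// captured from the camera frame's center column (width/2), as in the app.
class SlitPainterApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;
    ofImage canvas;   // the accumulated portrait
    int cursorX = 0;  // where the next slit gets painted

    void setup() override {
        grabber.setup(640, 480);
        canvas.allocate(640, 480, OF_IMAGE_COLOR);
        canvas.setColor(ofColor::black);
        canvas.update();
        cursorX = 320;  // painting starts at the center
    }

    void update() override { grabber.update(); }

    // Copy the live frame's center column into the canvas at cursorX.
    void paintSlit() {
        ofPixels& cam = grabber.getPixels();
        int srcX = cam.getWidth() / 2;
        for (int y = 0; y < (int)cam.getHeight(); ++y) {
            canvas.setColor(cursorX, y, cam.getColor(srcX, y));
        }
        canvas.update();
    }

    void keyPressed(int key) override {
        if (key == OF_KEY_RIGHT && cursorX < (int)canvas.getWidth() - 1) {
            ++cursorX;   // "clockwise": paint to the right
            paintSlit();
        } else if (key == OF_KEY_LEFT && cursorX > 0) {
            --cursorX;   // "counter-clockwise": paint to the left
            paintSlit();
        }
    }

    void draw() override { canvas.draw(0, 0); }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new SlitPainterApp());
}
```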

In effect, this allows an artist to craft custom slit-scan works with a high degree of personal expression and control. In the video below, you can see me painting in fourth’s portrait using the custom OF app I created for this project.

Video of OF App Working

Capture Setup

More Software Iterations

Portraits

Other Portraits (Still Developing)

Future Directions

Though my capture system works, I believe there’s significant room for future improvement. Going forward, I’ve been thinking about a number of areas where I could iterate this project:

  • Extend the functionality beyond just placing a captured slit, to include the ability to choose where the slit is captured from (currently, it is always captured from width/2).
  • Experiment with more continuous blurring between captured states instead of just leaving the blank bars (which add a glitch aesthetic to the scans).
  • Work on improving the capture & paint playheads to better communicate what the software is doing.
  • Someone in my review pod mentioned the I/O Brush; I think it would be interesting to look into this more.
  • Investigate the notion of porting the OF application to iOS and shifting from the wheel as creative input to some version of a touch interface.
  • Slit-scanning often has a ‘glitchy quality,’ in part due to the low resolution at which it is normally captured. I may explore a way to hook up a high-resolution, DSLR-quality camera to work towards a more professional look.

supercgeek-PortraitPlan

Idea 1: A slit-scanning machine with physical (capturer-available) algorithmic control

I’ve been thinking deeply about the notion that artists can use their own rule-based approaches (sometimes computer-based) to create art. These computational rule-based methods obviously afford results that often would never have been possible otherwise. That said, I believe the creation of such algorithms places an undue burden on artists to think in ways that are antithetical to traditional creative and designerly processes — processes that are often iterative. When programming, iteration is often much harder and less natural than when drawing on paper or using art-board-based applications. With this as context, I’ve been working to create a method that would allow an artist (in this case me) to tune and generate the algorithm physically with controls while capturing my subject, instead of before the fact in an act of algorithmic planning and forethought.

ref: An Informal Catalogue of Slit-Scan Video Artworks and Research

 

Idea 2: A Device to Capture The Hands

Freshman year in Placing (51-171), Cameron Tonkinwise talked about the concept of the Human-Thing, where you are what you find yourself in contact with — you are extended, in a way, by the contacted thing. I’ve been throwing around the idea of attaching a device to my subject’s arm that would take pictures of everything they come into contact with. But I don’t just want to take a video of their entire day (as that wouldn’t be very attuned), so I’ve been thinking about ways to modulate the capturing using sensed motion or capacitive activation.

ref: On the Subject of Objects: Four Views on Object Perception and Tool Use

 

[more ideas coming]

Reco Supercgeek

Hello Humans—

My name is Cameron Burgess (@supercgeek). I’m a Bachelor of Design for Environments student at the School of Design. I’m also interested in emerging technologies and how we can design thoughtfully with interactive and dynamic materials across multiple levels of scale in physical, digital, and hybrid environments.

 

Some of my recent projects: CeeMat, The ColumnWork from 60-212