kerjos-ARTech

So I’ve successfully run the demos that come with the Unity asset for the Google Cloud Vision API (it’s available on the Asset Store; Golan bought a class copy). Here’s a video of me using its Optical Character Recognition capability, which is actually the TutorialExample demo in the Unity package:

Additionally, the Unity package comes with the demo that’s shown off in this video.

The Google Cloud Vision API is a very powerful service: you send an image to Google Cloud and get back data about its labels (with what confidence Google thinks the picture contains, say, a duck, a bird, or water), its image properties (dominant colors), adult or violent content, recognized logos, and web results for similar images, in addition to OCR and a few other features.
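For reference, a single annotate request can ask for several of these features at once. Here’s a rough sketch of building that request body in C#; the feature type names are the API’s own, while the file path, class name, and maxResults value are placeholders of mine:

using System;
using System.IO;

// Sketch of a Cloud Vision "annotate" request body that asks for several features
// at once. The image is sent as base64-encoded bytes inside the JSON.
public static class VisionRequest {
	public static string BuildJson(string imagePath) {
		string base64Image = Convert.ToBase64String(File.ReadAllBytes(imagePath));
		return "{ \"requests\": [ { " +
		       "\"image\": { \"content\": \"" + base64Image + "\" }, " +
		       "\"features\": [ " +
		       "{ \"type\": \"LABEL_DETECTION\", \"maxResults\": 10 }, " +
		       "{ \"type\": \"IMAGE_PROPERTIES\" }, " +
		       "{ \"type\": \"SAFE_SEARCH_DETECTION\" }, " +
		       "{ \"type\": \"LOGO_DETECTION\" }, " +
		       "{ \"type\": \"WEB_DETECTION\" }, " +
		       "{ \"type\": \"TEXT_DETECTION\" } " +
		       "] } ] }";
	}
}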

If you’re interested in using the Cloud Vision API yourself (and not just in Unity), you’ll need to create an account on the Google Cloud Platform and enter your billing information. Google has a lot of documentation for all of this, but it really doesn’t have a clear starting point (or I haven’t found one yet) for total beginners.

To run Cloud Vision, in Unity and elsewhere, you’ll need an API key, which you can generate from the Credentials page under the APIs & Services menu on your Google Cloud Platform console. Following the PDF tutorial that comes with the Unity package, you’ll enter that API key into the GCVision GameObject in your Unity scene.
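If you’d rather skip the asset and call the REST endpoint directly from Unity, a minimal sketch looks something like this (the apiKey value is a placeholder, and the JSON body is one like the sketch above):

using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of posting an annotate request with an API key via UnityWebRequest,
// outside of the asset's GCVision wrapper.
public class VisionClient : MonoBehaviour {
	public string apiKey = "YOUR_API_KEY";  // key from the Credentials page

	public IEnumerator Annotate(string json) {
		string url = "https://vision.googleapis.com/v1/images:annotate?key=" + apiKey;
		UnityWebRequest request = new UnityWebRequest(url, "POST");
		request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
		request.downloadHandler = new DownloadHandlerBuffer();
		request.SetRequestHeader("Content-Type", "application/json");

		yield return request.SendWebRequest();

		if (request.isNetworkError || request.isHttpError)
			Debug.LogError(request.error);
		else
			Debug.Log(request.downloadHandler.text);  // JSON with labels, OCR text, etc.
	}
}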

You’ll also need to create a new project from the console (as far as I know, that’s just a way to separate your billing between different apps if you’re making multiple ones) and set up billing for that project. I had to enter my billing information twice: once to set up my Platform account and again to connect billing to my project. I don’t know if that was a mistake.

Google gives you $300 credit for the first year, which farcar told me amounts to something like 50,000 requests to their server. And even afterward, I think the next 50,000 is only $1.50 (bottom of this link).

I also made a dummy button and built it to my phone, to begin to move towards my final app:

sheep – ARTech

For the tech demo, I essentially got a capsule, representing a ghost, to move towards any GameObjects that have the tag “obj.” I was working on repulsion but haven’t finished it yet. My plan is to have the ghost spawn at one of the points, and then you move the point itself to get close enough to a capsule. I was also able to figure out how to have multiple image targets appear on the cards. The script driving the ghost is below.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Marker component on each "obj": holds whether it attracts or repels the ghost.
// (The component name wasn't shown in the original post, so this one is a placeholder.)
public class AttractTarget : MonoBehaviour {
	public bool attract = true;
}

public class Body : MonoBehaviour {
	public float forceAmount = 0;

	// Use this for initialization. Eventually each object should randomly be set to
	// attract or repel; for now everything is forced to attract and tinted blue.
	void Start () {
		foreach (GameObject obj in GameObject.FindGameObjectsWithTag ("obj")) {
			float make = Random.Range (0f, 1f);
			print (make);
			/*
			if (make < 0.5f) {
				obj.GetComponent<AttractTarget> ().attract = false;
				obj.GetComponent<Renderer> ().material.color = Color.red;
			}
			*/
			//if (make >= 0.5f) {
				obj.GetComponent<AttractTarget> ().attract = true;
				obj.GetComponent<Renderer> ().material.color = Color.blue;
			//}
		}
	}

	// Update is called once per frame: push the ghost's rigidbody toward every tagged
	// object. Note that the repulsion branch currently duplicates the attraction
	// behavior (not done yet, as mentioned above); negating "direction" there would
	// push the ghost away instead.
	void Update () {
		foreach (GameObject obj in GameObject.FindGameObjectsWithTag ("obj")) {
			if (obj.GetComponent<AttractTarget> ().attract == true) {
				Vector3 direction = (obj.transform.position - transform.position).normalized;
				GetComponent<Rigidbody> ().AddRelativeForce (direction * forceAmount);
			}
			if (obj.GetComponent<AttractTarget> ().attract == false) {
				Vector3 direction = (obj.transform.position - transform.position).normalized;
				GetComponent<Rigidbody> ().AddRelativeForce (direction * forceAmount);
			}
		}
	}
}




kerjos-ARConcept


I want to develop an AR app in which you hunt through real-world text for the words of a mystery poem or passage from literature.

The flow of this game would work, as in the above sketch, like this:

  1. Based on a given list of words to find, the player finds a desired word in text existing in the real world. The player holds their phone up to the word and taps the screen to capture it and bring it into their inventory.
  2. The word moves into their inventory with a brief animation on the phone: a glowing circle appears above the word’s location on the screen and quickly moves into the inventory button in the corner of the screen.
  3. The player then checks their inventory to see both the words that they’ve found and the words they have left to find.
  4. The player may continue to look in the real world for their remaining words.
  5. Upon finding all the words in the scrambled poem or passage, the full text is read to the player by a narrator, their inventory is emptied, and a new list of words to find is generated from a new poem or passage and presented to the player.

This project relies on the Optical Character Recognition capabilities offered by the Google Cloud Vision API, whose demo I’ve thankfully been able to get running in Unity.
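To make the flow above concrete, here’s a hypothetical sketch of the inventory logic; the class, word list, and method names are my own placeholders, with OCR results (e.g. Cloud Vision TEXT_DETECTION output) feeding into TryCollect when the player taps a word:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical inventory manager for the word hunt. Words are assumed to be stored
// lowercase; the starting list is a placeholder.
public class WordHunt : MonoBehaviour {
	public List<string> wordsToFind = new List<string> { "stone", "river", "lantern" };
	private readonly List<string> found = new List<string>();

	// Called with a word recognized by OCR under the player's tap.
	public bool TryCollect(string ocrWord) {
		string w = ocrWord.Trim().ToLowerInvariant();
		if (!wordsToFind.Contains(w) || found.Contains(w)) return false;

		found.Add(w);  // move it into the inventory (the capture animation would trigger here)
		if (found.Count == wordsToFind.Count) OnPoemComplete();
		return true;
	}

	private void OnPoemComplete() {
		Debug.Log("All words found: play narration, empty inventory, load the next poem.");
		found.Clear();
		// wordsToFind would be replaced with the next poem's scrambled word list here.
	}
}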

There is potentially a different game to be built using OCR and AR: collecting words and using them toward the construction of some sort of generative poem or other piece. I’m interested in getting feedback on other exciting applications for OCR.


Jackalope-ARConcept

Okay, so my idea started out as just a little virtual buddy that would hang around on your phone. When you’re working and doing stuff, it’ll continuously babble at you, but when you turn the phone to face you, it’ll immediately stop and turn away to ignore you.

After discussing with my buddy a bit, I decided to also make it sleep depending on whether the phone is lying flat somewhere or not (or maybe just when there’s a good plane to sleep on?).
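A minimal sketch of the “asleep when the phone is lying flat” check, using Unity’s accelerometer input; the thresholds here are guesses and would need tuning on a device:

using UnityEngine;

// When a device rests roughly face-up or face-down, Input.acceleration is mostly
// gravity along the device's z axis with little on x/y.
public class BuddySleep : MonoBehaviour {
	public bool IsAsleep { get; private set; }

	void Update() {
		Vector3 tilt = Input.acceleration;
		bool flat = Mathf.Abs(tilt.z) > 0.9f && new Vector2(tilt.x, tilt.y).magnitude < 0.2f;
		IsAsleep = flat;  // the buddy's animator/controller would react to this flag
	}
}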

dechoes – fatik – ARConcept

Made with collaborator fatik

Our project plays with the bigger idea of intent and perception in visual storytelling. Both of us have deep interests in film and find inspiration in the moving image. We are curious about how people’s perceptions differ when watching the same movie: different people notice and remember different parts (whether that’s one scene, the plot, the theme, colors and objects, etc.). There is a degree of intent in making and curating a film or story on the part of the director (and the many people involved in the narrative), but the takeaway can differ. We wanted to play with this idea of curation and how each person’s perception could be their own “augmentation” of the story.

Each scene we curate and create will have parts that are augmented and that could conceptually add to and alter perspectives. Some ideas we have include adding and animating objects within the curated space that might lead the viewer into a different world.

One of the main reasons behind this idea is that we wanted to create something that was more than just a tech demo. By creating this series of animated photographs that exist more in the realm of aesthetics than technical prowess, we hope to use AR in a way it has not been used before: with visual care, detail and poetry.

Mood board/ Inspiration:

Snapshots from Delicatessen – strong sense of material curation, wonder, whimsy


Snapshots from Gummo – great colors and sense of narrative, nostalgic


Other film references – Mommy, Manifesto, The Handmaiden.


Music videos:


dechoes – fatik – ARTech

Made with collaborator fatik.


For our project we wanted to use Cinema 4D to render specific models for our concept. The issue was importing the textures and materials from Cinema 4D into Unity. We learned that transparent materials such as glass cannot be imported from Cinema 4D. For other textures, however, it is possible to “bake” them and import them as assets into Unity. We also clicked around in Unity to recreate textures similar to the C4D ones instead of exporting and importing them. We played around with the default shaders in Unity and were able to get some good results. We plan on 3D-modeling all of our AR objects in Cinema 4D and continuing to explore different ways of manipulating materials.
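As a small illustration of the baked-texture route, wiring a bake from C4D onto Unity’s Standard shader can be as simple as this sketch (the texture field is a placeholder for the imported bake; in practice this is usually done in the inspector rather than in code):

using UnityEngine;

// Assigns a baked texture from C4D to a new Standard-shader material at runtime.
public class ApplyBakedTexture : MonoBehaviour {
	public Texture2D bakedAlbedo;  // baked texture exported from Cinema 4D

	void Start() {
		Material mat = new Material(Shader.Find("Standard"));
		mat.mainTexture = bakedAlbedo;  // maps to the Standard shader's Albedo slot
		GetComponent<Renderer>().material = mat;
		// For glass-like looks, the Standard shader's Rendering Mode has to be set to
		// Transparent/Fade separately; the bake alone won't carry transparency.
	}
}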


farcar – ARTech

I began using AR Core from Google after realizing Vuforia had a lot of bugs. But then I realized that AR Core has a lot of bugs too. Such as:

  • Can not track vertical surfaces
  • Can not recognize horizontal surfaces that are far away
  • Can not track vertical surfaces
  • Can not track planes well in the dark
  • Can not track vertical surfaces
  • Plane tracking can not work on angled surfaces (no hills or inclines)
  • Can not track vertical surfaces
  • Can not track vertical surfaces
  • Did I forget to mention it can’t track vertical surfaces?

Fans of AR Core have been asking for vertical wall detection for months since its release. Google responds that it’s something they’re working on, but nobody knows when it might ship. For now, some people have tried to be hack-y and roll their own vertical wall detection. It goes as follows (a sketch of the plane-fitting step comes after the list):

  • Read the point-cloud data
  • Filter the data (drop anything with confidence below a certain threshold)
  • Assemble the data into a matrix and solve for the normal vector
  • Use the normal vector to create a tracking plane
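For the third step, here is a rough sketch of estimating the normal from already-filtered points. This is just the math (least-variance direction of the point set), not ARCore API code, and it doesn’t solve the hard part described below of actually instantiating a TrackerPlane:

using System.Collections.Generic;
using UnityEngine;

// Estimate a plane normal as the direction of least variance of the points:
// the eigenvector of the 3x3 covariance matrix with the smallest eigenvalue,
// found here by power iteration on (trace * I - C).
public static class PlaneFit {
	public static Vector3 EstimateNormal(List<Vector3> points, int iterations = 50) {
		if (points.Count < 3) return Vector3.up;

		// Centroid of the filtered points.
		Vector3 c = Vector3.zero;
		foreach (Vector3 p in points) c += p;
		c /= points.Count;

		// Covariance matrix (symmetric, so six unique terms).
		float xx = 0, xy = 0, xz = 0, yy = 0, yz = 0, zz = 0;
		foreach (Vector3 p in points) {
			Vector3 d = p - c;
			xx += d.x * d.x; xy += d.x * d.y; xz += d.x * d.z;
			yy += d.y * d.y; yz += d.y * d.z; zz += d.z * d.z;
		}

		// Power iteration on M = trace * I - C converges to the eigenvector of C with
		// the smallest eigenvalue, i.e. the plane normal. Start from a generic
		// (non-axis-aligned) direction so we don't begin orthogonal to the answer.
		float trace = xx + yy + zz;
		Vector3 n = new Vector3(0.2357f, 0.8723f, 0.4109f).normalized;
		for (int i = 0; i < iterations; i++) {
			Vector3 Cn = new Vector3(
				xx * n.x + xy * n.y + xz * n.z,
				xy * n.x + yy * n.y + yz * n.z,
				xz * n.x + yz * n.y + zz * n.z);
			n = (trace * n - Cn).normalized;
		}
		return n;
	}
}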

Here’s the catch: Google makes it hard, very hard, to generate planes. It is easy to read the Point Cloud and TrackerPlane data, but using that data to instantiate a new TrackerPlane is not an easy task. They bury this code deep in their scripts. I searched through more than 10 TrackerPlane scripts only to find no example of how to actually generate one. Nor has anyone online posted how to do it.

As a result, I was unable to get vertical wall detection to work. The most I got out of the last week was being able to read the Point Cloud data. I didn’t want to spend all my time this week without any feasible product, so I put the wall-detection algorithm aside and focused on another technical problem.

Another, more feasible challenge was generating text that would step through a lyric-segment array and update on each event call (a tap of the screen). I ended up using the default Android AR Core demo and replacing its Android icon prefab with a 3D text prefab. From there, I wrote a script to update the contents of the text with each tap of the device. I stored all the lyric segments in an array, and an accumulator counting the total number of touches to the screen indicates which index in the array to use as the new text content.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TextUpdate : MonoBehaviour {

	public int clickCount;
	public TextMesh textLayer;

	// Lyric segments, shown one per tap.
	string[] content = new string[] {"How", "are", "you?"};

	void Update() {
		if (Input.touchCount > 0)
		{
			Touch touch = Input.GetTouch(0);
			if (touch.phase == TouchPhase.Began) {
				// Advance to the next segment, wrapping around after the third.
				clickCount++;
				clickCount = clickCount % 3;
				textLayer.text = content[clickCount];
			}
		}
	}
}

This allows me to go out into the world and place pre-determined text down onto surfaces, as follows:

Note that the product is not meant to be a real-time AR interaction. Rather, it is a way for videographers to record the AR interaction and edit the footage to the audio in an expensive (After Effects) or not-so-expensive (iMovie) editing program. There is no official way to screen-record on the Google Pixel, though, so I downloaded a third-party application, AZ Screen Recorder. Although it records pretty decently, it cannot go up to HD quality.

Another challenge is timing the text. Because AR Core is slow at detecting new surfaces, it is hard to place lyrics/text down fast enough for fast songs. After experimenting some more, I found that it is easier to place text down at half speed, giving AR Core time to recognize tracking planes, and then speed the footage up in editing to map it to the actual audio.

I made a demo of this and saw drastic improvements over real-time text placement.

AR Core also does this undesirable thing where it picks a random color and paints the tracker plane with it to indicate the active area, and most of the time that color is distracting. I modified the code to paint the tracker plane only in white so it doesn’t compete with the text.
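The change itself was tiny. As a hedged sketch of the idea, assuming the sample’s plane visualizer assigns its plane color from a random palette (the names below are placeholders, not the SDK’s):

using UnityEngine;

// Hypothetical version of the plane-visualizer color pick: instead of choosing a
// random color from a palette, always paint the tracked plane white.
public class PlaneTint : MonoBehaviour {
	public Material planeMaterial;  // material used to draw the tracked plane

	void Start() {
		// Previously something like: planeMaterial.color = palette[Random.Range(0, palette.Length)];
		planeMaterial.color = Color.white;  // keep the overlay from competing with the text
	}
}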

farcar – ARConcept

I used motion tracking in After Effects to attach typography to the walls of buildings (the video makes more sense if you look here). I added a quick particle effect as a way of experimenting with how motion graphics could work in unison with the typography. This brings a new form of interactivity to kinetic typography: you no longer just watch it, you interact with it. Lyric segments are loaded into Unity so that when the user taps on a surface, text is placed down until all the text in the lyrics array has been exhausted. Thus, the user can time lyrics to music as they place them around their environment.

jackalope-zbeok-arsculpture

Zbeok and I (Jackalope) made an app that uses Markov chains to generate Marx-like sentences that appear in front of people’s faces in nice rainbow letters. The placement of words on faces came from thinking about propaganda, but the effect came about because we just wanted to have a fun time with this and offset the serious gravity of communism with nonsensical computer-generated sentences, rainbows, and bubbly letters. Except we mostly had a not-fun time, due to a wide variety of problems including, but not limited to, planning things unfeasible for the tools we had and not having Mac computers for the iOS devices we wanted to use. Anyways, there’s a lot lacking from this, but here it is.
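For anyone curious, the word-level Markov chain idea boils down to something like this sketch (our actual generator may have differed; the corpus handling here is simplified):

using System.Collections.Generic;
using UnityEngine;

// Map each word in the corpus to the words that followed it, then walk the chain
// from a sentence-starting word to babble out a new sentence.
public class MarkovBabbler {
	private readonly Dictionary<string, List<string>> chain = new Dictionary<string, List<string>>();
	private readonly List<string> starters = new List<string>();

	public MarkovBabbler(string corpus) {
		string[] words = corpus.Split(' ');
		for (int i = 0; i < words.Length - 1; i++) {
			if (i == 0 || words[i - 1].EndsWith(".")) starters.Add(words[i]);
			if (!chain.ContainsKey(words[i])) chain[words[i]] = new List<string>();
			chain[words[i]].Add(words[i + 1]);  // record the observed successor
		}
	}

	public string Generate(int maxWords = 12) {
		string word = starters[Random.Range(0, starters.Count)];
		List<string> sentence = new List<string> { word };
		while (sentence.Count < maxWords && chain.ContainsKey(word)) {
			word = chain[word][Random.Range(0, chain[word].Count)];  // random next word
			sentence.Add(word);
			if (word.EndsWith(".")) break;  // stop at a sentence-ending word
		}
		return string.Join(" ", sentence.ToArray());
	}
}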

dechoes-fatik-arsculpture

Forgotten Room

For our first exploration in AR, fatik and I decided to show the existing but invisible, instead of creating a new reality from scratch. Inspired by things we couldn’t see, such as Zach Lieberman’s sound waves, we decided to revive a forgotten CMU artifact. A couple of decades ago, a CMU architecture student passed away, and in her memory the school built an underground room with a glass ceiling that you could walk over to explore its contents. Unfortunately, due to a poor sealing job, humidity got into the glass room and mold started forming on the inside. Deciding that it would be too costly to maintain, CMU covered it up. It is still present underground on our campus field, with very few people knowing of its existence.

Fatik and I therefore 3D-modeled this room, which can then be observed by walking on the Cut, at the place where it is buried (its location was discovered by another CMU student for our Experimental Capture class in Spring 2017). All objects are covered in mold, moving ever so slightly so as to create an uncomfortable feeling when looking inside.

Over the shoulder shot:

Screen capture shot:

conye – miyehn – AR sculpture

Our project is a model/sculpture of a frog that resides in a well, loosely based on the imagery of an old Chinese folktale: http://www.taiwandc.org/folk-fro.htm

I worked on it with the amazing miyehn! Thank you miyehn for working with me :))

Although the documentation uses a random piece of art as an image target, our intention was to use actual manhole covers as image targets, thus making our “site” any manhole, which would then become an interactive well. You can interact with the well by shaking your phone, and items will drop into it.
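The shake interaction is roughly this sketch; the prefab, spawn point, and threshold are placeholders that would need tuning on a real device:

using UnityEngine;

// Detect a shake via the accelerometer and drop a new item into the well.
public class ShakeToDrop : MonoBehaviour {
	public GameObject itemPrefab;        // item to drop into the well
	public Transform spawnPoint;         // just above the well's opening
	public float shakeThreshold = 3.0f;  // squared-acceleration threshold (rest is ~1)
	public float cooldown = 0.5f;        // seconds between drops

	private float lastDrop;

	void Update() {
		if (Input.acceleration.sqrMagnitude > shakeThreshold && Time.time - lastDrop > cooldown) {
			lastDrop = Time.time;
			// Randomized rotation so the items tumble a bit as they fall.
			Instantiate(itemPrefab, spawnPoint.position, Random.rotation);
		}
	}
}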

Here it is in video form:

Process:

Testing dropping objects into a hole and making new prefabs with a script. These babies don’t have the shader that applies the “hole effect” yet.

Testing with the webcam, but using input from my phone’s accelerometer.

The particle system!

Miyehn and I learned a ton about using Unity in general through this project! I think we feel much more familiar with using and attaching scripts in Unity, for things like making new instances of prefabs with randomized transforms, depth masking/shaders, particle systems, etc. We also learned how to test projects with phone input by connecting a device and selecting it in the editor view. It’s so useful! Overall I’m really amazed that Unity made this possible in a short period of time for a beginner like me, so I’m super excited to tackle harder and more rewarding projects in Unity (hopefully!).

avatar – ackso – arsketch


A Dirt Room and A Dirty Tunnel


A slightly suspicious picture of Avatar:


Some friendly pictures of Ackso:


Portrait of us:



avatar – ackso – arsculpture


HAVE YOU EVER BEEN HUNGRY?


HOW MUCH DO YOU LOVE CAPITALISM?


HAVE YOU EVER BEEN LOST, PERFORMING ILLEGAL ACTIVITY, AND HUNGRY?


WATCH US DO -SOMETHING- WITH SIMILAR THEMES


sheep-rolerman-arsculpture

We went to a Starbucks and put a Funky Feast in there.


Over the shoulder

We used a lot of easing functions to make things look more alive. The overall concept was to stage a Thanksgiving feast in a Starbucks, a place that capitalizes on a conversational autumn atmosphere while existing only to speedily serve its customers expensive coffee. We downloaded a ton of free food assets to make the scene look as nice as possible.
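As an example of the kind of easing we leaned on, here’s a sketch that scales an object in with an ease-out cubic instead of popping it into the scene (the duration and curve are illustrative, not the exact ones we used):

using System.Collections;
using UnityEngine;

// Scale the object from zero to its authored size with an ease-out cubic curve.
public class EaseIn : MonoBehaviour {
	public float duration = 0.6f;

	IEnumerator Start() {
		Vector3 target = transform.localScale;
		transform.localScale = Vector3.zero;
		for (float t = 0f; t < 1f; t += Time.deltaTime / duration) {
			float eased = 1f - Mathf.Pow(1f - t, 3f);  // ease-out cubic
			transform.localScale = target * eased;
			yield return null;
		}
		transform.localScale = target;  // snap to the exact final size
	}
}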

ango-aahdee-arsculpture

Aren and I created an above-ground dolphin shrine ruin. We used the CFA floor plan illustrations as AR trackers to construct a microenvironment sprung from an idea of what that floor plan might once have represented.

We were successful in animating a ring of dolphin avatars to swim with respect to a floor illustration. In our original idea, however, we wanted to include more advanced materials and shaders for the dolphins; unfortunately, because of the pre-set UV mapping of the dolphin model we used, we couldn’t easily apply the materials we wanted.

ango-phiaq-arsketch

Sophia and I gained insight into the affordances and limitations of the Just A Line app through our different iterations.

For one, detailed illustration is especially difficult, not only because the drawing tool itself is of lower fidelity but also because of how tracking is updated, displacing or deleting the smaller details of the sketch. The less detailed the sketch, the less stress it put on the tracking system.

We were still able to draw some pretty cute Kodamas though…

For our final sketch, we settled on something expressive but structurally simple so that it wouldn’t overwhelm the tracking mechanism.

aahdee-zbeok-arsketch

We liked the idea of putting small little stick figures in places, so we created a few of them posing on a few objects around CFA. It’s a bit whimsical and cute and reminds us of tales of faeries that live among us but are in hiding.




phiaq – kerjos – arsculpture

Half of a Heart – kerjos and phiaq

This is an interactive AR sculpture set between two people who each carry half a heart; when they get closer, the two halves form a whole.


As people become closer in proximity, the halves draw together and the connection they form grows stronger. Our experience could be taken anywhere, as long as two people are close enough in distance. The two halves of the heart are programmed to come together like magnets, mirroring the attraction between two people who are touching.
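A rough sketch of that magnet behavior, assuming each half sits on its own tracked image target (the other-half reference, distance threshold, and speed are placeholders):

using UnityEngine;

// Attach to each half: when the two tracked halves are close enough, pull this half
// toward the midpoint between them so the halves meet and form the whole heart.
public class HeartHalf : MonoBehaviour {
	public Transform otherHalf;        // the second person's half
	public float joinDistance = 0.5f;  // how close the targets must be, in meters
	public float speed = 1.0f;

	void Update() {
		if (Vector3.Distance(transform.position, otherHalf.position) < joinDistance) {
			Vector3 midpoint = (transform.position + otherHalf.position) * 0.5f;
			transform.position = Vector3.MoveTowards(transform.position, midpoint, speed * Time.deltaTime);
		}
	}
}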

In further iterations, we see this as an AR experience that could encourage people to form fun connections through Snapchat, with different locations and different people in public.