zbeok-ARTech

There are a few major technical problems with my concept, which I’ll be fixing with a combination of concessions and libraries.

Regarding the magic circle detection, I’ve resolved to use text. Simply write the name of your preferred devil, and POOF! It shall appear. For the text recognition, I’m using Google Cloud Vision.
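A minimal sketch of what that call might look like (this is my assumption of the REST flow; “YOUR_KEY” and the class name are placeholders), POSTing a base64-encoded PNG to Cloud Vision with TEXT_DETECTION:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class DevilNameReader : MonoBehaviour
{
    // Placeholder endpoint + key; TEXT_DETECTION asks Cloud Vision for OCR.
    const string Endpoint = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_KEY";

    public IEnumerator ReadText(byte[] png)
    {
        // Build the JSON request body with the PNG base64-encoded inline.
        string body = "{\"requests\":[{\"image\":{\"content\":\""
            + System.Convert.ToBase64String(png)
            + "\"},\"features\":[{\"type\":\"TEXT_DETECTION\"}]}]}";

        using (UnityWebRequest req = new UnityWebRequest(Endpoint, "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(body));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();
            // The response JSON's textAnnotations hold the recognized name.
            Debug.Log(req.downloadHandler.text);
        }
    }
}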

However, within this fix is another issue: how do I send an image to the Cloud if it’s warped? How do I even get a .png out of Unity? I’m relying on perspective correction to address the issue, looking to this library as an example.

empty scene example

By making some GameObjects to mark the corners of the page (which I can place using Image Targets, currently represented by spheres), I can effectively get the vertices representing the page coordinates, mapped to 2D. Then I can shove them through a homography, map the result to a RenderTexture, and from there to a clean rectangular 2D Texture. Then I can turn that into a .png that I can send to the Cloud.
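A minimal sketch of the bookkeeping around that pipeline (the names cornerMarkers and warpedPageRT are my placeholders; it assumes the homography shader has already written the rectified page into the RenderTexture):

using System.IO;
using UnityEngine;

public class PageCapture : MonoBehaviour
{
    public Transform[] cornerMarkers;   // the four sphere GameObjects on the page corners
    public RenderTexture warpedPageRT;  // output of the homography shader
    public Camera arCamera;

    // Project the page corners to 2D screen space (the inputs to the homography).
    public Vector2[] GetCornerScreenPoints()
    {
        var pts = new Vector2[cornerMarkers.Length];
        for (int i = 0; i < cornerMarkers.Length; i++)
            pts[i] = arCamera.WorldToScreenPoint(cornerMarkers[i].position);
        return pts;
    }

    // Copy the rectified RenderTexture into a Texture2D and save it as a PNG.
    public void SavePng(string path)
    {
        var prev = RenderTexture.active;
        RenderTexture.active = warpedPageRT;
        var tex = new Texture2D(warpedPageRT.width, warpedPageRT.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, warpedPageRT.width, warpedPageRT.height), 0, 0);
        tex.Apply();
        RenderTexture.active = prev;
        File.WriteAllBytes(path, tex.EncodeToPNG());
        Destroy(tex);
    }
}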

Currently I’m trying to get the shader that actually does the homography calculations to work. It’s a work in progress.

Avatar – looking outwards 04

Shoutout to the STUDIO for supporting all of the artists I looked at on Eyeo’s website.

I don’t find this project SUPER interesting but I think her documentation is exciting.

I did something similar in my A is for Ava project:

Except the grid was my intention from the beginning.

Continuously looking for new ways to break out of the traditional video doc style / look.

I want to do crazy things with this grid, like figure out how to illustrate systems like a house or apartment building at that intensity. Plus, how do you break the confines that you set for yourself in documentation? PLUS it makes me think about how the video could literally go through the screen. Possibly AR? Possibly video + projection on a non-flat screen?

It’s so great to see even slightly creative new methods of documentation.

avatar – looking outwards 02

 

VVVV

SOoooo as far as generative art goes, VVVV seems nice. It could be a useful logical tool for making a variety of standardized generative objects. It seems to offer a HUGE range. I wonder what VVVV means or stands for. Do you think it’s just to sound s-i-c-k? I can’t find anywhere to download it though, so who really knows. Would consider looking more into it if I ever needed a generative object.

 

V v V v V v V v V v V v V

 

V                              V

   V                       V

       V               V

           V       V

                V

 

V                              V

   V                       V

       V               V

           V       V

                V

 

V                              V

   V                       V

       V               V

           V       V

                V

 

V                              V

   V                       V

       V               V

           V       V

                V

avatar – looking Outwards 07

Daily Life VR

Credits

This is the first VR project that anyone ever told me about. It was also my second VR experience. I enjoy this project for its crudeness and incredibly thorough/consistent appearance. It used to make me uncomfortable because I thought it was a little aggressive but now I appreciate it even more because I frequently pee myself and have been making a lot of art about that. The crayon drawings/dirty bar bathroom look are cool. It also doesn’t live in the Uncanny Valley and I’m kind of tired of that conversation. Now after 2 years of art school I think it could use more irony, humor, and filth.

Link

http://dailylifevr.com/

Jackalope-ARTech

So for my tech problem, I set out to figure out how to make an object respond only when you look directly at it with your phone. I originally thought it would be through the phone’s accelerometer and gyroscope, but after suffering for a long time, I realized there was a much easier way using the location of the first-person point-of-view camera. It also helped to delete mass amounts of code that was stopping the object from moving at all.
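A minimal sketch of that camera-based check (my reconstruction, not the actual script; gazeThreshold is a made-up parameter), comparing the camera’s forward vector against the direction to the object:

using UnityEngine;

public class GazeReact : MonoBehaviour
{
    public Camera arCamera;          // the first-person AR camera
    public float gazeThreshold = 5f; // degrees off-center that still count as "looking at it"

    private Vector3 basePosition;

    void Start()
    {
        basePosition = transform.localPosition;
    }

    void Update()
    {
        Vector3 toObject = (transform.position - arCamera.transform.position).normalized;
        float angle = Vector3.Angle(arCamera.transform.forward, toObject);

        // Spaz out while being looked at; settle back down otherwise.
        transform.localPosition = angle < gazeThreshold
            ? basePosition + Random.insideUnitSphere * 0.02f
            : basePosition;
    }
}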

Attached is video of my little Android buddy spazzing out whenever I look at him directly. (I need to shift it a bit; it’s slightly off)

Avatar Ackso – ARTech

We got this 360 video to work in Unity; it loops endlessly and is supposedly 4K.

Also, Ackso and I got both 360 cameras to record, and we can access the footage on our computers.

 


creatyde-ARConcept

My concept is to allow a viewer to visit a micro world.   I want to allow people to explore and appreciate the tiny places they are familiar with on a macro level but do not see or think about.  For example, the skin of a banana peel, the dust floating in the air, the tiny circuitry in processors.

I plan to use the scanning electron microscope at Pitt and construct a 3D model from a series of 2D images. The models of these tiny landscapes/worlds will be blown up in AR so that viewers can see and explore them on their phones.

For my technical aspect, I tried to implement GPS geopositioning with limited success.  I’ll try to keep working on it, but I am also interested in the other difficult challenge of properly creating a 3D model from 2D scanning images.

miyehn-arconcept

I have pretty much everything in this pic ↓ hope the words are still readable…?
I want to thank my buddy Jackalope for the input! I think I’m going with the last idea of lighting a mini firework step by step, using image target cards as the needed tools & materials. If there’s extra time, I think it would also be fun to have the firework sparks turn into schools of fish and swim around.

tesh – ARConcept

AR Portals that take you into a slightly more fantastical version of our real world.

I have seen a few of these portal-type AR experiences, but oftentimes they are used to transition the viewer from a real or AR world into a fully constructed VR world. I am more interested in using them to frame certain real-world locations and populate those locations with small traces of AR.

These may include faeries, foggy particle effects, and shiny things that move in the corner of your eye to evoke a sense of adventure and discovery. I want to create the kind of experience of a kid in a young adult novel exploring the world, and in fact some of the theming of my project is inspired by books like The Spiderwick Chronicles or even The Borrowers.

zbeok-ARConcept

Trying to summon the devil seems so anticlimactic sometimes. I’d like to fix that problem by guaranteeing a success rate of 100%. What I mean is that every time you make a magic circle, any circle, you’ll be able to spawn a creature in it. Ideally, the type of creature should depend on the glyph drawn.

The main challenges of this concept will be the circle detection and glyph detection. I may have to compromise by using a set printout of the magic circle and then drawing the glyph, but one way I can avoid that is to create a bounding box for the circle and have everything proceed from there; printouts are a last resort.

I may also want to add behavioral modifications to each monster: for example, the ghost can circle around you, or the devil can walk along with you and do your evil bidding.

devilspawnghostiesketch

tesh – ARTech

I have been looking at how people use shaders to hide things behind each other, and found some good references on how they work and on how to go about hiding certain objects from the camera to create portal-style effects.

One tutorial that has been instrumental in this is this playlist by Pirates Just VR, which talks about the specific kinds of Unity shaders used to create the portal effect and how to modify the shaders as you walk “through” the portal to hide or show objects in a convincing way.
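A hedged sketch of that walk-through swap, assuming (as in many stencil-portal tutorials) that the portal-dimension materials expose a _StencilComp property the script can flip:

using UnityEngine;
using UnityEngine.Rendering;

public class PortalCrossing : MonoBehaviour
{
    public Material[] portalWorldMaterials; // materials on objects in the "portal dimension"

    // Assumes the AR camera carries the MainCamera tag and a small collider,
    // and this script sits on a trigger volume spanning the portal plane.
    void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("MainCamera")) return;

        // Did the camera leave on the far side of the portal plane?
        bool crossed = Vector3.Dot(transform.forward,
                                   other.transform.position - transform.position) < 0f;

        foreach (var mat in portalWorldMaterials)
            mat.SetInt("_StencilComp",
                crossed ? (int)CompareFunction.Always   // render everywhere
                        : (int)CompareFunction.Equal);  // only through the portal window
    }
}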

Here is a gif showing what walking through the portal looks like, and how it changes how the cubes in the “portal dimension” are rendered out:

The next step going forward is getting the scripts that trigger the shader changes to cooperate with other shaders, so I can have multiple materials and particle effects going on in the Portal Dimension!


avatar ackso – ARConcept

Ackso and I are planning on taking anyone interested in a virtual reality experience of a lifetime on their own personal trip through a Waste Management landfill! 🙂

Imagine – a 360 video experience from both of our perspectives as we hopefully cry from the overpowering smell of trash, or maybe even vomit.

 

We would like 3D models of our heads to float around over the 360 video and narrate the experience.

For our sketch we went to a dump, the Reserve Park Landfill. The good news is there is no security guarding trash.

 

Also we got the 360 video to work.

 

phiaq – ARTech

I’ll have some more later; here I got Vuforia to detect my personal physical photos, with example models/images added on top, as well as particle systems and lights. My tech problem to fix right now is detecting my physical photos (which rated only 2 stars in Vuforia) and adding a scene onto them. Even though my picture rated only 2 stars, it could still be detected.

Eventually, I want to create a scene (not sure yet if it will be navigable, with interactions such as colliders) with audio, animations, particles, lights, and 3D models to set the mood. Here are some tutorials that will help me later on and serve as a refresher.

https://www.lynda.com/Unity-tutorials/Animation-basics-editors-Unity/639062/689309-4.html

https://www.lynda.com/Unity-tutorials/Adding-ambient-sound/639062/689320-4.html

https://www.lynda.com/Unity-tutorials/Triggering-sounds-animation/639062/689322-4.html

https://www.lynda.com/Unity-tutorials/Environment-lighting/639062/689326-4.html

https://www.lynda.com/Unity-tutorials/Ambient-particles-Dust/639062/689335-4.html

https://www.lynda.com/Unity-tutorials/Audio-timeline/639062/689347-4.html

 

Physical photo

I made my physical picture (scanned) higher contrast so Vuforia could detect it better.

 

Detects it

I’ve put some models, a material, a point light, and particles on it to see if they would pop up once Vuforia detects it.

UPDATE: I got video to overlay on the image target. I had to create a material for the video to be decoded into and add it to the mesh renderer. After that, I had to add a Video Player and hook the video up to the material.
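A minimal sketch of that setup as I understand it (the clip and renderer references are placeholders), using the Video Player’s material-override mode so decoded frames land on the image target’s material:

using UnityEngine;
using UnityEngine.Video;

public class TargetVideo : MonoBehaviour
{
    public VideoClip videoClip; // assign the video asset in the Inspector

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = videoClip;
        player.isLooping = true;
        player.renderMode = VideoRenderMode.MaterialOverride;     // decode into a material
        player.targetMaterialRenderer = GetComponent<Renderer>(); // the quad on the image target
        player.targetMaterialProperty = "_MainTex";               // the standard shader's main texture
        player.Play();
    }
}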

Video is on the left (with the city landscape) and the right (me and kerjos’s last project). Next, I will see if audio and video will work together, as well as find out how to make it high resolution.

ango-ARTech

Pure Data: An open source visual programming language useful for sound synthesis and analysis: https://puredata.info

Cloud-based service for integrating interactive sound within Unity: https://enzienaudio.com

Magicolo: PureData Plug-In for Unity: https://github.com/Magicolo/uPD

Tutorial: How to use PureData for Unity: http://melodrive.com/blog/how-to-use-pure-data-in-unity/

Using Enzien’s Heavy to upload Pure Data patches to Unity:

 

Unity Forum: Audio Analysis and Isolating Frequency values: https://answers.unity.com/questions/175173/audio-analysis-pass-filter.html
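For the frequency-isolation question above, a tiny Unity-side sketch (the band boundary is arbitrary here): sample the playing source’s spectrum and sum the energy in one band.

using UnityEngine;

public class SpectrumProbe : MonoBehaviour
{
    public AudioSource source;
    float[] spectrum = new float[512]; // must be a power of two

    void Update()
    {
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
        float lowBand = 0f;
        for (int i = 0; i < 32; i++) lowBand += spectrum[i]; // rough low-frequency energy
        Debug.Log(lowBand);
    }
}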

 

ango-ARConcept

For this project I’m interested in exploring how speech (in a lo-fi scenario, using the phonetic, sonic qualities of speech) could be used as a generative framework for world-building in AR. How could a tool like this help surface interesting relationships between how we emphasize words or parts of an utterance and how we create things?

zaport-ARTech

The technical problem I tackled was figuring out how to use the phone’s GPS in my Unity app. I used a number of online sources/tutorials to work through this. I credit N3K EN for providing a significant amount of support on this project (https://www.youtube.com/watch?v=g04jaC-Tpn0). To use the GPS data from a phone, I needed to write a script to do a couple of things. The first was to ensure the phone allows Unity to access that data. If the phone is set up to deny this location service data, the app will not work. Next, I needed to find the latitude and longitude values of the position. Finally, I wrote a script to update the output position as I move around.

GPS Code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class GPS : MonoBehaviour {

	// Singleton so other scripts (like UpdateGPSText) can read the latest fix.
	public static GPS Instance { set; get; }

	public float latitude;
	public float longitude;

	private void Start()
	{
		Instance = this;
		DontDestroyOnLoad (gameObject);
		StartCoroutine (StartLocationService ());
	}

	private IEnumerator StartLocationService()
	{
		// The user must have location services enabled for this app.
		if (!Input.location.isEnabledByUser)
		{
			Debug.Log ("Enable the GPS!!! :)");
			yield break;
		}

		Input.location.Start ();

		// Give the service up to 20 seconds to initialize.
		int maxWait = 20;
		while (Input.location.status == LocationServiceStatus.Initializing && maxWait > 0)
		{
			yield return new WaitForSeconds (1);
			maxWait--;
		}

		if (maxWait <= 0)
		{
			Debug.Log ("Out of Time...");
			yield break;
		}

		if (Input.location.status == LocationServiceStatus.Failed)
		{
			Debug.Log ("Can't determine ya location");
			yield break;
		}
	}

	private void Update()
	{
		// Keep the coordinates fresh as the phone moves, instead of only
		// grabbing the first fix once at startup.
		if (Input.location.status == LocationServiceStatus.Running)
		{
			latitude = Input.location.lastData.latitude;
			longitude = Input.location.lastData.longitude;
		}
	}
}

Update GPS Text Code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class UpdateGPSText : MonoBehaviour {

	public Text coordinates;

	private void Update()
	{
		// Refresh the on-screen readout each frame from the GPS singleton.
		if (GPS.Instance == null)
			return;
		coordinates.text = "Lat: " + GPS.Instance.latitude + "    Long: " + GPS.Instance.longitude;
	}
}

aahdee-ARTech

For my tech demo, I demonstrated that I can use particles in Unity and properly size the bathhouse. Next up is making the in-ground pool and establishing the door trigger, as in the sketch below.
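A possible starting point for the door trigger (my sketch, with a placeholder reaction): fire when the player’s camera passes through a collider placed in the doorway.

using UnityEngine;

public class DoorTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // Assumes the AR camera carries the MainCamera tag and a collider.
        if (other.CompareTag("MainCamera"))
            Debug.Log("Entered the bathhouse"); // e.g. start ambience, fade in steam
    }
}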


aahdee-ARConcept

We’re all stressed at CFA, so why not a nice “warm” “bath” in the Great Hall to help you relax?

My concept is placing a bathhouse in the middle of the Great Hall. It maps to the architectural layout at the center of the hall and offers a calming virtual environment for when you’re annoyed about Vuforia.

phiaq – ARConcept

For my AR project, I am going to create 3D scenes of my memories from photographs. Photographs capture only a picture of a memory; actually reliving those memories can be done by exploring a custom 3D space. I want viewers to scan physical photographs, and a narrative about the person will be revealed through the sounds, animations, and 3D models of the AR experience. I am also thinking about scanning a picture book I made, or a frame sculpture I made, to reveal the AR experience.

 

Inspiration Pictures:

Moving pictures from Harry Potter: What if I made an experience where a bunch of scenes moved together at once (like a narrative) on a massive scale?

Maybe I could play with 3D text in this?

 

Frame sculpture I did to preserve memory – What if I built upon the images that I already have on my sculpture? Or do digital and analog art not mix well?

 

zaport-ARConcept

This AR experience uses the PLU sticker on Sunkist Oranges to tell a short (very simplified) story of an orange’s trip from farm to eatery.

Scene 1: the farm

  • There is a representation of a farm in virtual space
  • Depicted are orange trees being sprayed with chemicals

Scene 2: the processing facility

  • Oranges are put onto conveyor belts
  • The oranges are deposited into baths
  • The baths wash the chemicals off the oranges

Scene 3: transportation

  • The oranges are inside a freight train
  • The train is moving

Scene 4: distribution center

  • The oranges (in boxes) are moving along conveyor belts

Scene 5: store delivery

  • The oranges arrive at the store

I will be using the phone’s GPS capability to display a near-accurate distance a Sunkist orange traveled. I have chosen the Sunkist brand navel orange for two main reasons. One: it is the only orange I have been able to find on campus, thus allowing me to standardize the story and image target (for Vuforia purposes). Two: Sunkist navel oranges are grown in a relatively contained region of the U.S., in southern California and western Arizona. Therefore, my distance approximation will be fairly accurate.
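A sketch of how that distance readout could work (the farm coordinates below are illustrative placeholders, not Sunkist data): a haversine great-circle distance from the phone’s GPS fix to a point in the growing region.

using UnityEngine;

public static class OrangeDistance
{
    // Rough center of the southern California growing region (placeholder values).
    const float FarmLat = 34.0f;
    const float FarmLon = -117.3f;

    // Haversine great-circle distance in kilometers from a GPS fix to the farm.
    public static float KilometersFromFarm(float lat, float lon)
    {
        const float R = 6371f; // Earth radius in km
        float dLat = (lat - FarmLat) * Mathf.Deg2Rad;
        float dLon = (lon - FarmLon) * Mathf.Deg2Rad;
        float a = Mathf.Sin(dLat / 2) * Mathf.Sin(dLat / 2) +
                  Mathf.Cos(FarmLat * Mathf.Deg2Rad) * Mathf.Cos(lat * Mathf.Deg2Rad) *
                  Mathf.Sin(dLon / 2) * Mathf.Sin(dLon / 2);
        return R * 2f * Mathf.Atan2(Mathf.Sqrt(a), Mathf.Sqrt(1f - a));
    }
}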

I am interested in using the orange because it’s a food product that many of us come into contact with each day. However, it’s a food that is not native to Pittsburgh, nor could it be grown in Pittsburgh. Thus, my aim is to bring attention to the distances food travels before ending up in a supermarket. Additionally, I call attention to the drawbacks of this highly globalized, highly fossil-fuel-dependent industry. I do this by focusing the app’s attention on the ecologically degrading components of food production, such as agrochemicals, processing, transportation, and distribution. I also want to mention that this project is not focusing on the social implications of agriculture and food policy. While these aspects are extremely important, they are not in the scope of this project.

I intend for this application to be used mostly in eateries and grocery stores. For example, in a Giant Eagle, Aldi’s, or Entropy, this piece would function as a semi-immersive AR experience, giving consumers an opportunity to learn about the production process of an orange before it is purchased.

This is how the project would work from start to finish:

  • An AR camera is pointed at an orange with a Sunkist PLU sticker
  • Vuforia recognizes the image and creates the virtual scenes (listed above)
  • As a participant moves counterclockwise away from the orange, they would view the scenes in reverse order, ending at the farm.
  • As they move, they would notice their distance from their physical location increasing. This would require the phone’s GPS data.
  • When the user reaches the “farm” scene (scene 1), the AR experience is over

Here’s a photo of the PLU sticker:

Here’s a map of all Sunkist navel orange farms:

conye – ARTech

My project, a Secret Base app, will use firebase.

So this was my technical problem – using Firebase in Unity. I’ve used Firebase before, but after following all of the steps in their quickstart, I kept seeing this horrendous error – “InitializationException: Firebase app creation failed”. I tried multiple times, with different settings and files, but this error kept showing up!!!

I switched my version of Unity to 5.6 because I had 4.something originally and they were asking for 5.3, but it still didn’t fix it.

EDIT!!!! I solved the problem!!!!! Kind of! The error I kept getting was because I needed to create a folder called StreamingAssets to put the Android Google files into. None of the tutorials said I had to do that… but it’s fine. The database works, but now I need to figure out their user authentication system.
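For anyone else hitting that initialization error, a minimal sketch of the Firebase Unity SDK’s standard entry point (separate from the StreamingAssets fix above), which checks native dependencies before touching the database:

using Firebase;
using UnityEngine;

public class FirebaseBoot : MonoBehaviour
{
    void Start()
    {
        // Resolves the native Firebase dependencies before any other calls.
        FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(task =>
        {
            if (task.Result == DependencyStatus.Available)
                Debug.Log("Firebase ready: " + FirebaseApp.DefaultInstance.Name);
            else
                Debug.LogError("Firebase dependencies missing: " + task.Result);
        });
    }
}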

these files changed everything?? no more errors! for now!

database demo

EDIT 2: I got the user auth to work! It’s just their basic quickstart example, but it’s working. It took a few hours for me to get it to build for iOS because of some problem with cocoapods :(. If anyone else wants to try Firebase/iOS, I noticed that a lot of people online had issues with the cocoapods/unity/firebase combo, but I think I’ve mostly figured it out!

For me the two big things were:

1) cocoapods couldn’t initialize its specs by itself for some (????) reason, so when I tried to build in Unity, it would get stuck on the C++ part, and eventually yell at me that “OS framework addition failed due to a Cocoapods installation failure. This will likely result in an non-functional Xcode project…

Analyzing dependencies
Setting up CocoaPods master repo
[!] Unable to add a source with url `https://github.com/CocoaPods/Specs.git` named `master-1`.
You can try adding it manually in `~/.cocoapods/repos` or via `pod repo add`.

You need to git clone it yourself to fix it, and also maybe add the line “export LANG=en_US.UTF-8” to your ~/.profile file.

ALSO, I think they’ve changed something in the past 3 years, because my repo already had a master directory and git started throwing fatal error messages at me. You need to clone into “master-1”, not “master”, which is what the error message that Unity gives you says too.

2) ALSO, in Xcode, you have to open NOT the project, but the workspace for some reason for it to compile.

 


conye – ARConcept

Secret Base – make an AR room that only exists in one geolocation and is only available to you and your friends. You have to build it from the ground up with a room builder or something.

 

sheep – ARConcept

My concept is to make a game where you use tarot cards to guide your dead lover through the afterlife. The initial concept came from the idea of having the player appear to shuffle tarot cards in an arbitrary fashion, until it’s revealed that what they’re seeing on their phone is actually the ghost moving in the direction the tarot cards move. Each time the ghost is brought to the hand of the player, which will be wearing a glove with an image target on it, the level restarts, and the tarot cards are randomly assigned spheres that either attract or repel the ghost. Based on this random drawing, you must move the cards in such a way as to bring the ghost to you.
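A sketch of that attractor/repellent mechanic (my reconstruction; the strength value is arbitrary): each card’s sphere nudges the ghost toward or away from it, so moving the cards steers the ghost.

using UnityEngine;

public class GhostSteering : MonoBehaviour
{
    public Transform[] attractors;  // cards drawn as attractors this level
    public Transform[] repellents;  // cards drawn as repellents
    public float strength = 1f;

    void Update()
    {
        // Sum unit pulls toward attractors and pushes away from repellents.
        Vector3 force = Vector3.zero;
        foreach (var a in attractors)
            force += (a.position - transform.position).normalized;
        foreach (var r in repellents)
            force -= (r.position - transform.position).normalized;

        transform.position += force * strength * Time.deltaTime;
    }
}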

I want it to be somewhat relaxing and simple.