Tyvan – LookingOutwards-04

 

 

This Looking Outwards is dedicated to the artist, scientist, and technologist Philipp Drieger. Below are three interpretive information translations rendered from his code. Each video exhibits tension between two- and three-dimensional representations of information.

Propagation of Error foreshadows the multi-dimensional experience to come. Priming the viewer with bold orthogonal graphics adds a 2D lens to the introduction of a third dimension, interaction; multiple 2D patterns, once added to the central shape, interfere with one another as information layers accumulate. I enjoy Drieger’s decision to include a non-spatial dimension before revealing the central shape’s form, because it ties complexity to interaction rather than to space itself. As the third spatial dimension is revealed, so is color, Dorothy!

I discovered the first of his projects through a climb across YouTube, viewing music-visualizer Processing demos. Drieger’s video stood out from the others because he recognized timing intervals and dramatic movement as core traits of music.

 

2.073.600 Pixels in Space is a 2.5D extraction of brightness values. This is the same method of dimensional interpolation I used for Mocap (particularly at 1:30). The camera movement through the reliefs pops the tension of boundaries in 2D, 2.5D, and 3D space.
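As a side note on method, a brightness-to-depth extraction can be sketched in a few lines of Unity C# (my own illustration, not Drieger’s code; the prefab, sampling step, and scale are placeholder values): every sampled pixel becomes a point whose depth is its brightness.

using UnityEngine;

public class BrightnessRelief : MonoBehaviour {

    public Texture2D source;        // source image (must have Read/Write enabled on import)
    public GameObject pointPrefab;  // small quad or cube used for each sample
    public float depthScale = 2f;   // how far the brightest pixels are pushed out

    void Start() {
        // Sample every 4th pixel to keep the point count manageable.
        for (int y = 0; y < source.height; y += 4) {
            for (int x = 0; x < source.width; x += 4) {
                float brightness = source.GetPixel(x, y).grayscale;
                Vector3 pos = new Vector3(x * 0.01f, y * 0.01f, brightness * depthScale);
                Instantiate(pointPrefab, pos, Quaternion.identity, transform);
            }
        }
    }
}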

 

 

He’s also a sales engineer who makes music.

 

 

ookey-ARTech

The technical problem I had to solve with my piece was deciding how I wanted to compose my song. Something that was immediately important to me was to convey the cards’ meanings through song, the way the images themselves do. I also planned on doing a three-card spread, with the first card being the past, the middle card the present, and the third card the future, and I was interested in finding a way to convey this through song as well. Ultimately, I decided to experiment with loops for each card, inspired by the piece ROT NERO by Mark Vomit. I also decided that this would be the project I carry on to be my final piece, where I will expand my concept both in the generation of the sounds and in how the cards impact each other.

aahdee-ARProject

This is an update on my CFA AR Bathhouse.

Since last time, I added a small pool to the bathhouse and a trigger to open the doors. It doesn’t seem like a lot, but working with shaders was very difficult, and I plan to turn this into my final project since I think I can do much more with it.

First up are my tech sketches.

In another scene, I imported a prefab from this website to get an idea of what to do, and from some video tutorials I created my own hole in the floor. I then experimented with inserting long prisms into the hole, and I know how to implement it, but I need more time to understand shaders. I also tried to use multiple image targets, since even with extended tracking there’s some uncertainty. I realized that I would have to get a tape measure, map the distances between the multiple image targets, and place them accordingly in Unity. I also noticed that the Main Camera object works leagues better than the AR Camera for image tracking, for some reason. This discovery made my project look nicer and more fleshed out.

Next is what I have:

(I later realized that extended image tracking isn’t turned on in this video, whoops.)

conye – ARProject

So my concept is:

Secret Base – make an AR room that only exists in one geolocation and is only available to you and your friends. You have to build it from the ground up with a room builder.

This was my personal todo list for the project:

  • implement firebase user auth and database to get the friend system working
    • todo later:
    • fix security rules so that not everyone can read/write (see the rules sketch after this list)
    • add a sign out feature
    • figure out authorization persistence
  • figure out how to use vuforia user defined targets / uploading image targets from a camera
  • figure out geolocation with mapbox probably??
  • try to add uConstruct
    • ask golan to help me pay $30 for uConstruct
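For the security-rules item above, the standard fix is to scope reads and writes to the signed-in user. A sketch of typical Firebase Realtime Database rules (illustration only; the "users" path is an assumption, not the project’s actual rules):

{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    }
  }
}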

What I have currently is a working user authorization system and database.
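As a rough sketch of the moving parts (illustration only, using the Firebase Unity SDK, not the project’s actual code; the "users" and "friendCode" names are assumptions based on the database screenshot described below), sign-in plus a per-user database write looks roughly like this:

using Firebase.Auth;
using Firebase.Database;
using UnityEngine;

public class FriendSystemSketch : MonoBehaviour {

    FirebaseAuth auth;
    DatabaseReference db;

    void Start() {
        auth = FirebaseAuth.DefaultInstance;
        db = FirebaseDatabase.DefaultInstance.RootReference;
    }

    // Sign an existing user in, then store a friend code under /users/<uid>.
    public void SignIn(string email, string password) {
        string friendCode = Random.Range(1000, 10000).ToString();  // generated on the main thread

        auth.SignInWithEmailAndPasswordAsync(email, password).ContinueWith(task => {
            if (task.IsFaulted || task.IsCanceled) {
                Debug.LogWarning("Sign-in failed");
                return;
            }
            FirebaseUser user = auth.CurrentUser;
            db.Child("users").Child(user.UserId).Child("friendCode").SetValueAsync(friendCode);
        });
    }
}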

Basic Vuforia setup is working!

Above is a screenshot of the current database. It’s just two people, Connie and connie. They’re ordered by user IDs, and each entry contains a list of friends, a personal friend code, and a list of furniture and its info.

Questions for the presentation:

  • I also would like some aesthetic direction
  • does anyone know anything about dynamically uploading vuforia image targets
  • build base in 3d view? or first person view? build in blank world or in AR?
  • only one person can edit the base at a time to reduce confusion? or multiple users can edit at a time?

farcar – ArProject

The biggest technical problem in my project was getting wall detection to work for Google ARCore, something I wasn’t able to solve last week. I had spent so much time trying to figure out how to generate TrackingPlanes when in reality I could have just used the point cloud data I was already getting and tracked objects to it. And that’s exactly what I did.

The algorithm works like this:

Computer Language

  • Save an array ‘Content’ of String values.
  • Read from the point cloud data (Points are in the form <x,y,z,c> where c is the confidence).
  • If screen is touched, read from the touch data.
  • Increment ‘touchCount’.
  • Assign Content[touchCount % Content.Count] to a 3D text layer (this will be used later when we actually place down objects).
  • Project all the 3D points onto a 2D screen.
  • Using the <x,y> position from the touch data, find the closest 2D projected 3D point.
  • Find the two closest points to that point in 3D space.
  • Construct two vectors out of the three points (this is our plane).
  • Get the cross product of the two vectors.
  • Using the initial 3D point with the cross product, instantiate a new 3D text layer with the position of the 3D point and rotation of the cross product.

Human Language

  • Press somewhere on the screen
  • The closest point to where you touched becomes the location where a new text object is generated.
  • The text tries to orient itself to the surface by extrapolating a plane from its neighboring points.

After some additional refining (such as filtering out points with low confidence values), I was able to get more accurate results.

 

Code

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using GoogleARCore.HelloAR;

public class TextCore : MonoBehaviour {

    public int clickCount;  //number of screen taps
    public TextMesh textLayer;  //text layer to be changed
    public GameObject TrackedPointPrefab;  //prefab that generates over tracked points onscreen
    public GameObject GeneratePointPrefab;  //prefab that generates over a particular tracked point when screen is tapped
    public List<Vector4> points;  //array of tracked points <x,y,z,c> s.t. c is confidence from 0 to 1

    public Camera Camera;

    List<string> content = new List<string>();  //array of words to process

    //filterPoints (filter) = drop all tracked points whose confidence is below filter
    void filterPoints(float filter) {
        for(var i = points.Count - 1; i >= 0; i--) {  //iterate backwards so RemoveAt doesn't skip entries
            if(points[i].w < filter) points.RemoveAt(i);
        }
    }

    //canSee (camera, point) = true if point is in view of camera, false otherwise
    bool canSee(Camera camera, Vector3 point) {
        Vector3 viewportPoint = camera.WorldToViewportPoint(point);
        return (viewportPoint.z > 0 && (new Rect(0, 0, 1, 1)).Contains(viewportPoint));
    }

    //minIndxFinder (minIndx, minReg, touchPos) = the index i s.t. points[i] is the closest point to touchPos
    int minIndxFinder(int minIndx, float minReg, Vector3 touchPos) {
        for(int i = 0; i < points.Count; i++) {
            if(canSee(Camera, new Vector3(points[i].x, points[i].y, points[i].z))) {
                //project the 3D point into screen space before comparing it to the touch position
                Vector3 pointTemp = Camera.WorldToViewportPoint(new Vector3(points[i].x, points[i].y, points[i].z));
                Vector3 pointTempAdj = new Vector3(Screen.width * pointTemp.x, Screen.height * pointTemp.y, 0);

                float minRegTemp = (pointTempAdj - touchPos).magnitude;
                if(minRegTemp < minReg) {
                    minReg = minRegTemp;
                    minIndx = i;
                }
            }
        }
        return minIndx;
    }

    //generateText (target, cross, depthLen) = stack depthLen copies of the text prefab along the plane normal at target
    void generateText(Vector3 target, Vector3 cross, int depthLen) {

        //flip the normal so the text faces the camera
        Vector3 camDir = Camera.transform.forward;
        float dot = Vector3.Dot(camDir, cross);
        if(dot < 0) {
            cross = Vector3.Reflect(camDir, cross);
        }

        //scale the font with distance so far-away text stays legible
        float dist = ((Camera.transform.position) - cross).magnitude;
        textLayer.fontSize = 1000 + 1000 * (int)dist;

        textLayer.color = Color.black;
        Vector3 depth = target;
        for(int i = 0; i < depthLen; i++) {
            if(i > depthLen - 5) {
                textLayer.color = Color.white;  //last few copies are white
            }
            Instantiate(GeneratePointPrefab, depth, Quaternion.LookRotation(cross, new Vector3(0, 1, 0)), transform);
            depth -= 0.001f * cross;
        }
    }

    void updateTouch() {
        if (Input.touchCount > 0)
        {
            Touch touch = Input.GetTouch(0);
            if(touch.phase == TouchPhase.Began) {
                float x = touch.position.x;
                float y = touch.position.y;
                Vector3 touchPos = new Vector3(x, y, 0.0f);
                clickCount++;
                clickCount = clickCount % content.Count;
                textLayer.text = content[clickCount];

                if(points.Count > 2) {
                    //closest point (in screen space) to the touch
                    int minIndx = minIndxFinder(0, 99999999999999f, touchPos);
                    //then the two closest points (in world space) to that point
                    int minIndx2 = 1;
                    int minIndx3 = 2;
                    float minDis1 = 99999999999999f;
                    float minDis2 = 99999999999999f;
                    for(int i = 0; i < points.Count; i++) {
                        if(i != minIndx) {
                            float minDisTemp = (points[minIndx] - points[i]).magnitude;
                            if(minDisTemp < minDis1) {
                                minDis2 = minDis1;   //previous closest becomes second closest
                                minIndx3 = minIndx2;
                                minDis1 = minDisTemp;
                                minIndx2 = i;
                            }
                            else if(minDisTemp < minDis2) {
                                minDis2 = minDisTemp;
                                minIndx3 = i;
                            }
                        }
                    }
                    //two vectors spanning the local plane, and its normal
                    Vector3 v1 = points[minIndx2] - points[minIndx];
                    Vector3 v2 = points[minIndx3] - points[minIndx];
                    Vector3 cross = Vector3.Cross(v1, v2);
                    cross /= cross.magnitude;
                    Vector3 target = new Vector3(points[minIndx].x, points[minIndx].y, points[minIndx].z);
                    generateText(target, cross, 25);
                }
            }
        }
    }

    void Start() {
        content.Add("Now it's time \n to go to bed");
    }

    void Update() {
        //generic type parameter (the point-cloud provider component) is missing in the original post
        points = GetComponent().getPoints();

        print(Camera.transform.forward);

        //filterPoints(0.1f);

        updateTouch();
    }
}

Thanks to Golan & friends of the Studio for helping me mathematically plan the project.

kerjos-ARProject

(In Progress)

I have made a lot of progress toward my idea for a complete app. Below are some GIFs showing the core features I’ve been able to develop so far, including loading an inventory of words to find based on a given poem, recognizing photographed words and checking them off that list, and having a working cycle for starting a new game.

This demo uses the first three lines of E. E. Cummings’s “anyone lived in a pretty how town.”

Working Inventory:

This inventory still needs to be formatted correctly, including for iOS. (This was one of the bugs that prevented me from demoing my progress on an iPhone.) Additionally, it needs to alphabetize or otherwise scramble the words of the given poem. It is also currently splitting up the .txt file incorrectly and missing words that begin with punctuation [like “(with”]. It also needs to accommodate longer poems, ideally by generating multiple pages of inventory.
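For the punctuation issue, a small tokenizing sketch (an illustration in C#, not the project’s actual parser) that splits the .txt on whitespace and trims surrounding punctuation would look something like this:

using System;
using System.Linq;

static class PoemParser {
    // Split a poem into clean words, trimming punctuation such as the "(" in "(with"
    // so every inventory entry can be matched against photographed text.
    public static string[] WordsFromPoem(string poemText) {
        char[] punctuation = { '(', ')', ',', '.', ';', ':', '"', '?', '!' };
        return poemText
            .Split((char[])null, StringSplitOptions.RemoveEmptyEntries)  // split on any whitespace
            .Select(w => w.Trim(punctuation))
            .Where(w => w.Length > 0)
            .ToArray();
    }
}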

 

Takes Pictures and Crosses Words Off List:

I’m not sure if this is working on iOS yet. Ideally, taking a photo will be accompanied by an animation indicating whether or not a desired word was found. If one is not, I would like to include some fading stars or some other indication of failure. If one is found, I would like to animate a star or a sparkle traveling over to the inventory icon.

Recognizes when all the words have been found and offers start of a new game:

Ideally, the inventory would open first and display all the found words, then transition more smoothly to the poem displayed in full. I would like to accompany the completed-poem page with an audio file reading the poem to the player.

One of the worries I had about this idea was that it would be fairly easy to cheat by loading words on another screen and snapping pics of them. Fortunately, through playtesting I realized that the camera has a tough time reading from a digital screen, and that’s frustrating enough that I think it will discourage cheating, at least at first.

Thanks to my buddy Peter and my friends in the STUDIO.

 

miyehn-ARproject

Here’s what I have (for now).

My concept hasn’t changed since my previous blog posts. I just wasn’t able to implement everything in time. There’s still a lot to work on, but hopefully this shows the very basics of my idea.

The next things to work on include: fixing the current bugs (not so apparent in the video, but I know they’re there), correcting the scale of everything, texturing the model(s), maybe changing the image target for placing the firework to something more interesting or less obvious as an image target, putting the image target for the lighter onto a paper model of a lighter, and possibly also implementing the part where multiple fireworks are connected with gunpowder and lit in sequence.

Actually I do like one of the image targets I made. It just doesn’t fit in the context very well. I’ll figure this out later.

rolerman-ARProject

AR Copy & Paste

I created AR Copy and Paste using Apple’s ARKit. You hit the “capture” button to grab an image from the real world, and then tap all around you to paste it wherever you want. At any time, you can grab new frames and add to your masterpiece.

I think it’s a fun prototype, and I’d like to keep developing it further. The UI is currently very rudimentary, and I’d like to be able to isolate specific sections of images (e.g. to highlight a single leaf and paste that) rather than copying and pasting the entire camera view. This was my first experience with ARKit and I’d like to keep learning!

Thank you to Zach Lieberman’s app Weird Type for a chunk of the inspiration for this project!

 

 

miyehn-artech

First I tried detecting the “shaking motion” of an image target by comparing its position change per unit time. I could use this trick to implement the action of leaving a trail of gunpowder from the firework. But this didn’t work because the frame rate of the AR camera was extremely low, so if I actually shook the image target in front of the camera, it appeared discontinuous and Vuforia no longer recognized it.
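The basic idea, sketched roughly (illustration only, with made-up threshold and window values, not the exact code from this project): accumulate how far the target moves each frame and flag a shake when the movement per second crosses a threshold.

using UnityEngine;

public class ShakeDetector : MonoBehaviour {

    public Transform imageTarget;        // the tracked Vuforia image target
    public float shakeThreshold = 0.5f;  // movement (in world units) per second counted as a shake

    Vector3 lastPosition;
    float movedThisWindow;
    float windowTimer;

    void Start() {
        lastPosition = imageTarget.position;
    }

    void Update() {
        // Accumulate frame-to-frame movement of the target.
        movedThisWindow += (imageTarget.position - lastPosition).magnitude;
        lastPosition = imageTarget.position;

        windowTimer += Time.deltaTime;
        if (windowTimer >= 1f) {
            if (movedThisWindow > shakeThreshold) {
                Debug.Log("Shake detected: leave a bit of gunpowder here");
            }
            movedThisWindow = 0f;
            windowTimer = 0f;
        }
    }
}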

So I tried something else: I managed to add two image targets to the scene, and Vuforia could recognize both of them. I then wanted to get their relative positions, but I was stuck. Some Googling gave me several fixes, but none of them worked, and I was frustrated for a while until I realized the problem was caused by a bug in my own code. Oops.
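Once both targets are tracked, the relative position itself is short to compute; a sketch (targetA and targetB stand for whatever Transforms the two ImageTarget GameObjects have):

using UnityEngine;

public class RelativeTargets : MonoBehaviour {

    public Transform targetA;  // Transform of the first ImageTarget
    public Transform targetB;  // Transform of the second ImageTarget

    void Update() {
        // Position of target B expressed in target A's local space,
        // valid while Vuforia is tracking both targets.
        Vector3 bRelativeToA = targetA.InverseTransformPoint(targetB.position);
        Debug.Log(bRelativeToA);
    }
}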

There’s really not much imagery to show here because most of this was just Googling and debugging. To see my progress, check out my next few blog posts.

phiaq – ARProject

Photo Album AR Experience

Full Album on Desktop (Video texture on my phone is really slow)

(better documentation to come)

I created a physical photo album that has videos inside the pictures, as a way to relive your memories. Here, I grabbed family footage and turned it into still images and videos in the album. Since the videos did not come with audio, I had to edit audio found online and sometimes create my own. In this project, I wanted to explore ways of capturing and remembering your memories in their most real form, which includes movement and sound. The photo album is a way of looking back at the happy moments you would want to remember and cling to, in case you forget.

Technically, I struggled with getting the videos to play normally on the phone when the image target was detected. The phone would just show a still image of the video, because the video lagged so much. I tried using HandBrake, preloading assets, and other things, but I could not get the videos to play well on the phone. I am going to keep trying to eliminate the lag on the phone and capture better documentation in my free time.
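One mitigation worth noting here (a sketch of a standard approach with Unity’s VideoPlayer, not necessarily what this project ended up using; OnTargetFound/OnTargetLost stand in for whatever Vuforia trackable callbacks are wired up) is to prepare the video before the target appears and only call Play() once it is buffered:

using UnityEngine;
using UnityEngine.Video;

public class AlbumVideo : MonoBehaviour {

    public VideoPlayer videoPlayer;

    void Start() {
        videoPlayer.playOnAwake = false;
        videoPlayer.Prepare();  // start buffering before the image target is ever seen
    }

    // Called by the Vuforia trackable handler when the photo is detected.
    public void OnTargetFound() {
        if (videoPlayer.isPrepared) {
            videoPlayer.Play();
        } else {
            videoPlayer.prepareCompleted += vp => vp.Play();
        }
    }

    // Called when tracking of the photo is lost.
    public void OnTargetLost() {
        videoPlayer.Pause();
    }
}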

I think this is an idea that could be developed further if I were to make it again, and it would also be interesting to further develop the aesthetics of the physical album. If I were to iterate, I would want to create a narrative with the movies and place more specific and interesting audio recordings that reveal more of the family in a more real, and not just idealized, sense.

Thanks to Claire, Sydney, Peter, and Alex for helping out

Jackalope-ARProject

Okay, so my idea was to make an AR buddy that would be kinda rude/avoidant: it would hang around you all the time, talk constantly at you, and bother you, but as soon as you tried to say anything to it or look at it, it would turn away and ignore you. These functions were all achieved. I also wanted to animate the model to look more natural and have it go to sleep whenever a good plane was detected; however, neither of these worked out because I had a lot of trouble making Maya features translate to Unity (this is also why my blobby dude is such a strange color), and I didn’t have time this week to learn to rig and animate in Unity. So all in all, I’m happy I did the main thing I set out to do, but I regret that this project still feels quite minimal.
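The “turns away when you look at it” behavior can be summarized in a short sketch (a reconstruction for illustration, not the project’s exact code; the threshold and turn speed are made-up values): if the AR camera’s forward vector points roughly at the buddy, rotate the buddy to face away.

using UnityEngine;

public class AvoidantBuddy : MonoBehaviour {

    public Transform arCamera;          // the AR camera's transform
    public float lookThreshold = 0.9f;  // cosine of the "being looked at" angle
    public float turnSpeed = 5f;

    void Update() {
        Vector3 toBuddy = (transform.position - arCamera.position).normalized;
        bool beingLookedAt = Vector3.Dot(arCamera.forward, toBuddy) > lookThreshold;

        if (beingLookedAt) {
            // Turn to face directly away from the camera.
            Quaternion away = Quaternion.LookRotation(toBuddy, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, away, turnSpeed * Time.deltaTime);
        }
    }
}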

Special thanks to Miyehn, my buddy who talked about ideas with me, and to Zbeok, whose speech project mildly inspired this, and also to the Boos in the Mario games.

zaport-ARProject

Produce Stories

ProduceStories is an augmented reality app that aims to humanize farmworkers through spoken word performances and poetry. I made this piece to push back on the notion of commodified labor in the agriculture industry and to humanize those who sustain it. I wanted to focus on the creativity and humanness of the individuals implicated in the production of food, to remind consumers that PEOPLE with STORIES are implicated in the production of food. From a consumer’s perspective, it’s not hard to lose sight of where a product comes from. In the case of fresh produce, it’s a challenge to connect a fruit or vegetable conveniently displayed in your supermarket to a human life. Thus, the app I’ve created is a scavenger hunt of sorts that is in direct conversation with these ideas. Hidden among (or within) different fruits and vegetables are the stories and creative writings of farmworkers. This app does more than just remind consumers that the pear or orange they eat comes from someone’s backbreaking work. It challenges the traditional narrative of objectified and commodified farmworkers. I was interested in storytelling and sourced my interviews from StoryCorps. The first interview in the video is Vito de la Cruz and Maria Sefchick-Del Paso, and the second is Alicia Beltrán-Castañeda and Serena Castañeda. You can find the complete stories here (storycorps.org/listen), or you can interact with my app.

My plan from here is to continue to grow my library of image targets and stories. Eventually, I would like to get to a point where I have poems or spoken word excerpts for nearly every type of fruit in a supermarket. Finally, my biggest concern is that I am using produce to represent the lives of farmworkers. In an effort not to work counterproductively, I ask you, my audience, whether this project succeeds in accomplishing its goal, or whether it only perpetuates the notions it is trying to push back against.

Special thanks to Tyvan and Claire for helping me on this project!

Here are some photos:

dechoes – fatik – ARProject

Made in collaboration with Fatik.

Step 1: Curation and Cinematography.

Fatik and I were interested in the idea of intent and perception. We wanted to go beyond the simple tech demo and concentrate heavily on curation and craft. Our project lies at the intersection of an emerging visual technology, AR, and a more classical art form, cinema. Fatik and I spent an entire day collecting items and curating an environment we could use for filming. We concentrated heavily on color and composition in order to enhance the mundane activities we planned to depict in our scene, which themselves lie at the crossroads of the normal and the uncanny.

 

    

   

Step 2: Editing and Animation.

After filming our scenes, we did a bit of post-production to finalize the footage. We then decided which parts of the scenes we wanted to augment, concentrating on different background aspects of the shot that play on perception and attention.

Step 3: Final.

We augmented our scenes by projecting them onto a wall, suggesting the possibility of using AR to augment cinema in theaters in the future.

Troubleshooting:

We had a lot of issues with the model positioning and tracking. After spending a decent amount of time on Stack Overflow and Vuforia Support, we came to the conclusion that this was a Vuforia bug that we could not fix perfectly. We got it to work well enough, i.e., about one in every five attempts. Vuforia is still quite the wild card.

#10 – Avatar and Ackso

Once upon a time Ackso and I went to a Landfill.

It’s called the Reserve Park Landfill.

But that landfill only had building waste and was also easy to break into!

So we decided to take on the big fish – the Waste Management Landfill in Monroeville – where all of Pittsburgh and the surrounding area dumps its trash.

This is the entrance. If you can’t tell (which we couldn’t), the entire property is surrounded by barbed-wire fences.

At first we decided to flank them. With 360 video cameras strapped to our heads, we bushwhacked through the forest. We only had an hour before sunset, because we needed to sneak in unnoticed and WML closes at 4.

But as I said before, the entire property was surrounded by 9 ft barbed wire, so how were we supposed to get in without wire cutters or a shovel? I suggested just going through the front gate, which was sketchy since people were still working. So that’s what we did.

It was much easier than anticipated.

The property had several mountains of covered trash. We ran up to the top just in time to catch the sunset in search of our beloved waste.

Over every hilltop we thought we would find a sea of trash, but the only trash we could find was in this covered room.

Jackson climbed the pile because we forgot gloves.

He is smiling from ear to ear in this shot, in conversation with 2 cats and several raccoons.

We finally found our trash.

Super unfortunately for us, though, both 360 cameras missed him climbing the pile.

Our entire project depends on that footage, so we will be going back on Saturday. We love committing crimes, so this will be another brush with almost being arrested.

rolerman-ARTech

Placing 2D images in 3D space with ARKit + SpriteKit

For my AR Copy & Paste app, I need to be able to place 2D images in the real world. I’ve never used ARKit before, so this will be an adventure. I also want the images to billboard, so that when the user is looking at them, they are facing the user, not turned around.

The reference page that is most helpful to me: https://developer.apple.com/documentation/spritekit/skspritenode

It turns out that this is a common example app for SpriteKit, so I set up the example ARKit app and played with it. Every time you tap the screen, it places a new sprite. I tried it with text as well, and with placing other images.



 

 

rolerman-ARConcept

I want to create an AR “copy & paste” app using Apple’s ARKit. The app would allow you to select something from the world around you, and then stamp that same image all over your world. For this first prototype, I’m going to have people grab rectangularly cropped chunks of the ARFrame around them, and then paste those throughout the world.