geebo-FinalProposal

An Audio Player for One Person (me)

There are certain audio files that, when listened to, make me feel like being human isn’t so bad after all. They can be anything: songs, recordings from a friend, sound clips from a movie, or Formula 1 team radio exchanges.

However, the interfaces and procedures for accessing these files are dehumanizing and mundane, conveying no sense of occasion (e.g. below). I want to build a player that lets me play these files in a human, simple, clear way. Additionally, I want a physical interaction that allows me to find a ritualized focus on the sound, with minimal distraction from UIs and screens. Vinyl, CDs, and cassettes provide such an interface, but recording your own content onto them is laborious. My device will use microSD cards, so files can be loaded quickly using Finder, a nice calm place.

Notice below how, when I try to listen to this one specific file, the process is fast and convenient, but I get bombarded by all these other distracting messages that have nothing to do with the actual thing I’m trying to hear.

Form: I found these screenshots on Simone Rebaudengo’s Are.na and they really inspired me. Since this is an intensely personal project, I don’t mind taking the form as a given so that aspect is fixed. I want this prototype to focus on building a high-craft, actually-working product with high-fidelity electronic prototyping. This area is definitely still open to interpretation; below are my initial CAD models. The red top pieces would be interchangeable cartridges containing microSD cards, connecting to an Arduino inside the device through pogo pins when they are inserted.
Additionally, I want to test my ability to translate something fairly abstract, such as these forms, into a fully working electronic device.

geebo-tricorder

My Tricorder is called ‘Damera’. It strives to be a perfect recreation of the iOS default camera app, except for one thing: it only takes pictures of dogs. I find myself taking pictures of all kinds of things, some good and some bad, but a few others in the studio and I thought the world might be a slightly more wholesome place if all that was allowed to be photographed were dogs. Whenever I scroll through the gallery in this app, I definitely feel a lot calmer than when I scroll through my normal Photos app.

I have a slight obsession with redrawing interfaces, and I love adding a weird twist to them. Here, that twist is the absurd camera button, which flips back and forth between a prohibition sign, indicating that non-dog photos are not allowed, and a nice, happy Corgi. Unfortunately, there is still a lot of React Native troubleshooting to go before I can replicate the UI elements that would make it funny, such as a carousel of modes like ‘people’, ‘pano’, ‘dog’, etc. The typographic elements and the app icon also need more attention to detail if I want a perfect recreation of the original Camera app.

As far as technical implementation goes, the most interesting part for me was learning about CoreML on iOS. Apple distributes a MobileNet trained on ImageNet on its developer downloads page; it’s extremely fast and can identify a somewhat hilarious number of dog breeds. I run this model against the camera feed and check its outputs. If the top result matches any of the known dog classifications, I enable the shutter button.
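In case it’s useful to anyone, the gating boils down to something like the sketch below. This is a minimal reconstruction using the Vision framework, not the app’s exact code; the `DogGate` name, the `dogLabels` set, and the 0.5 confidence threshold are all illustrative.

```swift
import Vision
import CoreML

// Minimal sketch of the shutter gating, assuming a bundled MobileNet
// model; DogGate, dogLabels, and the 0.5 threshold are illustrative.
final class DogGate {
    private let request: VNCoreMLRequest
    private let dogLabels: Set<String>   // ImageNet dog-breed class names
    private(set) var shutterEnabled = false

    init(model: MLModel, dogLabels: Set<String>) throws {
        self.dogLabels = dogLabels
        request = VNCoreMLRequest(model: try VNCoreMLModel(for: model))
        request.imageCropAndScaleOption = .centerCrop
    }

    // Call with each frame from the AVCaptureSession.
    func process(_ pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        // Enable the shutter only when the model's best guess is a dog
        // breed and it is reasonably confident about it.
        shutterEnabled = dogLabels.contains(top.identifier) && top.confidence > 0.5
    }
}
```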

[Video Documentation and Gallery Coming Soon]

geebo-tricorder-check-in

I’m working on a camera that uses computer vision to constrain what can be captured to something very specific. I also thought this would be a good excuse to learn CoreML and take advantage of the Neural Engine inside the iPhone to run models at good frame rates.

I first started by creating an application that can only take pictures of dogs, but I want to move the classification to a much more niche and specific topic. One direction I’m especially interested in taking this camera is determining whether a photo I’m about to take would do well on a specific subreddit or on andys.world. My next step is to scrape some of these social websites, compare images that get upvoted to the front page with those that don’t, and see if I can build a camera app that will only take photos of things that would end up being upvoted.
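The training step for that could be as small as the Create ML sketch below. This is only a sketch of the plan, not working code from the project; the folder paths and the ‘upvoted’/‘ignored’ labels are hypothetical, and Create ML runs on macOS rather than on the phone.

```swift
import CreateML
import Foundation

// Sketch of the planned training step. Hypothetical folder layout:
// scraped images sorted into "upvoted" and "ignored" subdirectories,
// one label per folder name.
let data = try MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/Users/me/scraped"))
let classifier = try MLImageClassifier(trainingData: data)

// Export a .mlmodel the camera app could gate its shutter on.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/Upvotable.mlmodel"))
```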

Here you can see a React Native app I’ve prototyped that runs MobileNet on CoreML.

geebo-LookingOutwards03

The Space-time camera by Justin Bumstead is a small, real-time slit-scanning device that anyone can use to create their own space-time images. The system consists of a small wide-angle camera, some simple buttons and potentiometers, and a Raspberry Pi running Processing.

The system is battery operated and portable, allowing people to preview and edit their own images and videos in real time.

In 2012, Adam Magyar built a DIY camera from industrial slit-scanner parts and a medium-format camera lens, and used it to capture striking photos.

The part of the project most interesting to me is the interface described for the camera. I unfortunately haven’t managed to find any photos of it, but I think the kind of interface used to move the slit could let the photographer make new interpretations of the ‘time space’ concept in real time.

“Magyar wrote a program that would operate the scanner ‘with a special user interface that was optimized for the project,’ which allowed him to preview the compositions before he began to scan the scenes.”

Links:

[1] https://www.pdnonline.com/gear/diy-camera-adam-magyars-slit-scan-camera/

[2] https://www.instructables.com/id/How-to-Stretch-Images-Through-Time-With-Space-time/

[3] http://www.magyaradam.com/


geebo-DrawingSoftware

“How will we change the way we think about objects, once we can become one ourselves?” – Simone Rebaudengo

The aim of this project is to leverage a surprisingly available technology, the Wacom tablet, to change one’s perspective on drawing. By placing your view right at the tip of the pen, mirroring your every tiny hand gesture, the project changes what scale means, and drawing becomes a lot more visceral.

Part of this is that every micro-movement of your hand (with precision limited only by the Wacom tablet) is magnified to alter the pose of your camera. Furthermore, having such a direct, scaled connection between your hand and your POV lets you do interesting things with where you’re looking.

Technical Implementation

I wrote a Processing application that uses the Tablet library to read the Wacom’s data as you draw. It records the most precise version of your drawing to the canvas and sends the pen’s pose data to the VR headset over OSC.

Note: One thing the Wacom cannot send right now is the pen’s rotation, or absolute heading. However, the Wacom Art Pen can enable this capability with one line of code.
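For the curious, an OSC pose message is a very simple thing on the wire. The real app just calls Processing’s oscP5 library, but here is a sketch of roughly what gets sent each frame, written in Swift only for concreteness; the `/pen/pose` address and the particular float list are my own naming, not the project’s actual message layout.

```swift
import Foundation

// Sketch of an OSC message like the one sent each frame. This only
// illustrates the OSC wire format; the app itself uses oscP5.
func oscMessage(address: String, floats: [Float]) -> Data {
    // OSC strings are null-terminated and padded to 4-byte boundaries.
    func padded(_ bytes: [UInt8]) -> [UInt8] {
        var b = bytes
        repeat { b.append(0) } while b.count % 4 != 0
        return b
    }
    var data = Data(padded(Array(address.utf8)))
    let typeTags = "," + String(repeating: "f", count: floats.count)
    data.append(contentsOf: padded(Array(typeTags.utf8)))
    for f in floats {
        var be = f.bitPattern.bigEndian   // OSC arguments are big-endian
        withUnsafeBytes(of: &be) { data.append(contentsOf: $0) }
    }
    return data
}

// e.g. x, y, pressure, tiltX, tiltY — sent over UDP to the headset.
let packet = oscMessage(address: "/pen/pose",
                        floats: [0.42, 0.73, 0.9, 0.1, -0.2])
```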

The VR app interprets the OSC messages and poses a camera accordingly. The Unity app also contains a drawing canvas, and your drawing is mirrored onto it using the pen-position and pen-down signals. This part is still a work in progress, as I found out too late (and after much experimentation) that it’s much easier to smooth the cursor’s path in screen space and then project it onto the mesh.
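The fix I’m heading toward looks roughly like this; it’s a sketch of the idea rather than the actual Unity/C# code, and the smoothing constant is illustrative.

```swift
// Sketch of the screen-space fix: smooth the 2-D cursor first, then
// project the smoothed point onto the canvas mesh afterwards.
struct CursorSmoother {
    private var smoothed = SIMD2<Float>(0, 0)
    var alpha: Float = 0.25   // lower = heavier smoothing (illustrative)

    mutating func update(raw: SIMD2<Float>) -> SIMD2<Float> {
        // Exponential moving average in screen space.
        smoothed += alpha * (raw - smoothed)
        return smoothed
    }
}
```

One raycast per frame (Unity’s Camera.ScreenPointToRay plus Physics.Raycast) can then drop the smoothed point onto the canvas mesh, instead of trying to smooth the already-projected 3-D path.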

The sound was done in Unity3D using granulation of pre-recorded audio of pens and pencils writing. This causes some performance issues on Android, and I may have to look into scaling it back.
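Granulation here just means overlap-adding lots of short, windowed slices of the recording, which is also why it gets expensive on a phone. The sketch below shows the idea; it is not my actual Unity code, and the grain length and count are made-up values.

```swift
import Foundation

// Rough sketch of granulation: overlap-add short, windowed slices of
// the recorded pen-scratch audio at random offsets.
func renderGrains(source: [Float], into out: inout [Float],
                  grainLength: Int = 2048, grainCount: Int = 32) {
    guard source.count > grainLength, out.count > grainLength else { return }
    for _ in 0..<grainCount {
        let src = Int.random(in: 0..<(source.count - grainLength))
        let dst = Int.random(in: 0..<(out.count - grainLength))
        for i in 0..<grainLength {
            // Hann window keeps the grain edges from clicking.
            let w = 0.5 - 0.5 * cos(2 * Float.pi * Float(i) / Float(grainLength))
            out[dst + i] += source[src + i] * w
        }
    }
}
```

That inner loop costs grainLength × grainCount multiply-adds per render call, which is exactly the kind of work that hurts on mobile.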

geebo-2Dphysics

I wanted to create a project that used some kind of gaze tracker after seeing a simple webcam-based experiment published on the Chrome Experiments page. I also loved the idea of a project that purposely avoids being seen!

My initial idea was to create a simple 2D platformer where the player’s ability to look at their own character would be hindered by a massive physics repulsion from their gaze, making jumping, running, etc. much more difficult. I created simple flying rectangles, planned as enemies/obstacles, that were also disrupted by gaze.

However, after some initial experimentation, I became fascinated by the behavior and visual nature of this simple system. I found that the small beads took on a life of their own and seemed to squirm away whenever they were put under the microscope. Rather than force them into the platformer, I gave them a small voice to portray their anger at being observed.
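The core of their behavior is just a repulsion force that grows as a bead gets closer to the gaze point. Something like the sketch below, with placeholder constants rather than the piece’s actual tuning:

```swift
// Sketch of the core behavior: each bead is pushed away from the gaze
// point, harder the closer it is. Constants are placeholders.
func repulsion(gaze: SIMD2<Float>, bead: SIMD2<Float>,
               strength: Float = 500) -> SIMD2<Float> {
    let offset = bead - gaze
    let d = max((offset.x * offset.x + offset.y * offset.y).squareRoot(), 1)
    // Inverse-square falloff, directed away from the gaze point;
    // applied as a force to every bead, every frame.
    return (offset / d) * (strength / (d * d))
}
```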


geebo-reading1

I think Flanagan’s second notion, that critical play can mean “Toying with the Notion of Goals,” is most aligned with the area I want to explore in this course. Many mainstream games rely on players’ implicit or intuitive understanding of game patterns that are common across games and built up over time to communicate the goal of a game. A classic example is the 2D side-scroller: when people who are familiar with games are presented with the set of visual metaphors common to side-scrollers, they instantly know they have a duty to explore what’s off-screen to the right. When our characters have health bars, we generally assume our objective is to preserve that health and stay alive. These small mechanics and interactive elements are steeped in background context for players, and the games in this category play on that context, and the expectations it creates, to make the player reflect in different ways.