So far, I have created software with some basic linkages and the ability to paste drawings on top of them. I have yet to add functionality for exporting these linkages as a vector file that can be laser cut. I also want to connect to the Ponoko API so that those without access to a laser cutter can order and assemble their own linkage toys. My project also needs an interface that lets users assemble the custom linkage toys. I am planning a “dress-up game” style interface, where users choose from a preset collection of body parts that snap onto the linkages.
There are certain audio files that, when listened to, make me feel like being human isn’t so bad after all. They can be anything: songs, recordings from a friend, sound clips from a movie, or Formula 1 team radio exchanges.
However, the interfaces and procedures for accessing these files are dehumanizing and mundane, conveying no sense of occasion (e.g. below). I want to build a player that lets me play these files in a human, simple, clear way. I also want a physical interaction that allows me to find a ritualized focus on the sound, with minimal distraction from UIs and screens. Vinyl, CDs, and cassettes provide such an interface, but they are laborious to produce and to record your own content onto. My device will use microSD cards so files can be loaded quickly through Finder, a nice calm place.
Notice below how, when I try to listen to this one specific file, the experience is fast and convenient, but I get bombarded by distracting messages that have nothing to do with the actual thing I’m trying to hear.
Form: I found these screenshots on Simone Rebaudengo’s Are.na, and they really inspired me. Since this is an intensely personal project, I don’t mind having the form given, so that aspect is fixed. I want this prototype to focus on building a high-craft, actually working product with high-fidelity electronic prototyping. This area is definitely still open to interpretation; below are my initial CAD models. The red top pieces would be interchangeable cartridges containing microSD cards, connecting to an Arduino inside the device through pogo pins when they are inserted.
Additionally, I want to test my ability to interpret something fairly abstract such as these forms into a fully working electronic device.
For the final, I plan on continuing my work on the new tab screen.
Though most of its functionality was working for Project 4, the projects are not actually scrolling and dynamically updating the navigation. I definitely want to make the time aspect of the project actually work. Additionally, it would be great to add an archive for navigating through old drawings.
I also want to network this project so people can create a new room, connect with other people, and actually use this new tab screen. Ideally, this project will live on as a Chrome extension.
Playing with the guts of machine learning models to create a conversational design partner.
For my RA work with the Archaeology of CAD project I am recreating Nicholas Negroponte’s URBAN5 design system. Built in the 1970s, its purpose was to “study the desirability and feasibility of conversing with a machine about an environmental design project.” For my final project, I would like to revisit this idea with a modern machine (i.e. a machine learning model).
Most applications of ML are focused on automatically classifying, generating, stylizing, completing, etc. I would like to create an artifact that frames the interaction as an open-ended conversation with an intelligent design partner.
In its early stages, machine learning functioned as a black box. It developed an understanding of the world in its subconscious. Just like us, it had trouble articulating its intuitions. As we work on explainability, we develop tools that allow the machine learning model to communicate its understanding.
This project investigates this area through a drawing program with a chatbot powered by the mixed4d GoogLeNet hidden layer. As you draw, it will calculate the difference in the mixed4d layer between your drawing and a design intent (which could be an image or set of images) and then return the neurons with the greatest difference. Google provides an API for visualizing these neurons. This will produce a set of high-level abstractions that represent what your picture might be missing (given your intent). These images will be shown through various Tracery.js prompts. The purpose is to make it feel like a conversation with the machine instead of an insistence that you do what the machine tells you. I could also add some stochasticity or novelty checks to keep the suggestions fresh.
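The comparison step above can be sketched minimally, assuming activations at the hidden layer have already been extracted for both images (the function and variable names here are hypothetical, not the project’s actual code; a real implementation would first run GoogLeNet over each image to get the mixed4d activations):

```python
import numpy as np

def top_differing_neurons(drawing_acts, intent_acts, k=3):
    """Given per-neuron activations at a hidden layer (e.g. GoogLeNet's
    mixed4d) for the current drawing and for the design intent, return
    the indices of the k neurons whose activations differ the most.
    These are the neurons to visualize and feed into the chat prompts."""
    diff = np.abs(np.asarray(intent_acts) - np.asarray(drawing_acts))
    # Sort by descending difference and keep the top k indices.
    return list(np.argsort(diff)[::-1][:k])

# Toy example with 4 "neurons": neuron 2 differs most, then neuron 0.
drawing = [0.1, 0.5, 0.0, 0.4]
intent  = [0.9, 0.5, 1.0, 0.4]
print(top_differing_neurons(drawing, intent, k=2))  # -> [2, 0]
```

The returned indices would then be passed to the neuron-visualization step and wrapped in a Tracery.js prompt.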
This piece takes new machine learning interpretability technology, applies the idea of comparing high level abstraction vectors, and frames it as a conversation with a machine. It proposes an interaction with machines as partners instead of ‘auto’-bots that do everything for us, make all our decisions, free us from work, and control our fate.
I plan to further develop my DDR-inspired drawing game for the final project. In particular, I hope to clean up the prototype’s mechanics and incorporate a Photoshop-like interface for creating playable custom “step-charts”. If I have time, I also hope to prototype additional ideas that I have planned, such as paint fill combos, curved lines, or a polar coordinate system. At the exhibition, I will display the game and allow visitors to create their own “step-charts” or play others’.
Part of my proposed final project involves cleaning up my prototype. This can be broken down into 1) exploring better line renderer options, 2) discretizing the drawing to a grid to promote usability and playability, and 3) revamping the internal representations and structures to better support custom levels. A stretch goal is improving the GUI visuals.
The core of my final project will be the level creation tool. This tool will directly reference popular image editors such as Photoshop. The minimal implementation for this would simply be drawing lines between points on the grid and transforming these lines to a “step-chart” for the game. However, I hope to include additional functionality such as a draw speed parameter that determines the minimum length of a line that can be drawn in the tool. Another nice feature would be viewing the game sequence on the side, either as a playback or as a snapshot if the user traces the line with their cursor. The inclusion of this level creator also necessitates creating a navigable menu, preserving created levels, and supplying an interface to select levels.
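One possible internal representation for the line-to-“step-chart” conversion and the draw-speed constraint, sketched under the assumption of axis-aligned grid lines (all names here are hypothetical illustrations, not the project’s actual code):

```python
from dataclasses import dataclass

@dataclass
class Step:
    direction: str  # one of 'U', 'D', 'L', 'R'
    beat: int       # when the arrow must be hit

def line_to_steps(start, end, start_beat=0):
    """Convert an axis-aligned grid line into one arrow per grid cell,
    DDR-style. A real tool would also handle diagonals and timing windows."""
    (x0, y0), (x1, y1) = start, end
    dx, dy = x1 - x0, y1 - y0
    if dx and dy:
        raise ValueError("only axis-aligned lines in this sketch")
    if dx:
        direction, n = ('R' if dx > 0 else 'L'), abs(dx)
    else:
        direction, n = ('U' if dy > 0 else 'D'), abs(dy)
    return [Step(direction, start_beat + i) for i in range(n)]

def valid_length(start, end, draw_speed):
    """Draw-speed parameter: a line must span at least `draw_speed`
    grid cells to be drawable in the editor."""
    (x0, y0), (x1, y1) = start, end
    return abs(x1 - x0) + abs(y1 - y0) >= draw_speed
```

For example, `line_to_steps((0, 0), (3, 0))` yields three ‘R’ steps on beats 0, 1, and 2, and with a draw speed of 2 a single-cell line would be rejected by `valid_length`.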
Lastly, as a far stretch goal, I hope to explore some mechanics I have in mind. The most developed of these would be a paint fill combo mechanic. If the player successfully follows a sequence of colored arrows, the enclosed shape formed by the line drawn by those arrows will be filled with the color of the arrows. Another possibility is creating a polar coordinate game mode.
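The paint-fill combo could reduce to a standard flood fill over the grid, run once the player clears the arrow sequence. A sketch, assuming a character-grid representation where ‘L’ marks the drawn boundary (hypothetical names, not the project’s code):

```python
from collections import deque

def flood_fill(grid, start, color):
    """Fill the region containing `start` with `color`, stopping at
    line cells ('L'). `grid` is a list of rows; '.' marks empty cells.
    In the game this would run after a successful arrow combo, using
    the color of the completed arrows."""
    h, w = len(grid), len(grid[0])
    q = deque([start])
    while q:
        x, y = q.popleft()
        if 0 <= x < w and 0 <= y < h and grid[y][x] == '.':
            grid[y][x] = color
            q.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return grid
```

Because the drawn line forms a closed boundary of ‘L’ cells, the fill stays inside the enclosed shape.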
Change of plans. Writing custom shader code for Unreal has proven a mild nightmare. I’ll keep working on that slowly, but my final project will just be fixing up my drawing assignment. I’d like to make the comparison algorithm less bad, maybe add a clickable UI instead of all keyboard presses, and fix whatever bug is making it crash every Mac I run it on.
For my final project I’d like to continue a project I’ve been doing as an Independent Study. I’m implementing the paper “Art Directed Watercolor Stylizations of 3D Animations in Real-time”. Currently only a Maya implementation exists, so I’ve been working to make and release code so that people can use the watercolor style in their games. I’ve mostly completed a C++ implementation, and I’d like to make an implementation for Unreal Engine as my final project for this class (I already got permission from my advisor, Jim McCann). I’m also in the process of documenting my project so far and making a poster for Meeting of the Minds.