@good_egg_bot is a twitter bot that makes eggs.

The good_egg_bot is a benevolent twitter bot that will try its best to make the egg(s) you desire. Twitter users can tweet requests for certain colors, types, sizes, and amounts of eggs to get an image of eggs in response. I made this project because I wanted a free and easily accessible way for people to get cute, customizable pictures of eggs. I was also excited by the prospect of having thousands of unique egg pictures at the end of it all.

I was inspired by numerous bots that I have seen on twitter that make generative artwork, such as the moth bot by Everest Pipkin and Loren Schmidt, and the Trainer Card Generator by xandjiji. I chose eggs as my subject matter because they are simple to model through code, and I like the way that they look.

I used nlp-compromise to parse the request text, three.js to create the egg images, and the twitter node package to respond to tweets. I used headless-gl to render the images without a browser. Figuring out how to render them was really tricky, and I ended up having to revert to older versions of some packages to get it to work. The program is hosted on AWS so the egg bot will always be awake. I found Dan Shiffman’s Twitter tutorials really helpful, though some of them are outdated since Twitter has changed its API.
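To give a sense of the parsing step: the bot itself uses nlp-compromise, but the idea can be sketched in plain JavaScript. The keyword lists and function name below are made-up illustrations, not the bot's actual vocabulary or code:

```javascript
// Simplified sketch of egg-request parsing (a stand-in for the bot's
// nlp-compromise pipeline; keyword lists here are illustrative only).
const COLORS = ['red', 'blue', 'green', 'pink', 'golden'];
const SIZES = ['tiny', 'small', 'big', 'huge'];
const NUMBER_WORDS = { one: 1, two: 2, three: 3, four: 4, five: 5 };

function parseEggRequest(text) {
  const words = text.toLowerCase().match(/[a-z0-9]+/g) || [];
  const request = { count: 1, color: null, size: null };
  for (const word of words) {
    if (NUMBER_WORDS[word] !== undefined) request.count = NUMBER_WORDS[word];
    else if (/^\d+$/.test(word)) request.count = parseInt(word, 10);
    else if (COLORS.includes(word)) request.color = word;
    else if (SIZES.includes(word)) request.size = word;
  }
  return request;
}

console.log(parseEggRequest('three big blue eggs please'));
// → { count: 3, color: 'blue', size: 'big' }
```

The parsed object would then drive the three.js scene (egg count, material color, mesh scale) before headless-gl renders it to an image.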

This project has a lot of room for more features. Many people asked for poached eggs, which I’m not really sure how to make, but maybe I’ll figure it out. Another suggestion was animating the eggs, which I will probably try in the future. Since it took so much effort to figure out how to render a 3D image with headless-gl, I think I should take advantage of the 3D-ness for animations. I like how, since the bot is on twitter, I have a record of how people are interacting with it, so I can see what features people want. In my mind, this project will always be unfinished, as there are so many ways for people to ask for eggs.


Here is a video tutorial on how to use the good_egg_bot:


‘wide-awake’ sleeping mask

or ‘sleep with one eye open’… ‘sleep walking’? … ‘seeing through the eyes of the computer’? seeing as computers do? digitally enabling the unconscious?

technically —

I’m thinking of having a live performance of someone napping in the studio wearing a custom iPhone-holding sleeping mask.

The tentative technical plan is to have a native app running full screen on the phone, tracking the people watching the sleeping person through the front-facing camera, surreptitiously turned on the entire time. The eyes on display would react to the people in the environment and to optical flow… potentially… either way, we will see. Apple’s Core ML has some finicky aspects to it.



I plan to build a system that allows anyone to create their own constellations in virtual reality (VR), and easily save and share their creations with other people. Users would be able to select stars from an interactive 3D star map of our Galaxy that consists of the 2,000 brightest stars in the Hipparcos Catalog, and connect any of the stars together to form a unique custom constellation shape from Earth’s perspective. When the constellation is saved and named, users would be able to select and access detailed information on each star. They would also be able to break away from Earth’s fixed viewpoint to explore and experience their constellation forms in 3D space. The constellation would be added to a database of newly created constellations, which could be toggled on and off from a UI panel hovering in space.
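As an illustration of the star-map math (my sketch, not the project's actual code): each Hipparcos entry carries a right ascension, declination, and parallax, which is enough to place the star in 3D space. The field names below are assumptions:

```javascript
// Sketch of placing a Hipparcos star in 3D space.
// RA/Dec in degrees, parallax in milliarcseconds (mas).
function starToCartesian({ raDeg, decDeg, parallaxMas }) {
  const ra = (raDeg * Math.PI) / 180;
  const dec = (decDeg * Math.PI) / 180;
  // Distance in parsecs: d = 1/p with p in arcseconds, so 1000/p for mas.
  const distanceParsecs = 1000 / parallaxMas;
  return {
    x: distanceParsecs * Math.cos(dec) * Math.cos(ra),
    y: distanceParsecs * Math.cos(dec) * Math.sin(ra),
    z: distanceParsecs * Math.sin(dec),
  };
}

// Sirius: RA ≈ 101.287°, Dec ≈ −16.716°, parallax ≈ 379.21 mas (~2.64 pc)
const sirius = starToCartesian({ raDeg: 101.287, decDeg: -16.716, parallaxMas: 379.21 });
```

Breaking away from Earth's viewpoint then just means moving the VR camera through these Cartesian positions instead of projecting them onto a celestial sphere.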

The saved constellation data would also be used to generate a visualization of the constellation on the web, which would provide an interactive 3D view of the actual 3D shape of the constellation, with a list of its constituent stars and detailed information of each star. The visualization may potentially include how the constellation shape would have appeared at a certain point in time, and how it has changed over the years (e.g. a timeline spanning Earth’s entire history).

The core concept of the project is to give people the freedom to draw any shape or pattern they desire with existing stars and create a constellation of personal significance. The generated visualization would be a digital artifact people could actively engage with to uncover the complex, hidden dimensions of their own creations and make new discoveries.

A sketch of the concept below:

A rough sketch of the builder and visualization functionalities
Interactive 3D star map I built in VR
Exploring constellation forms in 3D space, and accessing their info

tli – telematic reflection

The first thing I learned is that many of my classmates have terrible handwriting.

Kidding. More seriously, the feedback framed my vandalism cairn in a way I hadn’t expected while I was implementing it at 3 AM right before the deadline. For one, many people referred to it as a game, even though I hadn’t fully intended it to be one. People also emphasized the collaborative aspect much more than I had thought about during implementation. I also received useful feedback on the execution, such as technical things to fix, usability improvements, and directions for further exploration of the idea.

gray – telematic crit reflection

Perhaps the overall thrust of the piece was lost, or unapparent, when I showed three as-yet disconnected interactional elements without the unification that would make the overall experience legible on first viewing in crit. Many of the critiques identified that the completed piece would likely be more understandable once all of its elements are unified.

The categories in the DAIE sequence are almost more useful to me, reading the collated responses, than to the writers themselves. Simple sections such as Description make it apparent whether I communicated the actual content fully, whereas the act of writing those uneditorialized observations might be less of an opportunity for the writers’ own synthesis.

Sometimes certain fields were left blank, as if those were the ones sacrificed to the conversation and the time constraint.

Some of the comments questioned whether the piece was more an instrument for creating melodies or a tool for exploring a prewritten melody. I built it for both, though I think granular synthesis lets a prewritten melody be played as an “instrument,” as if the spatial structure and arrangement of the available notes afford the same ability to create melodies, even though their spatial pattern isn’t chromatic.

I received some good feedback that the discrete structure-per-note arrangement of pianos and similar instruments might apply to this piece: multiple separate volumes, each with the full continuous mapping of its granulator, but with each volume discrete from the others.

dechoes – visualization/manufactory

Infinite Cities Generator

This project is based on Italo Calvino’s book Invisible Cities, a novel which recounts the tales of the travels of Marco Polo, as told to the emperor Kublai Khan. “The majority of the book consists of brief prose poems describing 55 fictitious cities that are narrated by Polo, many of which can be read as parables or meditations on culture, language, time, memory, death, or the general nature of human experience.” (Thanks, Wikipedia.)

What interested me about this novel was how readily it lends itself to generative storytelling and big datasets. I noticed, as I read on, how closely the author was following specific rule sets, and how those same rules could be used to generate a vast number of new stories. I was fascinated by the complexity, detail, and visual quality of each city that Calvino created, and decided to create more of my own.

I started by decomposing the structure of his storytelling and separating his individual texts into multiple categories, such as Title, Introduction, Qualifiers, Actions, Contradictions, and Morals. I sampled actual thoughts, sentences, and names from his book, but also added my own to the mix. I programmed my Infinite Cities Generator in p5.js using Kate Compton’s Tracery (Thanks, Kate!).
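To give a flavor of the approach, here is a toy Tracery-style expansion in plain JavaScript. The generator itself uses Kate Compton's Tracery library; the rules and the tiny `flatten` function below are illustrative stand-ins, not the real grammar:

```javascript
// Minimal Tracery-style grammar: each symbol maps to a list of rules, and
// #symbol# references are recursively expanded. (Toy example rules only.)
const grammar = {
  origin: ['#title#. #introduction# #qualifier#'],
  title: ['Cities & Memory: Zaira', 'Cities & Desire: Anastasia'],
  introduction: ['The traveler arrives at dusk, when the walls glow amber.'],
  qualifier: ['Every street remembers a promise no one kept.'],
};

function flatten(grammar, symbol = 'origin') {
  const options = grammar[symbol];
  // Pick one rule at random for this symbol.
  const rule = options[Math.floor(Math.random() * options.length)];
  // Replace each #name# with a recursive expansion of that symbol.
  return rule.replace(/#(\w+)#/g, (_, name) => flatten(grammar, name));
}

console.log(flatten(grammar));
```

With the real corpus, each of the categories (Title, Introduction, Qualifiers, Actions, Contradictions, Morals) becomes a symbol with dozens of rules, and the combinatorics do the rest.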

Over the course of the next few weeks, I would like to make my rule sets more complex, as well as create generative maps for each new city as a way to offer a visual escape into them. In addition, I would like to generate PDFs and actually print the book, so that I have a physical, believable artifact by the end of the project.

Below are a couple samples of the kind of stories my Infinite Cities Generator can create:

In addition to this project, I have been working on a 3D map experience retracing all the places I walked to over an entire week while dealing with grief. I walk when I have things to deal with or think through, and that week I walked an average of two hours a day. I’m thinking of displaying this instead of, or in addition to, the Infinite Cities Generator. It would be displayed on the Looking Glass as a 3D video playing in real time, with the camera traveling the exact paths I did.



And in addition to THAT, I have been slaving over my thesis project, Dedications I–V, a volumetric documentary on storytelling in the context of progressive memory loss. It will take the form of five individual chapters on memory, with five different protagonists. Although I can’t really show it just yet, this is where all of my energy has been going.


(i’m overcompensating because i haven’t produced anything real in this class yet — whoopsie)



My Tricorder is called ‘Damera’. It strives to be a perfect recreation of the iOS default camera app, except for one thing: it only takes pictures of dogs. I find myself taking pictures of all kinds of things, some good and some bad, but a few others in the studio and I thought it might make the world a slightly more wholesome place if all that was allowed to be photographed were dogs. Whenever I scroll through the gallery in this app, I definitely feel a lot calmer than when I scroll through my normal photos app.

I have a slight obsession with redrawing interfaces, and I love to add a weird twist to them. Here, I chose the absurd camera button, which alternates between a prohibition sign indicating that non-dog photos are not allowed and a nice, happy Corgi. Unfortunately, there is still a lot of React Native troubleshooting to do to replicate the UI elements that would make it funny, such as a carousel of mode options (‘people’, ‘pano’, ‘dog’, etc.). Additional attention to detail is required on the typographic elements, as well as the app icon, if I want to make it a perfect recreation of the original Camera.

As far as technical implementation goes, the most interesting part for me was learning about Core ML on iOS. Apple distributes a MobileNet trained on ImageNet on their developer downloads page that’s extremely fast and able to identify a somewhat hilarious number of dog breeds. I run this model against the camera feed and check its outputs. If any output matches one of the known dog classifications, I enable the shutter button.
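The gating logic can be sketched like this. The label strings, keyword list, confidence threshold, and function name below are my assumptions for illustration, not the actual MobileNet output format or the app's code:

```javascript
// Sketch: decide whether the shutter should be enabled, given classifier
// output. (Hypothetical labels and threshold, for illustration only.)
const DOG_KEYWORDS = ['retriever', 'corgi', 'terrier', 'spaniel', 'poodle', 'dog'];

function shutterEnabled(classifications, minConfidence = 0.3) {
  // classifications: [{ label: 'Pembroke Welsh corgi', confidence: 0.92 }, …]
  return classifications.some(
    (c) =>
      c.confidence >= minConfidence &&
      DOG_KEYWORDS.some((k) => c.label.toLowerCase().includes(k))
  );
}
```

Running this check on every frame of the camera feed is what makes the button flip between the prohibition sign and the Corgi in real time.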

[Video Documentation and Gallery Coming Soon]

jamodei – Interactive Manufactory – Weird Shapes in Public – check in


Packing and Cracking – Getting Weird Shapes Out in Public

NC Gerrymandered Districts
NC’s Gerrymandered Districts ready for laser cutting into coasters (and other weird shape interactions)


I will be taking my pass for this project. Nonetheless, here is where my research for this project is currently at, and where I am interested in taking these ideas in the future.

I am quite busy with tech rehearsals for my video/projection design of Atlas of Depression  in the School of Drama (Which is open April 17-19).

Atlas of Depression tech still
Tech process photo! Using ISF shaders to create affective landscapes and manipulate live video feeds, too.


‘Packing and Cracking’

I am working on creating/writing/producing an interactive, map-based theater project called ‘Packing and Cracking’ that confronts and explores the harsh realities of gerrymandering in my home state of North Carolina. My collaborator, Rachel Karp, and I have described this project as:

“A multimedia mapmaking event, ‘Packing and Cracking’ explores redistricting–and the widespread manipulation of redistricting known as gerrymandering–in America today. ‘Packing and Cracking’ focuses on redistricting in North Carolina, whose maps have been so racially and partisanly manipulated that the state is no longer considered a democracy by some measures. Set on a theater-sized map of North Carolina, with the audience arranged across it to match the state’s population demographics and distribution, ‘Packing and Cracking’ uses cutting-edge redistricting software and North Carolina’s particular redistricting story to draw and redraw district lines around audience members in real time, demonstrating how easy and precise districting can be and how little the people affected are involved.”

Weird Shapes

The main idea behind this project is to put the weird, gerrymandered shapes that constitute these districts into everyday objects that people can interact with. The first impulse was to make these odd shapes visible so that discussion could happen around them and what they are. I was inspired by a project that does this with jewelry. At first I wanted to use Shapeways to mass-produce a cheaper, more distributable version of this project, or one where people could upload their own districts. After our discussion, just replicating that project was not interesting enough on its own, and I moved on to the idea of mixing failure with these weird shapes. I did, however, make and order a cheaper version of this necklace with an engraved hashtag, which arrives tomorrow.

NC-6 Necklace
“A diamond is forever, but a district lasts a decade.”

Currently, my interest is in creating a website where people can order a variety of household/everyday objects cut out in gerrymandered shapes. The hope is that the shapes of these objects will make using them result in failure, and hopefully draw attention to the immense complexity surrounding gerrymandering via humorous failure. I want to begin the process of having people place their own voter disenfranchisement, as a result of gerrymandering, into their own bodies through their performance with the failed objects. For example, I am hoping to cut the first image in this post out as a set of drinking-cup coasters in proportional scale to each other. This would make some of the very compact districts useless as coasters, and some of the large districts with odd holes in them silly to use, too. I am in the process of getting laser cutter training at the School of Drama, and will make this particular item available on my Glitch-based site via the Ponoko API. Other ideas for failed objects include:

  • Silicone oven mitts that make it hard not to burn yourself because of the gerrymandered shapes.
  • Weird shaped pillows that make it hard to sleep.
  • Disposable tissues that make it hard to blow your nose.
  • Custom-cut sticky-note pads that make it hard to take notes.
  • Tote bags that are not good for holding items.







For the manufactory assignment, I want to create software that will let people laser cut linkage toys that can be operated by a single motor or hand crank. I might use a pass for this and complete it for my final, since there is a lot to implement and my knowledge of this subject is very limited.

I have created an app using planck.js and p5.js that allows me to place pins/connectors and paste images over linkages. I still have a long way to go in terms of designing preset linkages, making a user interface, and exporting parts to be laser cut.
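As a sketch of the kinematics involved: a four-bar linkage (one crank driving a coupler and a rocker) can be solved with a circle intersection, which is the standard textbook construction. This is plain JavaScript for illustration, not the app's actual planck.js simulation, and the link lengths are made-up examples:

```javascript
// Position solve for a four-bar linkage. Ground pivots at (0,0) and
// (ground, 0); the crank pin A swings around the origin, and joint B is
// found where a circle of radius `coupler` around A meets a circle of
// radius `rocker` around the far ground pivot.
function solveFourBar({ crank, coupler, rocker, ground }, crankAngle) {
  const A = { x: crank * Math.cos(crankAngle), y: crank * Math.sin(crankAngle) };
  const dx = ground - A.x, dy = -A.y;
  const d = Math.hypot(dx, dy);
  // The linkage can't close if the circles don't intersect.
  if (d > coupler + rocker || d < Math.abs(coupler - rocker)) return null;
  const a = (d * d + coupler * coupler - rocker * rocker) / (2 * d);
  const h = Math.sqrt(coupler * coupler - a * a);
  // Joint B: one of the two intersections (the "open" configuration).
  return {
    crankPin: A,
    joint: {
      x: A.x + (a * dx - h * dy) / d,
      y: A.y + (a * dy + h * dx) / d,
    },
  };
}
```

Stepping `crankAngle` each frame traces the motion a single motor or hand crank would produce, which is also a cheap way to preview a preset linkage before handing it to the physics engine.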

Here’s a creature I made. He moves via a motor underneath his head.


Clearly I am struggling