@good_egg_bot is a twitter bot that makes eggs.

The good_egg_bot is a benevolent twitter bot that will try its best to make the egg(s) you desire. Twitter users can tweet requests for certain colors, types, sizes, and amounts of egg to get an image of eggs in response. I made this project because I wanted to have a free and easily accessible way for people to get cute and customizable pictures of eggs. I was also excited by the prospect of having thousands of unique egg pictures at the end of it all.

I was inspired by numerous bots that I have seen on twitter that make generative artwork, such as the moth bot by Everest Pipkin and Loren Schmidt, and the Trainer Card Generator by xandjiji. I chose eggs as my subject matter because they are simple to model through code, and I like the way that they look.

I used nlp-compromise to parse the request text, three.js to create the egg images, and the twitter node package to respond to tweets. I used headless-gl to render the images without a browser. Figuring out how to render them was really tricky, and I ended up having to revert to older versions of some packages to get it to work. The program is hosted on AWS so the egg bot will always be awake. I found Dan Shiffman’s Twitter tutorials really helpful, though some of them are outdated since Twitter has changed their API.
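The actual bot parses requests with nlp-compromise; as a simplified stand-in, the parsing step might look something like the word-list matcher below. All of the vocabulary lists and defaults here are illustrative, not the bot's real ones.

```javascript
// Simplified sketch of the request-parsing step. The real bot uses
// nlp-compromise; this stand-in just scans hypothetical word lists.
const COLORS = ["red", "orange", "yellow", "green", "blue", "purple", "pink", "brown", "white"];
const TYPES = ["fried", "scrambled", "boiled", "raw"];
const NUMBER_WORDS = { one: 1, two: 2, three: 3, four: 4, five: 5 };

function parseEggRequest(text) {
  const words = text.toLowerCase().replace(/[^a-z0-9 ]/g, " ").split(/\s+/);
  const request = { color: null, type: null, count: 1 }; // default: one plain egg
  for (const w of words) {
    if (COLORS.includes(w)) request.color = w;
    if (TYPES.includes(w)) request.type = w;
    if (NUMBER_WORDS[w] !== undefined) request.count = NUMBER_WORDS[w];
    if (/^\d+$/.test(w)) request.count = parseInt(w, 10);
  }
  return request;
}
```

For example, `parseEggRequest("three blue fried eggs please")` yields `{ color: "blue", type: "fried", count: 3 }`, which the renderer could then turn into an image.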

This project has a lot of room for more features. Many people asked for poached eggs, which I’m not really sure how to make, but maybe I’ll figure it out. Another suggestion was animating the eggs, which I will probably try in the future. Since it took so much effort to figure out how to render a 3D image with headless-gl, I think I should take advantage of the 3D-ness for animations. I like how, since the bot is on Twitter, I have a record of how people are interacting with it, so I can see what features people want. In my mind, this project will always be unfinished because there are so many ways for people to ask for eggs.


Here is a video tutorial on how to use the good_egg_bot:


Hi! For my final project, I want to complete my manufactory assignment. Here is my midpoint writeup.

So far, I have created software with some basic linkages and the ability to paste drawings on top of them. I have yet to add functionality for exporting these linkages as a vector file that can be laser cut. I also want to connect to the Ponoko API, so those without access to a laser cutter will be able to order and assemble their own linkage toys. My project also needs an interface that will allow users to assemble the custom linkage toys. I am planning on creating a “dress-up game” type interface, where users can choose from a preset collection of body parts that will snap onto the linkages.



For the manufactory assignment, I want to create software that will let people laser cut linkage toys that can be operated by a single motor/hand crank. I might use a pass for this and complete it as my final, since I think there is a lot to implement and my knowledge of this subject is very limited.

I have created an app using planck.js and p5.js that allows me to place pins/connectors and paste images over linkages. I still have a long way to go in terms of designing preset linkages, making a user interface, and exporting parts to be laser cut.
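The app simulates the linkages with planck.js rather than solving them analytically, but the core geometry a linkage editor has to reason about can be sketched in plain JavaScript: given the crank angle of a four-bar linkage, the free joint sits at the intersection of two circles (one swung from the crank pin, one from the fixed rocker pivot). This is an illustrative sketch, not code from the app.

```javascript
// Locate the coupler/rocker joint of a four-bar linkage by intersecting
// two circles. Returns null when the linkage cannot assemble at this angle.
function circleIntersection(p, r1, q, r2) {
  const dx = q.x - p.x, dy = q.y - p.y;
  const d = Math.hypot(dx, dy);
  if (d > r1 + r2 || d < Math.abs(r1 - r2) || d === 0) return null;
  const a = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
  const h = Math.sqrt(Math.max(0, r1 * r1 - a * a));
  const mx = p.x + (a * dx) / d, my = p.y + (a * dy) / d;
  // Return one of the two intersection points.
  return { x: mx + (h * dy) / d, y: my - (h * dx) / d };
}

// Four-bar linkage: ground pivots at (0,0) and (ground,0), with the given
// crank, coupler, and rocker link lengths. `theta` is the crank angle.
function fourBar(theta, crank, coupler, rocker, ground) {
  const A = { x: crank * Math.cos(theta), y: crank * Math.sin(theta) };
  const C = { x: ground, y: 0 };
  return { crankPin: A, rockerPin: circleIntersection(A, coupler, C, rocker) };
}
```

Driving `theta` from a motor or hand crank and re-solving each frame traces out the toy's motion, which is essentially what the physics engine does under the hood with joints and constraints.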

Here’s a creature I made. He moves via a motor underneath his head.


Clearly I am struggling




Living Mushtari is a 3D-printed wearable accessory that serves as a microbial factory. The shape of the object is designed using generative algorithms based on biological growth and recursion. It is intended for the wearer to “be able to trigger the microbes to produce a particular substance – for example a scent, a color pigment, or fuel.” I recognized the pieces from the Nervous System 3D-printed fabric video we watched in class. The pieces are clearly not intended for everyday use since they are stiff and uncomfortable, which was the point made in the fabric video. Now I understand that they are this way because they need to hold liquids filled with living organisms. I wonder if the same technology could be applied to something smaller and more jewelry-like. I don’t really understand why they chose to make a strange-looking crotch cover.


Miguel Nobrega made a series of generative isometric drawings that I like, called possible, plausible, potential. They are printed using a plotter. I like how the drawings look like buildings, and by looking closely you can see how the plotter marked each line individually. Even though the drawings are modeled in 3D and printed in 2D, the plotter gives them an illustrated effect that I really enjoy.


For this assignment, I made a squishy character that fountains paint out of its body. The world is inhabited by ink creatures, which the character can consume to change the color of lines it produces. 

Consuming several of the same-colored creature in a row will increase line thickness. By jumping, the character can fill an enclosed area.
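The color and thickness rules above are simple enough to sketch outside of Unity. This is an illustrative JavaScript version of the streak logic, not the game's actual C# code; the base thickness and growth step are made-up values.

```javascript
// Sketch of the ink rules: eating a creature sets the line color, and
// repeats of the same color in a row thicken the line. A different color
// resets the streak.
function makeInkState() {
  return { color: null, streak: 0, thickness: 1 };
}

function consume(state, creatureColor) {
  if (creatureColor === state.color) {
    state.streak += 1;
  } else {
    state.color = creatureColor;
    state.streak = 1;
  }
  state.thickness = 1 + (state.streak - 1); // grows by 1 per repeat
  return state;
}
```

So eating three cyan creatures in a row draws a thickness-3 cyan line, and eating a magenta one afterwards drops back to a thin magenta line.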

I went with a CMYK color scheme because I liked the idea of ink guy as a sentient printer. I used PixelRender to create a pixel art effect, because my method for drawing lines looked pixelated and I wanted the entire program to match that. 

I don’t think my project is technically interesting, but I definitely learned a lot while making it. I have been pretty intimidated by Unity in the past so it was nice to experiment with the software. My main struggle with the assignment was coming up with an idea. 

One of the games that I was inspired by is Peach Blood, where you also run around eating things that are smaller than you. I was also told that my program was similar to Splatoon, which I’ve never played but it looks cool.

(music by Project Noot)

I drew a face for the character, but you can’t actually see it while drawing. Whoops!


This is the best drawing I made with my program; it is an intellectual cat.


Some early sketches.



The first version of my program took my face and made it smooth. I thought this was pretty cool, and my friend Eileen Lee told me it reminded her of noppera bo monsters (faceless ghosts). However, I realized that I tailored the program very specifically to my own face. It wasn’t smooth enough when I tested it on people with a visible nose bridge, but by trying to remove the nose bridge I ended up erasing my glasses. It was also hard to erase things like cheekbones, since it might end up blurring into things outside of the face and produce the wrong colors. Pretty much, it worked well for me because my face is flat to begin with. :’)

I made a second version using a similar idea, but instead of choosing areas to smooth based on facial landmarks, the user can select which areas need to be “corrected” using colored markings. I was heavily inspired by the Peter Campus performance where he applies green paint to his face and uses chroma key to show another image of his face underneath. I think this version is better for a performance because you can slowly increase the amount of smoothing on your face.

I made this project because I was thinking about having insecurity about your face, and also our weird obsession with “nice” or “smooth” skin. When I see my relatives, they usually greet me by commenting on my face or skin. Things like this often make me wish I could hide my face completely.

All of the blending was achieved with OpenCV seamless clone, and the face tracking in the first version was done with dlib. Special thanks to that school cancellation that gave me 5 more days to work on this!
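The marker-selection step in the second version is essentially chroma keying: pick out the pixels close to the marker color and smooth only there. Below is an illustrative JavaScript sketch of that masking step (the actual project does the blending with OpenCV's seamless clone); the tolerance value is a made-up threshold.

```javascript
// Build a binary mask of pixels near a marker color (chroma-key style).
// `pixels` is RGBA data laid out like a canvas ImageData buffer; the
// smoothing/blending pass would then be applied only where mask == 255.
function markerMask(pixels, width, height, markerRGB, tolerance) {
  const mask = new Uint8Array(width * height);
  for (let i = 0; i < width * height; i++) {
    const r = pixels[i * 4], g = pixels[i * 4 + 1], b = pixels[i * 4 + 2];
    const d = Math.hypot(r - markerRGB[0], g - markerRGB[1], b - markerRGB[2]);
    mask[i] = d <= tolerance ? 255 : 0;
  }
  return mask;
}
```

Loosening the tolerance (or feathering the mask) is one way to get the gradual increase in smoothing that makes this version work as a performance.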


The Sandbox of Life by Sensebellum is an installation that uses sand, computer vision, and projection mapping to illuminate a sandbox with different imagery depending on the height of the sand. Users can sculpt the sand using their hands or brushes. The sandbox projects in different modes, including earth terrain, lasers, and even Game of Life cells that emerge from sand boundaries. I am interested in this project because it requires touch input and produces a visual output, but playing with sand is much more sensory than, for example, touching a screen. There is a fluidity to the sand that creates very interesting projections. I also like how the project includes several different modes, since it repurposes the technology to create a variety of experiences. In general, I think projects involving projection mapping are pretty cool! I enjoy the combination of digital and physical that makes art feel more involved.


Proposition 2: Critical Play Can Mean Toying with the Notion of Goals, Making Games with Problematic, Impossible, or Unusual Endings.

I am most interested in this proposition because I think it is really important to examine and reconsider goals. I’ve been thinking about goals a lot recently, and I’ve realized that achieving the goals you set in your mind won’t always have the expected result. A lot of conventional games have clear-cut goals and win/lose situations, but they rarely examine grayer outcomes. By having an unusual ending, or possibly no ending at all, interactive works can encourage us to think about the world and our goals in a more complex and truthful way. The instantaneous output and replayability granted by games and interactive technology make it easier for us to observe these situations and ponder the consequences of our actions.


Through The Dark by Hilltop Hoods is an interactive music video where the viewer can navigate through a 3D space by scrolling up and down, or by using the accelerometer on a phone. I admire this project because it has nice transitions and a good story, and the interaction is simple yet satisfying. Being able to interact with the music video adds another layer to the story by creating two worlds: one light and one dark. I also like how the project is available on the web, since that makes it much more accessible.

The project was created in collaboration with musician Dan Smith and Google Play Music. I found it by looking through the three.js featured projects. The project description says that “new tools were developed to bridge traditional animation methods and WebGL,” but offers nothing more detailed than that. I think experimentation with interactivity in music videos is really interesting, and I’ve seen a couple of artists use things like 360 recording to make their work more interactive.



For my 2D physics interaction, I made a squishy frog head that drops through platforms controlled by the mouse. A collision with each platform causes the frog to play a different note. It was created with Processing, Daniel Shiffman’s Box2D-for-Processing, and the SoundCipher library. 

I struggled to come up with an idea for this assignment because I wanted to make good use of the physics library, but wasn’t sure how to make something interesting and fun to interact with. At first the frog was just a rigid body, but I took inspiration from this video and made a circle out of distance joints instead. I think the squishiness really improved the assignment, and I’m glad that it allowed me to play with the physics a bit more.
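One way to make each platform play a different note is to assign pitches from a pentatonic scale, so that any order of collisions still sounds pleasant. The sketch below is an illustrative JavaScript version of that mapping idea, not the actual Processing/SoundCipher code; the scale and base note are assumptions.

```javascript
// Map a platform index to a MIDI pitch on a major pentatonic scale.
// Indices past the scale length wrap up an octave, so more platforms
// just extend the scale upward.
const PENTATONIC = [0, 2, 4, 7, 9]; // major pentatonic intervals in semitones

function platformToMidi(platformIndex, baseMidi = 60) { // 60 = middle C
  const octave = Math.floor(platformIndex / PENTATONIC.length);
  const step = PENTATONIC[platformIndex % PENTATONIC.length];
  return baseMidi + octave * 12 + step;
}
```

On a collision callback, the platform's index would be looked up and the resulting pitch handed to the sound library (SoundCipher, in the Processing version).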