I am taking a pass on this assignment, but I am working on an idea for the final: a system that boils down long passages of text into a series of simple illustrations.

It is somewhat similar to TransProse by Hannah Davis, except with pictures instead of music. There are projects that generate illustrations of objects, or translate a sentence into an image or scene. But this system combines a number of technologies to convert arbitrarily large amounts of text into arbitrarily large illustrative images.

Someone could make an illustration mural of their favorite story, or create a wallpaper from a corpus of Aesop’s fables. They could write a story and see it illustrated live. A newspaper could be converted into comic strips. A room could be completely covered with cryptic stories generated by a machine learning algorithm (something like discovering a secret room in a tomb with a language you vaguely understand).


gray – check-in

Much time was spent getting Apple Watch haptics functioning. There was some weirdness with OSC transmission, but it works over a phone hotspot.

Found a good way to communicate distance visually: a circle enclosing the finger, with a radius equal to the finger's distance.


The overall goal is to allow the participant to draw out paths corresponding to an ambient song's sound file, such that passing a hand through the now-floating, independently static paths replays that section of the sound file via a granulation script. This allows multiple limbs to act as maneuverable “ears”.
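The path-to-sound mapping could be sketched like this (my reading of the idea, not the actual implementation): each point on a drawn path remembers the time offset in the source file at which it was drawn, and when a hand passes near a point, a short "grain" of audio around that offset is replayed. The function name, grain length, and sample rate are illustrative assumptions.

```python
import numpy as np

def grain_for_hand(path_points, offsets, hand_pos, audio, sr=44100, grain_ms=80):
    """Return a short audio grain for the path point nearest the hand.

    path_points: Nx3 array of 3-D positions along the drawn path.
    offsets:     N sample offsets into `audio`, one per path point.
    hand_pos:    3-vector, current hand position.
    """
    # Find the path point closest to the hand.
    i = np.argmin(np.linalg.norm(path_points - hand_pos, axis=1))
    start = int(offsets[i])
    length = int(sr * grain_ms / 1000)   # e.g. an 80 ms grain
    return audio[start:start + length]
```

A granulation script would call this repeatedly as the hand moves, crossfading successive grains.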

geebo – tricorder check-in

I’m working on a camera that uses computer vision to constrain what can be captured to something very specific. I also thought this would be a good excuse to learn CoreML and take advantage of the Neural Engine inside the iPhone to run the models at good frame rates.

I first started by creating an application that can only take pictures of dogs, but I want to move the classification to a much more niche and specific topic. One direction I’m especially interested in taking this camera is determining whether a photo I’m about to take will do well on a specific subreddit or on andys.world. My next step is to scrape some of these social sites, compare images that are upvoted to the front page with those that are not, and see if I can build a camera app that will only take photos of things that would end up being upvoted.
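The upvote predictor could be prototyped along these lines (a minimal sketch, not the planned implementation): extract simple per-image features, label them by whether the post reached the front page, and fit a tiny logistic-regression classifier. The features and data here are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(features, labels, lr=0.1, steps=2000):
    """Fit logistic-regression weights by gradient descent."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - labels) / len(labels)
        w -= lr * grad
    return w

def predict_upvote(w, feature_vec):
    """True if the model thinks this image would be upvoted."""
    x = np.append(feature_vec, 1.0)
    return bool(sigmoid(x @ w) > 0.5)

# Toy data: pretend bright, colorful photos were the ones upvoted.
# Columns are hypothetical features, e.g. (mean brightness, colorfulness).
X = np.array([[0.9, 0.8], [0.8, 0.7], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1, 1, 0, 0])
w = train(X, y)
```

In practice the features would come from a CNN like MobileNet rather than being hand-picked, but the decision layer looks the same.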

Here you can see a React Native app I’ve prototyped that runs MobileNet via CoreML.

ngdon – telematic check-in


I’m making a multiplayer online game where everything is made of emojis.

I think there are enough emojis in Unicode now that I can craft an entire world out of them.

See in-progress videos for details:

Another interesting aspect is that the game will look very different on different operating systems. For example, macOS is on the left and Windows on the right:

jackalope – telematic check-in

I’m making personalized chatbox things for a few of my friends. Mainly I was thinking about how you can set a Messenger chat emoji, and people kind of spam that and use it just to show they acknowledge something. So I’ll make Messenger chat emojis, but they’ll be buttons with a very specific message on each.

I wonder if this would be better as predictive text suggestions instead of always the same buttons on the screen? Also, now that I’m making these, I realize I feel kind of embarrassed to have people read some of them. Progress is slow because I decided to finally learn some CSS and HTML instead of just implementing it myself like a dirty hack, as I usually would.


I can calculate a person’s heart rate from their finger placed over the phone’s rear camera with the flash turned on. So can I make a program for people to send, receive, and exchange heartbeats?
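The core of the heart-rate trick is photoplethysmography: each heartbeat changes how much of the flash's light passes through the fingertip, so averaging each frame's brightness gives a pulsing signal. A minimal sketch of the estimation step (frame rate and threshold are illustrative assumptions):

```python
import numpy as np

def estimate_bpm(brightness, fps=30.0):
    """Estimate heart rate from a 1-D per-frame brightness signal."""
    signal = brightness - np.mean(brightness)   # remove the DC offset
    # A frame is a peak if it exceeds both neighbors and a simple threshold.
    thresh = 0.5 * np.max(signal)
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]
             and signal[i] > thresh]
    duration_min = len(brightness) / fps / 60.0
    return len(peaks) / duration_min

# Synthetic 10-second signal pulsing at 72 beats per minute:
t = np.arange(0, 10, 1 / 30.0)
brightness = 100 + 5 * np.sin(2 * np.pi * (72 / 60.0) * t)
```

Real camera data is noisier, so a bandpass filter around plausible heart rates (roughly 0.7 to 3 Hz) would come before the peak detection.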

There’s at least plenty of wordplay to twist around with this: “Holding your heart in the palm of my hand,” vulnerability, dark humor (Golan imagined someone on their deathbed; I can also see it as something you use while having a “heart to heart,” or as an alternative to holding hands for a remote couple). And then I wonder… would it be strangely intimate to hold a stranger’s heart?

Kyle Machulis shared the thumb-kiss app, and it was delightfully simple and elegant. If I could get some of that Taptic Engine action here with the heart program above, that’d be fun on the technical side too.

WIP demos

Heart monitor from camera feed from Marisa Lu on Vimeo.

Started setting this up as a native app talking to a node.js server.

I wanted to stay within a single view, with no other pages or hidden hamburger menus, but how do I make sure it stays communicative?

I occasionally gave my work-in-progress apps to unsuspecting classmates to see if they could figure out either the intent of the app or how to use it. Most of the compositional UI changes grew organically from that, as opposed to from a well-defined and designed spec sheet, because I was wary of what I’d be able to achieve my first time in Swift.

One of the bigger UX changes came with a larger compromise in battery level, but haha, I think it’s worth it: when your finger is on the camera, the system automatically knows and begins to read your heartbeat, with the flashlight toggled on automatically.
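The finger-on-lens detection could work something like this (my assumption of the approach, not the app's actual code): with the flash on, a covered lens sees a dim, red-dominated, nearly flat image, while an open lens sees a varied scene. The thresholds are guesses.

```python
import numpy as np

def finger_on_lens(frame, red_ratio=1.5, max_std=20.0):
    """frame: HxWx3 uint8 RGB array. Returns True if the lens looks covered."""
    r = frame[..., 0].mean()
    g = frame[..., 1].mean()
    b = frame[..., 2].mean()
    # Flash light filtered through a fingertip is strongly red-shifted.
    red_dominant = r > red_ratio * max(g, b, 1.0)
    # A covered frame is nearly uniform within each channel.
    uniform = frame.std(axis=(0, 1)).max() < max_std
    return bool(red_dominant and uniform)

# Covered: a mostly red, uniform frame. Uncovered: a noisy scene.
covered = np.full((10, 10, 3), (180, 40, 30), dtype=np.uint8)
scene = (np.random.default_rng(0)
         .integers(0, 255, (10, 10, 3))
         .astype(np.uint8))
```

When the check flips to True, the app can toggle the flashlight on and start the heart-rate reader; when it flips back, stop both to save battery.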

Finger on and off from Marisa Lu on Vimeo.


Proposed Project: Seeing Watermark

Thanks to Zachary Rappaport’s ongoing Water Walks project, I recently learned about the importance of watersheds to our own health and wellbeing and to that of the environment.

Think about a hill with water rolling down both sides. The area drained by one side of the hill is one watershed; the area drained by the other side is another. A watershed is, technically, an area in which all the water (streams, pipes, ponds…) flows to the same place. It might collect there, like when water forms a pond at the bottom of Schenley Park, or it might move on and become part of a larger watershed, like when the water from Squirrel Hill ultimately flows into the Allegheny.

As you can see below, Allegheny County has many large watersheds.

Visualizing the flow of water through a community is a powerful tool to raise awareness of water issues, because this flow is mostly invisible to us, except when we cross a bridge or turn on our faucet, and even then, we don’t really see a holistic picture of where our water comes from and where it goes.

The stakes are also pretty high: Millvale, on the north side of the Allegheny, is at the bottom of its watershed, named Girty’s Run. Because there’s been a lot of development upstream from Millvale, including a lot of parking lots, water has had an easier time flowing down into the town. This has been disastrous for them, since the town frequently floods. Helping citizens understand how development causes harm to a watershed and the people living in it might change policy and plans around construction.

Ann Tarantino’s project, Watermark, in Millvale, was one such effort to visualize the flow of water. It’s an abstraction of Girty’s Run as a painted blue line that runs through the town center and down to the river.

It runs through stores and streets alike, as a reminder of the powerful presence of Girty’s Run, even under the pavement.

Okay, my project:

Watermark was removed last year. I want to bring it back in AR. And to expand on the idea of showing who’s downstream of whom, I want to develop a feature that lets you pick two or more points on a map of Pittsburgh, draws lines flowing down your respective watersheds, and shows where your flows of water meet:

This begins to let you visualize how the runoff from your house, for example, affects a point along Nine-Mile-Run.

Just like Tarantino’s work, my lines will only be abstractions of the literal flow of water. Here’s my plan in this picture: On the far right you can see how Tarantino’s blue line is very expressive, designed to playfully attract attention. I want to imitate that when I calculate a flow line for each watershed in Allegheny County, beginning with a line of best fit, and adding some randomness, while keeping the line within the GPS-defined boundaries of the watershed:
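The flow-line plan above could be sketched like this (a simplification under stated assumptions: real watershed boundaries are polygons, and here a bounding-box clamp stands in for a proper point-in-polygon test): fit a least-squares line through a watershed's boundary points, then perturb it with smooth randomness while keeping every point inside the bounds.

```python
import numpy as np

def flow_line(boundary, n=50, wiggle=0.05, seed=0):
    """boundary: Nx2 array of (x, y) boundary points. Returns an n-point line."""
    x, y = boundary[:, 0], boundary[:, 1]
    slope, intercept = np.polyfit(x, y, 1)          # line of best fit
    xs = np.linspace(x.min(), x.max(), n)
    ys = slope * xs + intercept
    rng = np.random.default_rng(seed)
    ys = ys + np.cumsum(rng.normal(0, wiggle, n))   # smooth random wander
    ys = np.clip(ys, y.min(), y.max())              # stay inside the bounds
    return np.column_stack([xs, ys])

# Toy watershed sloping down and to the right:
pts = np.array([[0, 4], [1, 3.5], [2, 3], [3, 2], [4, 1.5], [5, 1]])
line = flow_line(pts)
```

The cumulative random walk gives the line an expressive wander rather than per-point jitter, which is closer in spirit to Tarantino's playful stroke.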

Some technical problems foreseen:

To calculate the flow of water along the major rivers, I won’t use GPS data; I’ll just create a series of points by hand.

Also, the watershed API lists watersheds with multiple sections. Here’s a heat map of the watersheds with the most sections:

The Allegheny River basin, in this dataset, has something like 13 sections:

I think I can break these up and treat them each like their own distinct area and calculate a line of best fit for each of them.

Technical Update:

I’ve begun building this project in Unity, using the powerful Mapbox SDK.

aahdee – checkin

I’m working on a website, hosted on GitHub or AWS, that allows people to compose paragraphs for a gothic horror book. Visitors are first prompted with a few sentences generated by an LSTM trained on 19th-century gothic horror novels, and then they can continue writing as they wish. I’m also considering letting the LSTM try to predict their words, which they can follow if they wish.

I’m having some issues with preloading the LSTM model so that people don’t have to wait a long time to use the website. Otherwise it seems to be okay. This is one of my first times building a website, and it doesn’t seem to be too difficult.
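One common pattern for the preloading problem (a sketch of the general technique, not the site's actual code) is to start loading the model in the background as soon as the process starts, and block only if a request arrives before loading finishes:

```python
import threading
import time

class LazyModel:
    """Kick off model loading in the background; block only on first use."""

    def __init__(self, loader):
        self._model = None
        self._thread = threading.Thread(target=self._load, args=(loader,))
        self._thread.start()            # begin loading immediately

    def _load(self, loader):
        self._model = loader()

    def get(self):
        self._thread.join()             # waits only if still loading
        return self._model

def slow_loader():
    time.sleep(0.1)                     # stand-in for reading LSTM weights
    return "lstm-weights"

model = LazyModel(slow_loader)
```

On a static site the browser-side equivalent is to fetch the model weights asynchronously while the visitor reads the opening prompt.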

sheep – checkin

I’m working on a chat room system where saying a certain letter of the alphabet will deduct 200 points from you. When users connect, they are assigned one of two letters: either “i” or “e.” A leaderboard will display who has the most points. You have a secret phrase you can only tell to people who have the same letter as you; no one else is allowed to hear it. If you tell your secret to a person who has the same letter as you, you both get 100 points. If you tell it to a person who has a different letter, they get 300 points and you lose 100 points. Reaching 0 points resets you.
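The scoring rules above could be sketched as server-side logic like this (a simplification; the starting score and the assumption that your own assigned letter is the forbidden one are mine, not stated in the post):

```python
FORBIDDEN_PENALTY = 200
SAME_LETTER_REWARD = 100
WRONG_LETTER_REWARD = 300
WRONG_LETTER_PENALTY = 100
START_POINTS = 1000   # assumed starting score

class Player:
    def __init__(self, letter):
        self.letter = letter            # "i" or "e"
        self.points = START_POINTS

    def _check_reset(self):
        if self.points <= 0:            # reaching 0 points resets you
            self.points = START_POINTS

def send_message(player, text):
    """Deduct points if the player's forbidden letter appears in the message."""
    if player.letter in text.lower():
        player.points -= FORBIDDEN_PENALTY
        player._check_reset()

def tell_secret(teller, listener):
    """Apply the secret-phrase rules between two players."""
    if teller.letter == listener.letter:
        teller.points += SAME_LETTER_REWARD
        listener.points += SAME_LETTER_REWARD
    else:
        listener.points += WRONG_LETTER_REWARD
        teller.points -= WRONG_LETTER_PENALTY
        teller._check_reset()
```

Keeping all of this server-side, as described below, prevents clients from forging their own scores.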

Right now, letter assignments, checking whether the letter is in your message, points, and usernames are all handled server-side, and your health is constantly being console.logged.