Waterline encourages responsible water use by drawing a line showing where water would flow if you kept pouring it on the ground in front of you.

Waterline is an app that shows you where water would go if you kept pouring it on the ground in front of you. Its goal is to demonstrate all the areas that you affect with your daily use of water, to encourage habits that protect the natural environment and the public health of everywhere downstream of you. In Pittsburgh, for example, the app maps the flow of water downhill from your current location to the nearest of the city’s three rivers, and from there continues the line downstream. The app provides a compass that always points downstream, turning your device into a navigation tool for following the flow of water in the real world. By familiarizing you with the places your wastewater might pass through, Waterline encourages you to adjust your use of potential pollutants and your consumption of water to better protect them.

Waterline was inspired by the Water Walks events organized by Zachary Rapaport, which began this spring, and Ann Tarantino’s Watermark in Millvale this past year. Both projects aim to raise awareness of the role water plays in our lives, with a focus on critical issues that affect entire towns in the Pittsburgh area.

I developed Waterline in Unity using the Mapbox SDK. Most of the work went into loading GeoJSON data on the Pittsburgh watersheds and using that data to determine a line along which water would flow from your current location to the nearest river. Waterline determines this line only approximately, so right now it functions mostly as an abstract path to the river.
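The core lookup can be sketched as follows (a simplified Python illustration, not the app’s Unity C# code; the river coordinates are made up): parse a GeoJSON feature and find the river vertex nearest the user’s location.

```python
import json
import math

# Minimal GeoJSON Feature for a river represented as a LineString
# (coordinates are illustrative, not real survey data).
river_geojson = json.loads("""
{"type": "Feature",
 "geometry": {"type": "LineString",
              "coordinates": [[-80.05, 40.45], [-80.00, 40.44], [-79.95, 40.46]]}}
""")

def nearest_river_point(location, feature):
    """Pick the river vertex closest to location (lon, lat).

    Plain Euclidean distance in degrees: a rough approximation
    that is adequate at city scale."""
    pts = feature["geometry"]["coordinates"]
    return min(pts, key=lambda p: math.dist(location, p))

here = (-79.99, 40.43)
print(nearest_river_point(here, river_geojson))  # → [-80.0, 40.44]
```

A production version would project coordinates before measuring distance and snap to the nearest point on each river segment, not just its vertices.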

Waterline’s success, I believe, should be measured by its ability to meaningfully connect its users to the water they use in their daily lives and to encourage more responsibility for their use of water. Ideally, this app would be used as a tool for organizations like the Living Waters of Larimer to quickly educate Pittsburgh residents about their impact, through water, on the environment and public health, giving the organization a stronger base for its policy changes and sustainable urban development projects. I think this project is a successful first step on that path of quickly visualizing your impact on water, but it still needs to connect people more specifically to their watershed and answer the question of what you can do now to protect the people downstream of you.

Thank you, Zachary Rapaport and Abigail Owen, for teaching me about the importance of water and watersheds in my life.

From my sketchbook, visuals and notes identifying technical problems to solve:

From my sketchbook, preliminary visual ideas for the app:

From my sketchbook, notes on Ann Tarantino’s Watermark:


Interactive Art Final Project Proposal

I propose to complete the watershed visualization project that I began but did not critique for Project 2. My project’s conceit is to generate a line in AR that visualizes the flow of water out of whichever Pittsburgh watershed you are standing in. It might look something like this:

As I’ve gotten feedback about this project, I’ve come to some important insights:

  • Watersheds are interesting because they nest inside one another: a watershed is an area in which all the water flows to one point. The Allegheny and the Monongahela watersheds are part of the Ohio watershed, which is part of the Mississippi. What areas are too small to be watersheds? Where does this nesting end?
  • It’s important to connect this visualization to actionable information about protecting our watersheds. A raw visualization is interesting, but not enough to effect change.
  • Some aspects of the visualization can be connected to the viewer so that it feels more actionable. For this reason, I think it’s important that the visualization is grounded in the way that water flows from the point the viewer is standing on, as opposed to defining the borders of the watershed more generally. In this way, the viewer might understand their impact on their environment more easily.


My technical accomplishments so far:

  1. I have built an AR app and deployed it to an iPhone.
  2. I have loaded the Mapbox SDK in Unity, and I can represent my current location on a zoomable, pannable map.
  3. I have loaded the watershed JSON data from an API, and can represent its geometry on the Mapbox map.


What I still need to do:

  1. Draw a line of best fit, representing the flow of water, through the watershed containing the viewer’s current location, and add some randomness to it so that it’s a squiggly line, but still within the real bounds of the watershed.
  2. Draw a line that connects the current location of the viewer to this line of best fit through their watershed, and add some randomness to that.
  3. Create an AR object pointed in the direction of that line.
  4. Model that AR object so that it appears to be a line on the groundplane.
  5. Model that AR object so that it extends in the direction of the line, and curves based on the curves of the line, at approximately the appropriate distance from the viewer.
  6. Test so that the line can be followed and updates accordingly.
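The first two steps above amount to a least-squares fit plus jitter. A simplified sketch in Python (not the Unity implementation; the boundary coordinates are toy values, and clipping the jittered line back inside the watershed polygon is omitted):

```python
import random

def best_fit_line(points):
    """Least-squares line y = a*x + b through a set of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def squiggly_flow(points, samples=20, wobble=0.001, seed=0):
    """Sample the best-fit line across the watershed's x-extent and
    jitter each y to suggest a meandering stream."""
    rng = random.Random(seed)
    a, b = best_fit_line(points)
    lo = min(x for x, _ in points)
    hi = max(x for x, _ in points)
    step = (hi - lo) / (samples - 1)
    return [(lo + i * step,
             a * (lo + i * step) + b + rng.uniform(-wobble, wobble))
            for i in range(samples)]

boundary = [(0, 1), (1, 3), (2, 5), (3, 7)]  # toy, perfectly collinear vertices
print(best_fit_line(boundary))  # → (2.0, 1.0)
```

The wobble amplitude would need tuning per watershed so the squiggle stays visibly stream-like without escaping the real boundary.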


Proposed Project: Seeing Watermark

Thanks to Zachary Rapaport’s ongoing Water Walks project, I recently learned about the importance of watersheds to our own health and wellbeing and to that of the environment.

Think about a hill with water rolling down both sides. The land that sheds water down one side of the hill is one watershed; the land that sheds it down the other side is another. A watershed is, technically, an area in which all the water (streams, pipes, ponds…) flows to the same place. It might collect there, like when water forms a pond at the bottom of Schenley Park, or it might move on and become part of a larger watershed, like when the water from Squirrel Hill ultimately flows into the Allegheny.

As you can see below, Allegheny County has many large watersheds.

Visualizing the flow of water through a community is a powerful tool to raise awareness of water issues, because this flow is mostly invisible to us, except when we cross a bridge or turn on our faucet, and even then, we don’t really see a holistic picture of where our water comes from and where it goes.

The stakes are also pretty high: Millvale, on the north side of the Allegheny, sits at the bottom of its watershed, Girty’s Run. Because there’s been a lot of development upstream of Millvale, including many parking lots, water now has an easier time flowing down into the town, which floods frequently and disastrously. Helping citizens understand how development harms a watershed and the people living in it might change policy and plans around construction.

Ann Tarantino’s project, Watermark, in Millvale, was one such effort to visualize the flow of water. It’s an abstraction of Girty’s Run as a painted blue line that runs through the town center and down to the river.

It runs through stores and streets alike, as a reminder of the powerful presence of Girty’s Run, even under the pavement.

Okay, my project:

Watermark was removed last year. I want to bring it back in AR. And to expand on the idea of showing who’s downstream of whom, I want to develop a feature that lets you pick two or more points on a map of Pittsburgh, draws lines flowing down your respective watersheds, and shows where your flows of water meet:

This begins to let you visualize how the runoff from your house, for example, affects a point along Nine Mile Run.

Just like Tarantino’s work, my lines will only be abstractions of the literal flow of water. Here’s my plan, shown in the picture below: on the far right you can see how Tarantino’s blue line is very expressive, designed to playfully attract attention. I want to imitate that when I calculate a flow line for each watershed in Allegheny County, beginning with a line of best fit and adding some randomness, while keeping the line within the GPS-defined boundaries of the watershed:

Some technical problems foreseen:

To calculate the flow of water along the major rivers, I won’t use GPS data; I’ll just create a series of points by hand.

Also, the watershed API lists watersheds with multiple sections. Here’s a heat map of the watersheds with the most sections:

The Allegheny River basin, in this dataset, has something like 13 sections:

I think I can break these up, treat each as its own distinct area, and calculate a line of best fit for each.
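A minimal sketch of that splitting step, assuming the API returns GeoJSON-style geometry (the coordinates below are toy values):

```python
def split_sections(feature):
    """Treat each section of a MultiPolygon watershed as its own area.

    Returns a list of exterior rings, one per section."""
    geom = feature["geometry"]
    if geom["type"] == "MultiPolygon":
        # Each polygon's first ring is its exterior boundary.
        return [polygon[0] for polygon in geom["coordinates"]]
    return [geom["coordinates"][0]]  # a plain Polygon has one section

# Toy watershed with two disconnected sections.
ws = {"geometry": {"type": "MultiPolygon",
                   "coordinates": [[[[0, 0], [1, 0], [1, 1], [0, 0]]],
                                   [[[2, 2], [3, 2], [3, 3], [2, 2]]]]}}
print(len(split_sections(ws)))  # → 2
```

Each returned ring could then be fed to the best-fit step independently.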

Technical Update:

I’ve begun building this project in Unity, using the powerful Mapbox SDK.


Part 1: Skies Painted with Unnumbered Sparks

Source image

This is a city-block-sized installation by sculptor Janet Echelman and media artist Aaron Koblin. (Aaron’s been a part of many a crowdsourced art project, including the Johnny Cash project, the Sheep Market, and This Exquisite Forest.) Viewers complete the work by choosing animations from their smartphones to add to the graphics projected on Echelman’s suspended fiberwork piece.


This piece involves participants gathered in one place, unified under the impressive light show, but their contributions all travel independently to the projectors. If there’s potential for participants to organize so that they can color the piece as a collective, it doesn’t seem to be happening in the project’s documentation.

Considering telematic art, I’ve been looking for crowdsourced projects that amount to more than just an exquisite corpse (I want something more goal-oriented, like e-Nable’s crowdsourced 3D-printed hands or EteRNA’s modelling). I think Unnumbered Sparks has the potential to be an exquisite corpse, but the interactions it invites from audience members can be so independent of each other that it remains a more general interactive artwork.

What sets this project apart from some other audience-completed installations is that it compresses the results of the contributions into a single object visible to all the participants at once (vs. scrolling through The Sheep Market, or even zooming in to actually see details on r/place), and it limits contributions to the present (vs. the accumulation of drawings in Gutai’s Please Draw Freely, or of clay figures in Urs Fischer’s The Imperfectionist).

Part 2: Watermark


This is Ann Tarantino’s 2017 work in Millvale, right here in Pittsburgh. It visualizes the flow of water through the town along Girty’s Run, the Allegheny tributary and name of the watershed that provides drinking water to this part of Pittsburgh.

I learned about this visualization project on Saturday while attending the Water Walks Luncheon in East Liberty, a discussion on the watershed issues challenging Pittsburgh communities. Millvale frequently experiences severe flooding that’s been highly destructive to the town. That’s because it sits at the bottom of the watershed, and with urban development built over the streams and ponds at higher elevations, rainwater and snowmelt now wash over parking lots and roadways and right into Millvale homes. Sometimes six feet of it.


This kind of visualization project is so critical because where our water comes from and where it goes is largely invisible to us. But it matters, as much as a house or a business at risk of flooding does. Building an understanding among Pittsburgh communities of how water flows into Millvale could help drive policy that invests more in ways to channel water out of the town, and in the protection and even re-exposure of the Girty’s Run watershed.


Facetime Comics

This project seeks to update Microsoft’s Comic Chat for transcribing video chat (it isn’t FaceTime-specific).

I’ve developed it to generate a comic book that features cartoons based on myself and my girlfriend. (Although, as in the above example, it can just be me in the comic.)

The software lays out characters, sizes them, lays out speech bubbles, and poses the characters dynamically.

The basis for this project looks like this:

Microsoft deployed this feature for typed web chats in 1996. They also produced a paper documenting that project and how they accomplished some of its technical features, like using routing channels to lay out speech bubbles:

Which, I’ve been able to more or less implement myself:
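The routing-channel idea can be shown with a much-simplified greedy pass (this Python sketch is an illustration, not my actual implementation or the paper’s full algorithm): bubbles are placed left to right in speaking order, and a bubble that would overlap its predecessor is nudged right so reading order is preserved.

```python
def layout_bubbles(speaker_xs, widths, gap=5):
    """Place speech bubbles in a horizontal channel so that reading
    order (left to right) matches speaking order.

    speaker_xs: preferred center x for each bubble, in speaking order.
    widths: the width of each bubble.
    Returns the left edge of each placed bubble."""
    lefts = []
    cursor = 0  # right edge of the last placed bubble
    for cx, w in zip(speaker_xs, widths):
        # Prefer centering over the speaker, but never overlap the
        # previously placed bubble.
        left = max(cx - w / 2, cursor + (gap if lefts else 0))
        lefts.append(left)
        cursor = left + w
    return lefts

# Two speakers: the second prefers a position that would overlap the
# first bubble, so it is nudged right.
print(layout_bubbles([50, 60], [40, 40]))  # → [30.0, 75.0]
```

The real system also routes bubble tails around characters’ heads, which this sketch leaves out.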

The Microsoft project also tried to attain a semantic understanding of its conversations and respond accordingly. While both it and my program respond to speaker input based on a limited library of words (waving, for example, when someone says “Hi”), a deeper understanding of conversational meaning was something the Microsoft team could not accomplish and that I failed to realize as well. I do think it’s possible today, however, given the availability of broader natural-language libraries, to respond much more deeply to the spoken words in a conversation.

My character, responding to the spoken word, “love.”

My other inspiration for this work was Scott McCloud’s Understanding Comics, particularly his chapter on “Closure.”

We infer that the attack happened in the space in between the panels.

McCloud considers the space in between panels, and how we read that space and infer what’s happening, as a unique quality of comics. He calls this “closure.” McCloud says that it’s present in video too, but at 24fps, the space in between “panels” is so little that the inferences we make between them are completely unconscious. Because of this, I think comics are a fitting transcription for video chat, as opposed to straight recording, because by limiting the frames shown, they open up the memory of the conversation to new interpretations.

In evaluating my project, I wanted to implement a lot more. I wanted, for example, to base the emotions expressed by my characters on real-time face analysis or a deeper understanding of the meaning of the text. I also didn’t get to variable panel dimensions, and this is a small sign, for me, that I didn’t get past just recreating Microsoft’s project. It assumes an Internet-comic aesthetic right now, and I wish it had more refinement, and maybe more specificity to my style; there’s a little Bitmoji in the feeling of the character sheet above, and I don’t know how I feel about that.

From my sketchbook.


This project exchanges pixel information with your friend (in place of or in conjunction with video chat) and slowly draws your faces to the screen.

I’ve been thinking about different ways of intervening in the traditional video-chat setup, and how the screen, camera, and computer as mediators can improve and complicate our connection with one another.

Initially, I was developing a mask — the Photoshop kind — that hid your friend from you during a Skype call until you aligned noses with respect to your screen.

The setup for this exchange involved asking my brother to download CamTwist and Skype and configure them for the OpenProcessing sketch. Even after all that, the results were very laggy. In a way, it was a success, because I certainly had made our communication more complicated.

I moved on to this drawing project because I thought it was a bit more performative. In the current state, the program runs very slowly as it draws you and your friend’s faces to the screen. For this reason, it requires both you and your friend to hold still for a few minutes each time it renders your face. Again the communication is complicated, this time for the sake of a sort of drawing.

This program uses ml5js to find the position of your eyes and nose, and approximates other facial features from them. It prioritizes selecting pixel values from the areas around these features when drawing.
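That prioritization can be sketched with rejection sampling under a Gaussian falloff (a Python illustration rather than the p5/ml5 JavaScript; the feature coordinates are illustrative):

```python
import math
import random

def weighted_sample(width, height, features, n=100, sigma=20, seed=1):
    """Pick n pixel coordinates, preferring those near detected features.

    Each random candidate is kept with a probability that falls off as a
    Gaussian of its distance to the nearest feature."""
    rng = random.Random(seed)
    chosen = []
    while len(chosen) < n:
        x, y = rng.randrange(width), rng.randrange(height)
        d = min(math.dist((x, y), f) for f in features)
        if rng.random() < math.exp(-(d * d) / (2 * sigma * sigma)):
            chosen.append((x, y))
    return chosen

# Eyes and nose for a 200x200 frame (made-up positions).
pts = weighted_sample(200, 200, [(70, 80), (130, 80), (100, 120)], n=50)
```

In the drawing program, each chosen coordinate would be where the next pixel of the portrait gets painted, so detail accumulates around the eyes and nose first.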


Music Box Village

The village holds many musical structures, like the house that produces a choir-like sound when you pull on ropes attached to spinning electrical fans. These structures offer visitors the opportunity to explore the village’s sounds collaboratively, to see what rhythm or cacophony they can produce together. For professional performers, the village poses questions: how to adapt to any concert venue, how their skills apply to the space, what sounds they can make there, and how they are visible to the public.

While the Music Box Village offers an aesthetic of ruggedness, and the opportunity for communal, spontaneous gatherings of amateur musicians, I think it’s clear from the creators’ decision to host live events in it that it is ideally a site for professional performances. One aspect of this that I like is that, after seeing professional performers and watching them leave, audience members can revisit the site and try to recreate the same sounds on their own.


Different forms of play have been something that I respect a lot in game art, mainstream video games, and indie board games. Over the summer my cousin, who still plays Pokémon GO, was telling me about the collaborative catching of Pokémon that the game required; strangers in his small city had to gather and meet each other in order to accomplish the non-violent goal of catching a Pokémon. In this case, and in others, I was charmed by how Nintendo, with its traditionally combative and competitive themes, had successfully pushed for such a low-conflict, collaborative feature in one of its games.

In my own practice, I want to explore collaboration and non-violence as options for games and for interactions. I think there is enduring meaning to be found in working together, and that collaborations often hold novel and welcome surprises for their players.


Sumo Wrestlers – Wooden Mechanical Toy

There’s a museum of pre-war mechanical toys in Nara that I spent a lot of time in when maybe I should have been looking at some famous temples. The staff there lay out a few dozen very old toys on their tables and let you play with them for free. My favorite is a pair of sumo wrestlers, always locked in competition.

Here’s me, trying them out.

Many of these toys surprised me: they contained unanticipated motions, asked me to approach them differently than I remember approaching my childhood toys, and entertained me when I expected them to be dull.

(This is a wind-up bell-ringer without any gears or springs; it resets with sand, like an hourglass.)

I’ve been trying a lot recently to make simpler work, and for me that involves considering artifacts like these sumo wrestlers as bases for interactive projects. An object like this was probably the result of many iterations on the same concept; it’s unlikely there’s a single creator to whom the toy can be attributed. Rather, it takes iteration, and a lot of real, physical play-testing, to arrive at something that’s amusing even as it’s utterly simple.




Alphabet Blocks & Balloons

These alphabet blocks and balloons reveal when they’ve been arranged into a word. The user types on their keyboard to create blocks and balloons for each letter.

With this project, I wanted to develop a playful application that would appeal to children and that had the potential to be broadened to encompass several languages at once. This 2-D physics game, where blocks can be brought into the world with the keyboard and floated away on balloons, does that, to some extent. The list of English words that it pulls from could be broadened to include other languages that use the Roman alphabet, allowing the discovery not only of words you didn’t know were there, but of words you didn’t know at all. Unfortunately, a major limit on adding these other dictionaries right now is optimization: my code scans the English dictionary very frequently, which slows the app considerably.
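One common fix for that slowdown, sketched here, is to load the word list into a hash set once so each membership check is constant-time instead of a full scan (the tiny inline word set is a stand-in for a real dictionary file):

```python
# Build the set once at startup; lookups are then O(1) per check.
# In practice: WORDS = set(open("words.txt").read().split()),
# where "words.txt" is a stand-in path for any newline-delimited list.
WORDS = {"cat", "act", "dog"}

def arrangement_spells_word(blocks):
    """Check whether the letters on a row of blocks form a known word."""
    return "".join(blocks).lower() in WORDS

print(arrangement_spells_word(["C", "A", "T"]))  # → True
```

With set lookups this cheap, several language dictionaries could be checked every frame without the slowdown described above.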


Here are some pages from my sketchbook: