I can calculate a person’s heart rate from their finger placed over the phone’s rear camera with the flash turned on. But what should I do with this heart rate?
^ I find this completely… un-compelling. What’s a simple but elegant use of heart rate?
I’m wary of using it as a way to create visuals, because as often seems to be the case, the system is more creative than whatever traditional form of art it produces.
Kyle Machulis’s shared thumb-kiss app makes me wonder if meditating to the heartbeat of your partner might be an interesting experience. Get some of that Taptic Engine action going with the heart-rate program above. There’s something elegant about holding someone’s heart in the palm of your hand. OK, maybe it’s more cheesy than elegant, but there are plenty of angles to take this.
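The brightness-based trick behind the heart-rate idea is simple enough to sketch. This is a minimal illustration, not the actual phone code; the function names and the synthetic signal are my own assumptions:

```python
# A minimal sketch, not the real phone implementation: estimate
# heart rate (BPM) from the per-frame average brightness of the
# camera image. With a fingertip over the lens and the flash on,
# brightness pulses with blood flow. All names are illustrative.
import math

def estimate_bpm(brightness, fps):
    """Find local maxima in the brightness signal and convert the
    average peak-to-peak interval into beats per minute."""
    peaks = [i for i in range(1, len(brightness) - 1)
             if brightness[i - 1] < brightness[i] > brightness[i + 1]]
    if len(peaks) < 2:
        return None  # not enough beats to estimate a rate
    intervals = [(b - a) / fps for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# synthetic 25 fps signal pulsing once per second (60 BPM)
signal = [math.sin(2 * math.pi * i / 25) for i in range(90)]
print(estimate_bpm(signal, 25))  # 60.0
```

A real version would need smoothing and outlier rejection, since raw camera brightness is much noisier than a sine wave.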
Space Time Camera by Justin Bumstead is a small, real-time slit-scanning device that anyone can use to create their own space-time images. The system consists of a small wide-angle camera, some simple buttons and potentiometers, and a Raspberry Pi running Processing.
The system is battery operated and portable, allowing people to preview and edit their own images and videos in real time.
In 2012, Adam Magyar built a DIY camera from industrial slit-scanner parts and a medium-format camera lens, which he used to capture striking photos.
The part of the project that interests me most is the camera’s interface. I unfortunately haven’t managed to find any photos of it, but I think the kind of interface used to move the slit could allow for new interpretations of the ‘time space’ concept, made in real time by the photographer.
“Magyar wrote a program that would operate the scanner “with a special user interface that was optimized for the project,” which allowed him to preview the compositions before he began to scan the scenes.”
My game is called BUMPmap, and the works I am interested in are BumpNet and No Man’s Sky. I am interested in allowing players to rename countries, rapidly invading one another’s spaces. However, there can only be four players at a time, and every time a new player joins, the first in the queue gets kicked.
No Man’s Sky is a space exploration game that uses procedural generation to create 18 quintillion planets. Players start on a random planet and, ideally, make their way to the center of the galaxy; however, they can also choose to explore other parts of the galaxy. Nearly all of the galaxy is produced through procedural generation, using deterministic algorithms and number generators driven by a single seed number. Not much data is stored on the game’s servers: a player’s proximity to a location is what generates its appearance, based on the algorithm.

I think that many of the criticisms people had of the project (that it was boring, that the multiplayer was artificial since you couldn’t see one another, the lack of interesting generated content) have been addressed in the updates that have come out over the last year. The commitment to making a massive explorable world you can get lost in is what attracts me to this project. Initially, the game was panned by audiences. Many “gamers” felt they had been cheated somehow, that the game was a scam, which kept me from buying it. They wanted combat and looting in a game about exploration and discovery. The ability to name your planet, and the creatures on it, and for it to be permanently named that, is a bold and optimistic decision.

The game itself was influenced by the science fiction of the 1970s and ’80s, especially in its look and feel, as well as by Neal Stephenson’s belief that making a dystopian story had become mainstream. The studio behind it, Hello Games, and its founder Sean Murray, wanted to create an optimistic and uplifting experience.
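The single-seed, store-nothing approach described above can be illustrated with a toy example. This is not Hello Games’ algorithm; every property, name, and value here is made up for the sketch:

```python
# Illustrative sketch (not No Man's Sky's actual algorithm): derive
# a planet's properties deterministically from a global seed plus
# its coordinates. Nothing is stored -- revisiting the same
# coordinates always regenerates the same planet.
import hashlib

def planet_at(seed, x, y, z):
    # hash seed + coordinates into a stable stream of bytes
    digest = hashlib.sha256(f"{seed}:{x}:{y}:{z}".encode()).digest()
    biomes = ["barren", "lush", "frozen", "toxic", "scorched"]
    return {
        "biome": biomes[digest[0] % len(biomes)],
        "radius_km": 2000 + int.from_bytes(digest[1:3], "big") % 6000,
        "has_rings": digest[3] % 4 == 0,
    }

# the same inputs always yield the same planet
print(planet_at(42, 10, -3, 7) == planet_at(42, 10, -3, 7))  # True
```

Because the output depends only on the seed and the coordinates, every player’s copy of the game can regenerate an identical galaxy locally, which is what lets the servers store so little.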
BumpNet was made by Jonah Brucker-Cohen as a successor to BumpList, except this time it’s a public wireless network with a cap on how many clients it can have. The project involved modifying a consumer wireless router to put people in competition with one another for access to wifi. When a client joins, they see the name of the person and machine they bumped off the queue, similar to BumpList. There is a login screen where they enter their information and can then use the internet, until they eventually get kicked themselves. I like this project because clients need to be geographically close to one another, which also allows for non-violent, productive physical confrontations. I think that BumpList has a more interesting metric for bumping people than the time at which they joined, however.
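The bump mechanic shared by BumpList, BumpNet, and the BUMPmap idea above amounts to a fixed-capacity queue that evicts its oldest member when someone new arrives. A minimal sketch, with illustrative names that come from neither project:

```python
# Minimal sketch of the "bump" mechanic: a fixed-capacity queue
# where each new arrival evicts whoever has been connected longest.
from collections import deque

class BumpQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clients = deque()

    def join(self, name):
        """Add a client; return the name of whoever got bumped, if anyone."""
        bumped = None
        if len(self.clients) >= self.capacity:
            bumped = self.clients.popleft()  # oldest client loses access
        self.clients.append(name)
        return bumped

net = BumpQueue(capacity=4)
for player in ["ada", "ben", "cam", "dee"]:
    net.join(player)
print(net.join("eve"))  # prints ada: the first in line gets kicked
```

Swapping the eviction rule in `join` (say, bumping the least active client instead of the oldest) is where a more interesting metric, like the one I prefer in BumpList, would go.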
I’m not sure yet what my project will be about, so I have found projects related to two different directions I am considering.
1. Real World Third Person Perspective VR
This is not an art project as much as a technical experiment, but it’s similar to something I am considering involving streaming a camera feed to a VR headset. Here, the user wears a backpack with a stereoscopic camera mounted a few feet above them. They wear a VR headset that displays what the camera sees, and controls the camera with servos based on the headset’s position.
This gives them a videogame-like third-person perspective of themselves and the world. I find this idea very interesting, because I often see myself differently in videos and in my memories than I do in the moment, and because people behave differently when they can see themselves (as in mirrors) than when they can’t. I’d be curious how people feel about wearing this, and how it affects their interactions with others (besides the obvious effects of wearing a backpack and VR headset around…).
I’m not sure these people were interested in those questions (and from this video, I’m not sure they even got their idea fully working), but I love the concept and it’s a really cool technical experiment too.
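For what it’s worth, the headset-to-servo link such a rig needs could be as simple as mapping the headset’s orientation angles onto a pan/tilt mount. A hypothetical sketch; the servo range and centering are my assumptions, not details from the video:

```python
# Hypothetical sketch of the head-tracking link described above:
# map the headset's yaw and pitch (in degrees) to pan/tilt servo
# angles, clamped to an assumed 0-180 degree mechanical range.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def head_to_servo(yaw_deg, pitch_deg):
    """Convert headset orientation to pan/tilt servo angles (0-180)."""
    pan = clamp(90 + yaw_deg, 0, 180)    # 90 = camera facing forward
    tilt = clamp(90 + pitch_deg, 0, 180)
    return pan, tilt

print(head_to_servo(30, -100))  # (120, 0)
```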
2. King’s Cross Phone-In
For my second project, I wanted to go really old-school, or at least more analog than a lot of the web-based telematic artworks we’ve been looking at. “King’s Cross Phone-In” is a performance art piece (kind of a flash mob) orchestrated by Heath Bunting in 1994. He posted the numbers of the public phones in King’s Cross Station on his website with the following message:
At the specified time, the station was flooded with calls, creating an orchestra of ringing phones. Bunting spoke with several people on the phone, as did many people in the station; others didn’t know how to react. People were interacting with strangers from different sides of the world, and (maybe I’m imagining the whole thing rather romantically) it must have been a really beautiful experience, demonstrating the power technology had to connect people before smartphones with wifi took over the world. Bunting did a lot of early internet art projects that I really like, but I especially appreciate this one’s use of a different technology (landline phones) to bring the artwork out of the net and into real life.
Living Mushtari is a 3D-printed wearable accessory that serves as a microbial factory. The shape of the object is designed using generative algorithms based on biological growth and recursion. It is intended for the wearer to “be able to trigger the microbes to produce a particular substance – for example a scent, a color pigment, or fuel.” I recognized the pieces from the Nervous System 3D-printed fabric video we watched in class. The pieces are clearly not intended for everyday use since they are stiff and uncomfortable, which was the point made in the fabric video. Now I understand that they are this way because they need to hold liquids filled with living organisms. I wonder if the same technology could be applied to something smaller and more jewelry-like. I don’t really understand why they chose to make a strange-looking crotch cover.
Miguel Nobrega made a series of generative isometric drawings that I like, called possible, plausible, potential. They are printed using a plotter. I like how the drawings look like buildings, and by looking closely you can see how the plotter marked each line individually. Even though the drawings are modeled in 3D and printed in 2D, the plotter gives them an illustrated effect that I really enjoy.
This is an installation of the kind of living room usually found in sitcoms. People were allowed to sit and talk while a machine learning algorithm, trained on audio recordings of stand-up comedians, listened in on their conversations. The algorithm was trained to recognize the kinds of phrases comedians said before their audiences laughed, so if it heard something it considered to be “funny”, a laugh track would be played throughout the room in response.
The main concept of this project was a room that didn’t narrate a story to you, but instead narrated your story as you interacted with it. As one entered the closed, dark space, they could use a flashlight to hear a narrative that described their actions. Due to the eerie nature of the narrative and the small, dark setting, one feels a strong connection to the narrator and the events it describes.
HeadLight is a mixed-reality display system that consists of a head-mounted projector combined with spatial tracking and depth inference.
HeadLight uses a Vive tracker to track the head pose of the wearer. Combined with the depth information of the space in front of the viewer, this enables the wide-angle projector mounted on the viewer’s head to projection-map the room and the objects within it, with a working 3D illusion from the viewer’s point of view.
Speed of Light is a short film created using two pico projectors and a movie camera. The pico projectors are used to project animations of the film’s subjects onto the surfaces of a room. By moving these animations, Sharp and Jenkins turn the room into a movie set for these mixed-reality “characters.” This piece could have come out of simply experimenting with stock footage on a black background, or through extensive planning and creation of animations matched to the scenes.
GVBeestje is a sticker used to activate a game on the Amsterdam transportation system (the GVB). It consists of a set of stickers of a beest that invite the viewer to play: using the parallax between the fore- and background, they position the beest by moving their head so that it eats the people the bus passes.
In all of these projects, there is an exploration of non-traditional ways of activating a space, much in the way “AR” or “MR” does. GVBeestje is successful in operationalizing the latent parallax interaction riders experience daily into a game, using nothing more than a sticker. Speed of Light is an interesting concept; however, the film itself is relatively uninteresting. The idea of it arising out of play, or of creating a tool that lets one play in this way, is more exciting than the film itself. The HeadLight is an ugly and cumbersome device with sub-par tracking (even in the official documentation video). But its single-user nature is interesting, as is the notion of augmenting space in an egocentric way that other people can see, having their space be overridden.
Dion McGregor was a prolific sleep talker whose nighttime musings were so complex and bizarre that his friend made it his lifelong project to record his sleep talking. Decades later, the tapes resurfaced in the form of a mixtape!
“Dion never actually intended the world to hear his sleep mumbles, instead they were recorded by his songwriting partner Michael Barr, who was fascinated by them. And now the album has now been re-released, 50 years on, by Torpor Vigil Records, along with more recordings of Dion’s sleep stories.” – Dan Wilkinson @ vice
“When Milt Gabler of Decca was interviewed, he called the album ‘one of the biggest flops I ever put out!'” – Dan Wilkinson @ vice
The idea of remixing sleep talking is related to how I want to take sleep movements / noises and remix them into a conversation. I’m inspired by the idea of finding meaning from meaningless speech. However, I understand why every song/album created from this sleep talk was a flop… it’s only compelling to a point.
“The act of writing has always been an art. Now, it can also be an act of music. Each letter you type corresponds to a specific musical note putting a new spin on your composition. Personalize your writing by choosing between six unique moods. Each mood changes speed, filter and color to each letter’s musical note. Easily import text written in other writing applications with a copy and paste interface. When you’ve finished writing, share it and download an audio version with a click of a button! Whether it’s a message, essay, story, or poem explore a new way of writing. Make music while you write.”
It’s not super related, but it takes text and turns it into music. I’d like to do a little bit of the opposite: sound to text! I’m inspired by this simple interface and the clean, natural integration of the concept into it, although the amount of time one would spend interacting with this project is minimal in any situation, because the interaction is rather one-dimensional and quick (I tried the demo in the article).
It was really hard to find good art related to my idea… does anyone have any suggestions?
This is a city-block-sized installation by sculptor Janet Echelman and media artist Aaron Koblin. (Aaron’s been a part of many a crowdsourced art project, including the Johnny Cash project, the Sheep Market, and This Exquisite Forest.) Viewers complete the work by choosing animations from their smartphones to add to the graphics projected on Echelman’s suspended fiberwork piece.
This piece involves participants gathered in one place, unified under the impressive lightshow going on, but their contributions all travel independently to the projectors. If there’s the potential for participants to organize so that they can color the piece as a collective, it doesn’t seem to be happening in the project’s documentation.
Considering telematic art, I’ve been looking for crowdsourced projects that amount to more than just an exquisite corpse (I want something more goal-oriented, like e-Nable’s crowdsourced 3d-printed hands or EteRNA’s modelling). I think Unnumbered Sparks has the potential to be an exquisite corpse, but invites interactions from audience members that can be so independent from each other that it doesn’t need to be anything more than a more general interactive artwork.
What sets this project apart from some other audience-completed installations is that it compresses the results of the contributions into a single object visible to all the participants at once (vs. scrolling through The Sheep Market, or even zooming in to actually see details on r/place), and it limits contributions to the present (vs. the accumulation of drawings in Gutai’s Please Draw Freely, or of clay figures in Urs Fischer’s The Imperfectionist).
Part 2: Watermark
This is Ann Tarantino’s 2017 work in Millvale, right here in Pittsburgh. It visualizes the flow of water through the town along Girty’s Run, the Allegheny tributary and name of the watershed that provides drinking water to this part of Pittsburgh.
I learned about this visualization project on Saturday while attending the Water Walks Luncheon in East Liberty, a discussion of the watershed issues challenging Pittsburgh communities. Millvale frequently experiences severe flooding that has been highly destructive to the town. This is because it sits at the bottom of the watershed: with urban development built over the streams and ponds at higher elevations, rainwater and snowmelt now wash over parking lots and roadways and right into Millvale homes. Sometimes six feet of it.
This kind of visualization project is critical because where our water comes from and goes is largely invisible to us. But it matters, as much as a house or a business at risk of flooding does. Building an understanding among Pittsburgh communities of how water flows into Millvale could help drive policy that invests more in ways to channel water out of the town, and in the protection, and even re-exposure, of the Girty’s Run watershed.
This is the first part of my Looking Outwards. It’s an experimental stop-motion animation. Her other works are done with clay, and this one is likely clay as well, though I’m not certain. Overall, I’m not a huge fan of her work; it’s just not something I particularly enjoy watching, as someone who prefers narrative over abstraction and is too impatient to watch videos. But there are really interesting movements and moments in her pieces that I find inspiring for the telematic art project. This project of hers seems less polished overall than her other pieces, such as Ocean Blues #1, but it has more relevant movements.
2. Flower gifs by Anna Taberko
These are gifs made by Anna Taberko (https://www.instagram.com/anna.taberko/). I like these for reasons similar to the first piece I listed: juicy movements. There are good easing motions too. My critique, then, is that the colors used are okay but not great. These are made frame by frame using Photoshop and After Effects. Their entire body of work (on Instagram, at least) seems to be these zoetropic looping gifs, mostly of flowers, which is cool and good for branding, I guess, but a bit boring after a few.