conye – Final Proposal

The Concept: Travels from the World Wide Web

I plan on completing my manufactory project, which is a Chrome extension that allows you to generate physical, mailable postcards from your internet adventures. “We live on the internet, so why not share your travels?”

Early prototypes / ideas:

To do:

  • add in direct mailers (see the sketch after this list)
  • add in PayPal so I don’t go b r o k e
  • email receipts for transactions
  • address book
  • address verification
  • post to Twitter / have a global homepage
  • impose NSFW filter
  • make a better documentation video
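For the direct-mailer and address items above, here is a rough sketch of what a server-side order to a print-and-mail service might look like. The endpoint, JSON fields, and PRINT_API_KEY variable are hypothetical placeholders rather than any particular provider’s API; the point is just the shape of the request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch of ordering a physical postcard from a print-and-mail API.
// The endpoint, JSON fields, and auth header are placeholders, not a real provider's API.
public class PostcardOrder {
    public static void main(String[] args) throws Exception {
        String body = "{"
                + "\"front_image_url\": \"https://example.com/screenshot.png\","
                + "\"message\": \"Greetings from the World Wide Web!\","
                + "\"to\": {\"name\": \"A. Friend\", \"line1\": \"123 Main St\","
                + "\"city\": \"Pittsburgh\", \"zip\": \"15213\"}"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example-mailer.com/v1/postcards"))  // placeholder endpoint
                .header("Authorization", "Bearer " + System.getenv("PRINT_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());  // basis for the email receipt
    }
}
```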

jaqaur – Project 4 Feedback

The feedback I received on Rebus Chat is very interesting, especially because I will be continuing to work on this project for the final exhibition. A few themes I heard a lot were:

  1. It turns communication into a game – This is something I am glad people said, because it was part of my goal. Rebus Chat is not for making communication of big ideas any easier, but for giving people access to fun puzzles to solve as part of their regular communication. There were mixed opinions about just how much of an explicit “game” this app should be, and I personally don’t think I want to go full-out with points, stats or other game-y elements like that. But I do want to encourage people to craft interesting puzzles!
  2. It is related to emojis and/or hieroglyphics – Both of these things were mentioned by people in their McLuhan Tetrad responses. I find the connection to emojis particularly interesting, because in some ways they are very alike (both are images sent as messages), but they are also fundamentally different; emojis generally represent the thing they depict (be that happiness, money, pizza, etc.), whereas rebus images represent the sound of the thing they depict. That’s part of why I am intentionally avoiding using emojis as the images in this project: I don’t want people to start using them literally. Hieroglyphics, on the other hand, are more closely related. There are many kinds of hieroglyphics, but often each pictogram does relate to a particular syllable or sound, and sometimes the images even come from depictions of one-syllable words. I guess Rebus Chat is kind of like a modernization of hieroglyphics, putting them into a messaging application.

jaqaur – Final Proposal

For my final project, I intend to complete my Rebus Chat app that I began for the telematic project (http://golancourses.net/2019/jaqaur/04/10/jaqaur-telematic/). It will be basically the same as I described in that post, but I have changed part of the Rebus-ification pipeline and also solidified my plans for implementing the rest of the app.

The new Rebus-ification pipeline will not use RiTa or The Noun Project’s API (though I will still use icons from The Noun Project). I will use the CMU Pronouncing Dictionary to get phoneme-level pronunciations for everything users type. I will also manually create a dictionary of 1,000–2,000 images, along with their pronunciations (these will be short words whose pronunciations are quite common). Then, I will try to match substrings of the user’s typed syllables against syllables in my dictionary, and insert the images in place of the corresponding words.
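For concreteness, here is a rough sketch (plain Java, not the app’s actual code) of that matching step. It assumes cmudict has already been loaded as a word-to-phonemes map from the CMU Pronouncing Dictionary, and that icons is the hand-built image dictionary keyed by space-joined phoneme runs; both maps and the icon filenames are placeholders.

```java
import java.util.*;

// A rough sketch of the Rebus-ification matching step, not the app's actual code.
// `cmudict` maps a word to its ARPAbet phonemes (loaded from the CMU Pronouncing
// Dictionary); `icons` maps a space-joined, stress-stripped phoneme run to an icon
// file from the hand-built image dictionary. Both maps and the filenames are placeholders.
public class RebusSketch {
    static Map<String, String[]> cmudict = new HashMap<>(); // word -> its ARPAbet phoneme list
    static Map<String, String> icons = new HashMap<>();     // e.g. "K AA R" -> "car.png"

    // Greedily cover a word's phonemes with the longest matching icons;
    // anything left uncovered falls back to the bare phoneme as text.
    static List<String> rebusify(String word) {
        String[] phones = cmudict.get(word.toLowerCase());
        if (phones == null) return Collections.singletonList(word); // unknown word: keep as plain text
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < phones.length) {
            String icon = null;
            int matched = 0;
            for (int j = phones.length; j > i; j--) {                // try the longest run first
                String key = String.join(" ", Arrays.copyOfRange(phones, i, j))
                                   .replaceAll("[0-9]", "");         // strip stress markers (AA1 -> AA)
                if (icons.containsKey(key)) { icon = icons.get(key); matched = j - i; break; }
            }
            if (icon != null) { out.add(icon); i += matched; }
            else { out.add(phones[i].replaceAll("[0-9]", "")); i++; }
        }
        return out;
    }
}
```

In this sketch an unknown word stays as plain text and uncovered phonemes fall back to text; a real build would probably fall back to the word’s original letters instead.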

In terms of other implementation details, I am using Android Studio along with Firebase to help with authentication and data storage (which I highly recommend). I have big dreams for this project; I hope I can finish it all in time!
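For context, a minimal sketch of the Firebase side mentioned above: anonymous sign-in, then pushing one message into the Realtime Database. The chats/{chatId}/messages path and the RebusMessage class are hypothetical, not the app’s actual schema.

```java
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

// A minimal sketch of authentication plus data storage with Firebase; the database
// path and message class are hypothetical, not the app's actual schema.
public class ChatStore {

    public static class RebusMessage {
        public String sender;
        public String rebusText;          // icon names interleaved with plain text
        public RebusMessage() {}          // empty constructor needed for Firebase deserialization
        public RebusMessage(String sender, String rebusText) {
            this.sender = sender;
            this.rebusText = rebusText;
        }
    }

    public void send(String chatId, String rebusText) {
        FirebaseAuth.getInstance().signInAnonymously().addOnSuccessListener(result -> {
            DatabaseReference messages = FirebaseDatabase.getInstance()
                    .getReference("chats/" + chatId + "/messages");
            messages.push().setValue(new RebusMessage(result.getUser().getUid(), rebusText));
        });
    }
}
```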

Alexa Practice Skill Feedback

A summary of the feedback I got for the Alexa Practice Skill:

  1. Most people think it’s very successful in being practical and useful
  2. The skill can offer more feedback to musicians for improvement
  3. The flow of practice can be improved
  4. There should be more documentation
  5. The tutoring aspect is good for all ages

dorsek – Final Proposal

For the final project in this course, I will be finishing up the piece I have been working on for the senior show: the work itself is a digital compilation of dreamscapes based on several dreams I had during a 3-month period of insomnia. It is interactive, and you can transition through and interact with the different dreamscapes through the use of the Tobii 4C eye tracker (i.e. your gaze is a subtle, almost unrecognizable controller of sorts…). Much like real dreams, the interaction will be very distorted, sometimes triggering responses that distance you from what you intend (for example, when you look at something, it stops moving or doing whatever interesting thing it was doing before)…
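A minimal Processing-style sketch of that “it stops when you look at it” behavior (not the piece’s actual code); gazeX/gazeY stand in for wherever the Tobii 4C gaze point actually arrives from.

```java
// A minimal Processing sketch of the gaze-aversion idea: a drifting element freezes
// while you look at it. gazeX/gazeY are placeholders for the Tobii 4C gaze point;
// for a quick test without the tracker, substitute mouseX/mouseY.
float gazeX, gazeY;                   // assumed to be updated by an eye-tracker bridge
PVector pos = new PVector(100, 100);  // one drifting dream element
PVector vel = new PVector(2, 1.5);

void setup() {
  size(800, 600);
}

void draw() {
  background(10);
  // Freeze the element while the gaze point is near it; let it drift otherwise.
  boolean watched = dist(gazeX, gazeY, pos.x, pos.y) < 80;
  if (!watched) {
    pos.add(vel);
    if (pos.x < 0 || pos.x > width)  vel.x *= -1;
    if (pos.y < 0 || pos.y > height) vel.y *= -1;
  }
  noStroke();
  fill(200);
  ellipse(pos.x, pos.y, 40, 40);
}
```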

I will be installing it in the Ellis, but since there are two eye trackers, I hope I will be able to use one for the class exhibition as well, on a separate NUC.

At the moment I am porting all of the dream sequences into Processing (from p5.js, so that they are compatible with the eye tracker) and putting the finishing touches on the interactions.

dorsek – Project 4 Feedback

Much of the feedback I received during critique revolved around various other interventions I could include, and it was in every sense quite helpful in generating ideas for how to continue this project if I so please. I was glad to have it looked at in this somewhat unfinished state, because the feedback was restricted to conceptual questions of what the work can ask in and of the world, especially if it exists in a context outside of class.

I also got some useful feedback that cemented my original feelings and intent with regard to the documentation of the project; that is to say, suggestions which confirmed my initial intuition to use two people interacting over this medium (rather than myself) so as to communicate the idea better, including making the interactions or documentation a bit more “dad-specific” (as Josh put it) so as to communicate the inspiration a bit better as well.

Perhaps I got lucky seeing as I went first and thus received the brunt of people’s energy, but the amount of feedback I received for the project was substantial. Though there wasn’t a lot of content yet, people really expanded on the concept of the piece and provided various suggestions, in addition to food for thought with regard to the societal and relational implications it makes and reflects on in today’s relationships.

dorsek – 04 Telematic

This project took quite some re-working from the start, and in the end it made for a very effective challenge for me in learning how to navigate a difficult, poorly documented backend plumbing issue. The process itself involved learning how to rip pixels from a screen and send the data over a local server into a program that would then communicate them to Processing in a format it could both understand and replicate.
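For reference, a stripped-down sketch of that plumbing (not the project’s actual code): a Processing client pulls raw RGB frames off a local socket and rebuilds them as a PImage. The port, frame size, and byte layout are all assumptions about a hypothetical local server feeding it.

```java
// A stripped-down sketch of receiving screen pixels over a local socket in Processing.
// The port, frame dimensions, and one-byte-per-channel RGB layout are assumptions.
import processing.net.*;

Client feed;
final int FRAME_W = 320, FRAME_H = 240;
byte[] buffer = new byte[FRAME_W * FRAME_H * 3];   // one RGB byte triple per pixel
PImage frame;

void setup() {
  size(320, 240);
  feed = new Client(this, "127.0.0.1", 9100);      // hypothetical local server streaming the screen grab
  frame = createImage(FRAME_W, FRAME_H, RGB);
}

void draw() {
  if (feed.available() >= buffer.length) {         // wait until a whole frame has arrived
    feed.readBytes(buffer);
    frame.loadPixels();
    for (int i = 0; i < FRAME_W * FRAME_H; i++) {
      int r = buffer[i * 3]     & 0xFF;
      int g = buffer[i * 3 + 1] & 0xFF;
      int b = buffer[i * 3 + 2] & 0xFF;
      frame.pixels[i] = color(r, g, b);
    }
    frame.updatePixels();
  }
  image(frame, 0, 0);
}
```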

In the end, because I spent so much time learning and working through the issue of trying to get the video data into Processing, I had to Wizard-of-Oz the final interaction, since for some reason the pixel data was not being recognized as video- or photo-readable by the face-detecting library available to Processing (OpenCV). Other than that bug, the eye-tracking data from the Tobii 4C, the video feed from Skype, and the sound were all working perfectly well.

Beyond all of that, however, I was able to successfully construct an imagined future wherein the potential for monitoring the gaze during video conferencing was realized, and it also allowed for a bit of play with the intervention I posed: a sort of meter that humorously rates the quality of eye contact you make with the form of the other person (and which would certainly work had it not been for the OpenCV issues…)

kerjos – Final Proposal

Interactive Art Final Project Proposal

I propose to complete my watershed visualization project, which I began for Project 2 but did not bring to critique. My project’s conceit is to generate a line in AR that visualizes the flow of water out of whichever Pittsburgh watershed you are standing in. It might look something like this:

As I’ve gotten feedback about this project, I’ve come to some important insights:

  • Watersheds are interesting because they nest inside one another: a watershed is an area in which all the water flows to one point. The Allegheny and the Monongahela watersheds are part of the Ohio watershed, which is part of the Mississippi watershed. What areas are too small to be watersheds? Where does this nesting end?
  • It’s important to connect this visualization to actionable information about protecting our watersheds. A raw visualization is interesting, but not enough to effect change.
  • Some aspects of the visualization can be connected to the viewer so that it feels more actionable. For this reason, I think it’s important that the visualization is grounded in the way that water flows from the point the viewer is standing on, as opposed to defining the borders of the watershed more generally. In this way, the viewer might understand their impact on their environment more easily.

 

My technical accomplishments so far:

  1. I have built an AR app and deployed it to an iPhone.
  2. I have loaded the Mapbox SDK in Unity, and I can represent my current location on a zoomable, pannable map.
  3. I have loaded the Watershed JSON data from an API, and can represent its geometry on the Mapbox map.

 

What I still need to do:

  1. Draw a line of best fit, representing the flow of water, through the watershed containing the viewer’s current location, and add some randomness to it so that it’s a squiggly line but still within the real bounds of the watershed (see the sketch after this list).
  2. Draw a line that connects the current location of the viewer to this line of best fit through their watershed, and add some randomness to that.
  3. Create an AR object pointed in the direction of that line.
  4. Model that AR object so that it appears to be a line on the groundplane.
  5. Model that AR object so that it extends in the direction of the line, and curves based on the curves of the line, at approximately the appropriate distance from the viewer.
  6. Test that the line can be followed and updates accordingly.
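A rough sketch of steps 1 and 2 above. The project itself is built in Unity with Mapbox, but the geometry is language-agnostic, so this is plain Java over flat x/y coordinates (a simplifying assumption; a real build would work in projected map units): fit a line through the watershed polygon’s vertices, sample it into a jittered polyline that stays inside the polygon, and start the polyline at the viewer’s location.

```java
import java.util.*;

// A rough, language-agnostic sketch of steps 1-2: line of best fit through the
// watershed polygon, jittered into a squiggle that stays inside the polygon,
// starting from the viewer's location. Flat x/y coordinates are a simplification.
public class WatershedLine {

    // Simple least-squares fit y = a + b*x through the polygon's vertices.
    static double[] fitLine(double[][] poly) {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        int n = poly.length;
        for (double[] p : poly) { sx += p[0]; sy += p[1]; sxx += p[0] * p[0]; sxy += p[0] * p[1]; }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[]{ a, b };
    }

    // Ray-casting point-in-polygon test, used to keep the squiggle inside the watershed.
    static boolean inside(double x, double y, double[][] poly) {
        boolean in = false;
        for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
            if ((poly[i][1] > y) != (poly[j][1] > y) &&
                x < (poly[j][0] - poly[i][0]) * (y - poly[i][1]) / (poly[j][1] - poly[i][1]) + poly[i][0]) {
                in = !in;
            }
        }
        return in;
    }

    // Steps 1 and 2: start at the viewer, then follow the fitted line across the
    // watershed, jittering each sample but rejecting jitter that leaves the polygon.
    static List<double[]> squiggle(double[][] poly, double[] viewer, int samples, double jitter) {
        double[] ab = fitLine(poly);
        double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
        for (double[] p : poly) { minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]); }
        Random rng = new Random();
        List<double[]> line = new ArrayList<>();
        line.add(viewer);
        for (int i = 0; i <= samples; i++) {
            double x = minX + (maxX - minX) * i / samples;
            double y = ab[0] + ab[1] * x;
            double jx = x + (rng.nextDouble() - 0.5) * jitter;
            double jy = y + (rng.nextDouble() - 0.5) * jitter;
            if (inside(jx, jy, poly)) line.add(new double[]{ jx, jy });
            else if (inside(x, y, poly)) line.add(new double[]{ x, y });
        }
        return line;
    }
}
```

The resulting list of points would then feed steps 3–5, i.e. an AR line anchored on the groundplane and oriented along those points.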

lumar – Project 4 Feedback

The feedback was good to get. It’s evident that the concept communicated clearly, but I do need to work on having a more specific framing or tone for the experience: whether the connected hearts take on a sentimental, sincere angle, or dark humor à la On Kawara and his series I Am Still Alive. It’s hard to gauge which to lean into, but it’s clear from the feedback that a stronger stance would make the project less banal.

The other interesting extrapolation would be whether the experience could be better contextualized physically as well: imagine the phone inserted into a stuffed animal or some other, more form-appropriate manifestation.