Category: Assignment-09-Documentation

melanie – documentation (in progress)

Holy late post, Batman.

Soul Searching is a game about soul searching, delivering the metaphor through maze navigation.


The project spawned from a story idea I had long ago, of a person whose soul was shattered into multiple pieces. The original form was lost and is now trying to return to the person, collecting its fragments along the way. The gameplay focuses on maze navigation, but you can’t see the entire maze at one time.

I took much inspiration from the games I’ve played: Dear Esther, Sense of Connectedness, Thomas Was Alone, etc. I wanted to create something familiar (maze solving) while presenting it in an unfamiliar way.


Technicalities-wise, I drew much of it from algorithms online and programs people had already written on OpenProcessing; there's practically an entire culture out there obsessed with maze generation: figuring out the best ways to generate mazes, building creative games with them, and diverging from traditional mazes.

Unfortunately, the program is suffering from a big bug at the moment. I've been trying to root it out for the past several days. It's slow going, and I feel really bad that I don't have much to show because of that stupid bug. Needless to say, I'll keep working on this throughout the break until I finish it, because I've spent too much time on it to stop. Proper documentation will go up once it's done. Sorry!



The Rainbox is a box which produces the sound of rain when a user is nearby.

This project is a simple Arduino setup that involves the use of a servo motor and a rainstick. The Arduino is connected to a flex sensor that is intended to be hidden under a pillow or a mattress. When a person lies down on the resting place, the servo turns 180 degrees, and the rainstick attached to it will also turn. This causes the rainstick to simulate the sound of rain for a few seconds until all the beads in the stick reach the bottom. The servo then turns back 180 degrees and the process repeats until the user leaves the resting spot or thirty minutes after the flex sensor was first activated. All of this is enclosed within a box, along with a blue LED light which emits a soft glow through a hole in the box.
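The timing behavior described above amounts to a small state machine. Here is a minimal sketch of that logic in plain C++; the struct, names, and update cadence are illustrative (the real build runs on an Arduino with `digitalRead()` and a servo library), only the thirty-minute cutoff and the leave/timeout conditions come from the description:

```cpp
#include <cstdint>

// Illustrative state machine for the Rainbox control loop.
// "sensorActive" stands in for the flex-sensor reading.
enum class RainboxState { Idle, Pouring };

constexpr uint32_t SESSION_LIMIT_MS = 30UL * 60UL * 1000UL; // 30-minute cutoff

struct Rainbox {
    RainboxState state = RainboxState::Idle;
    uint32_t sessionStart = 0;   // when the flex sensor was first activated
    bool servoAtTop = true;      // which end the rainstick is resting on

    // Called periodically with the current time and sensor reading.
    void update(uint32_t nowMs, bool sensorActive) {
        switch (state) {
        case RainboxState::Idle:
            if (sensorActive) {
                sessionStart = nowMs;
                state = RainboxState::Pouring;
            }
            break;
        case RainboxState::Pouring:
            // Stop when the user leaves, or 30 minutes after first activation.
            if (!sensorActive || nowMs - sessionStart >= SESSION_LIMIT_MS) {
                state = RainboxState::Idle;
            } else {
                servoAtTop = !servoAtTop; // flip the rainstick 180 degrees
            }
            break;
        }
    }
};
```

In the actual device each flip would be followed by a pause of a few seconds while the beads fall; here that pacing is left to the caller.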
The concept behind this project arose as a response to one of my long-lasting personal issues – the inability to sleep in silence. Maybe it is a symptom of a generation that grew up on television, but the lack of any sensory input used to be very unsettling to me, and it would cause my mind to wander into uncomfortable and frightening places. The sound of rain and the glow of a muted television often helped me in these moments. The Rainbox was designed to substitute for all of this.
All in all, I ended up creating a working prototype, but it is nowhere near a form that I would want to present in public. The box was intended to make the rainstick echo for a richer sound, but it ended up just being bulky. Having the setup more exposed but pleasing to look at is a goal for this project. Tighter documentation would also help a lot in presentation. The light is something to experiment with as well, as people thought it would be more distracting than comforting.


Ticha – Voice Box Documentation


The ‘Voice Box’ is a musical instrument (of sorts) that receives audio input from the microphone and performs real-time pitch changes with a custom glove-controller. It can be used as both a personal listening device and a means of communication: the user has the option to either speak directly into the microphone and have their altered voice projected from the speaker, or plug in headsets and listen to the distorted noises of the world around them.

Inspiration / Critical Reflection
The project was inspired by a number of things that were not directly related to each other. Initially I wanted to make simple piano gloves, inspired by my habit of tapping on tables and chairs, which I developed from not having ready access to a piano. To give this habit a form, I decided to create a portable instrument that would let other people hear the sounds I hear in my head. But I soon discovered that a number of people have made instruments like these before, so instead of being a personal project it turned into a re-implementation of what had already been done countless times. So I refocused my scope of inspiration in an effort to create something more novel. When I stumbled across Adafruit's Wave Shield and Voice Changer project, I immediately had my heart set on making a device that distorted voices in some way. I was initially aiming for gloves that would let a person autotune their voice in real time and make them sound like Imogen Heap, but given the limited time I had and my lack of understanding of how sound frequencies work, I had to keep things relatively simple. Thus, instead of a real-time autotuner, I built a real-time pitch-shifter.

The Voice Box surprisingly became a device with some personal value as well, as its concept revolves around the difficulty of understanding others and of being understood by them. As I was testing the final product, I became engrossed in puppeteering other people's voices and speaking in voices that were hardly decipherable – and it was then that I realized these gloves had created a wall between myself and society. Using them turned into a very self-reflective experience: it brought out strange control-freak behaviors in me and made me think about why I was able to extract so much enjoyment from exercising power over others.

Technical Details
Electrodes are placed around the joints of each of my fingers so that bending a finger makes the electrodes touch, triggering a switch that creates the voice pitch-shifting effect. Essentially, the electrodes behave like normal momentary switches, but they were specifically designed to work without contacting an external surface or object. This allows for ease of use and lets the user make the more natural gestures common to playing keyboard instruments and typing.

Some technical hurdles I had to overcome: Although using electrodes seems to be a conceptually simple idea, they were surprisingly difficult to implement properly. I initially only had a pull-up resistor for each finger (to prevent short circuiting), but when I tested it out I noticed that the Arduino was not correctly interpreting the digital input data; namely, when the electrodes made contact with each other the input was read as 1’s, but when they were separated the input was just a jumbled mess of 0’s and 1’s. To overcome this issue I had to add pull-down resistors to explicitly make the ‘open’ and ‘closed’ states distinct. But however annoying the resistor handling was, I think the greatest technical hurdle I overcame was getting the pitch shifting to actually work. Adafruit’s original voice changer project uses a potentiometer to make pitch shifts, and because that is an analog input it is not possible to change your voice in real-time (running two analog inputs concurrently is beyond the capacity of an Arduino). So I theorized that while it’s not possible to dynamically change pitch using an analog input, it could technically be possible with multiple digital inputs. Luckily my theory was correct, and making things work just required some simple modifications to Adafruit’s original code.
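For the curious, the pitch-to-timer arithmetic this approach relies on can be checked on a desktop machine. The sketch below re-implements Arduino's integer `map()` formula (the same formula as the Arduino core) and applies the constants used later in the code, assuming a 16 MHz board, a 32:1 Timer2 prescale, and a ~9615 Hz sampling rate; each digital input can then simply select a fixed `pitch` value to feed through this mapping:

```cpp
// Re-implementation of Arduino's integer map() so the timer math
// from the sketch can be verified on a PC.
long arduinoMap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

constexpr long F_CPU_HZ = 16000000L; // assumed 16 MHz AVR

// A pitch value in 0..1023 becomes a Timer2 compare value spanning
// -1 octave (longest period) to +1 octave (shortest period).
long pitchToTimerPeriod(long pitch) {
    return arduinoMap(pitch, 0, 1023,
                      F_CPU_HZ / 32 / (9615 / 2),   // lowest pitch  = -1 octave
                      F_CPU_HZ / 32 / (9615 * 2));  // highest pitch = +1 octave
}
```

With these constants the mapping runs from 104 down to 26 timer ticks, which is where the "about 79 total speed steps" figure in the code comments comes from.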

(Sorry for not using Fritzing – there are too many parts to the device and I felt it would be much easier for me to show what’s going on with photos)

/* Code adapted from ADAVOICE, an Arduino-based voice pitch changer */


SdReader  card;  // This object holds the information for the card
FatVolume vol;   // This holds the information for the partition on the card
FatReader root;  // This holds the information for the volumes root directory
FatReader file;  // This object represents the WAV file being played
WaveHC    wave;  // This is the only wave (audio) object, -- we only play one at a time
#define error(msg) error_P(PSTR(msg))  // Macro allows error messages in flash memory

#define ADC_CHANNEL 0 // Microphone on Analog pin 0

// Wave shield DAC: digital pins 2, 3, 4, 5
#define DAC_CS_PORT    PORTD
#define DAC_CS         PORTD2
#define DAC_CLK        PORTD3
#define DAC_DI_PORT    PORTD
#define DAC_DI         PORTD4
#define DAC_LATCH      PORTD5

uint16_t in = 0, out = 0, xf = 0, nSamples; // Audio sample counters
uint8_t  adc_save;                          // Default ADC mode

// WaveHC didn't declare its working buffers private or static,
// so we can be sneaky and borrow the same RAM for audio sampling!
extern uint8_t
buffer1[PLAYBUFFLEN],                   // Audio sample LSB
buffer2[PLAYBUFFLEN];                   // Audio sample MSB
#define XFADE     16                      // Number of samples for cross-fade
#define MAX_SAMPLES (PLAYBUFFLEN - XFADE) // Remaining available audio samples

// Keypad/WAV information.  Number of elements here should match the
// number of keypad rows times the number of columns, plus one:
const char *sound[] = {
  "startup" };                      // Extra item = boot sound

int button6State = 0; 
int button7State = 0; 
int button8State = 0; 
int button9State = 0; 
int button11State = 0; 

//////////////////////////////////// SETUP

void setup() {
  uint8_t i;


  // The WaveHC library normally initializes the DAC pins...but only after
  // an SD card is detected and a valid file is passed.  Need to init the
  // pins manually here so that voice FX works even without a card.
  pinMode(2, OUTPUT);    // Chip select
  pinMode(3, OUTPUT);    // Serial clock
  pinMode(4, OUTPUT);    // Serial data
  pinMode(5, OUTPUT);    // Latch
  digitalWrite(2, HIGH); // Set chip select high

  // Init SD library, show root directory.  Note that errors are displayed
  // but NOT regarded as fatal -- the program will continue with voice FX!
  if(!card.init())             SerialPrint_P("Card init. failed!");
  else if(!vol.init(card))     SerialPrint_P("No partition!");
  else if(!root.openRoot(vol)) SerialPrint_P("Couldn't open dir");
  else {
    PgmPrintln("Files found:");
    // Play startup sound (last file in array).
    playfile(sizeof(sound) / sizeof(sound[0]) - 1);
  }

  // Optional, but may make sampling and playback a little smoother:
  // Disable Timer0 interrupt.  This means delay(), millis() etc. won't
  // work.  Comment this out if you really, really need those functions.
  TIMSK0 = 0;

  // Set up Analog-to-Digital converter:
  analogReference(EXTERNAL); // 3.3V to AREF
  adc_save = ADCSRA;         // Save ADC setting for restore later

  for(int i = 6; i <= 9; i++) {
    pinMode(i, INPUT);
  }
  pinMode(11, INPUT);

  while(wave.isplaying);  // Wait for startup sound to finish...
  startPitchShift(700);   // ...and start the pitch-shift mode by default.
}

//////////////////////////////////// LOOP

// As written here, the loop function scans a keypad to trigger sounds
// (stopping and restarting the voice effect as needed).  If all you need
// is a couple of buttons, it may be easier to tear this out and start
// over with some simple digitalRead() calls.

void loop() {
  button6State = digitalRead(6);
  button7State = digitalRead(7);
  button8State = digitalRead(8);
  button9State = digitalRead(9);
  button11State = digitalRead(11);

  if (button6State == HIGH) {        // thumb
  }
  else if (button7State == HIGH) {   // pointer
  }
  else if (button8State == HIGH) {   // middle
  }
  else if (button9State == HIGH) {   // ring
  }

  if (button11State == HIGH) {       // pinky
  }
}

//////////////////////////////////// HELPERS

// Open and start playing a WAV file
void playfile(int idx) {
  char filename[13];

  (void)sprintf(filename,"%s.wav", sound[idx]);
  Serial.print("File: ");
  Serial.println(filename);

  if(!file.open(root, filename)) {
    PgmPrint("Couldn't open file ");
    Serial.println(filename);
    return;
  }
  if(!wave.create(file)) {
    PgmPrintln("Not a valid WAV");
    return;
  }
  wave.play();
}

//////////////////////////////////// PITCH-SHIFT CODE

void startPitchShift(int pitch) {
  // Right now the sketch just uses a fixed sound buffer length of
  // 128 samples.  It may be the case that the buffer length should
  // vary with pitch for better results...further experimentation
  // is required here.
  nSamples = 128;
  //nSamples = F_CPU / 3200 / OCR2A; // ???
  //if(nSamples > MAX_SAMPLES)      nSamples = MAX_SAMPLES;
  //else if(nSamples < (XFADE * 2)) nSamples = XFADE * 2;

  memset(buffer1, 0, nSamples + XFADE); // Clear sample buffers
  memset(buffer2, 2, nSamples + XFADE); // (set all samples to 512)

  // WaveHC library already defines a Timer1 interrupt handler.  Since we
  // want to use the stock library and not require a special fork, Timer2
  // is used for a sample-playing interrupt here.  As it's only an 8-bit
  // timer, a sizeable prescaler is used (32:1) to generate intervals
  // spanning the desired range (~4.8 KHz to ~19 KHz, or +/- 1 octave
  // from the sampling frequency).  This does limit the available number
  // of speed 'steps' in between (about 79 total), but seems enough.
  TCCR2A = _BV(WGM21) | _BV(WGM20); // Mode 7 (fast PWM), OC2 disconnected
  TCCR2B = _BV(WGM22) | _BV(CS21) | _BV(CS20);  // 32:1 prescale
  OCR2A  = map(pitch, 0, 1023,
  F_CPU / 32 / (9615 / 2),  // Lowest pitch  = -1 octave
  F_CPU / 32 / (9615 * 2)); // Highest pitch = +1 octave

  // Start up ADC in free-run mode for audio sampling:
  DIDR0 |= _BV(ADC0D);  // Disable digital input buffer on ADC0
  ADMUX  = ADC_CHANNEL; // Channel sel, right-adj, AREF to 3.3V regulator
  ADCSRB = 0;           // Free-run mode
  ADCSRA = _BV(ADEN) |  // Enable ADC
  _BV(ADSC)  |        // Start conversions
  _BV(ADATE) |        // Auto-trigger enable
  _BV(ADIE)  |        // Interrupt enable
  _BV(ADPS2) |        // 128:1 prescale...
  _BV(ADPS1) |        //  ...yields 125 KHz ADC clock...
  _BV(ADPS0);         //  ...13 cycles/conversion = ~9615 Hz

  TIMSK2 |= _BV(TOIE2); // Enable Timer2 overflow interrupt
  sei();                // Enable interrupts
}

void stopPitchShift() {
  ADCSRA = adc_save; // Disable ADC interrupt and allow normal use
  TIMSK2 = 0;        // Disable Timer2 interrupt
}

ISR(ADC_vect, ISR_BLOCK) { // ADC conversion complete

  // Save old sample from 'in' position to xfade buffer:
  buffer1[nSamples + xf] = buffer1[in];
  buffer2[nSamples + xf] = buffer2[in];
  if(++xf >= XFADE) xf = 0;

  // Store new value in sample buffers:
  buffer1[in] = ADCL; // MUST read ADCL first!
  buffer2[in] = ADCH;
  if(++in >= nSamples) in = 0;
}

ISR(TIMER2_OVF_vect) { // Playback interrupt
  uint16_t s;
  uint8_t  w, inv, hi, lo, bit;
  int      o2, i2, pos;

  // Cross fade around circular buffer 'seam'.
  if((o2 = (int)out) == (i2 = (int)in)) {
    // Sample positions coincide.  Use cross-fade buffer data directly.
    pos = nSamples + xf;
    hi = (buffer2[pos] << 2) | (buffer1[pos] >> 6); // Expand 10-bit data
    lo = (buffer1[pos] << 2) |  buffer2[pos];       // to 12 bits
  } else if((o2 < i2) && (o2 > (i2 - XFADE))) {
    // Output sample is close to end of input samples.  Cross-fade to
    // avoid click.  The shift operations here assume that XFADE is 16;
    // will need adjustment if that changes.
    w   = in - out;  // Weight of sample (1-n)
    inv = XFADE - w; // Weight of xfade
    pos = nSamples + ((inv + xf) % XFADE);
    s   = ((buffer2[out] << 8) | buffer1[out]) * w +
      ((buffer2[pos] << 8) | buffer1[pos]) * inv;
    hi = s >> 10; // Shift 14 bit result
    lo = s >> 2;  // down to 12 bits
  } else if (o2 > (i2 + nSamples - XFADE)) {
    // More cross-fade condition
    w   = in + nSamples - out;
    inv = XFADE - w;
    pos = nSamples + ((inv + xf) % XFADE);
    s   = ((buffer2[out] << 8) | buffer1[out]) * w +
      ((buffer2[pos] << 8) | buffer1[pos]) * inv;
    hi = s >> 10; // Shift 14 bit result
    lo = s >> 2;  // down to 12 bits
  } else {
    // Input and output counters don't coincide -- just use sample directly.
    hi = (buffer2[out] << 2) | (buffer1[out] >> 6); // Expand 10-bit data
    lo = (buffer1[out] << 2) |  buffer2[out];       // to 12 bits
  }

  // Might be possible to tweak 'hi' and 'lo' at this point to achieve
  // different voice modulations -- robot effect, etc.?

  DAC_CS_PORT &= ~_BV(DAC_CS); // Select DAC
  // Clock out 4 bits DAC config (not in loop because it's constant)
  DAC_DI_PORT  &= ~_BV(DAC_DI); // 0 = Select DAC A, unbuffered
  DAC_DI_PORT  |=  _BV(DAC_DI); // 1X gain, enable = 1
  for(bit=0x08; bit; bit>>=1) { // Clock out first 4 bits of data
    if(hi & bit) DAC_DI_PORT |=  _BV(DAC_DI);
    else         DAC_DI_PORT &= ~_BV(DAC_DI);
  }
  for(bit=0x80; bit; bit>>=1) { // Clock out last 8 bits of data
    if(lo & bit) DAC_DI_PORT |=  _BV(DAC_DI);
    else         DAC_DI_PORT &= ~_BV(DAC_DI);
  }
  DAC_CS_PORT |= _BV(DAC_CS); // Unselect DAC

  if(++out >= nSamples) out = 0;
}

Starry Night

This is a music visualizer that simulates the starry night sky.


A music-loving friend of mine once told me he missed seeing the stars at night after coming to Pittsburgh. This project began as an idea for a present for that friend. I liked the idea of a portable, personal set of stars that could be charmed to life by playing music. The stars react to new notes being played, and the aurora appears once the music reaches a certain volume and has continued for long enough. (This may not be very obvious at the beginning of the video because I wasn't playing the notes hard enough. Also, pardon my rustiness on piano – I haven't really played in two years.)

The end product uses an Arduino Mega 2560, with an Electret Mic Amplifier for sound input and loads of LEDs for display. Frequency analysis utilizes code from Adafruit's Piccolo, which uses ELM-ChaN's FFT (Fast Fourier Transform) library.

The creation of this project was a long and arduous process for me. My initial idea was to have a box filled with blue origami stars, with white LEDs hidden inside white origami stars scattered around the box. However, I quickly ran out of material for making the blue origami stars, so I replaced them with black cardstock and tissue paper. The end result still adheres to my original idea in terms of visuals and functionality: the white LEDs remain hidden inside white origami stars; you just can't tell clearly because they are now covered by black tissue paper. The white origami stars spread the light of the white LEDs a little, and if you look carefully, the spread is in the shape of five-pointed stars. I also wanted more white LED stars, but was limited by the number of PWM pins on the board (and later, by space for the wires).

I also wanted to actually learn how to use the FFT library to implement more accurate frequency measurement, for picking out very roughly which notes are being played. It turned out that this is quite difficult due to harmonics, and the library was hard to understand partly due to poor documentation, so I ended up working from Adafruit's code for the frequency analysis. A lot of testing went into getting it better suited to piano music. After getting the stars to work the way I wanted, I reflected on how I could make the piece more interesting and visually appealing. The easy answer was "colors", so I tried to implement something resembling auroras. The source of the auroras is a number of RGB LEDs. The ideal way to do this would be an LED strip, but since this was late in the project, I didn't have time to get one.
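The per-column tables in the code (`col0data` through `col7data`) boil down to a weighted average over a run of FFT bins: each table stores a weight count, a starting bin, and then the weights. A simplified sketch of that filtering step, with made-up weights rather than the tuned piano tables:

```cpp
#include <cstdint>

// Weighted downsampling of FFT bins into one display column, in the
// style of Adafruit Piccolo's column tables: table[0] = number of
// weights, table[1] = first FFT bin, table[2..] = the weights.
// Weights here are illustrative, not the tuned piano values.
int columnLevel(const uint16_t* spectrum, const uint8_t* table) {
    uint8_t nWeights = table[0];
    uint8_t firstBin = table[1];
    long sum = 0, div = 0;
    for (uint8_t i = 0; i < nWeights; i++) {
        sum += (long)spectrum[firstBin + i] * table[2 + i];
        div += table[2 + i];
    }
    return (div > 0) ? (int)(sum / div) : 0;
}
```

In the actual sketch the divisor (`colDiv`) is precomputed once in `setup()` rather than re-summed per frame; this version folds it in for clarity.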

Physically putting this together was also very hard and time-consuming. I had a lot of trouble getting the connections for all the LEDs to work. I had to basically tear my project apart several times because the conductive copper tape wasn’t effective for LEDs, or wires broke, or solder wasn’t strong enough, etc. In the end my breadboard had almost every single slot filled. Then more things fell apart as I was trying to get everything to fit inside a small box. I didn’t realize all those wires would take up so much space.

Weird but useful tidbits I've learned about Arduino:
– variables with mismatched types won't raise an error at compile time, but can cause strange behavior at runtime
– errors when uploading a program to the Mega board can sometimes be fixed by unplugging a few pins

In the end, I was fairly satisfied with the final product. The stars worked almost as well as I had hoped. I just wish I were able to show off more of the craftsmanship that went into this project. If I muster up enough energy, I'll replace the RGB LEDs with an RGB strip. It would be difficult, though, because I'd literally have to tear my project apart again, both physically and code-wise. I enjoy watching it while someone else plays the piano. Too bad I can't really watch it while playing at the same time, since I have to watch the keyboard, haha.

[I just realized I accidentally named this the same as that famous van Gogh piece. Ugh. Need better naming skills.]

Code, if you’re interested. It’s messy and long and uncommented:

/* Starry Night -
a music visualizer that simulates the starry night sky.

Parts of the code are written by Adafruit Industries.  Distributed under the BSD license.
See for original source.

Additional code written by Jun Huo.
*/

#ifdef __AVR_ATmega32U4__
 #define ADC_CHANNEL 7
#else
 #define ADC_CHANNEL 0
#endif

int16_t       capture[FFT_N];    // Audio capture buffer
complex_t     bfly_buff[FFT_N];  // FFT "butterfly" buffer
uint16_t      spectrum[FFT_N/2]; // Spectrum output buffer
volatile byte samplePos = 0;     // Buffer position counter

byte
  peak[8],      // Peak level of each column; used for falling dots
  dotCount = 0, // Frame counter for delaying dot-falling speed
  colCount = 0, // Frame counter for storing past column data
  sparkCount = 0;
int
  col[8][10],   // Column levels for the prior 10 frames
  minLvlAvg[8], // For dynamic adjustment of low & high ends of graph,
  maxLvlAvg[8], // pseudo rolling averages for the prior few frames.
  colDiv[8];    // Used when filtering FFT output to 8 columns

PROGMEM uint8_t
  // This is low-level noise that's subtracted from each FFT output column:
  noise[64]={ 8,6,6,5,3,4,4,4,3,4,4,3,2,3,3,4,
              2,2,2,2,2,2,2,2,2,2,2,2,2,3,3,4 },
  // These are scaling quotients for each FFT output column, sort of a
  // graphic EQ in reverse.  Most music is pretty heavy at the bass end.
  eq[64]={
    255, 175,218,225,220,198,147, 99, 68, 47, 33, 22, 14,  8,  4,  2,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0 },
  //New filter, tuned for general piano music
  col0data[] = {  3,  1, 
     130, 190,  20}, 
  col1data[] = {  5,  2,
           20, 160, 140, 20,  5},
  col2data[] = {  7,  4,
                     40, 170, 160,  80,  20,  10,
       5,   1},
  col3data[] = {  9,  5,
                           5,  20,  90, 170, 120,  
      60,  20,  10,   2,   1},
  col4data[] = { 13,  7,
                                     4,  10,  60, 120, 
     160, 170, 160, 110,  60,  20,  10,   5,   3,   1},
  col5data[] = { 19, 10,
       3,   8,  20,  50, 110, 150, 170, 180, 170, 140,
      90,  60,  40,  20,  10,   5,   2,   1,   1},
  col6data[] = { 23, 14,
                           1,   3,   8,  15,  35,  60,
     100, 120, 150, 170, 180, 185, 160, 130, 100,  50,
      20,  10,   5,   2,   1,   1,   1},
  col7data[] = { 35, 18,
                                               1,   2,
       3,   5,  10,  20,  30,  60,  90, 120, 120, 160,
     180, 185, 165, 140, 120, 100,  80,  60,  45,  35,
      25,  20,  15,  10,   7,   5,   3,   2,   2,   1,
       1,   1,   1},
  // And then this points to the start of the data for each of the columns:
  *colData[] = {
    col0data, col1data, col2data, col3data,
    col4data, col5data, col6data, col7data };
int nPins = 8;
int ledPins[] = {6,7,8,9,10,11,12,13};
float brightness[] = {0,0,0,0,0,0,0,0};
float fadeFactor = 2.0;

boolean newSpark = false;

int rgb1[] = {44,45,46};
int rgb2[] = {3,4,5};
float color1[] = {0,0,0};
float color2[] = {0,0,0};

int vol[60];

int auroraWait = 0;
int waitThreshold = 0;
int auroraThreshold = 0;
int auroraCount = 0;

boolean playAurora = false;

void setup() {
  uint8_t i, j, nBins, binNum, *data;

  memset(peak, 0, sizeof(peak));
  memset(col , 0, sizeof(col));
  memset(vol, 0, sizeof(vol));

  for(i=0; i<8; i++) {
    minLvlAvg[i] = 0;
    maxLvlAvg[i] = 512;
    data         = (uint8_t *)pgm_read_word(&colData[i]);
    nBins        = pgm_read_byte(&data[0]) + 2;
    binNum       = pgm_read_byte(&data[1]);
    for(colDiv[i]=0, j=2; j<nBins; j++)
      colDiv[i] += pgm_read_byte(&data[j]);
  }
  // ... (remaining setup -- LED pins, ADC free-run init -- omitted)
}

void loop() {
  uint8_t  i, x, L, *data, nBins, binNum, c;
  uint16_t minLvl, maxLvl;
  int      level, y, sum;

  while(ADCSRA & _BV(ADIE)); // Wait for audio sampling to finish
  fft_input(capture, bfly_buff);   // Samples -> complex #s
  samplePos = 0;                   // Reset sample counter
  ADCSRA |= _BV(ADIE);             // Resume sampling interrupt
  fft_execute(bfly_buff);          // Process complex data
  fft_output(bfly_buff, spectrum); // Complex -> spectrum

  // Remove noise and apply EQ levels
  for(x=0; x<FFT_N/2; x++) {
    L = pgm_read_byte(&noise[x]);
    spectrum[x] = (spectrum[x] <= L) ? 0 :
      (((spectrum[x] - L) * (256L - pgm_read_byte(&eq[x]))) >> 8);
  }

  int oldPeakSum = 0;
  int newPeakSum = 0;
  int highestPeakIndex = -1;
  int highestPeak = -1;
  // Downsample spectrum output to 8 columns:
  for(x=0; x<8; x++) {
    int oldPeak = peak[x];
    data   = (uint8_t *)pgm_read_word(&colData[x]);
    nBins  = pgm_read_byte(&data[0]) + 2;
    binNum = pgm_read_byte(&data[1]);
    for(sum=0, i=2; i<nBins; i++)
      sum += spectrum[binNum++] * pgm_read_byte(&data[i]);
    col[x][colCount] = sum / colDiv[x];
    minLvl = maxLvl = col[x][0];
    for(i=1; i<10; i++) { // Get range of prior 10 frames
      if(col[x][i] < minLvl)      minLvl = col[x][i];
      else if(col[x][i] > maxLvl) maxLvl = col[x][i];
    }
    if((maxLvl - minLvl) < 8) maxLvl = minLvl + 8;
    minLvlAvg[x] = (minLvlAvg[x] * 7 + minLvl) >> 3; // Dampen min/max levels
    maxLvlAvg[x] = (maxLvlAvg[x] * 7 + maxLvl) >> 3; // (fake rolling average)

    // Second fixed-point scale based on dynamic min/max levels:
    level = 10L * (col[x][colCount] - minLvlAvg[x]) /
      (long)(maxLvlAvg[x] - minLvlAvg[x]);

    // Clip output and convert to byte:
    if(level < 0L)      c = 0;
    else if(level >  8) c = 8; // Allow dot to go a couple pixels off top
    else                c = (uint8_t)level;

    if(c > peak[x]) peak[x] = c; // Keep dot on top

    y = 8 - peak[x];
    int newPeak = peak[x];
    if (newPeak>0 && newPeak>highestPeak) {
      highestPeakIndex = x;
      highestPeak = newPeak;
    }
    oldPeakSum += oldPeak;
    newPeakSum += newPeak;
  }

  if (oldPeakSum < (newPeakSum-2) && newSpark==false) {
    newSpark = true;
  }

  int currVolSum = 0;
  for (int v=59; v>0; v--) {
    vol[v] = vol[v-1];
    currVolSum += vol[v];
  }
  vol[0] = abs(512-int(ADC));
  currVolSum += vol[0];
  int currVolAvg = currVolSum/60;
//  int currVolume = abs(512-int(ADC));
  int newColor0 = 0, newColor1 = 0, newColor2 = 0;
  if (currVolAvg>64) {
  if (highestPeakIndex==0) {
    newColor0 = 80;
    newColor1 = 20;
    newColor2 = 20;
  } else if (highestPeakIndex==1) {
    newColor0 = 80;
    newColor1 = 60;
    newColor2 = 20;
  } else if (highestPeakIndex==2) {
    newColor0 = 70;
    newColor1 = 70;
    newColor2 = 20;
  } else if (highestPeakIndex==3) {
    newColor0 = 30;
    newColor1 = 100;
    newColor2 = 30;
  } else if (highestPeakIndex==4) {
    newColor0 = 20;
    newColor1 = 90;
    newColor2 = 90;
  } else if (highestPeakIndex==5) {
    newColor0 = 20;
    newColor1 = 60;
    newColor2 = 90;
  } else if (highestPeakIndex==6) {
    newColor0 = 20;
    newColor1 = 20;
    newColor2 = 100;
  } else {
    newColor0 = 60;
    newColor1 = 20;
    newColor2 = 90;
  }
  }
  if (newSpark==true) {
    int led = random(nPins);
    float b = 255.0;
    brightness[led] = b;
    newSpark = false;
  }
  float starsLight = 0.0;
  for (int s=0; s<nPins; s++) starsLight += brightness[s];
  if (starsLight>0.0) auroraWait++;
  else auroraWait-=(waitThreshold/20);
  if (starsLight>0.0 && auroraWait>=waitThreshold){
    playAurora = true;
  } else {
    playAurora = false;
  }

  // Every third frame, make the peak pixels drop by 1:
  if(++dotCount >= 3) {
    dotCount = 0;
    for(x=0; x<8; x++) {
      if(peak[x] > 0) peak[x]--;
    }
  }
  if (++sparkCount>=50) {
    sparkCount = 0;
    if (newSpark==true) newSpark = false;
  }
  if (++auroraCount>=30) {
    auroraCount = 0;
    if (playAurora && newColor0+newColor1+newColor2>0) {
      color2[0] = color1[0];
      color2[1] = color1[1];
      color2[2] = color1[2];
      color1[0] = float(newColor0);
      color1[1] = float(newColor1);
      color1[2] = float(newColor2);
    }
  }
  if (color1[0]+color1[1]+color1[2]>0.0) {
    // ... (aurora LED output loop omitted)
  }
  if (++colCount >= 10) colCount = 0;
  for (int q=0; q<3; q++) {
    float newFade = float(fadeFactor)/5.0;
    float br1 = color1[q];
    br1 = (br1-newFade<=0) ? 0 : (br1-newFade);
    color1[q] = br1;
    float br2 = color2[q];
    br2 = (br2-newFade<=0) ? 0 : (br2-newFade);
    color2[q] = br2;
  }
}

ISR(ADC_vect) { // Audio-sampling interrupt
  static const int16_t noiseThreshold = 4;
  int16_t              sample         = ADC; // 0-1023

  capture[samplePos] =
    ((sample > (512-noiseThreshold)) &&
     (sample < (512+noiseThreshold))) ? 0 :
    sample - 512; // Sign-convert for FFT; -512 to +511

  if(++samplePos >= FFT_N) ADCSRA &= ~_BV(ADIE); // Buffer full, interrupt off
}

Arousal vs. Time

Arousal vs. Time from Miles Peyton on Vimeo.

Arousal vs. Time: a seismometer for arousal, as measured by facial expressions.


One way to infer inner emotional states without access to a person’s thoughts is to observe their facial expressions. As the name suggests, Arousal vs. Time is a visualization of excitement levels over time. The more you deviate from your resting expression, the more excited you are presumed to be. An interesting context for this tool is in everyday social interactions. Watching the seismometer while talking to a friend can generate insights into the nature of that relationship. It might reveal which person tends to lead the conversation, or who is the more introverted of the two. Watching a conversation unfold in this visual manner is both soothing and unsettling.
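The "deviation from resting expression" measurement can be made concrete: treat the tracker's expression features as a vector and score arousal as the distance from a stored resting baseline. A hedged sketch in C++ (the feature set and the plain Euclidean distance are assumptions for illustration; ofxFaceTracker exposes its own gesture and expression values):

```cpp
#include <cmath>
#include <vector>

// Arousal as Euclidean distance between the current expression features
// (e.g. mouth openness, eyebrow height) and a baseline captured while
// the face is at rest. Purely illustrative of the idea in the text.
double arousal(const std::vector<double>& current,
               const std::vector<double>& resting) {
    double sq = 0.0;
    for (std::size_t i = 0; i < current.size(); i++) {
        double d = current[i] - resting[i];
        sq += d * d;
    }
    return std::sqrt(sq);
}
```

Plotting this value over time, seismometer-style, gives exactly the kind of trace described above: flat while the face is at rest, spiking as the expression deviates.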


Arousal vs. Time is the latest iteration in a series of studies. After receiving useful feedback on my last foray into face tracking, I decided to rework the piece to include sound, two styrofoam heads, and text for clarity. Daito Manabe's and Kyle McDonald's face-related projects – "Face Instrument", "Happy Things" – informed the sensibility of this work.

“Face Instrument” – Daito Manabe


“Happy Things” – Kyle McDonald


A casual conversation between myself and a friend was recorded on video and in XML files. I wrote the two software components of this artwork – the seismometer and the playback mechanism – in openFrameworks 0.8. I used the following three addons:

  1. ofxXMLSettings – for recording and playing back face data
  2. ofxMtlMapping2D – projection mapping
  3. ofxFaceTracker – tracking facial expressions


The set

The projection mapping on the styrofoam heads was carried out on two laptops with two pico projectors. I stored facial data in XML files, and recorded video and audio with an HD video camera and an audio recorder.

The audio file was manipulated in Ableton Live to obscure the content of the conversation. I used chroma keying in Adobe Premiere to remove the background of the video, so that the graphs would seem to emerge from behind the heads and not from some unseen bounding box. Finally, the materials – a video file, two XML files, and an audio file – were brought together in a second "player" application, also built in openFrameworks.


Regarding a conceptual impetus for this project, I keep thinking back to a point Professor Ali Momeni made when I showed an earlier version during critique. He questioned not my craft but my language: the fact that I used the word "disingenuous" to describe my project. I still don't have a satisfying response to this, just more speculation.

Am I trying to critique self-quantification by proposing an alienating use of face tracking? Or am I making a sincere attempt to learn something about social interaction through technology? The ambivalence I feel toward the idea of self-quantification leads me to believe that it is worthwhile territory for me to continue to explore.

Stillness (Assignment 7/9)


stillness cover photo

I made a projection of virtual butterflies which will come land on you (well, your projected silhouette) if you hold still, and will fly away if you move.


This semester, a friend of mine successfully lobbied for the creation of a “Mindfulness Room” in one of the dorms on campus. The room is meant to be a place where students go to relax, meditate, and, as the name implies, be more mindful.

For my final project, I wanted to create something that was for a particular place, and so I chose the Mindfulness Room. Having tried to meditate in the past, I know it can be very challenging to clear your mind and sit entirely still for very long. So, the core of this project was to make something that would make you want to be still (and that would also fit in with the overall look and feel of the room).

Technical Aspects

Some of the technical hurdles in this project:

  • Capturing a silhouette from a Kinect cam image. I tried to DIY this initially, which didn’t go well. Instead, I ended up finding this tutorial about integrating a Kinect and PBox2D. I fixed the tutorial code so that it would run in the most recent version of Processing and with the most recent version of the SimpleOpenNI library.
  • Integrating assorted libraries: SimpleOpenNI, blobDetection, PBox2D, ToxicLibs, and standard Java libraries. I almost certainly didn’t actually need to use all of them, but figured that out too late.
  • Dealing with janky parts of those libraries (e.g., jitteriness in the blobDetection library, fussiness of SimpleOpenNI). Using the libraries made my project possible, but I also couldn’t fix some things about them. I did, however, manage to improve blob detection from the Kinect cam image by filtering out all non-blue pixels (the Kinect highlights a User in blue).
  • Trying to simulate butterflies flying—with physics. Trying to simulate a whimsical flight path using forces in PBox2D had only OK results. I think it would be easier to create their paths in vanilla Processing or with another library (though that might make collision detection far more challenging).
  • Finding a computationally cheap way to do motion tracking. When I tried simple motion tracking, my program ate all my computer’s memory and still didn’t run. I ended up taking the Kinect/SimpleOpenNI provided “Center of Mass” and using that to track motion, which worked pretty well for my purposes.

Critical Reflection

As I worked on this project, I was unsure throughout whether all the pieces (butterflies, Kinect, etc.) would come together and work well. I think they came together fairly well in the end. Even though the project right now doesn’t live up to what I imagined in my head at the beginning, it still does what I essentially wanted it to do—making you want to stay still.

When people saw the project, their general response was “that’s really cool”, which was rewarding. Also, the person in charge of the Mindfulness Room liked it enough that she wanted me to figure out how to make it work there long term. (That could be logistically difficult in terms of setup and security, because the room is always open and unsupervised, and drilling into the walls to mount things isn’t allowed.)

So, though there’s a list of things I think should be better about this project (see below), I think I managed to execute my concept simply, and did it well given that simplicity.

Things that could be better about this:

  • Butterflies’ visual appeal. Ideally, the wings would be hinged-together PBox2D objects. And antennae/other details would add a lot.
  • Butterflies’ movement. Could be more butterfly-like.
  • Attraction to person should probably be more gradual/a few butterflies at a time.
  • Code cleanliness: not good.
  • Ragged edge of person’s silhouette should be smooth.
  • Better capture of user. Sometimes the Kinect refuses to recognize a person as a User, or stops tracking it. This could have to do with how I treat the cam image, or placement, or lighting, or just be part of how I was doing Kinect/SimpleOpenNI. After talking with Golan, I think ditching OpenNI altogether and doing thresholding on the depth image would work best.






Inspired conceptually by websites like Kitten War and classic games like “Would You Rather?” and technically by projects like Post-Circuit Board by the Graffiti Research Lab, This or That is an electronic voting poster that allows passersby to vote on two different options as chosen by other strangers.


The poster consists of a voting button and seven-segment display on each side, as well as a reset button, all of which are controlled by a single ATtiny84. There are no wires on the poster besides the alligator clips connecting to the power; all the traces were made with copper tape.

While we are becoming more interconnected digitally, electronics are becoming more and more personal: our laptops and cellphones are not devices that are meant to be shared physically, and we even get physically anxious when they’re out of our reach for too long. This or That is a “public” electronic; its charm and fun come from its communal usage.

Older iteration (I’ve since learned the art of making pretty traces!).


At one point I traded my coin cell battery in for a sweet LiPo battery that someone had lying around.
chargin’ a poster whaaat

Code & Circuits


I have a .ai file (still in need of cleaning) with both the poster text and lightly drawn traces that you can use to create a poster of your own, so bear with me! 🙂 Here’s a Fritzing diagram for now:






MIT Media Lab’s High Low Tech group has a fantastic tutorial on programming ATtinies. For reference, on an ATtiny84:

  • Physical Pin 9 –> SCK
  • Physical Pin 8 –> MISO
  • Physical Pin 7 –> MOSI
  • Physical Pin 4 –> RESET

My code below has a diagram included, and each physical pin number has been labelled:

#include <Tiny_LEDBackpack.h>

/*
                       +  | 1  14 | -
                          | 2  13 |
                          | 3  12 | RIGHT VOTE BUTTON [PIN 1]
                          | 4  11 |
                          | 5  10 |
                          | 6   9 | SCLOCK
                    SDATA | 7   8 |
*/

Tiny_7segment lMatrix = Tiny_7segment();
Tiny_7segment rMatrix = Tiny_7segment();

int lCounter = 0;
int rCounter = 0;
const int lButton = 10;
const int rButton = 2;
const int rstButton = 0;
boolean rBoo = false;
boolean lBoo = false;

void setup() {
  pinMode(lButton, INPUT_PULLUP);   // buttons read LOW when pressed
  pinMode(rButton, INPUT_PULLUP);
  pinMode(rstButton, INPUT_PULLUP);
  lMatrix.begin(0x70);              // I2C addresses of the two displays
  rMatrix.begin(0x71);
}

void loop() {
  // Count a left vote on button release
  if (digitalRead(lButton) == LOW) {
    lBoo = true;
  } else if ((digitalRead(lButton) == HIGH) && (lBoo == true)) {
    lCounter += 1;
    lBoo = false;
  }

  // Count a right vote the same way
  if (digitalRead(rButton) == LOW) {
    rBoo = true;
  } else if ((digitalRead(rButton) == HIGH) && (rBoo == true)) {
    rCounter += 1;
    rBoo = false;
  }

  // Reset both tallies
  if (digitalRead(rstButton) == LOW) {
    lCounter = 0;
    rCounter = 0;
  }

  // Wrap around when a four-digit display maxes out
  if (lCounter == 9999) {
    lCounter = 0;
  } else if (rCounter == 9999) {
    rCounter = 0;
  }

  // Push the tallies to the seven-segment displays
  lMatrix.print(lCounter);
  rMatrix.print(rCounter);
  lMatrix.writeDisplay();
  rMatrix.writeDisplay();
}

If you have any questions regarding construction, feel free to email me at maddyvarner (at) gmail (dot) com (preferably with the subject line “This or That”).


Arduinolin is a project designed to investigate the evolution of material possessions under electronic trends: it takes a traditional object and recreates it as a new object which is not only a modern, electrified version of the former, but which also extends the original using the capabilities of digital media.


I decided upon a violin as my “traditional object” of choice, mainly because a stringed instrument suited my concept: the touch sensors on the gloves can be reprogrammed to play different pitches when activated. I immediately researched the evolution of the violin and the electronic violin, which is all documented in this previous blog post.


The concept was inspired by a conversation I had with my father one night. I was on the phone with him and he was talking about an app he had just purchased for his iPhone. What caught my attention was that he had actually paid over $3.00 for it – I am not in the habit of downloading an application unless it is free. That set me thinking: how many apps have you purchased? How much money have you spent on virtual material? Is it worth it? How will highly valued items which gain value as they age be transferred into the electronic world, and will that transfer ever be successful?

There is also an ecological argument accompanying the evolutionary argument which entails comparing the carbon footprint of apps and actual instruments, and how this could all eventually be handled by a single piece of technology.

Technical Aspects

The bulk of the work in this project was in figuring out how I was going to wire touch-sensitive capacitors and make them individually responsive to human touch. I had considered going with something like a pressure sensor, which would return a different value depending on where along the strip pressure was applied, but turned this down in favor of materials which could easily be translated onto conductive fabric, for the purpose of making the final product wearable. In retrospect, I greatly regret not pursuing the first option, which would have made for a much smoother transition between instruments and a greater degree of musicality.

Additionally, I was unsure of how I was going to wire everything together onto the glove. The palm of the hand alone features twenty-four wires, all interwoven into the glove itself using conductive thread. In the end I used jumper cables to transfer data from the individual pins to the glove, touching the end of each jumper cable to the thread. From there, the Arduino simply loops through each pin; for each pin it loops through each note in an array, and if the pin corresponds to the note, it plays that note. I also have some “fun” touchpads at the moment which loop through a series of notes.
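
The pin-to-note lookup described above can be sketched as a simple table scan. This is plain C++ rather than the actual glove firmware, and the pin numbers and frequencies are purely illustrative assumptions:

```cpp
// Illustrative mapping: each touch-pad pin pairs with a pitch (Hz).
// These specific pins and frequencies are hypothetical, not the
// glove's actual wiring or tuning.
const int kNumPads = 7;
const int kPadPins[kNumPads]   = {2, 3, 4, 5, 6, 7, 8};
const int kPitchesHz[kNumPads] = {262, 294, 330, 349, 392, 440, 494}; // C4..B4

// Scan the note table for a touched pin; return its pitch in Hz,
// or -1 if the pin has no note assigned.
int noteForPin(int pin) {
  for (int i = 0; i < kNumPads; i++) {
    if (kPadPins[i] == pin) return kPitchesHz[i];
  }
  return -1;
}
```

On the glove itself, the main loop would poll each pad pin and hand the matched frequency to the Piezo (e.g. via Arduino’s `tone()`), which is exactly the pin-then-note double loop the paragraph describes.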

Critical Reflection

There are many existing versions of this project, ranging in degrees of professionalism from hobbyists working out of their garages to commercially marketed products (typically designed to help one learn an instrument). I see that there are few limits to the extent to which I could improve upon my project, at least in theory. However, there are two things I would like to improve upon more than anything else.

Firstly, I am greatly irritated by the fact that although I have two gloves, the left hand representing the violin and the right hand representing the bow, the right hand does not actually need to do anything for the violin to work. Although in principle placing an accelerometer on the right glove would not be difficult, convincing the accelerometer to communicate with the Arduino might have been something of a challenge, perhaps involving a wireless shield.

Secondly, I am not impressed by the sound quality at all. I understand that it is very possible to synthesize sounds using MaxMSP, which would be a far more rewarding result than the current buzzes provided by the Piezo element. It would also be very rewarding to have a proper headphone jack to output the audio. I enjoy the personal experience my current edition supplies, but would certainly enhance this wherever possible. (Sticking a Piezo buzzer in one’s hat does not necessarily result in the best audio.)

Revolving Games


This crafty measuring device is meant to draw attention to the daily usage of revolving doors at Carnegie Mellon’s University Center building. It logs time, proximity, and rpm data, but it also incites a little competitive spirit along the way.

This project revisited our previous class assignment that utilized seven segment displays to capture an interesting measurement. In my original idea, I wanted to choose a unique and fun way to portray numbers, and what better way to do that than with rankings? The reason I chose revolving doors as my subject matter was more or less because I was interested in the calculations involved with an accelerometer.

But as I developed my idea in this assignment, I wanted to convey more useful information about my subjects, the revolving doors. The research changed its direction from “interesting calculations” to bringing attention to those mundane doors that we pass through without a second thought. And I have to thank Maddy Varner and Golan Levin for reminding me that an extra seven segment display and data logging shield were just the things I needed to accomplish this.

That said, the actual wiring of all these new devices, as well as figuring out their libraries, was the most technically demanding aspect of the project. Through this process, I came to understand JUST HOW INVALUABLE neat soldering can be. But in the end, the effort was definitely worth it. (See below for Fritzing and code.) I had some technical difficulties along the way (I seem to have jinxed technology a lot this semester), but I find the data that came from my 7 hours of installation really valuable. The animated GIF below shows the plotted data from the data logging shield, and there are clear patterns of usage for these doors. (Click the image.)


Facts and Figures:
124 people used the door in those 7 hours
The highest score was 40 rpm
There were 4 notable mishaps

Of course, I won’t forget to address the most exciting—and hazardous—part of this project: the participants. I may have underestimated the competitive spirit of college students, because I felt fear watching some of them. My project installation time was cut short because the UC staff asked me not to display the high score portion, and I personally thought things could have gotten worse, since it was finals week. On a side note, I was extremely happy with how the magic arm stabilized the box. The whole contraption was incredibly sturdy.

In conclusion, I am very satisfied with this project. Although video editing is not my strong point, I did enjoy watching over my project and seeing people have fun and express a genuine interest.

Supported in part by a microgrant from the Frank-Ratchye Fund For Art at the Frontier


Sparkfun Arduino Uno
Adafruit ADXL345 Triple Axis Accelerometer
Adafruit Assembled Data Logging Shield
Adafruit IR Distance Sensor (10-80 cm)
Adafruit 4 Digit 7 Segment Displays (0.56″)
9V Batteries

Fritzing Diagram:

Revolving Games_bb

//Revolving Games by Michelle Ma
//note the code may have been altered due to
//the WordPress syntax

#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include "RTClib.h"
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"
#include "Adafruit_Sensor.h"
#include "Adafruit_ADXL345.h"

Adafruit_7segment matrix1 = Adafruit_7segment();
Adafruit_7segment matrix2 = Adafruit_7segment();
Adafruit_ADXL345 accel = Adafruit_ADXL345(12345);

RTC_DS1307 RTC; // Real Time Clock

const int chipSelect = 10; //for data logging
const int distancePin = 0; //A0 for IR sensor

const int threshold = 100; //collect data when someone is near
const float radius = 0.65; //radius of door in meters
const float pi = 3.1415926;

File logfile;

int highScore;

void setup() {
  Serial.begin(9600);
  accel.begin();
  matrix1.begin(0x70);
  matrix2.begin(0x71);
  pinMode(chipSelect, OUTPUT);
  SD.begin(chipSelect);
  if (!RTC.isrunning()) {
    RTC.adjust(DateTime(__DATE__, __TIME__));
  }
  highScore = 0;
  createFile();
}

void createFile() {
  char filename[] = "LOGGER00.CSV";
  for (uint8_t i=0; i<100; i++) {
    filename[6] = i/10 + '0';
    filename[7] = i%10 + '0';
    if (!SD.exists(filename)) { //use the first unused name
      logfile =, FILE_WRITE);
      break;
    }
  }
  if (!logfile) {
    Serial.println("Couldn't create file");
  }
  Serial.print("Logging to: ");
  Serial.println(filename);
  if (!RTC.begin()) {
    Serial.println("RTC error");
  }
  logfile.println("TimeStamp,IR Distance,Accel (m/s^2),RPM");
}

void loop() {
  DateTime now =;
  sensors_event_t event;
  accel.getEvent(&event);
  float distance = analogRead(distancePin);
  float acceleration = event.acceleration.z;
  float rpm;
  if (distance > threshold) {
    rpm = computeRpm();
  } else {
    rpm = 0; //I chose not to display the scores
  }          //of people in other booths
  if (rpm > highScore) {
    highScore = rpm;
  }
  logData(now, distance, acceleration, rpm);
  serialData(now, distance, acceleration, rpm);
  writeMatrices(int(rpm), int(highScore));
  logfile.flush(); // Write data to SD
}

float computeRpm() {
  float velocity = computeVelocity();
  float result = abs(60.0*velocity)/(2.0*pi*radius);
  return result;
}

float computeVelocity() {
  sensors_event_t event;
  float acceleration;
  float sum = 0;
  int samples = 100;
  int dt = 10; //millis between samples
  for (int i=0; i<samples; i++) {
    accel.getEvent(&event);
    acceleration = event.acceleration.z;
    sum += acceleration * (dt / 1000.0); //integrate a*dt
    delay(dt);
  }
  return sum; //tangential velocity in m/s
}

void logData(DateTime now, float distance, float accelZ, float rpm) {
  logfile.print(now.unixtime()); logfile.print(",");
  logfile.print(distance);       logfile.print(",");
  logfile.print(accelZ);         logfile.print(",");
  logfile.println(rpm);
}

void serialData(DateTime now, float distance, float accelZ, float rpm) {
  Serial.print(now.unixtime()); Serial.print(",");
  Serial.print(distance);       Serial.print(",");
  Serial.print(accelZ);         Serial.print(",");
  Serial.println(rpm);
}

void writeMatrices(int rpm, int high) {
  matrix1.print(rpm);  //current speed
  matrix2.print(high); //high score
  matrix1.writeDisplay();
  matrix2.writeDisplay();
}
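
As a sanity check on the conversion in computeRpm() — rpm = 60·|v| / (2πr) — here is the formula on its own in plain C++ (the 2.72 m/s figure in the usage note is my back-calculation, not a logged value):

```cpp
#include <cmath>

// Convert tangential velocity v (m/s) at door radius r (m) into
// revolutions per minute: rpm = 60 * |v| / (2 * pi * r).
double rpmFromVelocity(double velocityMs, double radiusM) {
  const double pi = 3.1415926;
  return std::fabs(60.0 * velocityMs) / (2.0 * pi * radiusM);
}
```

At the project’s door radius of 0.65 m, the recorded high score of 40 rpm works out to pushing the door’s edge at roughly 2.7 m/s — a brisk sprint through a revolving door.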

Fish in a Pond Documentation

Dave Final Project – Fish in a Pond


A fish which learns, via machine learning, what kind of melodies the user likes, and plays them.

I was inspired by my research professor’s project “Simstudent”, in which a human student walks a computer Simstudent through the steps of algebra problems, from which the Simstudent will learn via machine learning. While I was testing it, I found I greatly enjoyed teaching and watching my Simstudent succeed on the problems that once baffled it. Thus, I started off looking for ways to use machine learning as the backend of my project. Music seemed like a good idea, so I went for it, despite having zero experience.

I used Weka’s implementation of the ADTree algorithm as my backbone. I represented a melody as an array of 10 notes, limited to 7 pitches, as recommended by Professor Richard Randall. The user can rate a melody played by either the fish or the user as favorable or unfavorable, from which the fish learns. My thoughts on the frontend revolved around a fish, because fish don’t seem like creatures likely to play music, so I implemented it as such.
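
The representation described above — a melody as 10 notes drawn from 7 pitches, labeled favorable or unfavorable — maps naturally onto a classifier training instance. Here is a minimal sketch in plain C++ (the project itself fed Weka’s ADTree in Java; the comma-separated line format below is illustrative, not Weka’s exact ARFF syntax):

```cpp
#include <array>
#include <sstream>
#include <string>

const int kMelodyLen  = 10; // notes per melody
const int kNumPitches = 7;  // allowed pitch values: 0..6

// One rated melody: the 10 note attributes plus the user's rating,
// which serves as the class label for the learner.
struct Melody {
  std::array<int, kMelodyLen> notes; // each in [0, kNumPitches)
  bool favorable;
};

// Encode a rated melody as one comma-separated training line,
// with the rating as the final (class) attribute.
std::string toInstance(const Melody& m) {
  std::ostringstream out;
  for (int n : m.notes) out << n << ",";
  out << (m.favorable ? "favorable" : "unfavorable");
  return out.str();
}
```

Each rating the user gives appends one such instance to the training set, and the fish retrains its tree on the accumulated lines.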

In hindsight, music is probably not the best domain for me. I do not have the skills to hear the musical structures in the melodies that were created; this, along with the fact that whether a melody sounds good or bad to a user is highly subjective, made me unable to statistically confirm whether the algorithm is robust. I did, however, have a musically gifted friend play around with it, and after around 15 trials he claimed that the fish picked up on a structure he had played. I also managed to train my fish to know that it must play the last note at a low pitch; if that counts as good melodies in my heart, then I am successful.

Source code can be found in this repository: