# miyehn-Mocap

Finally….

For reference, below is the video from which Claire helped me get OpenPose data.

My sketches are basically just me labeling coordinate points and calculating array indices, so I'm not going to show them here. But I did have the idea in mind from the beginning, which is that I want to make abstractions of skating figures, especially the "speedy" feel of it, i.e. how the skater dances and moves smoothly on the ice.

The OpenPose data was nice but imperfect. First of all, the data file was huge: 46MB (yes, one single JSON file). So I first wrote a simple program to break it up into 9 segments, each containing poses for ~1000 frames. Then, when I first visualized the raw data, it looked like this:
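The splitting step is simple enough to sketch. Below is a minimal Python version of the idea (I don't know what language or filenames the original splitter used; `split_json` and the `-NN.json` naming are my own):

```python
import json

def split_json(path, chunk_size=1000):
    """Split one huge JSON array of per-frame poses into smaller segment files."""
    with open(path) as f:
        frames = json.load(f)  # one big list, one entry per frame
    for i in range(0, len(frames), chunk_size):
        segment = frames[i:i + chunk_size]
        # e.g. poses.json -> poses-00.json, poses-01.json, ...
        out = "%s-%02d.json" % (path.rsplit(".", 1)[0], i // chunk_size)
        with open(out, "w") as f:
            json.dump(segment, f)
```

Each segment file stays small enough to load comfortably in a Processing sketch with `loadJSONArray()`.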

Some body joints aren't recognized and come out as zero vectors. There are also frames where audience members are recognized, so we see flashes of figures in the background. So I wrote a second small program, which does the following:

• Pick out the 3 largest recognized poses in each frame, and choose the one with the best spatial continuity compared to the pose chosen from the last frame by the same rules.
• Linearly fill in the "holes" in the data. For example, if the x coordinates of a joint from frames 5 to 10 are 6, 8, 0, 0, 0, 12, after running the program they become 6, 8, 9, 10, 11, 12.
• Save the processed data in a slightly different structure for easier access later.
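The hole-filling step above can be sketched in a few lines. This is a Python reconstruction of just that step, under the assumption that a missing sample is exactly 0 and that only gaps with known values on both sides get filled (the function name is mine; the real program also handles the pose-selection rule):

```python
def fill_holes(values):
    """Linearly interpolate runs of zeros between two known samples."""
    vals = list(values)
    i = 0
    while i < len(vals):
        if vals[i] == 0:
            j = i
            while j < len(vals) and vals[j] == 0:
                j += 1  # find the end of the run of zeros
            # only fill if the gap has known neighbors on both sides
            if i > 0 and j < len(vals):
                lo, hi = vals[i - 1], vals[j]
                for k in range(i, j):
                    vals[k] = lo + (hi - lo) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return vals
```

With the example from the text, `fill_holes([6, 8, 0, 0, 0, 12])` gives `[6, 8, 9, 10, 11, 12]`.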

After this processing, the visualization looks much cleaner:

Although there are still some imperfections, at that point I think I'm just going to stop…. Among the more than 8000 frames, I can already find a segment long enough for this assignment. Perhaps later, when I have more time, I'll find a better way to fix the glitches.

Below are examples of poses that are still problematic. In the first one, there's probably a very prominent audience member (or coach) who's recognized as the skater. In the second one… I know it's hard to recognize all the joints when the skater is doing fancy spins.

Then I wrote another small program (……), which analyzes the original video frames and extracts a pretty good set of camera-movement data from the video. It basically takes the part of the frame inside the top box that doesn't overlap with the box containing the skater, then compares the same region in the previous frame, but at different offsets, ranging from (-20px, -20px) to (20px, 20px), and finds the offset that produces the smallest average color difference. It then saves that offset into a JSON file.
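That search is a brute-force block matching. Here is a minimal NumPy sketch of the idea, working on grayscale frames for simplicity (the actual program compared color frames; `camera_offset` and the `region` tuple are my own naming):

```python
import numpy as np

def camera_offset(prev, cur, region, max_shift=20):
    """Estimate camera motion by brute-force block matching.

    prev, cur: consecutive grayscale frames as 2-D float arrays.
    region:    (y0, y1, x0, x1) bounds of the reference patch in `cur`,
               chosen to avoid the skater and the featureless ice.
    Returns the (dx, dy) offset into `prev` with the least average difference.
    """
    y0, y1, x0, x1 = region
    patch = cur[y0:y1, x0:x1]
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = prev[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            if cand.shape != patch.shape:
                continue  # shifted window fell off the frame
            err = np.abs(cand - patch).mean()  # average color difference
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best
```

With a ±20px search this is 41×41 = 1681 candidate offsets per frame, which is slow but perfectly fine as a one-time preprocessing pass.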

I excluded most of the ice because there's not much color difference there to give me an accurate indication of motion. I also excluded the Olympics model, the score indicator in the top-left corner (not shown in this screenshot), and the wildly moving figure that could distract the program.

The 4th program finally visualizes the data that I spent so long obtaining and cleaning up. I smoothed out the motion vectors and joint coordinates, connected the joints with some extra lines to make it look less like a plain stick figure, and added trails, driven by the obtained motion vectors, to the skater's hands and feet to indicate her motion. Since the figure is still a little shaky even after smoothing, I also added some randomness to the trails by her feet, so they at least look coherent.

Below is my utility code. It can't be run as-is because it needs to load the JSON data, but it's what I had. (The programs overlap, so it looks like a lot, but it's actually not too bad.)

Code for splitting that huge json file

Code for visualizing raw OpenPose data

Code for removing audiences and fixing glitches

Code for extracting camera motion from original video

And below is my code for smoothing out the data and displaying it. (Again, it overlaps with the code in the links above.)

```
JSONArray data;
PVector[][] POSES;       // [frame][joint] positions from OpenPose
JSONArray loadedMotion;
PVector[] motion;        // per-frame camera motion

int frames = 280;
int current;
int offset = 100;        // start 100 frames into the segment

PVector[][] rrs;         // circular buffers of recent trail positions
float[] opacity;
int ind;
int len = 30;            // trail length

float x = 0;
float y = 0;

void setup() {
  size(640, 360);
  data = loadJSONArray("clearZeros-01.json");

  loadedMotion = loadJSONArray("motion12.json");
  smoothMotion();
  saveMotion();
  fixMotion();

  POSES = new PVector[data.size()][18];
  for (int i = 0; i < data.size(); i++) {
    JSONArray pose = data.getJSONArray(i);
    for (int j = 0; j < 18; j++) {
      x = pose.getJSONArray(j).getFloat(0);
      y = pose.getJSONArray(j).getFloat(1);
      POSES[i][j] = new PVector(x, y);
    }
  }
  rrs = new PVector[4][len];
  opacity = new float[len];
  for (int i = 0; i < rrs.length; i++) {
    for (int j = 0; j < len; j++) {
      rrs[i][j] = new PVector();
      if (i == 0) opacity[j] = 0;
    }
  }

  frameRate(20);
  fill(20);
  stroke(0);
}

// fade the trail: older samples get lower opacity
void updateTail() {
  int i = prev(ind);
  int j = 0;
  while (i != ind) {
    opacity[i] = map(j, len, 0, 0, 140);
    i = prev(i);
    j++;
  }
}

// circular-buffer index helpers
int prev(int i) {
  if (i == 0) return len - 1;
  else return i - 1;
}

int next(int i) {
  if (i == len - 1) return 0;
  else return i + 1;
}

// copy the loaded motion JSON into PVectors
void saveMotion() {
  motion = new PVector[loadedMotion.size()];
  for (int i = 0; i < loadedMotion.size(); i++) {
    JSONArray m = loadedMotion.getJSONArray(i);
    motion[i] = new PVector(m.getFloat(0), m.getFloat(1));
  }
}

// manually patch a few glitchy ranges of the motion data
void fixMotion() {
  for (int i = 50; i < 80; i++) {
    if (motion[i].y > 5) motion[i].y = 0;
    if (motion[i].x > 5) motion[i].x = motion[i - 1].x;
  }
  for (int i = 230; i < 280; i++) {
    if (motion[i].y > 5) motion[i].y = motion[i - 1].y;
  }
}

void trail(float x, float y, int index) {
  updateTail();
  rrs[index][ind] = new PVector(x, y);
  int tmp = prev(ind);
  int counter = 0;
  PVector acc = new PVector(0, 0);  // accumulated camera motion along the trail
  if (index < 2) {
    // first two trails: fading dots offset by camera motion
    while (tmp != ind) {
      float x0 = rrs[index][tmp].x;
      float y0 = rrs[index][tmp].y;
      if (current - counter >= 0) acc = PVector.add(acc, motion[current - counter]);
      fill(0, opacity[tmp]);
      noStroke();
      ellipse(x0 + acc.x / 2, y0 + acc.y / 2, 2, 2);
      tmp = prev(tmp);
      counter++;
    }
  } else {
    // last two trails: short randomized strokes along the motion vector
    while (tmp != ind) {
      float x0 = rrs[index][tmp].x;
      float y0 = rrs[index][tmp].y;
      if (current - counter >= 0) acc = PVector.add(acc, motion[current - counter]);
      stroke(0, opacity[tmp]);
      if (current - counter >= 0 && motion[current - counter].y < 11 && counter % 2 == 0)
        line(x0 + acc.x, y0 + acc.y,
             x0 + acc.x + motion[current - counter].x * 3 * random(0.5, 3),
             y0 + acc.y + motion[current - counter].y * 3 + random(-20, -5));
      tmp = prev(tmp);
      counter++;
    }
  }
}

void draw() {
  scale(0.667);
  background(245);

  strokeWeight(1);
  current = (frameCount - 1) % frames;
  PVector[] pose = POSES[current + offset];
  stroke(0, 180);
  display(pose);
  trail(pose[8].x / 2, pose[8].y / 2, 0);
  trail(pose[11].x / 2, pose[11].y / 2, 1);
  trail(pose[16].x / 2, pose[16].y / 2, 2);
  trail(pose[17].x / 2, pose[17].y / 2, 3);
  ind = next(ind);
  //displayLoadedMotion();
}

// debug view: draw the current camera-motion vector from the center
void displayLoadedMotion() {
  stroke(255, 0, 0);
  line(width / 2, height / 2,
       width / 2 + motion[current].x * 2, height / 2 + motion[current].y * 2);
  ellipse(width / 2 + motion[current].x * 2, height / 2 + motion[current].y * 2, 5, 5);
}

// 4-frame moving average over the camera-motion samples
void smoothMotion() {
  for (int i = 3; i < loadedMotion.size(); i++) {
    JSONArray m0 = loadedMotion.getJSONArray(i - 3);
    JSONArray m1 = loadedMotion.getJSONArray(i - 2);
    JSONArray m2 = loadedMotion.getJSONArray(i - 1);
    JSONArray m3 = loadedMotion.getJSONArray(i);
    float sumX = m0.getFloat(0) + m1.getFloat(0) + m2.getFloat(0) + m3.getFloat(0);
    float sumY = m0.getFloat(1) + m1.getFloat(1) + m2.getFloat(1) + m3.getFloat(1);
    JSONArray m4 = new JSONArray();
    m4.setFloat(0, sumX / 4);
    m4.setFloat(1, sumY / 4);
    loadedMotion.setJSONArray(i, m4);
  }
}

// draw the skeleton, plus extra lines so it looks less like a plain stick figure
void display(PVector[] A) {
  //connect(A[0], A[1]);
  connect(A[1], A[2]);
  connect(A[2], A[3]);
  //connect(A[3], A[4]);
  connect(A[2], A[5]);
  connect(A[5], A[6]);
  connect(A[5], A[9]);
  connect(A[6], A[7]);
  connect(A[7], A[8]);
  connect(A[5], A[12]);
  connect(A[5], A[13]);
  connect(A[5], A[9]);
  connect(A[9], A[10]);
  connect(A[10], A[11]);
  connect(A[12], A[13]);
  connect(A[12], A[14]);
  connect(A[13], A[15]);
  connect(A[14], A[16]);
  connect(A[15], A[17]);

  connect(A[6], A[12]);
  connect(A[9], A[13]);
  connect(A[12], A[15]);
  connect(A[13], A[14]);
  connect(A[7], A[9]);
  connect(A[6], A[10]);
  connect(A[8], A[6]);
  connect(A[9], A[11]);
  connect(A[12], A[16]);
  connect(A[13], A[17]);
  connect(A[1], A[3]);
  connect(A[1], A[5]);
  connect(A[3], A[5]);
}

void connect(PVector a, PVector b) {
  line(a.x * 0.5, a.y * 0.5, b.x * 0.5, b.y * 0.5);
}
```