Alan

09 May 2013

Title: AQI China / Author: Han Hua
Abstract: Real-time visualization of air quality in China, showing air pollution in major cities.
http://tranquil-basin-8669.herokuapp.com/

Introduction
The project aims to show people in China what air pollution is actually like every day, against what the government claims. It demonstrates a real-time visualization of air quality in major Chinese cities, broken down by air pollutant. The data comes from PM25.in by BestApp.us, which converges data from diverse sources such as the US Embassy in Beijing, the EPA, and the Ministry of Environmental Protection of China.
The project is hosted on Heroku.
Github Repository: https://github.com/hhua/PM2.5

An astonishing sandstorm struck Beijing this spring and devoured everything in sight. The PM2.5 reading climbed to 516, beyond the range of the standard air quality index, whose highest measurable value is 500. What is more astonishing is that the government tried to conceal these important numbers. It even accused foreign news agencies and government departments of spreading “fake news” to the Chinese public. With this campaign the government lost the public’s trust, but that is not enough: the public has the right to know the real data, and data so closely tied to people’s lives should circulate freely.
Thanks to the BestApp.us lab in Guangzhou, China, I was able to launch this project to collect air quality data for 76 major cities in China, provide a better way of visualizing it, and spread it to more people in China.
The data sources for this project are merged by the BestApp lab; they are collected from the US Embassy in Beijing, the EPA, the Ministry of Environmental Protection of China, and others, so the data may contain some flaws. Pressure should always stay on the government to answer why air pollution is so severe and how much the past 30 years of economic growth have really cost.
Implementation
The project is implemented entirely in JavaScript. The first version was built with D3.js; I later switched to heatmap.js and Leaflet. The server runs on Node.js and fetches fresh data every 30 minutes.
The map visualization is zoomable, and every view can be filtered by air pollutant. There is also a table for people who want the specific numbers for each city.
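A minimal sketch of that 30-minute fetch loop, shown in TypeScript for Node.js; the endpoint path and response fields here are illustrative assumptions, not the actual PM25.in API:

    // Minimal sketch of the 30-minute fetch loop (TypeScript / Node.js).
    // Endpoint and response shape are illustrative assumptions.
    import * as https from "https";

    interface CityReading { city: string; aqi: number; pm2_5: number; }

    let latest: CityReading[] = []; // served to the visualization

    function fetchAirQuality(): void {
      https.get("https://pm25.in/api/querys/all_cities.json", (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => {
          try {
            latest = JSON.parse(body) as CityReading[];
          } catch {
            // Bad JSON from the feed: keep the previous readings.
          }
        });
      }).on("error", () => {
        // Network error: wait for the next interval and try again.
      });
    }

    fetchAirQuality();                            // once at startup
    setInterval(fetchAirQuality, 30 * 60 * 1000); // then every 30 minutes

Keeping the last good snapshot in memory means a flaky upstream feed degrades to stale data rather than an empty map.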

Joshua

25 Apr 2013


Looking to 3D print and then investment cast.

Marlena

15 Apr 2013

I still think I have some hurdles. I’ve gotten a lot done, though.

Here’s what I have: clouds, basic flocking, some models for fish and the ship, and a couple miscellaneous other items and scripts. I’ve been focusing mostly on new modeling techniques, researching clouds and shaders, and animation.

Here’s what I still need to do: more complex flocking, finished models, a few spawning scripts, and more complex animations. Mostly bulk work I think; I need to put in the hours and it’ll get done.

Sam

15 Apr 2013

Most of the last week’s work on GraphLambda has been spent porting from the Processing environment to Eclipse and implementing various under-the-hood optimizations. Accordingly, the visible parts of the application look very similar to the last incarnation. The main exception here is the text-editing panel, which now provides an indication that it is active, supports cursor-based insertion editing, and turns red when an invalid string is entered.

Accomplishments

  • Eclipse!
  • Mysterious crash on long input strings has vanished
  • Drawing into buffers only when there is a change, so I’m not slamming the CPU every frame (see the sketch after this list)
  • Working entirely from absolute coordinates now
  • Real insertion-based editing of the lambda expression, complete with cursor indicator
  • Context switching from drawing to text editing
  • “Tab” switching between different top-level expressions
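
On the buffering bullet above: the usual pattern is a dirty flag, so the expensive redraw happens only when the expression changes. A minimal, language-agnostic sketch (in TypeScript here, though GraphLambda itself is Processing/Java; all names are illustrative):

    // Redraw into an offscreen buffer only when something changed;
    // every frame just blits the cached buffer.
    interface OffscreenBuffer { render(expr: string): void; blit(): void; }

    class CachedLayer {
      private dirty = true;
      private expr = "";

      constructor(private buffer: OffscreenBuffer) {}

      setExpression(next: string): void {
        if (next !== this.expr) { // only a real change marks us dirty
          this.expr = next;
          this.dirty = true;
        }
      }

      drawFrame(): void {
        if (this.dirty) {         // expensive path, taken rarely
          this.buffer.render(this.expr);
          this.dirty = false;
        }
        this.buffer.blit();       // cheap path, taken every frame
      }
    }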

The biggest issue that still remains is distributing the various elements of the drawing so that the logical flow of the expression is clear. Once this is done, the drawing interface must be implemented, including a method of selection highlighting.

TODOs

  • Overlap minimization
  • Lay in the rest of the user interface
  • Selection and selection highlighting
  • Implement tools
  • Named inclusion of defined expressions
  • Pan and zoom drawing window

Keqin

15 Apr 2013

This is my sketch for the final project. I’m making a physical prototype for the device worn on the hand.

Next, I need to build a prototype that gives feedback to people and write the code for the feedback system.

Anna

14 Apr 2013

Since you last encountered me, I’ve been working on figuring out TUIO and ControlP5, and have a few basic things working, but I still haven’t gotten two basic issues out of the way: 1) How do I make a construct that reliably holds the object IDs for the active fiducials? 2) How do I prevent more than 3 characters from being selected at the same time? Both of these issues seem like they should be simple, solved problems, but I can’t find anything useful on the internet.
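
For the first question, one plausible shape is a map keyed by each fiducial’s session ID, maintained from the TUIO add/remove callbacks; the same structure makes the 3-character cap a one-line check. A sketch (TypeScript for illustration; the project itself is Processing, and the TuioObject shape here is an assumption):

    // Track active fiducials by session ID and cap selection at 3.
    interface TuioObject { sessionId: number; symbolId: number; }

    const active = new Map<number, number>(); // sessionId -> symbolId
    const MAX_SELECTED = 3;

    function addTuioObject(obj: TuioObject): void {
      if (active.size >= MAX_SELECTED) return; // ignore a 4th fiducial
      active.set(obj.sessionId, obj.symbolId);
    }

    function removeTuioObject(obj: TuioObject): void {
      active.delete(obj.sessionId); // frees a slot for the next one
    }

Keying on the session ID (unique per appearance) rather than the symbol ID (the printed pattern) is what keeps the bookkeeping reliable when a marker leaves and re-enters the table.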

In any case, check out some awesome screenshots from my recent tinkering!

Joshua

14 Apr 2013

The goal is to ‘grow’ a mesh using DLA methods: a bunch of particles moving in a pseudo-random walk (biased to move down) fall towards a mesh. When a particle, which has an associated radius, intersects a vertex of the mesh, that vertex moves outward in the normal direction, meaning perpendicular to the surface (always an approximation for a mesh), by a small amount. Long edges get subdivided; short edges get collapsed away. The results are rather spiky. It would be better if they were smoother, or maybe it just needs to run longer.
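
A minimal sketch of that growth step (TypeScript for illustration; the vector types, constants, and brute-force vertex scan are assumptions, and edge subdivision/collapse is left out):

    // Sketch of one DLA growth step: walk a particle down with jitter,
    // and push any vertex it touches outward along its normal.
    type Vec3 = { x: number; y: number; z: number };
    interface Vertex { pos: Vec3; normal: Vec3; }

    const PARTICLE_RADIUS = 0.05; // assumed
    const GROW_AMOUNT = 0.01;     // assumed

    function step(particle: Vec3, vertices: Vertex[]): boolean {
      // Pseudo-random walk, biased downward.
      particle.x += (Math.random() - 0.5) * 0.02;
      particle.y -= 0.01 + Math.random() * 0.01;
      particle.z += (Math.random() - 0.5) * 0.02;

      for (const v of vertices) {
        if (dist(particle, v.pos) < PARTICLE_RADIUS) {
          // Move the touched vertex outward along its (approximate) normal.
          v.pos.x += v.normal.x * GROW_AMOUNT;
          v.pos.y += v.normal.y * GROW_AMOUNT;
          v.pos.z += v.normal.z * GROW_AMOUNT;
          return true; // particle is consumed; spawn a new one
        }
      }
      return false;
    }

    function dist(a: Vec3, b: Vec3): number {
      return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
    }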

Ideas for making it smoother:

  1. have the neighbors of a growing vertex also grow, by an amount proportional to the distance to the growing vertex (see the sketch after this list). One could say the vertices share some nutrients in this scenario, or perhaps that a given particle is a sort of vague approximation of where some nutrients will land.
  2. let the particles have no radius. Instead, give each vertex an associated radius (a sphere around each vertex) which captures particles. This sphere could be at the vertex location, or offset a little way along the vertex normal; it could be considered the ‘mouth’ of the vertex.
  3. maybe let points fall on the mesh, find where those points intersect the mesh, and then grow the vertices nearby. Or perhaps vertices that share the intersected face.
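
Reusing the types above, idea 1 might look like the following, under the reading that the shared growth falls off with distance, so closer neighbors receive more of the ‘nutrients’:

    // Idea 1 sketch: a hit vertex shares growth with its neighbors.
    const SHARE_RADIUS = 0.15; // assumed influence radius

    function growWithNeighbors(hit: Vertex, neighbors: Vertex[]): void {
      grow(hit, GROW_AMOUNT);
      for (const n of neighbors) {
        const d = dist(hit.pos, n.pos);
        // Linear falloff: full share at distance 0, none at SHARE_RADIUS.
        const share = GROW_AMOUNT * Math.max(0, 1 - d / SHARE_RADIUS);
        grow(n, share);
      }
    }

    function grow(v: Vertex, amount: number): void {
      v.pos.x += v.normal.x * amount;
      v.pos.y += v.normal.y * amount;
      v.pos.z += v.normal.z * amount;
    }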

Ideas for making it faster:

  1. Discretize space for moving particles about. This might require going back and forth between mesh and voxel space (discretized space is split up into ‘voxels’, i.e. volumetric pixels); see the sketch after this list.
  2. moving the spawning plane up as the mesh grows so that it can stay pretty close to the mesh
  3. more efficient testing for each particle’s relationship to the mesh (distance or something)
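
Ideas 1 and 3 can be combined with a voxel hash over the vertices, so each particle only tests the vertices in its own cell and the 26 neighboring cells. A sketch, again reusing the types above (the cell size is an assumed value and should be at least the particle radius):

    // Voxel hash: bucket vertices by cell so particle tests stay local.
    const CELL = 0.1;

    function cellKey(x: number, y: number, z: number): string {
      return `${Math.floor(x / CELL)},${Math.floor(y / CELL)},${Math.floor(z / CELL)}`;
    }

    function buildGrid(vertices: Vertex[]): Map<string, Vertex[]> {
      const grid = new Map<string, Vertex[]>();
      for (const v of vertices) {
        const k = cellKey(v.pos.x, v.pos.y, v.pos.z);
        let bucket = grid.get(k);
        if (!bucket) { bucket = []; grid.set(k, bucket); }
        bucket.push(v);
      }
      return grid;
    }

    // Gather candidates from the particle's cell and its 26 neighbors.
    function nearbyVertices(grid: Map<string, Vertex[]>, p: Vec3): Vertex[] {
      const out: Vertex[] = [];
      for (let dx = -1; dx <= 1; dx++)
        for (let dy = -1; dy <= 1; dy++)
          for (let dz = -1; dz <= 1; dz++)
            out.push(...(grid.get(cellKey(p.x + dx * CELL, p.y + dy * CELL, p.z + dz * CELL)) ?? []));
      return out;
    }

The grid has to be refreshed as vertices move and edges split, which is where the back-and-forth between mesh and voxel space comes in.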


This run came out pretty branchy, but it’s really jagged, and the top branches get rather thin and plate-like. This is why growing neighboring vertices (sharing nutrients) might be better.

Michael

14 Apr 2013


I’ve switched focus a bit since the group discussion on project ideas.  I’m still focusing on the SDO imagery and displaying multiple layers of the sun time lapse, but I’ve decided that a more interesting approach to the project is to explore how people process different images of the sun when viewed through different eyes.  The first technical challenge to this is to create a way to view multiple Time Machine time lapses side by side.  I’ve managed to learn the Time Machine API and I’ve reworked a few things as follows:

A) Two time lapses display side-by-side at a proper size to be viewed through a stereoscope

B) Single control bar stretched beneath two time lapse windows

C) Videos synchronize on play and pause.  (Synchronization on time change or position change still results in jittery performance)

The synchronization needs a bit of work still, and then comes the time to work on the interface a bit more to support changing the layers in an intelligent and intuitive way.  I need to figure that out a bit and make some sketches.
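
The play/pause mirroring is essentially two event handlers with a guard flag to stop the echo loop. A generic sketch in TypeScript (the TimelapsePlayer interface below is an assumption, not the actual Time Machine API):

    // Keep two time-lapse players in step on play/pause.
    interface TimelapsePlayer {
      play(): void;
      pause(): void;
      on(event: "play" | "pause", handler: () => void): void;
    }

    function syncPair(left: TimelapsePlayer, right: TimelapsePlayer): void {
      let broadcasting = false; // guard against event echo loops

      const mirror = (from: TimelapsePlayer, to: TimelapsePlayer) => {
        from.on("play", () => {
          if (broadcasting) return;
          broadcasting = true;
          to.play();
          broadcasting = false;
        });
        from.on("pause", () => {
          if (broadcasting) return;
          broadcasting = true;
          to.pause();
          broadcasting = false;
        });
      };

      mirror(left, right);
      mirror(right, left);
    }

Seeks are harder to mirror because each one fires its own events and the two videos buffer at different rates; the same guard flag plus a small time tolerance before re-seeking is one way to tame the jitter.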

Keqin

09 Apr 2013

There are many Kinect and computer vision systems now, but both kinds offer a very virtual experience. I want to give the users of these systems some real, physical sensation. There are already some feedback devices for Kinect/CV systems, such as the haptic Phantom; here is a video:

But I think that is a little limiting for users: they must hold a pen to feel things in the virtual world. So I am thinking of making a more natural form of feedback. The basic idea is a wearable device on the back of the hand; when you push against something virtual, it gives you some feedback. It won’t stop your movement, it will just tell you: hey, there’s something in front of you. I’m planning to use a motor as the engine for the feedback part and a Kinect to detect people’s movement. Just a simple haptic thing for the Kinect for now, and later maybe the more complicated exoskeleton idea Golan told me about.
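
One minimal way to wire that up: read the hand position each frame, compute its distance to the nearest virtual surface, and map that to motor strength. A sketch in TypeScript; the stubs stand in for a real Kinect reader and a serial link to the motor driver, and all thresholds are assumed values:

    // Sketch: map hand-to-surface distance to vibration motor strength.
    function getHandPosition(): { x: number; y: number; z: number } {
      return { x: 0, y: 0, z: 0.9 }; // stub: replace with Kinect skeleton data
    }
    function sendMotorLevel(level: number): void {
      console.log(`motor PWM = ${level}`); // stub: replace with a serial write
    }

    const TOUCH_DIST = 0.02; // meters: full-strength feedback
    const FEEL_DIST = 0.15;  // meters: feedback starts ramping up

    function distanceToSurface(p: { x: number; y: number; z: number }): number {
      // Placeholder: a flat virtual wall at z = 1.0 m from the sensor.
      return Math.abs(p.z - 1.0);
    }

    setInterval(() => {
      const d = distanceToSurface(getHandPosition());
      // Closer hand -> stronger vibration; zero outside FEEL_DIST.
      const t = Math.min(1, Math.max(0, (FEEL_DIST - d) / (FEEL_DIST - TOUCH_DIST)));
      sendMotorLevel(Math.round(t * 255));
    }, 33); // ~30 Hz, matching the Kinect frame rate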