Kyin and Weija – Final Documentation

 

A Handprint & Footprint Recognizer

Introduction:

We created a handprint and footprint recognition system by applying machine learning techniques to data from a Sensel Morph pressure interface.

We (Kyin and Weija, sophomores at CMU) were really inspired by the potential of the Sensel Morph, a pressure-sensing module. With the Sensel Morph we can capture not only the physical shapes and surfaces of the objects that touch the sensor, but also the distribution of pressure across them. With this idea in mind, we wanted to investigate how “identifiable” certain parts of our bodies are from others. The project was in part an exploration of machine learning, and in part a search for new use cases to project onto it.

Creating the Classifier 

Tools

  • Sensel Morph – The Sensel Morph is a pressure interface that is still in its early stages of production. It is a relatively small tablet with approximately 20,000 pressure-sensitive sensors on its surface. More information on how the Sensel Morph is used can be found on their website.
  • PyTorch – PyTorch is a Python library used for training neural networks. A neural network is essentially a probabilistic model that is fed data and returns predictions. It is trained on data, which in our case is images of hands/feet labeled with each person’s name. From that training data, the network can then “guess” the labels of new test images. PyTorch is especially nice in that it offers several options for accelerating training on graphics cards, which greatly reduced the overall time it took us to train the model (see the short sketch after this list).
  • openFrameworks – This is what ultimately tied our project together. There is a robust GitHub repository with a Sensel Morph openFrameworks addon; we bootstrapped onto this source code and adapted it to save training data and run the neural net in real time.
  • MeteorJS – For our website, we used a combination of MeteorJS and p5.js to create our probability visualization. Meteor is a great web framework for rapid prototyping.
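
To give a concrete (if simplified) picture of the GPU speedup mentioned in the PyTorch bullet, here is a minimal sketch, not our actual training script; the model and sizes are placeholders, and the .to(device) calls are the only real point:

    import torch
    import torch.nn as nn

    # Pick a graphics card if one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Toy model and batch, just to show where .to(device) goes; sizes are illustrative.
    model = nn.Linear(128, 4).to(device)      # the weights now live on the chosen device
    batch = torch.randn(16, 128).to(device)   # so does this batch of fake inputs
    scores = model(batch)                     # the forward pass runs on that device too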

Training

In order to train a robust neural net, we needed several hundred photos. Fortunately, since the event we were recording is fairly short, repeating it over and over wasn’t too hard. To further expedite the process, we made our script automatically save an image whenever it detects that the event has “ended”. Across four people we gathered about 400 sample images, and after removing poor samples we ended up with about 300. In hindsight we probably should have trained on many more samples, but for the purposes of the assignment, 300 was more than sufficient.
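
The auto-save behaviour boils down to a simple “end of touch” heuristic. Our actual capture code lives in the openFrameworks addon (C++), but a rough Python sketch of the idea, with the threshold and helper names as assumptions rather than our real values, looks like this:

    import numpy as np

    PRESSURE_THRESHOLD = 5.0  # total force (illustrative units) that counts as "touching"

    def capture_samples(frame_stream, save_image):
        """Accumulate frames while something is pressing on the sensor,
        then save one image when the press ends."""
        frames = []
        for frame in frame_stream:              # frame: 2-D array of per-sensel pressure
            if frame.sum() > PRESSURE_THRESHOLD:
                frames.append(frame)            # the touch event is still in progress
            elif frames:
                # Pressure has dropped back to ~zero: the event has "ended", so write
                # out the peak pressure seen at each sensel as one training image.
                save_image(np.maximum.reduce(frames))
                frames = []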

With PyTorch, we were able to implement a convolutional neural network in Python and build a model using the 1200 training images that were collected. Here is more theoretical and technical information on Convolutional Neural Networks.
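
For readers who want something concrete, a small PyTorch convolutional classifier in the spirit of ours might look like the sketch below; the layer sizes, the 105×185 input, and the 4-way output are illustrative rather than our exact architecture:

    import torch
    import torch.nn as nn

    class PrintClassifier(nn.Module):
        """Tiny CNN mapping a single-channel pressure image to 4 class scores."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),   # makes the head independent of input size
                nn.Flatten(),
                nn.Linear(32, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = PrintClassifier()
    scores = model(torch.randn(1, 1, 105, 185))  # one fake pressure frame -> 4 scores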

Special thanks to our lovely volunteers! (Photos shared with permission)

(From left to right: Jason, Kristin, Rohan, Changning)

In addition to the handprints and footprints, we wanted to create an interactive exhibit for the final presentation. Since we couldn’t have our four trained volunteers stand by the exhibit for hours, we decided to retrain our neural net on inanimate objects so that visitors could try classifying objects themselves.

Results

Here are the outputs from some preliminary testing. When training a neural net there is a concept called “epochs”, which is essentially how many passes over the training data we make, i.e. how long we train for. If we train for too long we suffer from overfitting, where the model becomes so tuned to the training data that it fails to recognize any external picture that isn’t nearly identical to something it has already seen. To get the best results, we therefore had to play around with how long to train. Overall, the results were around 85%–90% accuracy, which exceeded our expectations.
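
In code, “playing around with how long to train” just means tuning the number of epochs and keeping the weights from whichever epoch does best on held-out prints. The loop below is a sketch with placeholder helpers (train_one_epoch, evaluate, and the data loaders are assumed names, not our real functions):

    import copy

    # Train for a fixed number of epochs, but keep the weights from whichever epoch
    # scored best on held-out data; training past that point is exactly the
    # overfitting described above.
    best_acc, best_state = 0.0, None
    for epoch in range(num_epochs):
        train_one_epoch(model, train_loader)   # placeholder: one pass over the training set
        acc = evaluate(model, val_loader)      # placeholder: accuracy on held-out prints
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)          # roll back to the best epoch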

We noticed that our accuracy actually drops when the model is run on the inanimate objects. This surprised us: since the objects’ pressure prints are so apparently different from one another, we figured the neural network should be much more accurate on them than on the hand and foot prints, which look comparatively similar to each other:

(From left to right: Speaker, Mug, Car)

As we can see, the bases of the three objects are very different, and to the human eye it seems as though they should be easy to tell apart. On the contrary, because these prints lack distinct features (edges, gradients, corners, etc.), the neural net has little visual information to latch onto, and it actually can’t tell the images apart as well as it can the hand and foot prints.

Conclusion & Application

Once we were able to classify hands and feet, we were able to confirm our hypothesis that every hand is unique and identifiable, just like a fingerprint. As we worked on this project, we realized that it could have many more applications than just classifying hands and feet. Some other use cases include butt classification for sit-down establishments like restaurants and barber shops, front-door-mat classifiers that identify visitors, and so on. We are encouraged by how accurate our foot and hand classifications turned out, and we definitely plan on thinking of more use cases for the system.