Gesture Recognition Using Intel RealSense Camera (Team CS 33)

Python
Machine Learning
Computer Vision

We created a way to classify American Sign Language (ASL) gestures as letters in real time using an Intel RealSense camera. Our project helps people who use sign language communicate with a wider audience, including those who might not understand sign language.

Our system uses an Intel RealSense coded-light depth camera (model SR305) to capture gestures, together with a machine learning model built using a transfer learning approach from the ResNet-18 convolutional neural network. The retrained network takes each captured gesture and outputs a classification in real time; a rough sketch of this pipeline appears below. Our interface lets a user capture gestures and see the classifications for the gestures they have input, so the user can construct words and sentences to communicate with others who can read the screen.

We'll be doing a live demo and answering questions about our project at the June 5th College of Engineering Virtual Expo. The link to our Zoom room is included below. For a more in-depth understanding of our work, refer to our GitHub repo, also listed below.

Project Partners: Eduardo X. Alban, Satoshi Suzuki, and Po-Cheng Chen at Intel

Artifacts

Name | Description
Zoom Meeting Link | Zoom room for the June 5th expo session, 10 am to noon. (Link)
Expo Project Slides | Our expo slides, which explain the project at a high level. (Link)
GitHub Link | Our project code on GitHub. (Link)