Traffic and Pedestrian Tracking

Web Applications
Machine Learning
Computer Vision

We collaborated with the City of Portland to create a system that automatically extracts useful data from live traffic camera footage. We used object detection and tracking models to detect and track relevant objects such as cars, pedestrians, and motorcycles, and we developed a user-friendly web interface that displays live output from those models. Screenshots of the system in action can be seen below, along with more details about both the models and the web interface.

The object detection model uses YOLOv3 pretrained on the MS-COCO dataset, which covers all the object categories we needed and offers fast performance even on edge devices. The object tracking model is based on an LSTM (long short-term memory) model developed by Chanho Kim, which allows objects to be tracked even through brief occlusion. We successfully ran these models sequentially, so the output of the object detection model could be fed directly into the object tracker. Work has begun on running the models concurrently, as real-time operation would require, and we hope this work will continue next year.

The data flow starts with the Node server, which is connected to multiple clients as shown in Figure 2. The server runs a Python script to process the video feed as a background task. As the detection/tracking model processes the video feed, two main types of data are extracted and sent to the server: the frames and the objects detected within them. Each frame is sent to the server to be broadcast in real time to all connected clients.

UI Interface Design: We chose MongoDB because it is a NoSQL database, which provides more flexibility for both current and future development.
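The sequential detection-then-tracking flow described above can be sketched as a small Python pipeline. The detector and tracker below are hypothetical stubs standing in for YOLOv3 and the LSTM tracker (which is not publicly available); only the shape of the data flow — frames in, per-frame detections, then tracked objects with persistent IDs — reflects the system described here.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    class_name: str
    confidence: float
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def detect(frame) -> List[Detection]:
    """Stub for the YOLOv3 detector: returns the objects found in one frame.

    A real implementation would run the network on `frame`; here we return
    a fixed detection so the pipeline can be exercised end to end.
    """
    return [Detection("car", 0.91, (100, 50, 40, 30))]

class Tracker:
    """Stub for the LSTM tracker: assigns a persistent ID to each detection."""

    def __init__(self):
        self.next_id = 0

    def update(self, detections: List[Detection]):
        tracked = []
        for det in detections:
            # A real tracker would match detections to existing tracks using
            # appearance and motion; this stub just hands out fresh IDs.
            self.next_id += 1
            tracked.append((self.next_id, det))
        return tracked

def process(frames):
    """Run detection and tracking sequentially, one frame at a time."""
    tracker = Tracker()
    for frame in frames:
        detections = detect(frame)        # step 1: object detection
        yield tracker.update(detections)  # step 2: tracker consumes detections
```

In the real system the two models would run concurrently rather than in this strict per-frame sequence, which is the follow-on work mentioned above.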
For each object detected in a frame, we store several key pieces of information in our database: the class name (e.g., car, person), the confidence (how confident the model is that it classified the object correctly), the bounding box coordinates and dimensions, a timestamp, the camera, and an ID. Overall, the goal of this project was to help the City of Portland combine deep learning with a straightforward web interface to gain a new tool for analyzing current traffic conditions and patterns around the city. Future goals include further data analysis, such as collision prediction, as well as making the system more robust and accessible to City of Portland users.
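As a concrete illustration of the per-object record described above, a small helper that builds the MongoDB document for one tracked object might look like the sketch below. The field names and the pymongo call are assumptions based on the list of stored fields, not the project's actual schema.

```python
from datetime import datetime, timezone

def make_detection_doc(class_name, confidence, bbox, camera, track_id):
    """Build the document stored per detected object.

    `bbox` is (x, y, width, height) of the bounding box in pixels.
    Field names are hypothetical, chosen to mirror the list of stored fields.
    """
    return {
        "className": class_name,          # e.g. "car", "person"
        "confidence": confidence,         # classifier confidence in [0, 1]
        "boundingBox": {"x": bbox[0], "y": bbox[1],
                        "w": bbox[2], "h": bbox[3]},
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "camera": camera,                 # which traffic camera produced the frame
        "id": track_id,                   # persistent ID assigned by the tracker
    }

# With pymongo, inserting one record might look like:
# db.detections.insert_one(
#     make_detection_doc("car", 0.91, (100, 50, 40, 30), "cam-3", 17))
```

Because MongoDB documents are schemaless, new fields (for example, ones needed by future collision-prediction analysis) can be added to this record without migrating existing data, which is the flexibility mentioned above.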


Artifacts

Name: Code GitHub Organization
Description: This is where all of the code for the UI and API is located. The model code cannot be publicly shared, as requested by Chanho Kim, the author of the tracking code. Link