Artificial Intelligence and Supercomputers to Help Alleviate Urban Traffic Problems

Published on December 11, 2017 by Aaron Dubrow

Look above the traffic light at a busy intersection in your city and you will probably see a camera. These devices may have been installed to monitor traffic conditions and provide visuals in the case of a collision. But can they do more? Can they help planners optimize traffic flow or identify sites that are most likely to have accidents? And can they do so without requiring individuals to slog through hours of footage?

Researchers from the Texas Advanced Computing Center (TACC), the University of Texas Center for Transportation Research and the City of Austin believe so. Together, they are working to develop tools that allow sophisticated, searchable traffic analyses using deep learning and data mining.

At the IEEE International Conference on Big Data this month, they will present a new deep learning tool that uses raw traffic camera footage from City of Austin cameras to recognize objects – people, cars, buses, trucks, bicycles, motorcycles and traffic lights – and characterize how those objects move and interact. This information can then be analyzed and queried by traffic engineers and officials to determine, for instance, how many cars drive the wrong way down a one-way street.

"We are hoping to develop a flexible and efficient system to aid traffic researchers and decision-makers for dynamic, real-life analysis needs," said Weijia Xu, a research scientist who leads the Data Mining & Statistics Group at TACC. "We don't want to build a turn-key solution for a single, specific problem. We want to explore means that may be helpful for a number of analytical needs, even those that may pop up in the future."

The algorithm they developed for traffic analysis automatically labels all potential objects from the raw data, tracks objects by comparing them with other previously recognized objects and compares the outputs from each frame to uncover relationships among the objects.
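The tracking step the researchers describe, matching each frame's detections against previously recognized objects, can be sketched as a simple tracking-by-detection loop. The following is a minimal illustration (not the team's actual code), assuming bounding boxes in `(x1, y1, x2, y2)` pixel coordinates and using greedy intersection-over-union matching; the function names and threshold are hypothetical.

```python
# Minimal sketch of tracking-by-detection: detections in each new frame are
# matched to existing tracks by bounding-box overlap (intersection-over-union),
# so one physical object keeps one ID across frames.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def update_tracks(tracks, detections, next_id, threshold=0.3):
    """Greedily match this frame's detections to existing tracks by IoU;
    unmatched detections start new tracks. Returns (tracks, next_id)."""
    matched = set()
    for det in detections:
        best_id, best_iou = None, threshold
        for tid, box in tracks.items():
            if tid in matched:
                continue
            score = iou(box, det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:
            best_id = next_id          # no overlap: a new object enters
            next_id += 1
        tracks[best_id] = det          # update the track's latest box
        matched.add(best_id)
    return tracks, next_id
```

In practice, production trackers add motion models and appearance features, but greedy IoU matching captures the core idea of comparing each frame's objects with those already recognized.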

The team used the open-source YOLO library and neural network developed by University of Washington and Facebook researchers for real-time object detection. (According to the team, this is the first time YOLO has been applied to traffic data.) For the data analysis and query component, they incorporated HiveQL, a query language maintained by the Apache Software Foundation that lets individuals search and compare data in the system.
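To illustrate the query side of the pipeline, the sketch below stores per-frame detection records in a table and runs a SQL-style aggregate over them. It uses SQLite rather than the team's actual Hive deployment, since HiveQL is SQL-like; the table layout, column names, and sample rows are all hypothetical.

```python
# Sketch of querying per-frame detection records with SQL-like statements,
# in the spirit of the HiveQL component (SQLite stands in for Hive here).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE detections (frame INTEGER, camera TEXT, "
    "label TEXT, track_id INTEGER)"
)
# Hypothetical detection records: (frame, camera, object class, track ID).
rows = [
    (1, "cam_7", "car", 1), (2, "cam_7", "car", 1),
    (1, "cam_7", "bus", 2), (3, "cam_9", "person", 3),
]
conn.executemany("INSERT INTO detections VALUES (?, ?, ?, ?)", rows)

# Count distinct tracked objects of each type per camera -- the same kind of
# aggregate a traffic engineer might run over real footage.
query = """
SELECT camera, label, COUNT(DISTINCT track_id) AS n
FROM detections
GROUP BY camera, label
ORDER BY camera, label
"""
for camera, label, n in conn.execute(query):
    print(camera, label, n)
```

Note that `COUNT(DISTINCT track_id)` counts objects rather than detections, so a car seen in many frames is counted once, which is what matters for traffic volumes.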

Once researchers had developed a system capable of labeling, tracking and analyzing traffic, they applied it to two practical examples: counting how many moving vehicles traveled down a road and identifying close encounters between vehicles and pedestrians.

The system automatically counted vehicles in a 10-minute video clip, and preliminary results showed that their tool was 95 percent accurate overall.
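One way to count *moving* vehicles from tracked detections, as opposed to every vehicle the detector sees, is to count only tracks whose position changes meaningfully over the clip, which filters out parked cars. The sketch below is an assumed approach, not the researchers' published method; the data shape and threshold are hypothetical.

```python
# Sketch: count vehicles that actually travel down the road by requiring
# each track's box centre to move farther than a pixel threshold, so
# stationary (parked) vehicles are excluded from the count.

def count_moving(track_histories, min_travel=50.0):
    """track_histories: {track_id: [(cx, cy), ...]} centre points over time.
    Returns the number of tracks whose net displacement >= min_travel."""
    moving = 0
    for points in track_histories.values():
        if len(points) < 2:
            continue
        (x0, y0), (x1, y1) = points[0], points[-1]
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 >= min_travel:
            moving += 1
    return moving
```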

Understanding traffic volumes and their distribution over time is critical to validating transportation models and evaluating the performance of the transportation network, said Natalia Ruiz Juri, a research associate and director of the Network Modeling Center at UT's Center for Transportation Research.

"Current practice often relies on the use of expensive sensors for continuous data collection or on traffic studies that sample traffic volumes for a few days during selected time periods," she said. "The use of artificial intelligence to automatically generate traffic volumes from existing cameras would provide a much broader spatial and temporal coverage of the transportation network, facilitating the generation of valuable datasets to support innovative research and to understand the impact of traffic management and operation decisions."

In the case of potential close encounters, researchers were able to automatically identify a number of cases where vehicles and pedestrians were in close proximity. None of these represented real-life dangers, but they demonstrated how the system can flag potentially dangerous locations without human intervention.
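A simple proximity rule over per-frame detections is one plausible way to surface such encounters. The sketch below (an assumption, not the team's implementation) flags any frame in which a detected person's box centre falls within a pixel radius of a vehicle's box centre; the radius and data layout are hypothetical.

```python
# Sketch of close-encounter flagging: within each frame, compare every
# "person" detection against every vehicle detection and flag the frame
# if their box centres are within a pixel radius of each other.

def centre(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def close_encounters(frame_detections, radius=40.0):
    """frame_detections: [(frame, label, box), ...] with boxes as
    (x1, y1, x2, y2). Returns the sorted list of flagged frame numbers."""
    by_frame = {}
    for frame, label, box in frame_detections:
        by_frame.setdefault(frame, []).append((label, box))
    vehicles = {"car", "bus", "truck", "motorcycle"}
    flagged = []
    for frame, objs in sorted(by_frame.items()):
        people = [centre(b) for lbl, b in objs if lbl == "person"]
        cars = [centre(b) for lbl, b in objs if lbl in vehicles]
        if any(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 <= radius
               for px, py in people for cx, cy in cars):
            flagged.append(frame)
    return flagged
```

A real system would work in calibrated road coordinates rather than pixels and would consider trajectories over time, but the frame-by-frame rule conveys the idea.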

"The City of Austin is committed to ending traffic fatalities, and video analytics will be a powerful tool to help us pinpoint potentially dangerous locations," said Jen Duthie, a consulting engineer for the City of Austin and a collaborator on the project. "We can direct our resources toward fixing problem locations before an injury or fatality occurs."

The researchers plan to explore how automation can facilitate other safety-related analyses, such as identifying locations where pedestrians cross busy streets outside of designated walkways, understanding how drivers react to different types of pedestrian-yield signage and quantifying how far pedestrians are willing to walk in order to use a walkway.

In addition to developing the model and testing its effectiveness, the team evaluated the performance and scalability of the object identification system across different hardware and parameter settings, including Intel Xeon Phi and Intel Skylake processors and several types of NVIDIA GPUs.

"Deep learning has quickly become a very hot topic in recent years for computer science researchers and the technology industry," said Xu. "However, its adoption for other fields has lagged behind. This is partly due to its very high computational requirements in both the training and prediction stages. Thus, utilizing high-end hardware and high-performance computing resources is crucial for providing practical solutions for complex real-world problems, especially those involving large-scale data."

The project shows how artificial intelligence technologies can greatly reduce the effort involved in analyzing video data and provide actionable information for decision-makers. Frameworks like the one the researchers propose can expedite research that has traditionally relied on manual video analysis, and encourage further work on applying and integrating video data.

"The highly anticipated introduction of self-driving and connected cars may lead to significant changes in the behavior of vehicles and pedestrians and on the performance of roadways," Ruiz Juri said. "Video data will play a key role in understanding such changes, and artificial intelligence may be central to enabling comprehensive large-scale studies that truly capture the impact of the new technologies."


The team built a website where the public can view examples of their detection, tracking and query tool for traffic analysis. To learn more visit: http://soda.tacc.utexas.edu/


Story Highlights

Researchers from the Texas Advanced Computing Center (TACC), the University of Texas Center for Transportation Research (CTR) and the City of Austin have developed a tool that uses artificial intelligence to recognize objects in raw traffic camera footage and characterize how those objects move and interact.

This information can then be analyzed and queried by traffic engineers and officials to improve the safety and performance of the city's transportation network.

The work will be presented at the IEEE Big Data conference this month.


Contact

Faith Singer-Villalobos

Communications Manager
faith@tacc.utexas.edu | 512-232-5771

Aaron Dubrow

Science and Technology Writer
aarondubrow@tacc.utexas.edu

Jorge Salazar

Technical Writer/Editor
jorge@tacc.utexas.edu | 512-475-9411