
Solving Science and Engineering Problems with Supercomputers and AI

Published on January 25, 2018 by Aaron Dubrow

Artificial intelligence represents a new approach scientists can use to interrogate data, develop hypotheses, and make predictions, particularly in areas where no overarching theory exists.

Artificial intelligence has been used to discover the exact interventions needed to obtain a specific, brand-new result in a living organism. Pigment cells over a tadpole's left eye became cancer-like; those over the right eye remained normal. [Credit: Patrick Collins, Tufts University]

Traditional applications on supercomputers (also known as high-performance computers [HPC]) start from "first principles" — typically mathematical formulas representing the physics of a natural system — and then transform them into a problem that can be solved by distributing the calculations to many processors.

By contrast, machine learning and deep learning — two subsets of the field of artificial intelligence — take advantage of the availability of powerful computers and very large datasets to find subtle correlations in data and rapidly simulate, test and optimize solutions. These capabilities enable scientists to derive the governing models (or workable analogs) for complex systems that cannot be modeled from first principles.

Machine learning involves using a variety of algorithms that "learn" from data and improve performance based on real-world experience. Deep learning, a branch of machine learning, relies on large data sets to iteratively "train" many-layered neural networks, inspired by the human brain. These trained neural networks are then used to "infer" the meaning of new data.

Training can be a complex and time-consuming activity, but once a model has been trained, it can quickly interpret each new piece of data in order to recognize, for example, cancerous versus healthy brain tissue, or to enable a self-driving vehicle to identify a pedestrian crossing a street.
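As a minimal sketch of that train-then-infer pattern, the toy two-layer network below (written in plain NumPy on synthetic data; it is illustrative only and unrelated to the projects described later) learns to separate two clusters of points during a slow training loop, after which classifying a new point takes a single fast forward pass:

```python
# Toy train-then-infer sketch: a two-layer neural network that learns
# to separate two clusters of synthetic 2-D points.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two Gaussian blobs with labels 0 and 1
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Network parameters: 2 inputs -> 8 hidden units -> 1 output
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output = P(class 1)
    return h, p.ravel()

# Training: gradient descent on binary cross-entropy (the slow part)
lr = 0.5
for _ in range(1000):
    h, p = forward(X)
    err = (p - y) / len(y)                    # dLoss/dlogit for cross-entropy
    dW2 = h.T @ err[:, None]
    db2 = err.sum(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h**2)   # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Inference: a single fast forward pass on new, unseen points
_, p_new = forward(np.array([[-2.5, -1.8], [2.2, 1.9]]))
print(np.round(p_new))   # -> [0. 1.]
```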

In Search of Deep Learning Trainers: Heavy Computation Required

Just like traditional HPC applications, training a deep neural network or running a machine learning algorithm requires extremely large numbers of computations (quintillions!) – theoretically making them a good fit for supercomputers and their large numbers of parallel processors.

Researchers are using Stampede2 — a National Science Foundation-funded Dell/Intel system at the Texas Advanced Computing Center (TACC) that is one of the world's fastest supercomputers and the fastest at any U.S. university — to advance machine and deep learning.

Training a deep neural network to act as an image classifier, for instance, requires roughly 10¹⁸ single-precision operations (one exaFLOP). Stampede2 can perform approximately 2 × 10¹⁶ operations per second.
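At that rate, a quick back-of-envelope estimate (a sketch that assumes the machine could sustain its peak rate on useful work, which real training never does) suggests the raw computation itself is not the obstacle:

```python
# Back-of-envelope: ideal time to perform the training computation
# if Stampede2 sustained its peak single-precision rate (it cannot in
# practice; communication, memory and I/O overheads dominate at scale).
total_ops = 1e18      # ~one exaFLOP of training work (figure from the article)
peak_rate = 2e16      # ~2 x 10^16 operations per second (figure from the article)

print(f"Ideal time: {total_ops / peak_rate:.0f} seconds")  # -> 50 seconds
```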

Logically, supercomputers should be able to train deep neural networks rapidly. But in the past, such training has required hours, days or even months to complete (as was the case with Google's AlphaGo).

Overcoming Bottlenecks in Neural Networks

With frameworks optimized for modern CPUs, however, experts have recently been able to train deep neural network models in minutes. For instance, researchers from TACC, the University of California, Berkeley and the University of California, Davis used 1024 Intel® Xeon® Scalable processors on Stampede2 to complete a 100-epoch ImageNet training with AlexNet in 11 minutes, the fastest that such training had ever been reported. Furthermore, they were able to scale to 2048 Intel Knights Landing nodes and finish the 90-epoch ImageNet training with ResNet-50 in 20 minutes without losing accuracy.

Deep learning experts from TACC collaborated with researchers at the University of Texas Center for Transportation Research and the City of Austin to automatically detect vehicles and pedestrians at critical intersections throughout the city using machine learning and video image analysis. [Credit: Weijia Xu, TACC]

These efforts at TACC (and similar ones elsewhere) show that one can effectively overcome bottlenecks in fast deep neural network training with high-performance computing systems by using well-optimized kernels and libraries, employing hyper-threading, and sizing the batches of training data properly.
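The batch-sizing point comes from how such training is distributed. In synchronous data parallelism, the approach typically used for this kind of scaling, each process computes gradients on its own shard of the data, and the gradients are averaged with an all-reduce before every update, so the effective batch grows with the number of workers. Below is a minimal sketch of that pattern using mpi4py and NumPy (a hypothetical toy linear model on synthetic data, not the researchers' actual Caffe code):

```python
# Minimal synchronous data-parallel SGD sketch (toy linear model).
# Run with e.g.: mpirun -np 4 python dp_sgd.py  (dp_sgd.py is a placeholder name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)          # each worker gets its own data shard
X = rng.normal(size=(256, 10))                  # local shard: 256 examples, 10 features
true_w = np.arange(10, dtype=np.float64)
y = X @ true_w + rng.normal(scale=0.1, size=256)

w = np.zeros(10)                                # model replicated on every worker
lr = 0.1

for step in range(100):
    # Local gradient of mean-squared error on this worker's shard
    grad_local = 2.0 * X.T @ (X @ w - y) / len(y)

    # All-reduce: sum gradients across workers, then average.
    # Effective batch size = local batch size * number of workers,
    # which is why batch sizing matters when scaling out.
    grad_global = np.empty_like(grad_local)
    comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
    grad_global /= nprocs

    w -= lr * grad_global                       # identical update on every worker

if rank == 0:
    print("learned weights:", np.round(w, 2))
```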

In addition to Caffe, which the researchers used for the ImageNet training, TACC also supports other popular CPU- and GPU-optimized deep learning frameworks, such as MXNet and TensorFlow, and is creating an extensive environment for machine and deep learning research.

Though the ImageNet work was done mostly as a proof of concept showing how HPC can be used for deep learning, high-speed, high-accuracy image classification can be useful in characterizing satellite imagery for environmental monitoring or labeling nanoscience images obtained by scanning electron microscopy.

This fast training will impact the speed of science, as well as the kind of science that researchers can explore with these new methods.

Successes in Critical Applications

While TACC staff explore the potential of HPC for artificial intelligence, researchers from around the country are using TACC supercomputers to apply machine learning and deep learning to science and engineering problems ranging from healthcare to transportation.

For instance, researchers from Tufts University and the University of Maryland, Baltimore County, used Stampede1 to uncover the cell signaling network that determines tadpole coloration. The research helped identify the various genes and feedback mechanisms that control this aspect of pigmentation (which is related to melanoma in humans) and reverse-engineered never-before-seen mixed coloration in the animals.

They are exploring the possibility of using this method to uncover the cell signaling that underlies various forms of cancer so new therapies can be developed.

Researchers developed a method to automatically identify and classify brain tumors, as well as different types of cancerous regions, using biophysical models of tumor growth and machine learning algorithms. [Credit: George Biros, The University of Texas at Austin]

In another impressive project, deep learning experts at TACC collaborated with researchers at the University of Texas Center for Transportation Research and the City of Austin to automatically detect vehicles and pedestrians at critical intersections throughout the city using machine learning and video image analysis.

The work will help officials analyze traffic patterns to understand infrastructure needs and increase safety and efficiency in the city. (Results of the large-scale traffic analyses were presented at IEEE Big Data in December 2017 and the Transportation Research Board Annual Meeting in January 2018.)
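The detection step in this kind of video analysis can be illustrated with a short sketch. The example below uses OpenCV's built-in HOG-plus-SVM people detector on a placeholder video file; it is a minimal illustration of frame-by-frame pedestrian detection, not the deep learning pipeline the TACC team actually deployed:

```python
# Minimal sketch: count pedestrians frame by frame in an intersection video
# using OpenCV's built-in HOG + linear SVM people detector.
# 'intersection.mp4' is a placeholder path, not data from the project.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("intersection.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; returns bounding boxes and confidence weights
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"frame {frame_idx}: {len(boxes)} pedestrian(s) detected")
    frame_idx += 1
cap.release()
```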

In another project, George Biros, a mechanical engineering professor at the University of Texas at Austin, used Stampede2 to train a brain tumor classification system that can identify brain tumors (gliomas) and different types of cancerous regions with greater than 90 percent accuracy — roughly equivalent to an experienced radiologist.

The image analysis framework will be deployed at the University of Pennsylvania for various clinical studies of gliomas.

Through these and other research and research-enabling efforts, TACC has shown that HPC architectures are well suited to machine learning and deep learning frameworks and algorithms. Using these approaches in diverse fields, scientists are beginning to develop solutions that will have near-term impacts on health and safety, not to mention materials science, synthetic biology and basic physics.

We invite and encourage our university users and industry partners to try out TACC's deep learning and machine learning frameworks and possibly incubate research projects that leverage these powerful, emerging techniques.


This feature is part of a TACC Special Report on Artificial Intelligence. From health and safety to meteorology and cybersecurity, TACC supercomputers are helping researchers apply machine learning and deep learning to basic and applied science. Learn more about TACC's efforts in this rapidly evolving area.

Read more of the AI Report Features


Contact

Faith Singer-Villalobos

Communications Manager
faith@tacc.utexas.edu | 512-232-5771

Aaron Dubrow

Science and Technology Writer
aarondubrow@tacc.utexas.edu

Jorge Salazar

Technical Writer/Editor
jorge@tacc.utexas.edu | 512-475-9411