Taming Turbulent Flows With Frontera

Published on September 3, 2019 by Jorge Salazar

Turbulence enters this image from the left, hits the shock, and leaves the domain on the right. This three-dimensional image shows enstrophy structures colored by local Mach number, with the shock shown in gray. [Credit: Chang-Hsin Chen, TAMU]

Nobel laureate Richard Feynman once called turbulence "the most important unsolved problem of classical physics." That's because the chaotic motion of turbulence can't be neatly solved with equations.

Turbulence is so complicated that scientists today try to simplify it as much as possible while still retaining its basic physics. One common simplification is to assume that the flow is incompressible, that is, of constant density. This works well as an approximation for low-speed flows, but it falls apart for high-speed turbulent flows, which are important for a wide variety of applications and phenomena, such as the mixing of fuel in the combustion engines of cars, planes, and rockets.
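The simplification has a precise statement in the standard equations of fluid dynamics: for an incompressible flow the density is constant, so conservation of mass reduces to a divergence-free velocity field, while a compressible flow must carry the full continuity equation.

```latex
% Incompressible: constant density \rho, so mass conservation reduces to
\nabla \cdot \mathbf{u} = 0
% Compressible: density varies in space and time
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \, \mathbf{u}) = 0
```

Dropping the density variation decouples the velocity from thermodynamic quantities such as temperature and pressure, and it is exactly that coupling that matters at high speeds.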

Diego Donzis, associate professor in the Department of Aerospace Engineering at Texas A&M University and an early user of the Frontera supercomputer.

"On Frontera, we would like to run some of the simulations that will allow us to answer some long-standing and new questions we have about the process of mixing in compressible flows," says Diego Donzis, an associate professor in the Department of Aerospace Engineering at Texas A&M University.

Donzis is an early user of the Frontera system, but he's no stranger to NSF supercomputers. He developed his group's code, called Compressible Direct Numerical Simulations (cDNS), through TeraGrid allocations on LeMieux at the Pittsburgh Supercomputing Center and Blue Horizon at the San Diego Supercomputer Center, and later through the Extreme Science and Engineering Discovery Environment (XSEDE) on Kraken at the National Institute for Computational Sciences and on Stampede1, Stampede2, and now Frontera at TACC. What's more, Donzis and colleagues have scaled cDNS up to a million cores on the Department of Energy supercomputers Titan and Mira.

"Only recently, with computers reaching very high levels of parallelism, can we tackle problems in compressible turbulence at conditions that are relevant to applications," Donzis said.

More computing power translates to added detail in computer models, which can solve more equations that capture the interactions between turbulence and temperature, pressure, and density — features not accounted for in incompressible flows.

"Frontera will be well-suited for us to run these simulations," Donzis explained. "Mainly it's the size of Frontera, which will make some of these unprecedented simulations possible. Also, something attractive to us is that it's based on well-known architectures; well-known components. We can predict, we hope more or less accurately, how the code will behave, even at very large scales on Frontera. We believe that a full-scale, full machine run on Frontera will be very efficient."

Donzis hopes to answer long-standing questions that scientists cannot address today. "Some of the questions that we are tackling are impossible to solve either from theory or experiment," he said. "Given the size of Frontera, and the way in which it's accessible to scientists of all disciplines, I think it can make a huge difference in how we design new engineering devices and, ultimately, how we understand nature and the world around us."

Another simulated view of turbulence coming in from the left, hitting the shock, and leaving the domain on the right. The two-dimensional picture shows the Q-criterion; the shock is the thin blue line. [Credit: Chang-Hsin Chen, TAMU]

On the Path to Exascale

Another project that Donzis is pursuing on Frontera is developing numerical schemes for exascale computing, the next great frontier of supercomputing, with processing power on the order of the human brain's. Most computer scientists agree that fundamental changes in programming are needed to run efficiently on exascale machines with a billion or more processing elements. The big obstacle, says Donzis, is the bottleneck in communication and synchronization between processing elements.

"We are developing numerical schemes that can actually avoid, or significantly mitigate, the cost associated with communication and synchronization among a billion processing elements," Donzis said. "We call these asynchronous tolerant schemes. These are numerical schemes to solve turbulent flows which do not need to wait for messages to be passed between processors." This removes, at a mathematical level, synchronization between processing elements, which can bypass the latency and overhead associated with parallelism at its very highest levels.

Donzis has been researching asynchronous tolerant schemes on the Stampede2 system, and he hopes to continue the work on Frontera. "Although Frontera is not an exascale machine, it will allow us to test some of these developments that we've been doing over the last few years at a scale that we were not able to do before. We have some preliminary tests on Frontera with very promising results. We look forward to running those on the full machine," Donzis said.

Story Highlights

Frontera simulations by Diego Donzis of TAMU investigate the mixing process in compressible flows, long considered computationally intractable.

High-speed turbulent flows are important in fuel combustion mixing for cars, planes, and rockets.

Frontera's computational capacity and well-known architecture give it an advantage in full-scale, full-machine compressible turbulent simulations.

Donzis is testing his asynchronous tolerant schemes on Frontera to cut communication and synchronization overhead, a key bottleneck for exascale systems.

