Top NSF petascale supercomputer and expert staff accelerate discoveries for nation's scientists

Published on May 27, 2014 by Aaron Dubrow

Take a virtual tour of Stampede, one of the largest supercomputers in the world, housed at the Texas Advanced Computing Center at The University of Texas at Austin.

A High Performance First Year

Sometimes, the laboratory just won't cut it.

After all, you can't recreate an exploding star, manipulate quarks, or forecast the climate in the lab. In cases like these, scientists rely on supercomputing simulations to capture the physical reality of these phenomena -- minus the extraordinary cost, dangerous temperatures or millennia-long wait times.

When faced with a problem that can't be solved at the bench, researchers at universities and labs across the U.S. set up virtual models, determine the initial conditions for their simulations — the weather in advance of an impending storm, the configurations of a drug molecule binding to HIV, the dynamics of a distant dying star — and press compute.

And then they wait as the Stampede supercomputer in Austin, Texas, crunches the complex mathematics that underlies the problems they are trying to solve.

Within minutes, hours or just a few days (compared to the months or years required without supercomputers), Stampede returns results by harnessing thousands of computer processors.

Stampede is one of the most powerful supercomputers in the U.S. for open research and currently ranks as the 7th most powerful in the world, according to the November 2013 Top500 List. Able to perform nearly 10 quadrillion operations per second (nearly 10 petaflops), Stampede is the most capable of the high-performance computing, visualization and data analysis resources within the NSF Extreme Science and Engineering Discovery Environment (XSEDE).

Stampede went into operation at the Texas Advanced Computing Center (TACC) in January 2013. The system is a cornerstone of the National Science Foundation's (NSF) investment in an integrated advanced cyberinfrastructure, which allows America's scientists and engineers to access cutting-edge computational resources, data and expertise to further their research across disciplines, from science and engineering to the humanities.

At any given moment, Stampede is running hundreds of separate applications simultaneously. Approximately 3,400 researchers computed on the system in its first year, working on 1,700 distinct projects (adding about 100 new projects per month). The researchers came from 350 different institutions and their work spanned a range of scientific disciplines from chemistry to economics to artificial intelligence.

"It was a fantastic first year for Stampede and we're really proud of what the system has accomplished," said Dan Stanzione, acting director of TACC. "When we put Stampede together, we were looking for a general purpose architecture that would support everyone in the scientific community. With the achievements of its first year, we showed that was possible."

Researchers across the country apply to use Stampede through the XSEDE project. Their intended use of Stampede is assessed by a peer review committee that allocates time on the system. Once approved, researchers are provided access to Stampede free of charge and tap into an ecosystem of experts, software, storage, visualization and data analysis resources that make Stampede one of the most productive, comprehensive research environments in the world. Training and educational opportunities are also available to help scientists use Stampede effectively.

Stampede is in high demand. Ninety percent of the compute time on the system goes to researchers with grants from NSF or other federal agencies; the other 10 percent goes to industry partners and discretionary programs.

"The system is utilized all the time — 24/7/365," Stanzione said. "We're getting proposals requesting 500% of our time. The demand exceeds time allocated by 5-1. The community is hungry to compute." Stampede will be infused with second generation Intel Xeon Phi cards in 2015 and will operate through 2017.
