Top NSF petascale supercomputer and expert staff accelerate discoveries for nation's scientists

Published on May 27, 2014 by Aaron Dubrow

Built to Handle Big Data

Stampede's power reaches beyond gaining insight into our world through computational modeling and simulation. The system's diverse resources support research in fields too complex to describe with equations alone, such as genomics, neuroscience and the humanities. Stampede's extreme scale and unique technologies enable researchers to process massive quantities of measured data with modern analysis techniques and reach previously unattainable conclusions.

Stampede provides four capabilities that most data-intensive problems can take advantage of:

  • Leveraging 14 petabytes of high-speed internal storage, users can process massive amounts of independent data on multiple processors at once, reducing the time needed for data analysis and computation.
  • Researchers can use many data analysis packages, optimized to run on Stampede by TACC staff, to analyze their results statistically or visually. TACC staff also collaborate with researchers to improve their software and make it run more efficiently in this environment.
  • Data is rich and complex. When individual computations grow too large for Stampede's primary compute nodes, the system provides 16 nodes with one terabyte of memory each. This lets researchers perform complex data analyses on Stampede's diverse and highly flexible computing engine.
  • Once data has been parsed and analyzed, GPUs can be used to explore it interactively and remotely, without moving large amounts of information to less powerful local machines.
