TACC's Lonestar6 Supercomputer Gets GPU and Server Boost
Growing demand for AI and power efficiency cited as reasons for the upgrade
By Faith Singer
Published: Nov. 14, 2022

[Photo: TACC Executive Director Dan Stanzione stands next to Lonestar6 in the data center.]

The Texas Advanced Computing Center (TACC) announced today at SC22 in Dallas that the Lonestar6 supercomputer, which went into full production in January 2022, has received a boost of new servers and GPUs from Dell Technologies and NVIDIA. The system allows Texas researchers to compute and compete at the cutting edge of science and engineering.

"Lonestar6 replaces Longhorn as the biggest GPU resource in TACC's ecosystem, and the primary platform for the growing application of AI among our users," said Dan Stanzione, executive director of TACC and associate vice president for research at The University of Texas at Austin. "With the Lonestar6 upgrade, we continue to meet the growing demand for AI and other GPU-accelerated problems, and to take advantage of the power efficiency of heterogeneous computing."

The system was updated with three new racks of Dell R750 servers with a total of 240 NVIDIA A100 40GB Tensor Core GPUs interconnected by NVIDIA Quantum 200Gb/s InfiniBand networking, which NVIDIA describes as the world's only fully offloadable, in-network computing platform.

"Lonestar6 is an example of the innovation and opportunity that supercomputing presents to solve society's most complex challenges while pushing the boundaries of efficiency for accelerated computing," said Ian Buck, vice president of Hyperscale and High Performance Computing at NVIDIA. 
"NVIDIA accelerated computing equips next-generation systems such as the Lonestar6 with the extreme performance and efficiency to enable AI innovation for the science community."The upgrades to Lonestar6 add about five quadrillion mathematical operations per second — or, in high performance computing (HPC) terminology, five petaflops of computing power. This is in addition to the three petaflops of existing performance from the AMD CPUs in the system.A person would have to do one calculation every second for 150 million years to match what Lonestar6 will compute in just one second. It will enable doctors to design patient-specific cancer treatments, let astronomers peer more deeply into the cosmos than ever before, and help meteorologists forecast our changing climate.Most recently, researchers relied on Lonestar6 to simulate Hurricane Ian for 10 days around the clock to analyze storm surge, flood, wind, and wave information from coastal ocean and inland flooding models.More than 90,000 unique users viewed the data on the Coastal Emergency Risk Assessment website, including emergency managers, first responders, and experts from FEMA, the U.S. 
Department of Transportation, and many other agencies.

Lonestar6 is funded through The University of Texas Research Cyberinfrastructure, a collaboration between TACC and The University of Texas System, and is supported by several Texas academic institutions. Texas' support for advanced computing through the initiative has made the state a national leader in computational science and engineering over the past decade, with the benefits of the research it enables cascading to citizens, industry, and society.

Stampede2 Extended through June 2023

TACC also announced earlier this year that the National Science Foundation (NSF) has extended operational funding for the Stampede2 supercomputer through June 2023. As part of the extension, 448 of the oldest Intel Knights Landing Xeon Phi nodes were replaced with 224 of the latest generation of Dell PowerEdge R650 two-socket servers using Intel Xeon Platinum 8380 "Ice Lake" processors. The new servers provide more than double the performance and almost triple the memory of the previous nodes, adding capability to the Stampede2 system.

"Stampede2 has now run more than 10 million jobs, and despite its advancing age, it still has the most users of any system we run supporting capacity computing and a broad science mission," Stanzione said. 
"We felt it was important to not just extend the lifespan, but to also update the hardware to continue to support cutting edge science."Stampede2 has more than 20 petaflops of aggregate performance which still makes it the largest system in terms of delivered time in the NSF-funded ACCESS program, despite being deployed back in 2017."We imagine the new IceLake nodes will continue to run in some system long after Stampede2 retires next year," Stanzione said.Frontera and the Leadership Class Computing Facility (LCCF)TACC's Frontera supercomputer will run through 2025 when the systems for the new Leadership-Class Computing Facility (LCCF), led by a new machine named "Horizon," are expected to come online. TACC will work with new partners to construct a custom-built, co-location facility to house the LCCF's primary systems, the center's first off-campus venture. The LCCF will be hosted at a Switch commercial data center on the Round Rock, Texas campus. TACC announced in May 2022 that the center will be working with new partners to construct a custom-built, co-location facility to house the LCCF's primary systems, the center's first off-campus venture. The system will be hosted at a Switch commercial data center on the Round Rock, Texas, campus. This new space will add more than 15 MW of capacity, bringing TACC's total datacenter footprint to 25 available megawatts of power."The LCCF will launch a new era of computational discovery, including a system that can produce 10x more science than Frontera, and partnerships with leading HPC organizations across the nation.Part of the 10-fold improvement will come through optimizations in code performance. 
TACC also announced earlier this year the set of 21 codes and "grand challenge"-class science problems that will receive funding through the Characteristic Science Applications program. Among the applications are software for large international experiments such as the IceCube Neutrino Observatory; widely used codes from the earthquake and astrophysics communities; and custom codes that explore new approaches to machine learning and black hole modeling.
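As a sanity check on the performance figures quoted above, the short Python sketch below converts the petaflops ratings into the human-scale comparison the article makes. It assumes the conventional definition of 1 petaflop as 10^15 floating-point operations per second; the variable names are illustrative, not from any TACC source.

```python
# Sanity-check the article's arithmetic: a 5-petaflop GPU upgrade on top of
# 3 petaflops of existing CPU capacity, and the claim that a person doing
# one calculation per second would need ~150 million years to match one
# second of the new GPUs' work.

PETA = 10**15                       # 1 petaflop = 1e15 operations/second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

gpu_upgrade_flops = 5 * PETA        # new NVIDIA A100 capacity (per the article)
cpu_flops = 3 * PETA                # existing AMD CPU capacity
total_flops = gpu_upgrade_flops + cpu_flops

# Years a human would need, at one calculation per second, to match
# one second of the 5-petaflop upgrade.
human_years = gpu_upgrade_flops / SECONDS_PER_YEAR

print(f"Aggregate capacity: {total_flops / PETA:.0f} petaflops")
print(f"Human equivalent of one GPU-second: {human_years / 1e6:.0f} million years")
```

This reproduces the article's numbers: eight petaflops of aggregate capacity, and roughly 158 million years, which the article rounds to 150 million.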