Frontera

A new NSF-funded petascale computing system


In 2018, the National Science Foundation (NSF) awarded a $60 million grant to the Texas Advanced Computing Center (TACC) to deploy a new petascale computing system, Frontera. Frontera opens up new possibilities in science and engineering by providing the computational capability investigators need to tackle much larger and more complex research challenges across a wide spectrum of domains.

Deployed in June 2019, Frontera is the 16th most powerful supercomputer in the world and the fastest supercomputer on a university campus in the U.S. Early user access began in June 2019, and the system entered full production in September 2019.

Up to 80% of the available hours on Frontera, more than 55 million node hours each year, will be made available through the NSF Petascale Computing Resource Allocation program.
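
As a rough consistency check on that figure (a sketch only, not allocation policy), running about 80% of Frontera's 8,008 compute nodes around the clock for a year works out to roughly 56 million node hours:

```python
# Back-of-the-envelope check of the annual node-hour figure.
# Assumes all 8,008 compute nodes and a full 365-day year; actual
# allocatable hours also depend on maintenance windows and policy.
nodes = 8008
hours_per_year = 24 * 365                  # 8,760 hours
total_node_hours = nodes * hours_per_year  # ~70.1 million
allocatable = 0.80 * total_node_hours      # up to 80% via the allocation program

print(f"Total node hours/year: {total_node_hours:,}")
print(f"80% share:             {allocatable:,.0f}")  # ~56.1 million
```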

System Hardware and Software Overview

Frontera has two computing subsystems, a primary computing system focused on double precision performance, and a second subsystem focused on single precision streaming-memory computing. Frontera also has multiple storage systems, as well as interfaces to cloud and archive systems, and a set of application nodes for hosting virtual servers.

Please use the following citation when acknowledging use of computational time on Frontera.

Dan Stanzione, John West, R. Todd Evans, Tommy Minyard, Omar Ghattas, and Dhabaleswar K. Panda. 2020. Frontera: The Evolution of Leadership Computing at the National Science Foundation. In Practice and Experience in Advanced Research Computing (PEARC ’20), July 26–30, 2020, Portland, OR, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3311790.3396656

Primary Compute System

The primary computing system was provided by Dell EMC and is powered by Intel processors, connected by a Mellanox InfiniBand HDR and HDR-100 interconnect. The system has 8,008 available compute nodes.

The configuration of each compute node is described below:

Processors: Intel Xeon Platinum 8280 ("Cascade Lake")
  • Number of cores: 28 per socket, 56 per node
  • Clock rate: 2.7 GHz (base frequency)
  • Peak node performance: 4.8 TF, double precision (see the sketch below)
Memory: DDR4, 192 GB/node
Local Disk: 480 GB SSD drive
Network: Mellanox InfiniBand, HDR-100
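
The 4.8 TF peak figure follows from the standard peak-FLOPS arithmetic, cores × FLOPs per cycle × clock rate. The sketch below assumes 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units on Cascade Lake) and the 2.7 GHz base frequency listed above; it illustrates the arithmetic rather than quoting a vendor specification.

```python
# Peak double-precision performance per Frontera compute node (sketch).
# Assumption: 2 AVX-512 FMA units/core -> 8 doubles * 2 FLOPs * 2 units
# = 32 DP FLOPs per core per cycle on Cascade Lake.
cores_per_node = 56        # 28 cores/socket * 2 sockets
dp_flops_per_cycle = 32    # assumed AVX-512 throughput per core
clock_hz = 2.7e9           # base frequency

peak_tflops = cores_per_node * dp_flops_per_cycle * clock_hz / 1e12
print(f"Peak DP performance: {peak_tflops:.1f} TF")  # ~4.8 TF
```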

Subsystems

Liquid Submerged System
Processors: Intel Xeon E5-2620 v4 @ 2.10 GHz
  • Number of cores: 16 per socket, 32 per node
GPUs: 360 NVIDIA Quadro RTX 5000 GPUs
  • 4 GPUs per node
Memory: DDR4 Synchronous 2400 MHz, 128 GB/node
Cooling: GRC ICEraQ™ system
Local Disk: 240 GB SSD drive
Network: Mellanox InfiniBand, FDR
Peak Performance: 4 PF, single precision (see the sketch below)
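
The 4 PF single-precision figure is consistent with the GPU count. The sketch below assumes roughly 11.2 TFLOPS single-precision peak per Quadro RTX 5000, a per-card figure not stated in this document:

```python
# Aggregate single-precision peak of the liquid submerged subsystem (sketch).
gpus = 360
fp32_tflops_per_gpu = 11.2  # assumed peak per Quadro RTX 5000

print(f"Aggregate FP32 peak: {gpus * fp32_tflops_per_gpu / 1000:.1f} PF")  # ~4.0 PF
```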
Longhorn
Processors: IBM POWER9-hosted system with 448 NVIDIA V100 GPUs
Memory: 256 GB per node (4 nodes with 512 GB per node)
Storage: 5 PB filesystem
Network: InfiniBand EDR
Peak Performance: 3.5 PF double precision; 7.0 PF single precision (see the sketch below)
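
Longhorn's peak figures likewise track its GPU count. The sketch assumes V100 (SXM2) peaks of roughly 7.8 TFLOPS double precision and 15.7 TFLOPS single precision per GPU, per-card figures not stated in this document:

```python
# Longhorn aggregate GPU peak (sketch).
gpus = 448
fp64_per_gpu = 7.8   # assumed TFLOPS per V100, double precision
fp32_per_gpu = 15.7  # assumed TFLOPS per V100, single precision

print(f"FP64 peak: {gpus * fp64_per_gpu / 1000:.1f} PF")  # ~3.5 PF
print(f"FP32 peak: {gpus * fp32_per_gpu / 1000:.1f} PF")  # ~7.0 PF
```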

System Interconnect

Frontera compute nodes connect to leaf switches with HDR-100 (100 Gb/s) links, and leaf switches connect to core switches with HDR (200 Gb/s) links. The interconnect is configured in a fat-tree topology with a small oversubscription factor of 11:9.
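
One way the 11:9 ratio can arise at the leaf level (a sketch under assumed port counts, not the documented switch configuration): if a 40-port HDR leaf switch splits 22 ports into 44 HDR-100 node links and uses the remaining 18 HDR ports as uplinks, node-facing bandwidth exceeds core-facing bandwidth by 4,400:3,600, which reduces to 11:9.

```python
from math import gcd

# Sketch of how an 11:9 oversubscription can arise at a leaf switch.
# Assumed port layout (illustrative only): 22 HDR ports split into
# 44 HDR-100 node links, 18 HDR ports used as uplinks to the core.
node_links, node_link_gbps = 44, 100  # downlinks to compute nodes
uplinks, uplink_gbps = 18, 200        # uplinks to core switches

down_bw = node_links * node_link_gbps  # 4,400 Gb/s toward nodes
up_bw = uplinks * uplink_gbps          # 3,600 Gb/s toward the core

g = gcd(down_bw, up_bw)
print(f"Oversubscription: {down_bw // g}:{up_bw // g}")  # 11:9
```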