Frontera

A new NSF-funded petascale computing system

In 2018, the National Science Foundation (NSF) awarded a $60 million grant to the Texas Advanced Computing Center (TACC) to deploy a new petascale computing system, Frontera. Frontera opens up new possibilities in science and engineering by providing the computational capability for investigators to tackle much larger and more complex research challenges across a wide spectrum of domains.

Deployed in June 2019, Frontera is the 5th most powerful supercomputer in the world and the fastest supercomputer on a university campus. Early user access began that same month, with full system production scheduled for late summer 2019.

Up to 80% of the available hours on Frontera, more than 55 million node hours each year, will be made available through the NSF Petascale Computing Resource Allocation program.
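
For scale, that figure is consistent with the node count given in the hardware overview below; the sketch here is an illustration, not an official accounting, since it assumes every node is available around the clock:

    # Back-of-the-envelope check: assumes all 8,008 compute nodes (see
    # the hardware overview below) run year-round, so this is an upper
    # bound before maintenance downtime.
    nodes = 8008
    hours_per_year = 365 * 24              # 8,760

    total = nodes * hours_per_year         # ~70.2 million node-hours
    print(f"80% allocatable: {0.80 * total:,.0f} node-hours")   # ~56,120,064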

System Hardware and Software Overview

Frontera has two computing subsystems: a primary system focused on double-precision performance, and a second subsystem focused on single-precision, streaming-memory computing. Frontera also has multiple storage systems, interfaces to cloud and archive systems, and a set of application nodes for hosting virtual servers.

Primary Compute System

The primary computing system was provided by Dell EMC and is powered by Intel processors, linked by a Mellanox InfiniBand HDR and HDR-100 interconnect. The system has 8,008 available compute nodes.

The configuration of each compute node is described below (a quick check of the peak figure follows the list):

Processors: Intel Xeon Platinum 8280 ("Cascade Lake")
  • Number of cores: 28 per socket, 56 per node
  • Clock rate: 2.7 GHz (base frequency)
  • Peak node performance: 4.8 TF, double precision
Memory: 192 GB DDR4 per node
Local Disk: 480 GB SSD drive
Network: Mellanox InfiniBand, HDR-100
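
The 4.8 TF figure follows from the standard Cascade Lake throughput of 32 double-precision FLOPs per cycle per core (two AVX-512 FMA units). The sketch below multiplies this out at base frequency; it is an illustration of the theoretical peak, not a measured rate:

    # Peak FP64 per node, assuming 32 FLOPs/cycle/core on Cascade Lake
    # (two AVX-512 FMA units: 8 doubles * 2 ops * 2 units per cycle).
    cores = 56               # 28 per socket, 2 sockets
    clock_hz = 2.7e9         # base frequency; AVX-512 clocks run lower
    flops_per_cycle = 32     # double precision

    peak_tf = cores * clock_hz * flops_per_cycle / 1e12
    print(f"Peak FP64 per node: {peak_tf:.1f} TF")   # ~4.8 TF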

Subsystems

Liquid Submerged System
Processors: 360 NVIDIA Quadro RTX 5000 GPUs
Memory: 128 GB per node
Cooling: GRC ICEraQ™ system
Network: Mellanox InfiniBand, HDR-100
Peak Performance: 4 PF single precision

Longhorn
Processors: IBM POWER9-hosted system with 448 NVIDIA V100 GPUs
Memory: 256 GB per node (4 nodes with 512 GB per node)
Storage: 5 petabyte filesystem
Network: InfiniBand EDR
Peak Performance: 3.5 PF double precision; 7.0 PF single precision
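
These subsystem peaks line up with NVIDIA's published per-GPU figures. The sketch below multiplies them out; the per-GPU numbers are data-sheet peaks rather than sustained rates, and the V100 values assume the SXM2 variant:

    # Subsystem peaks from per-GPU peak throughput (NVIDIA data sheets;
    # V100 figures assume the SXM2 variant).
    rtx5000_fp32_tf = 11.2   # Quadro RTX 5000, single precision
    v100_fp64_tf = 7.8       # Tesla V100 SXM2, double precision
    v100_fp32_tf = 15.7      # Tesla V100 SXM2, single precision

    print(f"Submerged FP32: {360 * rtx5000_fp32_tf / 1000:.1f} PF")  # ~4.0
    print(f"Longhorn FP64:  {448 * v100_fp64_tf / 1000:.1f} PF")     # ~3.5
    print(f"Longhorn FP32:  {448 * v100_fp32_tf / 1000:.1f} PF")     # ~7.0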

System Interconnect

Frontera compute nodes are interconnected with HDR-100 links to each node, and HDR (200 Gb/s) links between leaf and core switches. The interconnect is configured in a fat-tree topology with a small oversubscription factor of 11:9.
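
An oversubscription factor compares the bandwidth entering a leaf switch from its nodes with the bandwidth it has toward the core. The port counts in the sketch below are hypothetical, chosen only to reproduce the stated 11:9 ratio; they are not Frontera's documented switch layout:

    from math import gcd

    # Hypothetical leaf switch: port counts chosen to illustrate how an
    # 11:9 ratio arises, not Frontera's actual configuration.
    node_links, node_gbps = 22, 100   # HDR-100 links down to nodes
    uplinks, uplink_gbps = 9, 200     # HDR links up to core switches

    down = node_links * node_gbps     # 2,200 Gb/s from nodes
    up = uplinks * uplink_gbps        # 1,800 Gb/s toward the core

    g = gcd(down, up)
    print(f"Oversubscription: {down // g}:{up // g}")   # 11:9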