Sun Constellation Linux Cluster - Decommissioned
Ranger was one of the largest computing systems in the world for open science research. As the first of the new NSF Track2 HPC acquisitions, it provided unprecedented computational capabilities to the national research community and ushered in the petascale science era. Ranger enabled breakthrough science that had not previously been possible and provided groundbreaking opportunities in computational science and technology research, from parallel algorithms to fault tolerance, from scalable visualization to next-generation programming languages.
Ranger went into production on February 4, 2008, running Linux (based on a CentOS distribution). The system components were connected via a full-Clos InfiniBand interconnect. Eighty-two compute racks housed the quad-socket compute infrastructure, with additional racks housing login, I/O, and general management hardware. Compute nodes were provisioned using local storage. Global, high-speed file systems were provided using the Lustre file system, running across 72 I/O servers. Users interacted with the system via four dedicated login servers and a suite of eight high-speed data servers. Resource management for job scheduling was provided by Sun Grid Engine (SGE).
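Jobs on SGE-managed systems like Ranger were submitted as batch scripts annotated with `#$` directives. The sketch below illustrates the general shape of such a script; the queue name, parallel environment, and application name are illustrative placeholders, not verified Ranger settings.

```shell
#!/bin/bash
# Hypothetical SGE batch script sketch. Queue, parallel-environment,
# and program names are assumptions for illustration only.
#$ -N my_mpi_job            # job name
#$ -q normal                # submission queue (assumed name)
#$ -pe 16way 64             # parallel environment: cores per node / total cores (assumed)
#$ -l h_rt=01:00:00         # wall-clock time limit of one hour
#$ -o $JOB_NAME.o$JOB_ID    # file for standard output
#$ -V                       # export the submission environment to the job

# Launch the MPI executable (launcher and binary name are placeholders)
mpirun ./my_mpi_app
```

A script like this would be handed to the scheduler with `qsub my_job.sh`, after which SGE queued it until the requested resources became available.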
Any researcher at a U.S. institution could submit a proposal to request an allocation of cycles on the system. The request required a description of the research, a justification of the need for such a powerful system to achieve new scientific discoveries, and a demonstration that the proposer's team had the expertise to utilize the resource effectively.
- 90% of the system was dedicated to XSEDE
- 5% of the system was allocable to Texas higher education institutions
- 5% of the system was allocable to industry through TACC's Science & Technology Affiliates for Research (STAR) Program
To submit a proposal to request an allocation on another system, please visit the XSEDE website.
Researchers at Texas higher education institutions, please contact Chris Hempel.
| Number of Nodes | 3,936 |
|---|---|
| Number of Processing Cores | 62,976 |
| Total Disk | 1.73 PB (shared) |