Scalable Collaborative and Remote Visualization Software (ScoreVis)

Purpose

This project is a direct response to the need for next-generation visualization tools for terascale and petascale science that run on large-scale HPC platforms and on dedicated graphics clusters. ScoreVis targets large-scale clusters, with or without graphics processing units (GPUs). ScoreVis:

  • takes advantage of the computational power of terascale HPC clusters as well as large, GPU-based graphics clusters, and scales seamlessly from the largest available platforms (e.g., Track2) down to smaller systems deployed at users' local institutions;
  • maximizes the investments in these large-scale systems by providing remote and collaborative access to high-end visualization capabilities, making them widely available and effective for any researcher with a reasonably high-bandwidth network connection (e.g., consumer broadband); and
  • provides general-purpose visualization capabilities by supporting any software based on the standard OpenGL Application Programming Interface (API), which covers most popular visualization applications (see the sketch after this list).

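This document does not specify ScoreVis's rendering internals, but the usual foundation for remote access to unmodified OpenGL applications is server-side offscreen rendering: frames are drawn into a memory buffer on the cluster, then compressed and streamed to the client. The minimal C sketch below illustrates only the offscreen-rendering half of that idea, using Mesa's OSMesa interface as a stand-in; the buffer sizes, the test triangle, and the PPM dump in place of a network send are all hypothetical illustration choices, not ScoreVis code.

    /*
     * Minimal sketch: offscreen OpenGL rendering into a plain memory
     * buffer, the typical first step of a remote-visualization pipeline.
     * Build (assuming Mesa with OSMesa support): cc demo.c -lOSMesa
     */
    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define W 512
    #define H 512

    int main(void)
    {
        /* The framebuffer lives in ordinary memory; in a remote setup
           its contents would be compressed and streamed each frame. */
        unsigned char *pixels = malloc(W * H * 4);
        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        if (!pixels || !ctx ||
            !OSMesaMakeCurrent(ctx, pixels, GL_UNSIGNED_BYTE, W, H)) {
            fprintf(stderr, "OSMesa context setup failed\n");
            return 1;
        }

        /* Any standard OpenGL drawing works here unchanged. */
        glClearColor(0.f, 0.f, 0.f, 1.f);
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f); glVertex2f(-0.8f, -0.8f);
        glColor3f(0.f, 1.f, 0.f); glVertex2f( 0.8f, -0.8f);
        glColor3f(0.f, 0.f, 1.f); glVertex2f( 0.0f,  0.8f);
        glEnd();
        glFinish();  /* ensure rendering completes before reading pixels */

        /* Stand-in for the network send: dump the frame as a PPM image. */
        FILE *f = fopen("frame.ppm", "wb");
        if (!f) return 1;
        fprintf(f, "P6\n%d %d\n255\n", W, H);
        for (int y = H - 1; y >= 0; y--)     /* OSMesa rows are bottom-up */
            for (int x = 0; x < W; x++)
                fwrite(pixels + 4 * (y * W + x), 1, 3, f);
        fclose(f);

        OSMesaDestroyContext(ctx);
        free(pixels);
        return 0;
    }

Because the interception happens at the OpenGL layer, the application itself needs no modification, which is what makes an API-level approach general-purpose.
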
The national research community is experiencing an unprecedented surge of high-performance computing power. The potential impact of applying this compute power to the most challenging problems in science and society at large is enormous. Science and engineering will now be able to benefit from computing at much larger scales, finer resolutions, and longer time horizons, allowing increasingly realistic answers to scientific problems of global interest. This increase in capability, however, inevitably increases the amount of data generated by simulations and, consequently, the amount of data that must be analyzed and understood. Historical trends make clear that as computational power grows from gigaflops to teraflops to petaflops, the resulting computed data will grow from gigabytes to terabytes to petabytes. The impact of this increase in the quantity of data to analyze is staggering, and it is a potential bottleneck that must be overcome to realize the full impact of terascale and petascale HPC systems.

Visualization is one of the most important, and most commonly used, methods of analyzing and interpreting data from computational simulations. For many types of computational research, it is the only viable means of extracting information and developing understanding from the simulation data. Current approaches, which treat visualization as a post-processing step on a separate platform, are becoming obsolete as the data grows too large to transfer between compute and visualization systems. In addition, the use of expensive special-purpose graphics machines is no longer economically viable at the scales required. However, the ability to interactively manipulate the data remains a desirable, and often necessary, part of the analysis process. Visualization will undoubtedly follow in the footsteps of HPC: distributed clusters leveraging the aggregate power of commodity technologies. Indeed, this trend has already begun, but it has not yet produced a comprehensive solution for the national terascale science community.
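
ScoreVis's cluster architecture is not detailed in this section, but a standard building block for distributed-cluster visualization is sort-last image compositing: each node renders only its portion of the data, and the partial images are merged by keeping, at every pixel, the fragment closest to the viewer. The C sketch below shows just that per-pixel merge step; the Fragment layout and the composite function are hypothetical, and a real system would exchange buffers between nodes (e.g., over MPI) in a binary-swap or tree pattern.

    /*
     * Hypothetical sketch of sort-last depth compositing, a common
     * technique behind cluster-based parallel rendering. Not ScoreVis
     * code; it illustrates the general idea only.
     */
    #include <stdio.h>

    typedef struct {
        unsigned char rgba[4];
        float depth;                  /* smaller = closer to the viewer */
    } Fragment;

    /* Merge node B's partial image into node A's: keep the nearer
       fragment at each pixel. */
    static void composite(Fragment *a, const Fragment *b, int npixels)
    {
        for (int i = 0; i < npixels; i++)
            if (b[i].depth < a[i].depth)
                a[i] = b[i];
    }

    int main(void)
    {
        /* Two one-pixel "partial images" from two hypothetical nodes. */
        Fragment node_a[1] = {{{255, 0, 0, 255}, 0.7f}};  /* red, farther */
        Fragment node_b[1] = {{{0, 0, 255, 255}, 0.3f}};  /* blue, nearer */

        composite(node_a, node_b, 1);
        printf("winning depth: %.1f (blue fragment)\n", node_a[0].depth);
        return 0;
    }

The merge is associative, which is what lets a cluster combine partial images pairwise across nodes rather than funneling every frame through a single machine.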

Funding Source(s)

  • ScoreVis
Contacts

Kelly Gaither
Director of Visualization
kelly@tacc.utexas.edu | 512-471-8957

Paul A. Navrátil
Manager, Scalable Vis Technologies Group, Visualization Scientist
pnav@tacc.utexas.edu | 512-471-6245