Data Center Heat Meets Its Match in Mineral Oil Submersion System

Green Revolution Cooling teams with TACC to test power-saving solutions


Christiaan Best was sitting in a field when the initial idea for his company took root. A friend was telling Best about the cooling systems being installed at the North American Aerospace Defense Command (NORAD), where he worked, and the plans were so inefficient, so counterintuitive, that they started Best on a search for alternatives.

Some months later, enjoying a meal with one of his friends at Magnolia Café in Austin, Texas, he had a eureka moment and sketched an idea for a new data center cooling system on the back of a napkin. The idea involved building a rack that could hold densely packed computer servers in a circulating bath of liquid mineral oil.

42U CarnotJet rack and pump module at TACC.

"Doing submersion in liquid was not something we came up with," admitted Best, CEO and Technical Founder at Green Revolution Cooling (GRC). "People have been submerging electronics for 50 years, and, of course, Cray famously submerged computers in the 70s and 80s."

"Ten years ago, no one cared," Best explained. "Power density was very low, the cost of hardware compared to cooling was much higher, and the number of overall servers was lower. But now, people who run data centers are starting to build dedicated buildings just to move the air though the computers. Where does this madness end? You can't just keep shoving power into a box and expecting it to be cooled by air, which is fundamentally an insulator."

With energy costs rising and society requiring more compute power and data services, Best saw renewed opportunities for liquid cooling technology if it could be turned into a cost-saving product for data centers.

Best and co-founder Mark Tlapak built a prototype in Best's mother's garage and brought investors by as his mother served coffee and tea. The company's breakthrough moment came at the Supercomputing 2009 conference, where Green Revolution Cooling's fluid submersion cooling system for HPC was named one of the year's Disruptive Technologies. It won the distinction again in 2010.

Perspective view of servers installed in system.

"We were totally swamped with people. Everyone wanted to see if it really worked," Best recalled. "That marked a turning point. It forced us to get our product development done as quickly as possible."

Around that time, Best and Tlapak met with staff at the Texas Advanced Computing Center (TACC) to talk about the advantages of GRC's fluid submersion system.

"One of our biggest challenges in running data centers is providing power and cooling for these large scale systems," said Tommy Minyard, director of the Advanced Computing Systems group at TACC. "With our current data center operations, we run about 1.4 power usage effectiveness, which means we need 40 percent more power to provide the cooling in these data centers. Air is pretty inefficient in terms of transferring heat from the processors. This is why we're interested in emerging cooling techniques that do a much better job of transferring heat."

In 2009, Green Revolution Cooling was awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant to support the benchmarking of its CarnotJet™ dielectric fluid submersion cooling system. In April 2010, GRC deployed the system at TACC's data center on The University of Texas at Austin's J.J. Pickle Research Campus.

More than a year and a half into the collaboration, the results of system tests have exceeded expectations on both sides. According to measurements, the installation has consistently used three to five watts or less of cooling power for every 100 watts of server equipment. Moreover, the fluid submersion cooling system has shown reductions in total power usage of approximately 40 percent compared to the same hardware cooled by air, even though the GRC system was installed on an unconditioned loading dock where the ambient air temperature sometimes climbs well over 100°F.

Dell servers installed in CarnotJet system.

"We delivered all of the energy savings that we set out to," Best said. "We're achieving long-term efficiency that shows the cooling system uses three to five percent of the total computing power. That kind of efficiency is unmatched, especially when you consider that the computers themselves are using 10 to 15 percent less power." (The additional server power saving occurs because the fans, which air-cool the hardware from the inside, are not needed and are removed before installation.)

In the future, as data centers grow denser and hotter, liquid cooling may be a necessity, rather than an option.

"We want to put 100 kilowatts in a rack, or even more," Minyard said. "We understand that traditional cooling techniques aren't going to work at large scale."

Green Revolution Cooling isn't the only company exploring alternative cooling solutions. Data centers are experimenting with everything from shipping containers to Yahoo's chicken coop design to piping refrigeration to each component. But GRC's technology has quickly been gaining traction.

Example of six-rack system installation.

Only a few years after launching, Green Revolution Cooling has installed systems at research centers, petroleum companies, and major banks around the world. By March, the company will have installations at five of the Top-100 supercomputing sites internationally. It continues to work with TACC, and the two co-presented a poster at the Supercomputing 2011 conference showcasing the results of their two-year collaboration.

"So much of the datacenter is not focused on the silicon itself," Best said. "By reducing the amount of infrastructure, I think it will allow scientists to spend more money on the silicon itself. And when you give scientists and engineers more powerful tools, theoretically, they should be able to achieve better and more timely results."

In the broader picture, data centers account for two percent of all energy usage in the United States, and that figure is rising.

Said Best: "If all data centers were able to reduce the amount of energy used on the scale that we're able to achieve in HPC, then I think the world would be a different place."


UT Austin Students, Green Revolution Cooling and TACC Share an Adventure in Fluid-Submersion Cooling at SC'11

When deciding what type of cluster to bring to the Student Cluster Competition at SC'11, the leading international supercomputing conference, Team Texas decided to think outside the box. While six other teams from around the world deployed traditional GPU accelerators, GPU/CPU hybrid systems, and solid-state drives, this team of six undergraduate students from The University of Texas at Austin brought the first liquid submersion cluster to the competition, thanks to Green Revolution Cooling (GRC) and the Texas Advanced Computing Center (TACC).

Team Texas members from left to right: Craig Yeh, Marcell Himawan, Zach Pringnitz, Isami Romanowski, Michael Teng and Andrew Wiley.

Team Texas went into the competition expecting that the move to liquid cooling, in the form of mineral oil, would give them a five to 15 percent increase in computing power while keeping them under the competition's 26-amp limit. And they succeeded!

With the mineral oil cooling solution, all server fans were removed, saving power and letting the chips run more efficiently. This allowed the team to add more compute nodes to the cluster. Overall, the team saved 10 percent in power. They earned the best high-performance Linpack (HPL) score among non-GPU systems and came in third overall, after Team Taiwan and Team Russia.
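To see how fan savings translate into extra nodes under a fixed amperage cap, the sketch below works through the budget arithmetic. Only the 26 amp limit and the roughly 10 percent savings come from the article; the 120 volt supply and the 300 watt per-node draw are assumptions chosen purely for illustration:

    # Hypothetical power-budget arithmetic for the cluster competition.
    # Only the 26 A cap and ~10% fan savings come from the article; the
    # voltage and per-node draw are assumed values.

    budget_watts = 120.0 * 26.0        # assumed 120 V supply: ~3,120 W

    node_watts = 300.0                 # hypothetical air-cooled node draw
    fanless_watts = node_watts * 0.90  # ~10% freed by removing the fans

    print(int(budget_watts // node_watts),     # 10 nodes with fans
          int(budget_watts // fanless_watts))  # 11 nodes without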

It's a tough competition. The students had to design, build, and run a cluster on power-hungry benchmarks against previously unseen data sets, with 48 hours to work through as much data as they could and produce as many solutions as possible to the problems the data sets posed.

Throughout the competition, the members of Team Texas educated conference attendees about GRC's fluid-submersion cooling solution. Many people were surprised to learn that it was mineral oil, not water, keeping the processors cool; they didn't know such a solution existed. The competition also provided valuable lessons beyond the technical ones.

"This experience gave me the opportunity to see what the industry values and how it operates on a higher level," said Zach Pringnitz, the Team Texas lead and a double major in computer science and electrical engineering. "This competition taught me a lot about designing, building and managing a cluster efficiently, but the exposure to the industry leaders and the insight they gave me was the most valuable aspect of this experience."

Thank you to the following sponsors who made this year's Student Cluster Competition a reality: GRC, TACC, Dell, Intel, Mellanox, and Chevron.

Aaron Dubrow, Science and Technology Writer
January 17, 2012


The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is one of the leading centers of computational excellence in the United States. The center's mission is to enable discoveries that advance science and society through the application of advanced computing technologies. To fulfill this mission, TACC identifies, evaluates, deploys, and supports powerful computing, visualization, and storage systems and software. TACC's staff experts help researchers and educators use these technologies effectively, and conduct research and development to make these technologies more powerful, more reliable, and easier to use. TACC staff also help encourage, educate, and train the next generation of researchers, empowering them to make discoveries that change the world.

  • Cooling computer servers using air is inefficient and costly. As a result, some companies are returning to an older technology to cool data centers: submerging the hardware in liquid.
  • Over the last year and a half, Green Revolution Cooling has teamed up with the Texas Advanced Computing Center to test and benchmark its CarnotJet™ dielectric fluid submersion cooling system for high performance computing.
  • Results show that significant energy savings can be achieved using their system in a supercomputing environment.

Aaron Dubrow
Science and Technology Writer
aarondubrow@tacc.utexas.edu