
Turning Up the Heat in Data Center Cooling


How do you cool today’s data centers, which run increasingly high-density, high-performance equipment built to manage exploding amounts of enterprise data? It is a substantial challenge for data center managers. Fortunately, we at Dell IT have found a way to take the heat off such cooling demands.

After many months of careful experimentation, we recently determined that, using a cold aisle containment approach in our Durham, N.C., data center, we can safely maintain our equipment at 78 degrees F. That is six degrees warmer than the original design threshold of 72 degrees F. The increase means we can now leverage free-air cooling—air circulated from outside rather than mechanically cooled air—in our data center 80 percent of the time instead of 60 percent. (Think of it as opening a window in your house rather than running the air conditioner.) This will cut our cooling costs by 25 percent.
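As a rough illustration of why the setpoint matters, here is a minimal sketch (hypothetical, not our actual tooling) that counts how many hours in a set of outdoor temperature readings fall at or below a given supply-air threshold. Raising the threshold directly raises the share of hours when outside air alone can do the job.

```python
# Hypothetical sketch: estimate how often free-air (economizer) cooling is viable
# for a given supply-air temperature threshold. Assumes `hourly_temps_f` holds
# outdoor dry-bulb readings in degrees F, one per hour, from any weather source.

def free_air_fraction(hourly_temps_f, threshold_f):
    """Return the fraction of hours cool enough to use outside air directly."""
    usable = sum(1 for t in hourly_temps_f if t <= threshold_f)
    return usable / len(hourly_temps_f)

# Example with made-up readings: a higher threshold admits more hours.
sample = [55, 61, 68, 74, 77, 81, 86, 79, 73, 66, 60, 58]
print(f"At 72 F: {free_air_fraction(sample, 72):.0%} of hours")
print(f"At 78 F: {free_air_fraction(sample, 78):.0%} of hours")
```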


A cool experiment   

Our cold aisle effort began as a sort of science project in our Durham data center, which we built from scratch in 2010. Like most legacy data centers, our old data center in Massachusetts, which we were decommissioning, posed continuous problems in keeping equipment from overheating.  So when we had a chance to build a brand new data center in Durham, we decided to set up two of our 20 equipment aisles in a cold aisle containment configuration to explore this increasingly popular cooling option. We wanted to test and prove to ourselves that the approach would work before pursuing it more extensively.

Cold aisle containment—also called hot-cold aisle containment—is based on setting up server and storage equipment in alternating rows, so that hot air is exhausted from components in one direction and cold air is drawn in to cool them from the other. The cold aisle is then enclosed in pods to keep the cold air in and the hot air out. That way the temperature of the equipment in each pod can be closely controlled, maximizing energy efficiency. Enclosing the cold aisle also reduces the amount of data center space that has to be cooled.

We worked with our server rack vendor, Panduit, to help us set up the framework for our experimental cold aisles. Once we proved it was an effective strategy, we then had Panduit help us to configure the rest of our Durham center to cold aisle containment. We determined that we could recover the cost of that effort in less than a year.

Pushing temperature limits further

While we gained substantial energy efficiencies by implementing full cold aisle containment in 2014 (which let us use free-air cooling 60 percent of the time), we wanted to see if we could take data center efficiency to the next level by fine-tuning temperature thresholds for the equipment.

Storage systems have come a long way from the days when they had little tolerance for heat and data centers had to be kept as cold as meat lockers. Having our Vblock converged infrastructure and VMAX storage systems enclosed in pods meant we could use sensors to get precise readings on internal temperatures and humidity levels to refine our cooling needs.
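To give a feel for the kind of telemetry involved, below is a minimal, hypothetical sketch of collecting pod readings and flagging anything outside a target envelope; the field names and limits are illustrative assumptions, not our production monitoring.

```python
# Hypothetical sketch of pod telemetry checks; field names and limits are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PodReading:
    pod_id: str
    supply_temp_f: float          # cold-aisle supply air temperature
    relative_humidity_pct: float

def out_of_envelope(reading, max_temp_f=78.0, rh_range=(20.0, 80.0)):
    """Flag a reading outside the target temperature/humidity envelope."""
    rh_low, rh_high = rh_range
    too_hot = reading.supply_temp_f > max_temp_f
    bad_rh = not (rh_low <= reading.relative_humidity_pct <= rh_high)
    return too_hot or bad_rh

for r in [PodReading("pod-07", 76.4, 45.0), PodReading("pod-12", 79.1, 52.0)]:
    if out_of_envelope(r):
        print(f"{r.pod_id}: outside envelope, investigate")
```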

Under the supervision of our Data Center Manager Reynaldo Gonzalez, we slowly raised the temperature in our cold aisle pods a few degrees at a time over a period of several months. But first Rey had to figure out how to stabilize humidity and static air pressure in the cold aisle pods to keep air flowing at the right levels. That took his team about a year to resolve.


From there, we ran trending reports every step of the way to make sure the equipment was unaffected as we raised the temperature. We also had to make sure we stayed at or below 78 degrees F to remain within the industry standards set by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).
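In simplified, hypothetical form, the check a trending report encodes looks something like the sketch below: after each setpoint step, confirm that the daily peak supply temperature in every pod stayed at or below 78 degrees F before stepping again. The pod names and figures are made up for illustration.

```python
# Hypothetical sketch: validate a setpoint step against daily peak temperatures.
# `daily_peaks_f` maps a pod ID to its daily maximum supply temperatures (deg F)
# over the observation window; how the data is collected is out of scope here.

CEILING_F = 78.0  # the upper limit we held to, per the ASHRAE guidance above

def step_is_safe(daily_peaks_f, ceiling_f=CEILING_F):
    """Return True only if every pod stayed at or below the ceiling every day."""
    return all(max(peaks) <= ceiling_f for peaks in daily_peaks_f.values())

daily_peaks_f = {
    "pod-07": [75.8, 76.2, 77.0, 76.5],
    "pod-12": [76.9, 77.4, 77.8, 77.1],
}
print("Safe to take the next step" if step_is_safe(daily_peaks_f)
      else "Hold here and investigate")
```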

The result is that our data center went from a power usage effectiveness (PUE) of 1.6 to a PUE of 1.2. (PUE is a standard measurement of how efficiently a data center uses power.)
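To make those PUE figures concrete: PUE is total facility energy divided by the energy delivered to IT equipment, so at 1.6 the facility spends 0.6 watts of overhead (largely cooling) for every watt of IT load, while at 1.2 the overhead drops to 0.2 watts, a roughly two-thirds reduction. The worked example below uses an assumed 1 MW IT load purely for illustration.

```python
# Worked example of what the PUE improvement means for overhead energy.
# The 1 MW IT load is an assumed figure for illustration only.

def overhead_kw(pue, it_load_kw):
    """Non-IT load (cooling, power distribution, etc.) implied by a PUE value."""
    return (pue - 1.0) * it_load_kw

it_load_kw = 1000.0                     # assumed 1 MW of IT equipment load
before = overhead_kw(1.6, it_load_kw)   # 600 kW of overhead
after = overhead_kw(1.2, it_load_kw)    # 200 kW of overhead
print(f"Overhead before: {before:.0f} kW, after: {after:.0f} kW "
      f"({(before - after) / before:.0%} reduction)")
```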

Reaching this threshold is the latest milestone in our ongoing effort to make our data centers more energy efficient. In our Massachusetts data center, for example, we pipe water used for data center cooling through an outside loop in the colder months to augment our mechanical cooling.

We are now establishing cold aisle containment configurations in our other data centers in Massachusetts to gain the same savings. This process will take several years, since we are dealing with existing equipment that needs to be gradually reoriented to create cold aisles as we make routine equipment change-outs.

Beyond that, we are working with Panduit to create “smart” tiles in our data centers that will open and close to adjust cold aisle air pressure, a task that currently must be done manually.
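A hypothetical sketch of the kind of control loop such a tile could run is below: measure the cold aisle’s static pressure differential and nudge the tile damper open or closed to hold a target. The setpoint, gain and device interfaces are all assumptions for illustration, not Panduit’s design.

```python
# Hypothetical sketch of a smart-tile damper loop holding cold-aisle pressure.
# The sensor/actuator interfaces, setpoint and gain are illustrative assumptions.

TARGET_PRESSURE_PA = 5.0   # desired cold-aisle pressure differential, in pascals
GAIN = 2.0                 # percent of damper travel adjusted per pascal of error

def control_step(read_pressure_pa, get_damper_pct, set_damper_pct):
    """One proportional step: open the damper when the aisle is under-pressurized,
    close it when over-pressurized."""
    error = TARGET_PRESSURE_PA - read_pressure_pa()
    new_position = min(100.0, max(0.0, get_damper_pct() + GAIN * error))
    set_damper_pct(new_position)
    return new_position

# In practice this would run on a timer, e.g. once every 30 seconds:
#   control_step(sensor.read, damper.position, damper.move_to)
```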

These new temperature control breakthroughs will help us stay ahead of the industry trend toward storing more and more data in the same data center footprint using high-density, high-performance equipment.

Author information

David Scheffler
Director, Data Center Services


