
In the Lab: Fresh Air Cooling

by Kevin O'Brien

Lowering power usage is a hot-button topic for anyone operating a lab or datacenter these days. Utility costs are always increasing, servers and storage keep growing, and there are few signs that "the cloud" is going to do much to help. Regardless of the size of the operation, power costs make up a huge portion of an OPEX budget (#4 at StorageReview), including the direct power usage of servers and associated hardware as well as the indirect power used to cool everything. During one of my first trips to Dell's headquarters in Round Rock, TX, I had the pleasure of seeing the Dell Fresh Air Hot House, which sparked ideas that would eventually lower the overall power usage of the StorageReview Test Lab.

For anyone who hasn't seen it, the Dell Fresh Air Hot House is nothing more than sun and rain protection for a mini datacenter operated in the middle of Dell's parking lot. Its goal is to dispel the myth that servers, storage and networking gear really need the perfect 68F ambient temperatures many people think they do. Chasing such a low temperature target costs a lot of money and generally requires a much bigger investment in cooling gear versus basic air handlers that keep fresh air moving through an environment. After consulting with the Dell team in charge of that project, I turned to our own lab to look for ways to save money.

During our most recent lab upgrade, the main goal was moving the gear into a space where creative air movement methods allowed us to lower costs in both heating and cooling months. The StorageReview Test Lab doesn't fit into many HVAC equations, since cooling loads are generally calculated within a stable but growing range. Oversizing cooling gear is just as bad as undersizing it. With lab power usage varying by project, we've seen swings from 2kW to over 15kW, or in other terms, from a 7k BTU load up to over 51k BTU. We have one review unit, for example, that can consume upwards of 6kW by itself. Sizing an air conditioner to the maximum possible load in our lab (24kW currently) would mean installing a 7-ton AC unit. No matter how efficient that model is, it wouldn't be able to operate without severely short-cycling on days when our load is minimal. It would also cost an arm and a leg to install and operate, drawing an incredible amount of power and adding to our overall OPEX.
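The sizing math above follows from two textbook conversion factors: 1 kW of electrical load turns into roughly 3,412 BTU/hr of heat, and one "ton" of air conditioning absorbs about 12,000 BTU/hr. A minimal sketch of the conversions (the constants are standard HVAC values, not lab measurements):

```python
# Standard conversion factors used for rough AC sizing.
BTU_PER_KW = 3412    # 1 kW of load ~ 3,412 BTU/hr of heat
BTU_PER_TON = 12000  # 1 "ton" of cooling ~ 12,000 BTU/hr

def kw_to_btu(kw):
    """Heat output (BTU/hr) of an electrical load in kW."""
    return kw * BTU_PER_KW

def ac_tons_needed(kw):
    """AC tonnage required to absorb a given load."""
    return kw_to_btu(kw) / BTU_PER_TON

print(kw_to_btu(2))        # 6824 BTU/hr  (the ~7k BTU low end)
print(kw_to_btu(15))       # 51180 BTU/hr (the ~51k BTU high end)
print(ac_tons_needed(24))  # ~6.8 tons, hence the 7-ton unit
```

Running the numbers this way shows why a unit sized for the 24kW peak spends most of its life wildly oversized for a 2kW day.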

During cold months in Cincinnati, the lab completely heats our 4,100SF building. This means review gear running in winter provides "free" heat to the office, circulating the warm exhaust air back into our main HVAC air handler. In warm months, we operate on a different loop where fresh air is drawn into the lab and hot exhaust is piped out, isolated from our building's HVAC system. The net result for the past two years has been a growing lab with relatively flat or declining power usage and a stable power bill. This is important when review units have been as large as fully populated storage racks, such as our SUSE Storage project on HPE Apollo hardware.

Looking at our past two years of power usage, you can see a few trends. Power usage drops during winter months, when we scavenge the heat for warming our building, and increases in warmer months, but doesn't spike dramatically. In that same time we've greatly expanded the equipment we operate and put in place other power-saving measures that keep our electricity usage in check. Review units aren't powered on between tests if there will be excessive idle time. Gear is shut down overnight if another test won't be kicked off until the next day. We use managed Eaton G3 PDUs to kill power to servers, storage and networking equipment that isn't being used, completely removing the transient power consumption most gear has when left plugged in.
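To see why cutting power overnight matters, a hypothetical back-of-the-envelope estimate (the idle draw, hours, and utility rate below are illustrative assumptions, not figures from the lab):

```python
# Hypothetical estimate of energy and cost saved by killing power
# to idle gear overnight via managed PDU outlets.

def overnight_savings_kwh(idle_kw, hours_off_per_day, days):
    """kWh saved by cutting power to gear that would otherwise idle."""
    return idle_kw * hours_off_per_day * days

def savings_dollars(kwh, rate_per_kwh=0.12):  # assumed utility rate
    return kwh * rate_per_kwh

# e.g. 1.5 kW of idle draw cut for 12 hours a night over a 30-day month
kwh = overnight_savings_kwh(1.5, 12, 30)
print(kwh)                   # 540 kWh
print(savings_dollars(kwh))  # $64.80 at the assumed $0.12/kWh
```

Even a modest idle load adds up to real money over a month, which is why the PDU-level kill switch is worth the extra operational step.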

The gear required to cool our lab right now is very cost-effective. We use two 8" inline fans that combined can move 1,400CFM of hot air out of the room. Both fans are connected to a circuit where a thermostat-controlled relay can call for them to turn on or stay off. Turning off might seem counterproductive, but Ohio temperature snaps have dropped the lab to 40-45F, cold enough that the building furnace turned back on. As the days get warmer heading into summer, we are testing a mix of fresh air cooling and Tripp-Lite spot chillers for temperature-sensitive gear. With a mix of portable and in-rack coolers, we can let most of our gear operate at higher temps, while more sensitive gear (SLA UPS batteries) stays within manufacturer-recommended guidelines.
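As a sanity check on those two fans, the standard sensible-heat airflow formula is BTU/hr ≈ 1.08 × CFM × ΔT(°F). A minimal sketch, assuming a 20F rise from intake to exhaust (the constant is a textbook value; the ΔT is an assumption, not a lab measurement):

```python
# Sensible heat / airflow relationship: BTU/hr = 1.08 * CFM * dT (F).

def heat_removed_btu(cfm, delta_t_f):
    """Heat (BTU/hr) a given airflow carries away at a given temp rise."""
    return 1.08 * cfm * delta_t_f

def cfm_required(btu_per_hr, delta_t_f):
    """Airflow needed to remove a heat load at a given temp rise."""
    return btu_per_hr / (1.08 * delta_t_f)

# With an assumed 20F intake-to-exhaust rise, 1,400 CFM moves:
print(heat_removed_btu(1400, 20))  # 30240 BTU/hr (~8.9 kW of load)
```

The warmer you're willing to let the exhaust run, the more heat a fixed amount of airflow removes, which is the whole economic argument for fresh air cooling over chilled air.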
