Water and computers don’t mix, right? So why would anybody want to try to cool computer equipment with water? Lots of reasons. But the first thing you think, of course, is this: “Will it leak?” Well, probably not — but we’ll get into that. You should know that water and computers are definitely not mutually exclusive. In fact, you might be amused to learn about a 1940s computer that was powered entirely by water. We’ll tell you more about it at the end of this article. But first let’s deal with the matter at hand.
Hewlett-Packard addressed this question in an article entitled “Why you’ll be using liquid cooling in five years”. The first simple answer: density. “The problem is that CRAC units are not suited for high-density rack cooling,” suggests contributing writer Andy Patrizio, “because they simply cannot provide enough cooling airflow through high-density racks.” He quotes Geoff Lyon, CEO of CoolIT, who says: “The trend in IT is they want to increase server density, all the way down to the chip level. That means increasing the power of the chips, putting more chips per rack unit, and filling up the racks as much as possible.”
The result of shrinking all this hardware is that much more power ends up running through a single rack. In a traditional data center, rack power might be around 3.5 to 4 kilowatts. But newer, high-density data centers may require up to 70 kilowatts of power per rack. What we are talking about here are high performance computing (HPC) environments, defined by the Inside HPC website as follows:
“High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.”
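To see why those density numbers matter for cooling, it helps to run the back-of-the-envelope math on airflow. The sketch below estimates the volumetric airflow needed to carry a rack's heat away with air alone; the 10 °C intake-to-exhaust temperature rise and the rack wattages are illustrative assumptions, not vendor specs:

```python
# Back-of-the-envelope: airflow needed to remove rack heat with air alone.
# Property values are textbook figures for air near room temperature.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005       # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # cubic feet per minute per m^3/s

def airflow_m3s(watts, delta_t_k=10):
    """Volumetric airflow (m^3/s) needed to carry `watts` of heat
    at a `delta_t_k` temperature rise between intake and exhaust."""
    return watts / (AIR_DENSITY * AIR_CP * delta_t_k)

for kw in (4, 70):
    flow = airflow_m3s(kw * 1000)
    print(f"{kw:>3} kW rack: {flow:.2f} m^3/s (~{flow * M3S_TO_CFM:,.0f} CFM)")
```

Under these assumptions, a 4 kW rack needs roughly 0.3 m³/s (about 700 CFM), while a 70 kW rack needs nearly 6 m³/s (over 12,000 CFM) — which is exactly the airflow problem the CRAC units above can't solve.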
So along with density, the need for high performance is another reason for liquid cooling. That may be why many gamers are turning to water cooling for their graphics processing units (GPUs). A video by InsideHPC shot at the International Supercomputing Conference shows that manufacturers like Asetek are offering liquid cooling for GPUs and CPUs, as well as for memory. And IBM’s SuperMUC is billed as the world’s first hot-water-cooled supercomputer.
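Part of what makes water so effective, even warm water as in SuperMUC, is its heat capacity. A minimal sketch comparing textbook property values for water and air (the 70 kW rack and 10 °C rise are illustrative assumptions):

```python
# Why liquids move heat so well: volumetric heat capacity of water vs. air.
# Property values are standard textbook figures.

WATER_CP = 4186       # J/(kg*K)
WATER_DENSITY = 1000  # kg/m^3
AIR_CP = 1005         # J/(kg*K)
AIR_DENSITY = 1.2     # kg/m^3

# Heat stored per cubic metre per degree, water vs. air.
ratio = (WATER_CP * WATER_DENSITY) / (AIR_CP * AIR_DENSITY)
print(f"water carries ~{ratio:,.0f}x more heat per unit volume than air")

def water_flow_lps(watts, delta_t_k=10):
    """Water flow (litres/second) to carry `watts` at a `delta_t_k` rise.
    One kilogram of water is very nearly one litre."""
    return watts / (WATER_CP * delta_t_k)

print(f"70 kW rack: ~{water_flow_lps(70000):.1f} L/s of water")
```

By this estimate, water holds roughly 3,500 times more heat per unit volume than air, so a modest trickle of under 2 litres per second can do the work of thousands of CFM of airflow.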
The justification for liquid cooling continues. There are claims that it will cut cooling costs in half and save companies millions of dollars. The environmental benefits make it a welcome addition to green data centers because it significantly reduces cooling energy. And water cooling is even being used for the heating and cooling of data center buildings.
As for the question of leaks, it’s apparently not as problematic as one might think. Todd Boucher of Leading Edge Design Group told Patrizio, “When they were first introduced, there was the risk of bringing water into computing equipment. But the manufacturers did a great job in designing products that minimize risk to the end user, and liquid-cooled vendors did the same thing. That doesn’t mean there is no risk, but they did a nice job.”
“A nice job” means that, in practice, leaks are rarely an issue. It might help to keep in mind that liquid cooling is not a foreign subject to us. It is used in our refrigerators, our air conditioners, and in the cars that we drive. And while you might notice some condensation dripping from your car’s air conditioner when you pull out of a parking space, computer manufacturers hold expensive servers to a far higher standard.
Ryan Fisher of PC Gamer asks, “Is liquid cooling your PC safe?” And he confesses, “In the four years since I first discovered the joys of custom water cooling, I’ve had just one leak and it was entirely my own fault.” His advice: take your time, double-check everything, and leak-test the system after installation.
Liquid cooling may not increase availability, but there’s no indication that it will decrease it either. If the main factor is human error, then that could happen with either water-cooled or air-cooled devices.
Of course, liquid and water are not exactly the same thing. Other liquids can be used to cool computer equipment. 3M manufactures its own concoction, called Novec, which doesn’t conduct electricity at all. The company calls it a breakthrough technology in its video “Introduction to Two-phase Immersion Cooling”, saying that Novec has “excellent heat transfer properties” and removes heat through direct contact with the computer components.
3M claims that Novec can reduce cooling costs by 95%. And you don’t have to worry about leaking — your equipment is actually immersed in the fluid! But it looks like you would need to buy purpose-built equipment too. While retrofitting might work with some water cooling systems by replacing heat sinks, it’s not clear what new components you might have to purchase for this technology. It’s interesting to note that Novec has other uses, such as fire suppression.
We promised to tell you about a famous water-powered computer. It was called the Moniac, and it was an amazing hydro-mechanical computer created by Bill Phillips in the 1940s to predict how money flows through national economies. Check out this interesting video on the subject. So you see, water and computers do mix.
As for the risks and benefits of liquid cooling for computer components, it all depends. The article we shared from HP quotes one expert who suggests that many people remain “aquaphobic” when it comes to computers. Patrizio also notes that liquid cooling is still a “fringe idea”, even in the HPC world. If you’re not doing high-density computing, it may not be worth the retrofitting and installation costs. But if you’re keen on saving money and energy and shrinking data center floor space, particularly in new facilities, then liquid cooling may be the way to go, despite the potential increase in points of failure. Of course, you’ll have to do your own homework. But there’s always plenty of homework to do, isn’t there?