Electronic components generate a lot of heat. Keeping them cool can get expensive. Let's discuss some options.
You’ve got to spend money to make money. But could you spend money to save money?
What about spending money to save money on a system that makes money?
It’s not a riddle. Let’s think of the problem in terms of energy.
If you had a system that ran on electricity but gave off too much heat in the process, would you spend even more electricity to power a second system whose only purpose is to cool down the first?
Underneath this complicated logic is a fairly simple, and very important, question about how to cool datacenters efficiently.
Let’s get more specific. You’ve got a datacenter, containing various components, such as CPUs, DIMMs, and NICs. In order to make this datacenter run, you need a lot of electricity, which costs money.
But then those components start to give off a lot of heat. This heat is the direct result of the electricity that you paid for. It’s your heat! You want to keep it, right?
Well, no. You want to get rid of it. It’s making your datacenter too hot. For one thing, your components might overheat. For another thing, unless you’re already fully deployed with Future Digital Infrastructure (FDI), you probably have humans working in your datacenter. And they may not be able to function in air that has been heated by proximity to your components.
So you spend money on a cooling system. And that cooling system runs on electricity, which also generates heat.
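The industry's standard yardstick for this tradeoff is Power Usage Effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. The post doesn't name PUE, and the wattage figures below are illustrative assumptions rather than Ericsson data, but a quick sketch shows why cooling efficiency matters so much:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A perfect datacenter would have PUE = 1.0 (every watt goes to computing).
# The kW figures below are illustrative assumptions, not measured values.

def pue(it_power_kw: float, cooling_power_kw: float, other_overhead_kw: float = 0.0) -> float:
    """Total facility power divided by the power delivered to IT equipment."""
    total = it_power_kw + cooling_power_kw + other_overhead_kw
    return total / it_power_kw

# A conventionally air-cooled hall: 1000 kW of IT load plus 600 kW of cooling/overhead.
print(round(pue(1000, 500, 100), 2))  # 1.6

# An efficient free-air or liquid-cooled hall: same IT load, far less cooling power.
print(round(pue(1000, 80, 20), 2))    # 1.1
```

At a PUE of 1.6, you pay for 60% more electricity than your components use; cutting that to 1.1 is exactly the kind of saving the approaches below are chasing.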
Popular ways to cool datacenters
How can you break out of this cycle of inefficiency? Here are some approaches gaining traction in the industry.
Open racks. Instead of being contained in a metal box, circuit boards are exposed to the open air, which is cooled. The Open Compute Project (OCP), of which Ericsson is a platinum member, has designs that support the open rack model.
Open datacenters. These datacenters are set up in cold climates. Filtered cold air from outside is used to cool the components; once the air has absorbed the heat, it's vented back outside. Facebook has famously implemented this system in the Lulea datacenter, near the Arctic Circle.
Underwater datacenters. Nobody runs production workloads underwater yet, but Microsoft has built functional prototypes under Project Natick, an ongoing research project. Underwater datacenters are great as a concept, but they pose some challenging questions, such as how to replace failed components. Also, they require large bodies of water.
Immersive liquid cooling. The datacenter is dry, but individual components are kept immersed in small liquid containers. Per unit volume, the liquid can absorb approximately 1,500 times more heat energy than air. Since the heat doesn't travel far from the components, the components can be packed closer together. And there's no need for raised floors or fans, which means the whole datacenter can be smaller and easier to build in an out-of-the-way location.
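That "1,500 times" figure can be sanity-checked by comparing volumetric heat capacity (density times specific heat). The property values below are generic textbook assumptions for air and a mineral-oil-like dielectric coolant, not vendor data; actual immersion fluids vary:

```python
# Rough sanity check of the liquid-vs-air claim, comparing volumetric heat
# capacity: the energy needed to warm one cubic metre of fluid by 1 kelvin.
# Fluid properties are generic textbook assumptions, not vendor data.

def volumetric_heat_capacity(density_kg_m3: float, cp_j_kgk: float) -> float:
    """Heat energy (J) needed to raise one cubic metre of fluid by 1 K."""
    return density_kg_m3 * cp_j_kgk

air = volumetric_heat_capacity(1.2, 1005)          # ~1.2e3 J/(m3*K)
mineral_oil = volumetric_heat_capacity(850, 1900)  # ~1.6e6 J/(m3*K)

ratio = mineral_oil / air
print(f"liquid holds roughly {ratio:.0f}x more heat per volume than air")
```

With these assumed properties the ratio comes out around 1,300x, the same order of magnitude as the figure cited above; denser coolants with higher specific heat push it higher.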
Immersive liquid cooling for server components is one of the many features that make up Future Digital Infrastructure (FDI), Ericsson's vision for the datacenter of the future.
Indirect liquid cooling. This is the most common existing way of cooling CPUs with liquid. In this scenario, liquid cools a plate that sits against the components. Ericsson is currently testing this type of cooling, along with special drip-free connectors developed with CoolIT, on an Ericsson Hyperscale Datacenter System 8000 compute sled (pictured above), for possible use with the powerful new Intel® Xeon® Scalable processor.
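For cold-plate loops like this, the heat carried away follows the standard relation Q = mass flow x specific heat x temperature rise. The flow rate and temperature figures below are illustrative assumptions for a single sled, not measurements from the Hyperscale Datacenter System 8000:

```python
# Back-of-the-envelope for a cold-plate loop: Q = m_dot * cp * delta_T.
# Flow rate and temperature rise are illustrative assumptions, not
# measurements from any Ericsson hardware.

def heat_removed_w(flow_l_per_min: float, delta_t_k: float,
                   density_kg_m3: float = 997.0, cp_j_kgk: float = 4186.0) -> float:
    """Heat (W) carried away by a water loop: mass flow * specific heat * temp rise."""
    mass_flow_kg_s = flow_l_per_min / 60.0 / 1000.0 * density_kg_m3
    return mass_flow_kg_s * cp_j_kgk * delta_t_k

# 1.5 litres per minute of water warming by 10 K across the cold plate:
print(round(heat_removed_w(1.5, 10)))  # roughly 1 kW of heat removed
```

Roughly a kilowatt from a modest trickle of water illustrates why liquid loops can handle processor power densities that air cooling struggles with.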
Liquid cooling is a harbinger of the zero-distance world, where old ways of governing and managing are no longer effective. Learn more in the paper that I wrote in collaboration with Jason Hoffman, Head of Product Area Cloud Infrastructure, Ericsson.
Header image: Wyncliffe