How data centers are cooled
According to Wikipedia: "A data center is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town."
For now, I'm going to concentrate just on the cooling side of the story.
Data centers are the backbone of every digital service, app, and most of the software we use every day. Without them, our lives would be much more inconvenient and, in some cases, harder.
The computers that keep those apps and services running generate a lot of heat, and that heat must be dissipated somewhere.
The cooling system in a modern data center regulates several parameters to guide the flow of heat and cooling air as efficiently as possible. These parameters include, but aren’t limited to:
- Temperatures
- Cooling performance
- Energy consumption
- Cooling fluid flow characteristics
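To make those parameters concrete, here's a minimal Python sketch of what a single monitoring snapshot covering them might look like. All of the field names and units are my own illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class CoolingSnapshot:
    """One point-in-time reading of the parameters a cooling system regulates.

    Field names and units are illustrative assumptions only.
    """
    cold_aisle_temp_c: float   # supply-air temperature in the cold aisle
    hot_aisle_temp_c: float    # return-air temperature in the hot aisle
    cooling_load_kw: float     # heat the system is currently removing
    power_draw_kw: float       # electricity the cooling plant consumes
    coolant_flow_lpm: float    # liquid flow rate, litres per minute

    def efficiency(self) -> float:
        """Rough cooling efficiency: heat removed per unit of power spent."""
        return self.cooling_load_kw / self.power_draw_kw

snap = CoolingSnapshot(18.0, 35.0, 450.0, 120.0, 900.0)
print(snap.efficiency())  # -> 3.75: 450 kW of heat removed for 120 kW spent
```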
All of the components of a data center cooling system are interconnected and affect the system's overall efficiency. No matter how you set up your data center or server room, cooling is necessary to keep it working and available to run your business.
Proper data center cooling technologies allow servers to stay online for longer. Overheating can be disastrous in a professional environment that requires over 99.99% uptime, so any failure at the server level will have knock-on effects for your business and your customers.
Data doesn’t travel faster in cooler server rooms, but it travels a lot faster than it would over a crashed server!
Because data centers can quickly develop hot spots (regardless of whether the data center manager intended a cold-aisle setup or a hot-aisle one), new cooling solutions need to be efficient and easy to roll out on the fly.
This means using only liquid cooling technologies that are easily adaptable, or air-cooling systems that can easily change how cold air is routed. Overall, this allows for greater efficiency when scaling up a data center.
Data center cooling is a balancing act that requires the IT technicians responsible for it to consider a number of factors. Some of the most common ways of controlling computer room air are:
- Liquid cooling uses water to cool the servers. Using a Computer Room Air Handler (CRAH) is a popular way to combine liquid cooling and air cooling, but newer technologies such as Microsoft’s “boiling water cooling” are also used to cool data center servers and drive evaporative cooling.
- Air cooling uses a variety of Computer Room Air Conditioner (CRAC) technologies to create easy paths for hot air to leave the IT space. Raised floor platforms create a chilled space below the floor, which a CRAH or CRAC feeds via chilled-water coolers and other technologies, creating cold aisles underneath the servers.
Temperature and humidity controls, such as an HVAC system that manages the cooling infrastructure, and other technologies provide air-conditioning functionality.
Hot and cold aisle containment keeps hot exhaust air separated from cold intake air as it moves through the server room. Proper airflow, the use of a raised floor, and other cooling technology such as liquid cooling or HVAC solutions all build on the hot and cold aisles within a data center.
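To make the balancing act concrete, here's a hedged Python sketch of about the simplest control loop a technician could run against a cold-aisle temperature sensor: a proportional controller that ramps up CRAC fan speed as the aisle warms. The setpoint, gain, and function name are illustrative assumptions, not a real CRAC vendor's API.

```python
def crac_fan_speed(cold_aisle_temp_c: float,
                   setpoint_c: float = 24.0,
                   gain: float = 0.15,
                   min_speed: float = 0.3,
                   max_speed: float = 1.0) -> float:
    """Proportional controller: raise fan speed as the cold aisle warms.

    The setpoint and gain are illustrative; a real deployment would tune
    them against the facility's own temperature limits.
    """
    error = cold_aisle_temp_c - setpoint_c
    speed = min_speed + gain * error
    return max(min_speed, min(max_speed, speed))

# Example: a cold aisle running 3 degrees above setpoint asks for more airflow.
print(crac_fan_speed(27.0))  # -> 0.75
```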
Significant improvements in cooling system technologies over the last decade have allowed organizations to improve efficiency, but the pace of improvement has slowed more recently. Instead of regularly reinvesting in new cooling technologies to chase diminishing returns, you can now implement artificial intelligence (AI) to efficiently manage the cooling operations of your data center infrastructure.
Traditional engineering approaches struggle to keep pace with rapid business needs. What worked for you in terms of temperature control and energy consumption a decade ago is likely not enough today—and AI can help to accurately model these complex interdependencies.
Google’s implementation of AI to address this challenge involves the use of neural networks, a methodology that mimics aspects of cognition to identify patterns between complex input and output parameters.
For instance, a small change in ambient air temperature may require significant variations in cool airflow between server aisles, yet the adjustment may not satisfy safety and efficiency constraints on certain components.
This relationship may be largely unknown, unpredictable, and so nonlinear that no manual or human-supervised control system can identify and counteract it effectively.
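To show the idea in spirit only, here's a minimal NumPy sketch of such a learned mapping: a tiny feedforward network that turns a handful of sensor readings into a single predicted efficiency score. The input choices, layer sizes, and random (untrained) weights are my own assumptions for illustration; this is not Google's production model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: ambient temp, server load, fan speed, coolant flow.
sensors = np.array([21.5, 0.8, 0.6, 0.9])

# A tiny two-layer network with random weights; a real system would learn
# these weights from historical sensor logs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

hidden = np.tanh(w1 @ sensors + b1)  # the nonlinearity lets the model capture
prediction = w2 @ hidden + b2        # interactions no linear rule can

print(prediction)  # e.g., a predicted efficiency score
```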
Organizations today can equip data centers with IoT sensors that provide real-time information on various components, server workloads, power consumption, and ambient conditions. The neural network then takes the instantaneous, average, total, or meta-variable values from these sensors as its inputs.
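As a rough sketch of that preprocessing step, here's how raw readings from one hypothetical power sensor might be reduced to the instantaneous, average, and total features the model consumes. The sampling interval and feature names are assumptions.

```python
from statistics import mean

# Hypothetical power readings (kW) from one sensor, sampled once per minute.
readings = [118.0, 121.5, 119.2, 123.8]

features = {
    "instantaneous_kw": readings[-1],  # most recent sample
    "average_kw": mean(readings),      # mean over the interval
    "total_kwh": sum(readings) / 60,   # rough energy, given one sample per minute
}
print(features)
```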
Google pioneered this approach in the 2010s, and today, more and more organizations are embracing AI to support necessary IT operations.