The Importance of Liquid Cooling in Cloud-Based Data Centers

Why Liquid Coolants Have Become Unavoidable
For years, people have anticipated that rack power densities would eventually reach levels that air cooling cannot support. Until fairly recently those expectations proved premature, as density increases did not arrive as quickly as predicted. Even so, they sparked a substantial amount of research and development (R&D) into solutions capable of sustaining extremely high rack densities, the two most prominent being conductive cold plate and immersion liquid cooling.
Across a substantial portion of the data centre business, a convergence of trends is now driving rack power consumption to the levels that were forecast years ago.
Newer generations of central processing units (CPUs) and graphics processing units (GPUs) have significantly higher thermal power densities than earlier generations, and this is the driving force behind rising rack densities. Intel’s Cannon Lake CPUs, for example, had thermal power densities twice those of the previous generation, introduced just two years earlier, after years in which power densities had been largely stable.
Concurrently, server manufacturers are packing more CPUs and GPUs into each rack unit. Even with containment, systems that supply cooling air to racks cannot provide adequate cooling capacity when many high-performance servers are housed in the same rack. Nor is spreading out compute loads an option for processing-intensive applications: the latency penalties created by physical distance appear even within a single server, let alone when loads are distributed across multiple servers. As a result, components are packed ever more tightly into devices, producing extremely dense 1U servers and pushing rack densities to unprecedented levels.
The underlying trend driving these developments is the expansion of artificial intelligence and high-performance computing beyond their traditional home in scientific research. These technologies are now deployed in data centres supporting cloud-based high-performance computing, online gaming, finance, healthcare, film editing, animation, and media streaming. High-density equipment racks are therefore moving out of specialised applications and into general use, and thermal management systems must evolve to keep pace with this growing list of requirements.
This is affecting data centre design in several ways. First, new data centres are being built from the ground up to rely exclusively on liquid cooling, producing facilities that are smaller, more efficient, and far more computationally capable. Second, some data centres are built around air cooling but incorporate liquid-cooling infrastructure to ease a future transition to more advanced cooling methods. Third, and most commonly, operators are integrating liquid cooling into facilities that previously used only air cooling, often transferring some capacity from air systems to liquid systems. Finally, liquid cooling is establishing itself as a serious alternative to conventional methods for processing-intensive edge computing facilities.
With non-electrically conductive (dielectric) fluids, leaks are not expected to cause electronic failure, because these fluids do not allow electrical current to flow through them; the focus is instead on leak awareness, to minimise the volume of coolant lost to cost or operational disruption. It is nevertheless essential to establish suitable mitigation and protection measures for the data centre, including procedures for detecting and preventing leaks. When developing advanced cooling solutions that use liquids to cool information technology equipment, these preventative measures must be considered at the design stage.
The TCS (technology cooling system) moves liquid from the CDUs (coolant distribution units) to the rows of racks across the facility. The pipework design must incorporate both leak prevention and leak detection devices capable of reporting any detected leak promptly and accurately. Depending on access requirements, the pipes can be routed overhead, under the floor, or in a floor recess. Pipework connections, couplings, and connectors are needed between the CDU manifolds, row manifolds, and rack manifolds, as well as within the devices themselves, and every connection point is a potential source of leakage. Appropriate design considerations include selecting pipework joining systems, whether welded, threaded, flanged, or coupled, that are specifically designed not to leak and that offer, among other characteristics, wetted material compatibility with the liquid being transported.
Wetted material compatibility is one of the best practices included in pipework design considerations.
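As a rough illustration of how the leak detection reporting described above might be wired together, the sketch below polls hypothetical leak sensors grouped by zone (CDU, row, and rack manifolds) and raises an alert for any zone reporting liquid. The sensor interface, zone names, and alerting hook are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of zone-based leak detection reporting for a TCS loop.
# The sensor interface, zone layout, and alerting hook are illustrative
# assumptions, not a specific vendor API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LeakSensor:
    zone: str                   # e.g. "CDU-1 manifold", "Row-A manifold", "Rack-12 manifold"
    read: Callable[[], bool]    # returns True when liquid is detected

def poll_leak_sensors(sensors: list[LeakSensor], alert: Callable[[str], None]) -> list[str]:
    """Poll every sensor once and alert on any zone reporting a leak."""
    tripped = [s.zone for s in sensors if s.read()]
    for zone in tripped:
        alert(f"Leak detected at {zone}: isolate the affected branch to limit coolant loss")
    return tripped

# Example usage with stubbed sensors (a real deployment would read hardware inputs).
if __name__ == "__main__":
    sensors = [
        LeakSensor("CDU-1 manifold", lambda: False),
        LeakSensor("Row-A manifold", lambda: True),
        LeakSensor("Rack-12 manifold", lambda: False),
    ]
    poll_leak_sensors(sensors, alert=print)
```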
Operating temperatures affect the length and dimensions of pipework through thermal expansion, and this extends to the materials used to join piping; movement at connections can create leaks. There is also a risk that contaminants will foul systems and reduce their performance. Because liquid flow rates within the pipework are significant, there is a risk of flow disturbance and cavitation, which, combined with a build-up of impurities, can corrode joints and compromise their structural integrity over time. Specific information on material compatibility, covering the joining method, the materials, and the liquid used in the TCS fluid network, is available from manufacturers, the liquid cooling guidelines, the OCP cold plate guidelines, and the Energy Efficient High Performance Computing (EE HPC) Working Group. These guidelines give particular attention to strainers and filtration for removing impurities and debris such as scale, fouling, and bacterial growth, all of which can contribute to corrosion. Contaminants can enter the liquid network during installation and commissioning of a liquid-cooled system, so professional teams are strongly advised to follow specific installation guidelines.
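To give a sense of scale for the thermal expansion effect mentioned above, the short sketch below estimates the change in length of a straight pipe run from the standard linear expansion relation ΔL = α · L · ΔT. The material coefficients and the example run length and temperature rise are illustrative assumptions only, not design values.

```python
# Rough estimate of pipework thermal expansion: delta_L = alpha * L * delta_T.
# Material coefficients and the example run length / temperature rise are
# illustrative assumptions, not design values.

EXPANSION_COEFF_PER_K = {
    "copper": 17e-6,            # approximate linear expansion coefficient (1/K)
    "stainless_steel": 16e-6,
    "pvc": 70e-6,
}

def thermal_expansion_mm(material: str, run_length_m: float, delta_t_k: float) -> float:
    """Return the approximate change in length (mm) of a straight pipe run."""
    alpha = EXPANSION_COEFF_PER_K[material]
    return alpha * run_length_m * delta_t_k * 1000.0   # metres -> millimetres

# Example: a 20 m copper run warming by 25 K grows by roughly 8.5 mm,
# enough to stress rigid joints if no expansion allowance is designed in.
if __name__ == "__main__":
    print(f"{thermal_expansion_mm('copper', 20.0, 25.0):.1f} mm")
```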
Liquid flow rates within the TCS pipework are determined by the manufacturers' guidelines and the cooling duty required. These flow requirements act as boundary conditions that constrain the pipework architecture. Flow meters with setpoints can be considered, alongside leak detection and mitigation strategies, to ensure the system operates within its design parameters. TCS pipe sizing choices must also be compatible with all other systems.
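As a simple illustration of how cooling duty constrains flow rate, and of how a flow meter setpoint band might be checked, the sketch below uses the standard heat balance Q = P / (ρ · cp · ΔT) for a water-based coolant. The heat load, temperature rise, fluid properties, and tolerance band are assumptions for the example only.

```python
# Sketch: flow rate required for a given cooling duty, plus a simple
# setpoint-band check such as a flow meter with alarms might implement.
# Fluid properties and the example numbers are illustrative assumptions.

RHO_KG_PER_M3 = 1000.0      # density of water (approx.)
CP_J_PER_KG_K = 4186.0      # specific heat capacity of water (approx.)

def required_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Volumetric flow (litres/minute) needed to remove heat_load_kw at a
    coolant temperature rise of delta_t_k, from Q = P / (rho * cp * dT)."""
    m3_per_s = (heat_load_kw * 1000.0) / (RHO_KG_PER_M3 * CP_J_PER_KG_K * delta_t_k)
    return m3_per_s * 1000.0 * 60.0     # m^3/s -> L/min

def flow_within_setpoints(measured_lpm: float, design_lpm: float, tolerance: float = 0.10) -> bool:
    """True if the measured flow is within +/- tolerance of the design flow.
    A sustained low reading can indicate a leak or blockage; a high reading
    can indicate a control fault."""
    return abs(measured_lpm - design_lpm) <= tolerance * design_lpm

if __name__ == "__main__":
    design = required_flow_lpm(heat_load_kw=80.0, delta_t_k=10.0)   # ~115 L/min for an 80 kW rack
    print(f"design flow ≈ {design:.0f} L/min")
    print("flow OK" if flow_within_setpoints(100.0, design) else "flow outside setpoints - investigate")
```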

Learn more about Leak Detection

Learn more about flow meters

Please contact us to discuss your application
