October 10, 2025
Data Centers, Part Two: Air Cooling and Liquid Cooling in Data Centers
Contributed by Josh Goldberg, Technical Market and Business Analyst
Air Cooling vs. Liquid Cooling
For data centers, cooling is vital. As one study put it, “Imagine bringing enough power for 36,000 homes into a single building.” Data center cooling falls into two broad categories: air cooling and liquid cooling. Both are currently employed in the field, but liquid cooling is likely to become the dominant approach in the future.
Computing centers that support artificial intelligence (AI), data mining, and cryptocurrency all require liquid cooling to keep up with their power demands. Current computer room air conditioning (CRAC) and computer room air handler (CRAH) designs can only handle up to 15 kW of power draw per server rack, though they can be pushed up to 50 kW per rack when paired with liquid cooling in a hybrid system. The average power density in data centers is currently around 17 kW per rack, with projections reaching 30 kW per rack by 2027 as more AI systems come online. AI systems like ChatGPT already run at a power density of 80 kW per rack, and NVIDIA has announced a chip that could require up to 120 kW per rack, making liquid cooling essential moving forward.
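The thresholds above suggest a simple decision rule for matching a rack's power draw to a cooling approach. The sketch below encodes the figures cited in this article (15 kW for air-only, 50 kW for a hybrid air/liquid system); the thresholds are illustrative, not engineering limits for any specific product.

```python
# Illustrative sketch: choosing a cooling approach from rack power density,
# using the per-rack limits cited in the article (not vendor specifications).
AIR_ONLY_LIMIT_KW = 15   # max rack draw for CRAC/CRAH air cooling alone
HYBRID_LIMIT_KW = 50     # max rack draw for air cooling paired with liquid

def cooling_approach(rack_kw: float) -> str:
    """Return the minimum cooling strategy for a given per-rack power draw."""
    if rack_kw <= AIR_ONLY_LIMIT_KW:
        return "air"
    if rack_kw <= HYBRID_LIMIT_KW:
        return "hybrid air/liquid"
    return "liquid"

# Power densities mentioned in the article:
for kw in (12, 17, 30, 80, 120):
    print(f"{kw:>4} kW/rack -> {cooling_approach(kw)}")
```

Run against the densities quoted above, the 17 kW and 30 kW racks land in hybrid territory, while the 80 kW and 120 kW AI racks require full liquid cooling.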
CRAC and CRAH units force cooled air through the server room and across the servers to carry heat away. The heat is transferred to a cooling coil filled with either water (in a CRAH system) or a coolant such as glycol (in a CRAC system). From there, the heat is carried to a chiller and rejected outside the building, while the chilled water or glycol is recirculated back to the cooling coils. These systems can be economical for small or single-building data centers, and the design has been well optimized over the years.
Liquid cooling is a more recent development, and it can be implemented in several ways. The first is active or passive rear-door cooling, where liquid coolant circulates through coils at the back of the servers. Passive rear-door cooling relies on the servers’ own fans to push hot air across the cooling coils, while active rear-door cooling adds supplementary fans near the coils to draw the hot air through. A second approach pumps coolant to a plate in direct contact with the chip, and a final approach immerses the entire circuit board in the cooling fluid.
All liquid cooling designs differ from air-cooled buildings in their plumbing and consist of three loops. The condenser water system (CWS) rejects heat received from the facilities water system (FWS). The FWS runs throughout the building and picks up heat from the technology cooling system (TCS). The TCS is carefully monitored and contacts the servers in one of the ways described above; it can use a variety of coolants, ranging from mineral oils to fluorocarbons, while the FWS and the CWS both use water.
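Each of these loops moves heat by circulating a fluid, so the flow each loop needs follows from the basic sensible-heat balance Q = ṁ · cp · ΔT. The sketch below applies that balance to size the coolant flow for a single rack; the rack load, fluid properties, and temperature rise are illustrative assumptions, not design values.

```python
# Sketch: sensible-heat balance Q = m_dot * cp * dT applied to a cooling loop.
# All numbers are illustrative assumptions, not vendor or design data.
def required_flow_kg_s(heat_kw: float, cp_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to carry away heat_kw with a given coolant temperature rise."""
    return heat_kw / (cp_kj_per_kg_k * delta_t_k)

# Example: an 80 kW rack cooled by water (cp ~ 4.18 kJ/kg*K) with a 10 K rise:
flow = required_flow_kg_s(heat_kw=80, cp_kj_per_kg_k=4.18, delta_t_k=10)
print(f"{flow:.2f} kg/s")  # ~1.91 kg/s
```

The same balance applies to each loop in turn: a coolant with a lower specific heat (such as a mineral oil in the TCS) needs proportionally more flow to carry the same load.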

As mentioned in the first article of this series, the general guidelines for the FWS stipulate that the water must meet “drinking water quality” standards, as it is also used for purposes such as restrooms, drinking fountains, and breakrooms. The CWS runs between the FWS chiller and the cooling tower, so it simply needs to be kept free of scaling and fouling. The TCS, as one might suspect, has more stringent guidelines, as it goes between the servers and the coolant distribution unit (CDU) that exchanges the heat from the TCS to the FWS. The table below shows the ASHRAE guidelines for the FWS and TCS liquids.

Asahi/America Product Spotlight
Data centers have a metric that is becoming an increasingly important design consideration: water usage effectiveness (WUE), the ratio of a data center’s annual water usage to the energy consumed by its IT equipment. There is a strong push for more efficient water use in order to put less pressure on local municipal water supplies, and one of the cornerstones of an efficient cooling system is good, reliable flow control.
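The calculation itself is simple. As defined by The Green Grid, WUE is annual site water usage in liters divided by IT equipment energy in kilowatt-hours; the sketch below uses made-up annual figures purely for illustration.

```python
# Sketch of the WUE calculation (The Green Grid definition):
# WUE = annual site water usage (liters) / IT equipment energy (kWh).
def wue(annual_water_liters: float, it_energy_kwh: float) -> float:
    """Water usage effectiveness in liters per kWh (lower is better)."""
    return annual_water_liters / it_energy_kwh

# Illustrative numbers (assumptions, not measurements from any facility):
print(round(wue(annual_water_liters=26_000_000, it_energy_kwh=20_000_000), 2))  # 1.3 L/kWh
```

A lower WUE means less water consumed per unit of computing, which is where efficient heat rejection and precise flow control pay off.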
Asahi/America Type-21 SST ball valves and Type-14 diaphragm valves both offer superior flow control through their engineered designs. The graph below depicts the flow rate for some of our more popular valves; the gentler the rise of a line on the graph, the more predictable and controllable that valve’s flow rate.

The Type-21 SST ball valve features seat support technology, enabling tighter and more consistent flow control, especially at lower flow rates. The Type-14 diaphragm valve has a weir-type design that allows for very precise flow control. Both valves can also be fitted with Series 19 electric actuation.
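Flow control behavior like this is commonly characterized by a valve's flow coefficient (Cv). The standard liquid sizing relation is Q (gpm) = Cv · √(ΔP / SG), sketched below; the Cv value used here is hypothetical and is not a published figure for the Type-21 or Type-14 valves.

```python
import math

# Sketch: the standard valve-sizing relation for liquids,
#   Q (gpm) = Cv * sqrt(dP / SG),
# where dP is the pressure drop across the valve (psi) and SG is the
# fluid's specific gravity. The Cv below is a hypothetical example value.
def flow_gpm(cv: float, dp_psi: float, sg: float = 1.0) -> float:
    """Volumetric flow (gpm) through a valve at a given pressure drop."""
    return cv * math.sqrt(dp_psi / sg)

# Hypothetical valve with Cv = 25 passing water at a 4 psi drop:
print(round(flow_gpm(cv=25.0, dp_psi=4.0), 1))  # 50.0 gpm
```

A valve with predictable, gradual Cv change across its travel, like the weir and seat-support designs described above, lets a control system hit a target flow rate repeatably.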
Reliable, controllable actuation is the cornerstone of a data center’s cooling and water systems. Engineers use computational fluid dynamics (CFD) modeling to predict when and where hot spots might develop as demand on the servers increases, then adjust cooling to those areas accordingly, ensuring the servers run smoothly. Putting those CFD predictions into practice requires two critical pieces: valves with precise, repeatable flow control, like the Type-21 SST ball valve and the Type-14 diaphragm valve, and responsive, low-maintenance actuation that can be integrated into the data center’s control system, such as the Series 19 electric actuators.

The Series 19 line of electric actuators can all be integrated into a data center’s control system and offers several engineered features that add margins of safety for data center engineers. These actuators provide simplified wiring across the range of available voltages (95–265 VAC or 24 VAC/VDC). They all feature a NEMA 4X enclosure, protecting the internal parts against hose-directed water, dust, and debris, along with stainless steel trim for use in corrosive environments. An internal thermostatically controlled heater prevents moisture buildup on the internal components. Finally, they offer options such as modulating control and failsafe open/closed operation, ideal for precise valve positioning with a built-in failsafe in a CFD-informed cooling system.


Fluoropolymers like Super Proline® PVDF and Ultra Proline® ECTFE offer top-of-the-line performance and superior protection from the harsh chemicals, such as concentrated acids, bleach, and ozone, used to treat facility water and chiller loops. If a data center stores sulfuric acid on site for water treatment, Ultra Proline® is ideal for the plumbing from the storage tank to the system: it offers superior resistance to sulfuric acid at concentrations of 95% to 98.5%, and its natural UV resistance makes it well suited to plumbing for outdoor storage tanks. Super Proline® is a suspension-grade PVDF that outperforms emulsion-grade PVDF products. For improved laminar flow, PVDF is typically joined with reduced-bead or beadless butt fusion. PVDF is naturally additive-free, making it ideal for the technical-grade water and coolant found in the technology cooling loops of liquid-cooled data centers.
Stay tuned in the coming weeks for the third installment in our Data Centers series, where we’ll talk about power supply in data centers. In the meantime, we encourage you to browse our relevant resources, including this quick-reference data center solutions flyer!
