Liquid Cooling Technologies Driving Data Centre Efficiency


Data centre cooling market to reach £13.2bn in 2028
According to new research from global analyst Omdia, the data centre thermal management market has surged to a staggering $7.67bn (£6bn), outpacing previous forecasts. This unprecedented growth is poised to continue at a robust CAGR of 18.4% until 2028, fuelled largely by AI-driven demands and innovations in high-density infrastructure, marking a pivotal moment for the industry.

As AI computing becomes ubiquitous, demand for liquid cooling has surged dramatically. Key trends include the rapid adoption of Rear Door Heat Exchangers (RDHx) combined with single-phase (1-P) direct-to-chip cooling, which achieved an impressive 65% year-over-year growth and frequently integrates heat reuse applications. The period has also seen a strategic blend of air and liquid cooling technologies, creating balanced and efficient thermal management.

Omdia’s Principal Analyst, Shen Wang, explains, “In 2023, the global data centre cooling market experienced increased consolidation, with Top 5 and Top 10 concentration ratios rising by 5% from the previous year. Omdia expanded vendor coverage in its report to include 49 companies, up from 40, adding Chinese OEMs and direct liquid cooling component suppliers. Vertiv, Johnson Controls, and Stulz retained their top three positions, with Vertiv notably gaining 6% market share due to strong North American demand and cloud partnerships.”

Market growth for data centre cooling was constrained primarily by production capacity, particularly for components like coolant distribution units (CDUs), rather than by any lack of demand. Numerous supply chain players struggled to satisfy soaring market needs, causing component shortages. However, improvements forecast for 2024 are expected to alleviate this issue, unlocking orders delayed from the previous year by supply chain bottlenecks.

During this time, liquid cooling adoption saw robust growth, particularly in North America and China, with new vendors entering the scene and tracked companies expanding significantly. In this near $1bn (£785m) liquid cooling market, direct-to-chip vendor CoolIT remains the leader, followed by immersion cooling leader Sugon and server vendor Lenovo.

The data centre thermal management market is advancing under AI's growing influence and sustainability requirements. Despite strong growth prospects, the industry faces challenges with supply chain constraints in liquid cooling and the adoption of sustainable practices. Moving forward, the integration of AI-optimised cooling systems, strategic vendor partnerships, and a continued push for energy-efficient and environmentally friendly solutions will shape the industry's evolution. Successfully addressing these challenges will ensure growth and establish thermal management as a cornerstone of sustainable and efficient data centre operations, aligning technology with environmental stewardship.

Shen adds, “Data centre cooling is projected to be a $16.8bn (£13.2bn) market by 2028, fuelled by digitalisation, high power capacity demand, and a shift towards eco-friendly infrastructure, with liquid cooling emerging as the biggest technology in the sector.”
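As a quick sanity check on the figures above, the growth rate implied by the two endpoints can be computed directly. The sketch below assumes 2023 as the base year for the $7.67bn figure (the article does not state it explicitly); the result lands close to the quoted 18.4% CAGR.

```python
# Hedged sketch: check the CAGR implied by the article's two endpoints.
# Assumption (ours, not the article's): $7.67bn refers to 2023, giving a
# five-year horizon to the $16.8bn forecast for 2028.

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

base_2023 = 7.67    # $bn, current market size (article)
target_2028 = 16.8  # $bn, 2028 forecast (article)

rate = implied_cagr(base_2023, target_2028, years=5)
print(f"Implied CAGR 2023-2028: {rate:.1%}")  # ~17.0%, in the same range
# as the ~18.4% CAGR quoted for the forecast period.
```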

Macquarie starts construction on Sydney data centre
Macquarie Data Centres has started construction on its IC3 Super West data centre after appointing prominent Australian construction company FDC Construction (FDC) as the main building contractor. The project will bring more sovereign AI and cloud data centre capacity to Sydney.

IC3 Super West is being purpose-built for high-density cloud and AI workloads, including hybrid air and liquid cooling options. The facility is the third and largest addition to the provider's flagship Macquarie Park Data Centre Campus in Sydney’s North Zone and will bring the total campus IT load up to 63 megawatts (MW). IC3 Super West will open its doors with all end-state power secured. The new build is part of Macquarie Data Centres’ wider expansion strategy to meet the increase in demand for capacity from its hyperscale, government and enterprise customers.

David Hirst, Group Executive of Macquarie Data Centres, says, “Sovereign AI and cloud data centres are the backbone of Australia’s AI-driven future. Like all of Macquarie Data Centres’ facilities, IC3 Super West will be Certified Strategic by the Australian Federal Government. This gives our data centres a strong compliance posture as regulations around data sovereignty and AI continue to tighten in Australia and worldwide.

“This partnership brings together two Australian powerhouses with extensive experience in constructing state-of-the-art, mission critical facilities.”

IC3 Super West marks the seventh project between FDC and Macquarie Data Centres, reinforcing the companies' long-standing partnership. Their most recent project was IC3 East, the previous addition to the Macquarie Park Data Centre Campus, which was delivered on time and on budget.

Ben Cottle, Founder, FDC Construction, comments, “Our longstanding partnership with Macquarie Data Centres is testament to the trust and collaboration that exists between both organisations. With the rapid adoption of AI resulting in increased demand for data centres, FDC’s team of experts continues to be at the forefront of delivering scalable, energy-efficient facilities like IC3 Super West that can support the ever-evolving demands of Macquarie Data Centres’ customers.”

IC3 Super West will offer customers AI-ready densities, resilient data halls, dedicated office space and storage. The large-scale project is expected to bring more than 1,200 jobs to the region. The construction cost will be circa $350 million from FY25 to practical completion of phase one, which will deliver the powered core and shell, as well as 6MW of IT load fitted out.

“The widespread adoption of AI is fuelling a new wave of next-generation AI infrastructure and GPUs from tech giants such as Dell and Nvidia,” David adds. “These highly dense compute technologies can only live in purpose-built data centres that meet their significant power and cooling requirements. IC3 Super West is being built to cater to this rising demand here in Australia.”

For more from Macquarie Data Centres, click here.

Liquid-cooled server components put to the test
Efficient cooling of server racks is crucial for data centres and colocation facilities in order to ensure the performance and longevity of the hardware. Liquid-cooled systems are becoming increasingly important in this respect, and as a result, German organisation Poppe + Potthoff Maschinenbau (PPM) is developing test benches to examine and optimise the quality of cooling components and systems.

According to current forecasts by the International Energy Agency (IEA), data centres will consume more than 800 terawatt hours of energy worldwide by 2026 - more than twice as much as in 2022. Liquid cooling systems help to improve power usage effectiveness (PUE). They are up to 40% more efficient than conventional air cooling and make a significant contribution to reducing energy consumption and costs.

Direct liquid cooling (DLC) systems are considered particularly efficient. The coolant is in direct contact with heat-generating components in the server rack, which ensures very effective heat dissipation. This method enables high-density data centres, as DLC systems are very compact. As they can cope with higher temperatures than air cooling, fewer fans are required. This not only reduces power consumption and costs, but also noise pollution.

To prevent damage caused by leaks, all media-carrying components of the DLC system must meet the highest requirements in terms of strength and tightness - even under changing pressures and temperatures. These components include the coolant distribution units (CDUs), connectors, valves, lines and the cooling plates, inside which the coolant circulates through microchannels. The plates are installed directly above heat-producing components such as CPUs and GPUs.

To test the mechanical strength and tightness of DLC components and systems, Poppe + Potthoff Maschinenbau offers test benches for burst and leak tests up to 70 bar (approximately 1,015 psi), as well as dynamic pressure pulsation tests of up to 20 bar (290 psi). Higher pressures and water hammer tests can also be realised. With sinusoidal and trapezoidal curves at frequencies of up to 2 Hz, all operating conditions can be comprehensively simulated over the service life. Testing is carried out with water-glycol mixtures or other coolants such as PG25. The media and ambient temperatures in the temperature-controlled test chambers typically vary between -20°C and +90°C (-4°F and +194°F).

The simulation of real operating conditions in PPM's test benches makes it possible to minimise failure risks and costs, and to ensure that all components of the cooling system perform optimally together.
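To make the pulsation testing concrete, the following sketch generates the kind of sinusoidal pressure set-point profile described above (pulsation up to 20 bar at frequencies up to 2 Hz). It is purely illustrative: the function names, sample rate and pressure range are our assumptions, not PPM's actual test-bench software.

```python
# Illustrative only: a sinusoidal pressure set-point profile of the kind
# used in dynamic pulsation tests (article: up to 20 bar, up to 2 Hz).
import math

def sinusoidal_profile(p_min_bar: float, p_max_bar: float,
                       freq_hz: float, duration_s: float,
                       sample_rate_hz: float = 100.0):
    """Yield (time_s, pressure_bar) samples for one pulsation test segment."""
    mean = (p_max_bar + p_min_bar) / 2
    amplitude = (p_max_bar - p_min_bar) / 2
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        yield t, mean + amplitude * math.sin(2 * math.pi * freq_hz * t)

# Example: 2 Hz pulsation between 2 and 20 bar over a 10-second segment.
for t, p in sinusoidal_profile(2.0, 20.0, freq_hz=2.0, duration_s=10.0):
    pass  # on a real bench, each set-point would drive the pressure actuator
```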

STULZ launches new coolant management and distribution unit
STULZ, a global mission-critical air conditioning specialist, has announced the launch of CyberCool CMU – an innovative new coolant management and distribution unit (CDU) that is designed to maximise heat exchange efficiency in liquid cooling solutions. Launched at Data Centre World Frankfurt 2024 earlier this week, CyberCool CMU seeks to offer industry-leading levels of energy efficiency, flexibility and reliability within a small footprint, while providing precise control over an entire liquid cooling system.

"The rapid advancement of high-performance computing, artificial intelligence (AI) and machine learning (ML) has led to a massive increase in data centre rack and server power density," explains Joerg Desler, Global Director Technology at STULZ. "Central processing units (CPUs) and graphics processing units (GPUs) are expected to exceed 1000W per processor in the next few years. These processing requirements are placing tremendous demands on data centre cooling systems, and where liquid cooling was once an option, it is rapidly becoming essential."

CyberCool CMU has been developed to maximise heat exchange by isolating the facility water system (FWS) and technology cooling system (TCS) elements of a liquid cooling system. This significantly reduces the risk of cross-contamination and leaks, thereby enhancing overall reliability. It also provides precise control over each side of the cooling system, enabling better management of coolant flow rates, temperatures and pressures, which improves overall system efficiency.

As it is precision engineered, CyberCool CMU accurately controls the supply temperature and flow rate of the coolant with minimal power consumption. Comprising premium-grade water pumps, plate heat exchangers, water valves and controllers, CyberCool CMU provides a reliable and efficient liquid coolant supply. High coolant quality is ensured through sanitary-grade stainless-steel pipelines, and to enhance system compatibility the unit offers a range of structural, electrical and control options, including the flexibility to accommodate customer-specific configurations and power loads. Alongside a series of standard unit configurations and capacities, this new product line from STULZ can offer a high level of customisation, adapting to specific needs in the DLC market.

Data centres are under increasing pressure to become more sustainable, so CyberCool CMU is designed to integrate seamlessly with ancillary STULZ A/C products, providing an efficient system solution throughout, as well as supporting ASHRAE’s guidelines for water cooling specifications. To achieve the highest standards of reliability and usability, CyberCool CMU’s software and hardware are harmonised with any liquid cooling solution, while its intuitive touchscreen display provides clear menu navigation. Multiple variable-speed pumps adapt to the required liquid flow rates, delivering energy efficiency gains as well as built-in redundancy.

Joerg concludes, "The transition to liquid cooling in data centres is well underway and we are confident that CyberCool CMU can meet the heat transfer demands of these systems sustainably, efficiently, reliably and flexibly."

For more from STULZ, click here.
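The "precise control" a CDU provides boils down to closed-loop regulation of coolant supply temperature and flow. The toy loop below illustrates that principle with a proportional-integral controller modulating a facility-water valve; it is a generic sketch under our own assumptions, not the CyberCool CMU's actual control logic, and the class name and gains are invented for illustration.

```python
# Generic illustration of CDU-style supply-temperature control (not STULZ
# firmware): a PI loop opens a facility-water valve as the coolant warms.

class SupplyTempController:
    def __init__(self, setpoint_c: float, kp: float = 0.08, ki: float = 0.01):
        self.setpoint_c = setpoint_c   # desired coolant supply temperature
        self.kp, self.ki = kp, ki      # illustrative gains (assumed)
        self._integral = 0.0

    def update(self, measured_c: float, dt_s: float) -> float:
        """Return a valve opening in [0, 1] from the temperature error."""
        error = measured_c - self.setpoint_c  # positive -> coolant too warm
        self._integral += error * dt_s
        command = self.kp * error + self.ki * self._integral
        return min(1.0, max(0.0, command))   # clamp to the valve's range

ctrl = SupplyTempController(setpoint_c=32.0)
valve = ctrl.update(measured_c=34.5, dt_s=1.0)  # warm coolant -> valve opens
print(f"Valve opening: {valve:.0%}")
```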

Sonic Edge partners with Iceotope to launch dedicated AI Pods
Sonic Edge, a provider of modular data centres (MDCs), is partnering with Iceotope, a global precision liquid cooling expert, to launch new Iceotope AI Pods.

Sonic Edge provides a range of edge and high-performance computing (HPC)-ready MDCs that enable organisations to run their operations anywhere in the world. With the significant increase in the compute densities required for AI applications, Sonic Edge recognised the opportunity to design and build containerised MDCs, or Pods, that are AI application-ready, incorporating advanced precision liquid cooling technology from Iceotope. The resulting Iceotope AI Pods are multi-tenant MDCs with a capacity of up to 450kW and a 12 x 4m footprint, and they can be deployed either on-premises or in remote locations. They are designed to include everything you would find in a standard data centre facility, such as UPS backup, fire suppression, and monitoring and evaluation.

Stuart Priest, Founder and CEO, Sonic Edge, explains, “There are many organisations, particularly start-ups, that can’t afford to wait for colocation space to become available for their operations. They are looking to get their own high-performance, AI-ready MDCs up and running fast. We’re excited about our collaboration with Iceotope because we can now provide cloud or edge providers with multi-tenant Pods that have Iceotope’s advanced precision liquid cooling built in.”

David Craig, CEO, Iceotope, adds, “We’re seeing an unprecedented surge in data generation and the evolving role of data centres as interactive AI powerhouses. To meet this demand – and with scalability, serviceability, and sustainability at the forefront of industry demands – our precision liquid cooling is pivotal to providers such as Sonic Edge. We are delighted to be partnering with them to have our technology incorporated into fast and easy-to-deploy Pods to facilitate high-performance AI.”

Rapid implementation and cost-effectiveness are major benefits of the AI Pods, according to Stuart Priest. He notes, “To build and get a data centre up and running can take five or six years, whereas with an AI Pod it takes just 16 weeks from order to delivery. Everything needed to make it operational is there from day one, and we offer ‘tier three ready’ as standard. We also ensure that the Iceotope AI Pods adhere to all relevant industry compliance standards. The highest levels of security can also be incorporated, ranging from SR1 to SR8.”

Flexibility is at the heart of the Iceotope AI Pods. Stuart continues, “Our Pods are meticulously designed to adapt seamlessly to customers’ growing requirements. We believe in building a solution to fit the project, rather than trying to fit the project into the solution. With Iceotope AI Pods, you can literally ‘pay as you grow’.”

For more from Iceotope, click here.

LiquidStack opens new facility to scale liquid cooling production
LiquidStack, a provider of liquid cooling solutions for data centres, has announced its new US manufacturing site and headquarters in Carrollton, Texas. The new facility is a major milestone in the company's mission to deliver high-performance, cost-effective and reliable liquid cooling solutions for high-performance data centre and edge computing applications. With a significant uptick in liquid cooling demand associated with scaling generative AI, the new facility enables LiquidStack to respond to customers' needs in an agile fashion, while maintaining the standards and services the company is known for.

LiquidStack’s full range of liquid cooling solutions is being manufactured on site, including direct-to-chip Coolant Distribution Units (CDUs), single-phase and two-phase immersion cooling solutions, and the company’s MacroModular and MicroModular prefabricated data centres. The site will also host a service training and demonstration centre for customers and the company's global network of service engineers and partners.

“We are seeing incredibly high demand for liquid cooling globally as a result of the introduction of ultra-high TDP chips that are driving the scale and buildout of generative AI. Our investment in this new facility allows us to serve the rapidly growing market while creating new, high-skilled jobs right here in Carrollton,” says Joe Capes, CEO, LiquidStack.

The new manufacturing facility and headquarters occupies over 20,000 sq ft. It has been in operation since December 2023, and a formal ribbon-cutting ceremony will be held on March 22, 2024. Expected attendees include members of the city council and the Metrocrest chamber of commerce, as well as LiquidStack customers and partners.

STULZ Modular and Asperitas join forces to redefine liquid cooling
STULZ Modular, a provider of modular data centre solutions and a wholly owned subsidiary of STULZ GmbH, has collaborated with Asperitas in the domain of liquid cooling. The purpose of this collaboration is to realise the benefits of immersion cooling for high-density data centre environments and to implement a concept for a modular data centre solution with integrated immersion cooling for indoor and outdoor installation.

The project was implemented jointly by STULZ Modular and Asperitas. As an independent technology partner, Asperitas primarily contributed immersion cooling expertise and products. STULZ Modular developed the concept for the data centre infrastructure components, in addition to the recirculating air conditioning and mechanical refrigeration, with a view to efficiency and effectiveness. STULZ Modular's concept also includes the secure supply of the power train (switchgear, UPSs and PDUs), the complete cooling circuit, remote monitoring and infrastructure management (DCIM), as well as early fire detection and extinguishing.

The result of the collaboration is a compact, modular, end-to-end data centre for an IT load of up to 200kW, in combination with immersion cooling technology from Asperitas. The IT capacity can be scaled up to meet larger load requirements. It is specifically designed for highly efficient cooling of particularly power-hungry IT applications, such as the local processing of large amounts of data, data science, generative AI or industrial edge.

In addition to the extremely efficient immersion cooling technology, the outstanding features of the modular data centre solution include the consistent, fully integrated data centre infrastructure from STULZ Modular. The configuration offers high reliability and additional redundancy, as well as rapid scalability and efficiency. The use of systems and components from leading manufacturers guarantees maximum reliability. The modular concept also enables customer-specific adaptations, and because each unit is fully factory tested, it arrives at the final site ready for immediate use. STULZ Modular also offers its customers a global presence plus premium services.

Iceotope achieves chip cooling industry milestone at 1000W
Iceotope has achieved chip-level cooling at 1000W and beyond. The published results, 'Achieving chip cooling at 1000W and beyond with single-phase precision liquid cooling', validate that single-phase liquid cooling can achieve 1000W of cooling and demonstrate the thermal performance of precision liquid cooling.

The data centre industry is looking to liquid cooling as the solution for challenges such as the compute densities required for AI, the rising overall thermal design power of IT equipment, and the need for sustainable cooling solutions. Data centre operators must know they are future-proofing their infrastructure investment for the 1000W, 1500W and 2000W CPUs and GPUs expected in the coming years. The testing conducted by Iceotope Labs demonstrates how precision liquid cooling technology can meet these challenges.

Key findings from the testing include:

- At a flow rate of 7l/min, Iceotope's copper-pinned KUL SINK achieved a thermal resistance of 0.039K/W when a 1000W heat load was applied to Intel’s Airport Cove thermal test vehicle (TTV), a thermal emulator for the 4th Gen Intel Xeon Scalable processors. This translates to an 11.4% improvement in thermal resistance compared to a like-for-like test of a tank immersion product containing a forced-flow heatsink.

- Thermal resistance remained almost constant at a given flow rate as the power was increased from 250W to 1000W.

- Based on the consistency of the thermal resistance from 250W to 1000W, there is high confidence that testing at 1500W will yield the same results.

“Iceotope precision liquid cooling technology has achieved an important industry milestone by demonstrating enhanced thermal performance capability compared to other competing liquid cooling technologies,” says Neil Edmunds, Vice President of Product Management at Iceotope. “We are confident that future testing of our standard solution at elevated power levels will demonstrate further inherent cooling capability. Iceotope is also continuing to develop new solutions which enable even higher roadmap power levels to be attained in a safe, sustainable and scalable way.”

“The ability to cool 1000W silicon is a key milestone in building the runway for silicon with higher thermal design power and enabling efficient data centre and edge cluster solutions of the future,” says Mohan J Kumar, Intel Fellow.

For more from Iceotope, click here.
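For readers who want to interpret the numbers: thermal resistance here follows the usual definition R_th = ΔT / P, so a fixed 0.039K/W implies a case-to-coolant temperature rise that scales linearly with heat load. A minimal check, using only figures from the article (no coolant inlet temperature is given, so none is assumed):

```python
# Minimal check of what a constant 0.039 K/W thermal resistance implies
# at the tested and projected power levels (R_th = delta_T / P).

def delta_t_kelvin(r_th_k_per_w: float, power_w: float) -> float:
    """Case-to-coolant temperature rise for a given heat load."""
    return r_th_k_per_w * power_w

for power_w in (250, 1000, 1500):
    print(f"{power_w:>5} W -> {delta_t_kelvin(0.039, power_w):.1f} K rise")
# 250 W -> 9.8 K, 1000 W -> 39.0 K, 1500 W -> 58.5 K, assuming the
# resistance stays flat, as the 250-1000 W results suggest.
```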

Concentric AB wins business nomination in liquid cooling market
Concentric AB has announced that it has received its first multi-year business nomination from a leading global OEM customer in the data centre liquid cooling market. The value of this new business is 63 MSEK per year, and the start of production is planned for the first quarter of 2025. This strategic customer selected Concentric’s seal-less e-pump based on its innovative design, proven endurance and dependability for its new data centre liquid cooling application.

The global data centre market is expected to grow at a CAGR of 10-13% over the next six years. There is a clear trend towards liquid cooling in these applications, and it is anticipated that liquid cooling in data centres will grow at a faster rate of 24.4% over the same period, according to a report by MarketsAndMarkets Research.

AI has redefined the way chips are designed and utilised in the semiconductor industry, leading to optimised energy efficiency and performance for larger datasets. As performance requirements increase, so does the need for cooling. Liquid cooling is more effective than air cooling in handling a data centre's growing densities, as these systems dissipate heat directly from the heat-generating components through the coolant, allowing customers to achieve precise temperature control, unaffected by external conditions.

“This first business nomination from a global market-leading OEM for data centre cooling systems is another testimonial to the successful execution of our growth plans into new markets. As with our previous wins in energy storage applications, data centres are another new market where our existing products, which are already proven to manage similar liquid cooling challenges, can fulfil the customer’s needs. This new business serves as a significant gateway for Concentric into this highly attractive and fast-growing market, and I am extremely proud of our global sales and engineering team, which has developed this new solution with the customer based on an existing Concentric product,” says Martin Kunz, President and CEO of Concentric AB.

Building the telco edge
By Nathan Blom, Chief Commercial Officer, Iceotope

With the growing migration of data to the edge, telco providers are facing new challenges in the race to net zero. Applications like IoT and 5G require ultra-low latency and high scalability to process large volumes of data close to where the data is generated, often in remote locations. Mitigating power constraints, simplifying serviceability and significantly driving down maintenance costs are rapidly becoming top priorities. Operators are tasked with navigating these changes in a sustainable and cost-effective manner, while working towards their net zero objectives. Liquid cooling is one solution able to help them do just that.

Challenges facing telco operators

The major challenges confronting telco operators can be distilled into three fundamental aspects: power constraints, increased density, and rising costs.

The limitations of available power in the grid pose a significant challenge. Both urban areas and the extreme edge have concerns about diverting power from other essential activities. As telcos demand more data processing, increased computational power, and GPUs, power consumption becomes a critical bottleneck. This constraint pushes operators to find innovative solutions to reduce power consumption.

Telco operators also face the dual challenge of increasing the number of towers while also enhancing the capacity of each tower. This requirement to boost compute power at each node and increase the number of nodes strains both power budgets and computational capabilities. The pursuit of maximising the value of each location becomes critical.

Finally, the combination of increased density, heightened service costs per site, and a surge in operational expenses (OPEX) due to the need for service and maintenance leads to rising costs, particularly at the extreme edge. The logistics and expenses of servicing remote sites drive up OPEX, making it a pressing concern for telco operators.

Liquid cooling as a solution

One promising avenue to address these challenges is liquid cooling. Cooling is a vital aspect of data centre operations, consuming approximately 40% of the total electricity used. Liquid cooling is rapidly becoming the solution of choice to efficiently and cost-effectively accommodate today’s compute requirements. However, not all liquid cooling solutions are the same.

Direct-to-chip appears to offer the highest cooling performance at chip level, but because it still requires air cooling, it adds inefficiencies at the system level. It is a useful interim solution to cool the hottest chips, but it does not address the longer-term goals of sustainability, serviceability, and scalability. Meanwhile, tank immersion offers a more sustainable option at the system level, but requires a complete rethink of data centre design. This works counter to the goals of density, scalability and, most importantly, serviceability. Facility and structural requirements mean brownfield data centre space is essentially eliminated as an option for both of those solutions, not to mention the special training required to service the equipment.

Precision liquid cooling combines the best of both technologies, removing nearly 100% of the heat generated by the electronic components of a server while reducing energy use by up to 40% and water consumption by up to 100%. It does this by using a small amount of dielectric coolant to precisely target and remove heat from the hottest components of the server, ensuring maximum efficiency and reliability.
This eliminates the need for traditional air-cooling systems and allows for greater flexibility in designing IT solutions. There are no hotspots to slow down performance, no wasted physical space on unnecessary cooling infrastructure, and minimal need for water consumption. Precision liquid cooling also reduces stress on chassis components, cutting component failures by 30% and extending server lifecycles. Servers can be hot-swapped both at the data centre and at remote locations. Service calls are simplified and avoid exposing equipment to environmental elements on-site, de-risking service operations. Operating within standard rack-based chassis, precision liquid cooling is also highly scalable: telco operators can effortlessly expand their compute capacity from a single node to a full rack, adapting to evolving needs.

The telco industry is on the cusp of a transformative era. Telco operators are grappling with the challenges of power constraints, increased density, and rising costs, particularly at the extreme edge. Precision liquid cooling offers a sustainable solution to these challenges. As the telecommunications landscape continues to evolve, embracing innovative cooling solutions becomes a strategic imperative for slashing energy and maintenance costs while driving toward sustainability goals. It's going to be an exciting time for the future of compute.
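As a back-of-envelope illustration of the energy argument above: if cooling accounts for roughly 40% of a site's electricity and a liquid cooling retrofit cuts cooling energy by up to 40%, total site consumption falls by about 16%. The percentages come from the article; the 1MW site size below is an assumption chosen purely for illustration.

```python
# Back-of-envelope sketch of the article's energy claim (site size assumed).

site_power_kw = 1000.0     # assumed total site draw for illustration
cooling_share = 0.40       # cooling fraction of total electricity (article)
cooling_reduction = 0.40   # maximum cooling-energy saving (article)

cooling_kw = site_power_kw * cooling_share
saved_kw = cooling_kw * cooling_reduction
print(f"Cooling load: {cooling_kw:.0f} kW")
print(f"Saved: {saved_kw:.0f} kW ({saved_kw / site_power_kw:.0%} of site total)")
# -> 160 kW saved, i.e. ~16% of total site consumption at the upper bound
```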


