
Liquid Cooling


Vertiv predicts data centre trends for 2025
AI continues to reshape the data centre industry, a reality reflected in the projected 2025 data centre trends from Vertiv, a global provider of critical digital infrastructure and continuity solutions. Vertiv experts anticipate increased industry innovation and integration to support high-density computing, regulatory scrutiny around AI, and an increasing focus on sustainability and cybersecurity efforts.

“Our experts correctly identified the proliferation of AI and the need to transition to more complex liquid- and air-cooling strategies as a trend for 2024, and activity on that front is expected to further accelerate and evolve in 2025,” says Vertiv CEO, Giordano (Gio) Albertazzi. “With AI driving rack densities into three- and four-digit kWs, the need for advanced and scalable solutions to power and cool those racks, minimise their environmental footprint, and empower these emerging AI factories has never been higher. We anticipate significant progress on that front in 2025, and our customers demand it.”

According to Vertiv's experts, these are the 2025 trends most likely to emerge across the data centre industry:

1. Power and cooling infrastructure innovates to keep pace with computing densification: In 2025, the impact of compute-intense workloads will intensify, with the industry managing the sudden change in a variety of ways. Advanced computing will continue to shift from CPU to GPU to leverage the latter's parallel computing power and the higher thermal design power of modern chips. This will further stress existing power and cooling systems and push data centre operators toward cold-plate and immersion cooling solutions that remove heat at the rack level. Enterprise data centres will be impacted by this trend as AI use expands beyond early cloud and colocation providers.

• AI racks will require UPS systems, batteries, power distribution equipment and switchgear with higher power densities to handle AI loads that can fluctuate from a 10% idle to a 150% overload in a flash (a rough sizing sketch follows this list).
• Hybrid cooling systems, with liquid-to-liquid, liquid-to-air and liquid-to-refrigerant configurations, will evolve in rackmount, perimeter and row-based cabinet models that can be deployed in both brownfield and greenfield applications.
• Liquid cooling systems will increasingly be paired with their own dedicated, high-density UPS systems to provide continuous operation.
• Servers will increasingly be integrated with the infrastructure needed to support them, including factory-integrated liquid cooling, ultimately making manufacturing and assembly more efficient, deployment faster and equipment footprint smaller, while increasing system energy efficiency.
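To make the sizing implications of such load swings concrete, here is a minimal back-of-the-envelope sketch in Python. The 132kW rack figure, the 20% design margin and the simple peak-based sizing rule are illustrative assumptions, not Vertiv guidance; only the 10% idle and 150% overload figures come from the trend above.

```python
# Illustrative UPS sizing for an AI rack whose load swings between
# 10% idle and a 150% transient overload of its nameplate rating.
# The rack size and design margin are hypothetical examples.

def ups_requirement_kw(nameplate_kw: float,
                       overload_factor: float = 1.5,
                       design_margin: float = 0.2) -> float:
    """Return the UPS capacity needed to ride through transient peaks.

    The UPS must carry the worst-case transient (nameplate * overload_factor)
    plus a design margin for growth and battery ageing.
    """
    transient_peak = nameplate_kw * overload_factor
    return transient_peak * (1 + design_margin)

rack_kw = 132.0  # hypothetical high-density AI rack
print(f"Idle draw:       {rack_kw * 0.10:7.1f} kW")
print(f"Transient peak:  {rack_kw * 1.50:7.1f} kW")
print(f"UPS requirement: {ups_requirement_kw(rack_kw):7.1f} kW")
```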
2. Data centres prioritise energy availability challenges: Overextended grids and skyrocketing power demands are changing how data centres consume power. Globally, data centres use an average of 1-2% of the world's power, but AI is driving increases in consumption that are likely to push that to 3-4% by 2030. Expected increases may place demands on the grid that many utilities can't handle, attracting regulatory attention from governments around the globe – including potential restrictions on data centre builds and energy use – and spiking costs and carbon emissions that data centre organisations are racing to control. These pressures are forcing organisations to prioritise energy efficiency and sustainability even more than they have in the past. In 2024, Vertiv predicted a trend toward energy alternatives and microgrid deployments, and in 2025 the company expects an acceleration of this trend, with real movement toward prioritising and seeking out energy-efficient solutions and energy alternatives that are new to this arena. Fuel cells and alternative battery chemistries are increasingly available as microgrid energy options. Longer term, multiple companies are developing small modular reactors for data centres and other large power consumers, with availability expected around the end of the decade. Progress on this front bears watching in 2025.

3. Industry players collaborate to drive AI factory development: Average rack densities have been increasing steadily over the past few years, but for an industry that supported an average density of 8.2kW in 2020, predictions of AI factory racks of 500 to 1,000kW or higher in the near future represent an unprecedented disruption. As a result of these rapid changes, chip developers, customers, power and cooling infrastructure manufacturers, utilities and other industry stakeholders will increasingly partner to develop and support transparent roadmaps that enable AI adoption. This collaboration extends to development tools powered by AI to speed engineering and manufacturing for standardised and customised designs. In the coming year, chip makers, infrastructure designers and customers will increasingly collaborate and move toward manufacturing partnerships that enable true integration of IT and infrastructure.

4. AI makes cybersecurity harder – and easier: The increasing frequency and severity of ransomware attacks is driving a new, broader look at cybersecurity processes and the role the data centre community plays in preventing such attacks. One-third of all attacks last year involved some form of ransomware or extortion, and today's bad actors are leveraging AI tools to ramp up their assaults, cast a wider net, and deploy more sophisticated approaches. Attacks increasingly start with an AI-supported hack of control systems, embedded devices or connected hardware and infrastructure systems that are not always built to meet the same security requirements as other network components. Without proper diligence, even the most sophisticated data centre can be rendered useless. As cybercriminals continue to leverage AI to increase the frequency of attacks, cybersecurity experts, network administrators and data centre operators will need to keep pace by developing their own sophisticated AI security technologies. While the fundamentals and best practices of defence in depth and extreme diligence remain the same, the shifting nature, source and frequency of attacks add nuance to modern cybersecurity efforts.

5. Government and industry regulators tackle AI applications and energy use: While Vertiv's 2023 predictions focused on government regulations for energy usage, in 2025, Vertiv expects regulations to increasingly address the use of AI itself. Governments and regulatory bodies around the world are racing to assess the implications of AI and develop governance for its use. The trend toward sovereign AI – a nation's control or influence over the development, deployment and regulation of AI – is a focus of the European Union's Artificial Intelligence Act and China's Cybersecurity Law (CSL) and AI Safety Governance Framework.
Denmark recently inaugurated its own sovereign AI supercomputer, and many other countries have undertaken their own sovereign AI projects and legislative processes to further regulatory frameworks, an indication of the trajectory of the trend. Some form of guidance is inevitable, and restrictions are possible, if not likely. Initial steps will focus on applications of the technology, but as the focus on energy and water consumption and greenhouse gas emissions intensifies, regulations could extend to types of AI application and data centre resource consumption. In 2025, governance will continue to be local or regional rather than global, and the consistency and stringency of enforcement will vary widely. For more from Vertiv, click here.

STULZ Modular configures data centre at University of Göttingen
STULZ Modular, a provider of modular data centre solutions and a wholly owned subsidiary of STULZ, has announced the completion of an installation at the University of Göttingen in Germany for the Emmy supercomputer, which employs an innovative combination of direct-to-chip liquid and air cooling. One of the top 100 most powerful supercomputers in the world, Emmy is named after the renowned German mathematician, Emmy Noether, who was described by Albert Einstein as one of the most important women in the field of mathematics.

The University of Göttingen needed a new data centre to house Emmy, as the existing facilities could not provide the required space and cooling infrastructure. It needed to be a modular construction with a 1.5MW total capacity that could accommodate further expansion, deploying a cooling system able to remove heat densities of up to 100kW per rack. Emmy's power consumption was also a factor, so the implemented solution needed to be as energy efficient and sustainable as possible.

"We were given less than two months to design and install a two-room modular data centre with a cooling infrastructure, which would be installed on a ground slab and connected to the on-site transformer station," explains Dushy Goonawardhane, Managing Director at STULZ Modular. "Our solution comprises four prefabricated modules – two larger modules cover an area of 85m² and are joined along the spine to accommodate the direct-to-chip liquid cooled supercomputer. Two smaller modules are also joined along the spine to accommodate air cooled IT equipment in 70m² of space."

The entire data centre comprises high performance computers, 1,120kW of direct-to-chip liquid cooled systems with approximately 20% residual heat, high-density racks, and STULZ CyberAir and STULZ CyberRow precision air conditioning units with free cooling. With 96kW per full rack and 11 racks currently in situ, there is capacity for up to 14 racks in total.

STULZ Modular worked with CoolIT Systems, which specialises in scalable liquid cooling solutions for the world's most demanding computing environments, to apply direct-to-chip liquid cooling to Emmy's microprocessors. The system comprises two liquid loops: the secondary loop provides a flow of cooling fluid from the coolant distribution unit (CDU) to the distribution manifolds and into the servers, where heat is transferred through cold plates into the coolant. The secondary fluid then flows into the heat exchanger in the CDU, where it transfers heat into the primary loop, and the absorbed heat energy is carried to a dry cooler and rejected.

The direct-to-chip liquid cooled system removes 78% (74.9kW) of the server heat load. A water-cooled STULZ CyberRow (with free cooling option) air cooling unit removes the remaining 22% (21.1kW) of the heat load produced by components within the server. The CyberRow's return air temperature is specified at approximately 48°C, supply air temperature at 27-35°C and water temperature at 32-36°C.
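As a rough cross-check of that heat split, the short sketch below reproduces the arithmetic from the figures quoted above (96kW per full rack, 78% removed by the cold plates, 22% by the CyberRow unit). It simply restates the article's numbers; it is not STULZ's design tooling.

```python
# Heat balance for the hybrid cooling described above: direct-to-chip
# cold plates remove most of the server heat, and a water-cooled CyberRow
# air unit removes the residual. Figures are taken from the article.

rack_load_kw = 96.0          # per full rack
liquid_fraction = 0.78       # removed via cold plates / CDU loop
air_fraction = 1.0 - liquid_fraction

liquid_kw = rack_load_kw * liquid_fraction
air_kw = rack_load_kw * air_fraction

print(f"Direct-to-chip load: {liquid_kw:.1f} kW")  # ~74.9 kW
print(f"Residual air load:   {air_kw:.1f} kW")     # ~21.1 kW
```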
The University of Göttingen is dedicated to reducing its carbon footprint and overall energy consumption across its campus. The STULZ modular data centre provides 27% electricity savings at an average 75% load, equating to 3.96GWh per year. Furthermore, compared to a standard air-cooled data centre with a Power Usage Effectiveness (PUE) of 1.56 – the current industry average according to the Uptime Institute – the hybrid direct-to-chip liquid and air-cooling system provides an overall annual facility PUE of 1.13, with a 1.07 PUE for the liquid cooled supercomputer room alone.
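To put those PUE figures in context, here is a minimal sketch of the annual facility energy each PUE implies. The constant 1.5MW IT load (the site's stated total capacity, assumed fully and continuously utilised) is a simplifying assumption for illustration.

```python
# Annual facility energy at different PUE values, assuming a constant
# IT load running year-round. Real loads vary; this is illustrative only.

HOURS_PER_YEAR = 8760
it_load_mw = 1.5  # site's stated total capacity, assumed fully utilised

for label, pue in [("Industry-average air cooling", 1.56),
                   ("Hybrid facility (this site)", 1.13),
                   ("Liquid-cooled room alone", 1.07)]:
    facility_mwh = it_load_mw * pue * HOURS_PER_YEAR
    print(f"{label:30s} PUE {pue:.2f} -> {facility_mwh:8.0f} MWh/yr")
```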
STULZ Modular's Dushy Goonawardhane concludes, "This installation demonstrates our commitment to pushing the boundaries of data centre cooling technology. By combining direct-to-chip liquid cooling with our advanced air-cooling systems, we've created a solution that not only meets the extreme demands of supercomputing but also aligns with the University of Göttingen's sustainability goals. We are excited to share the complexity and learnings from this project in a white paper we have produced in cooperation with the University of Göttingen." For more from STULZ, click here.

Sabey Data Centers forms partnership with Seguente
Sabey Data Centers, a designer, builder and operator of multi-tenant data centres, has announced a new strategic partnership with Seguente, a global technology company that provides innovative liquid-cooled IT hardware and AI software platforms for high-performance computing. This partnership enhances both companies' ability to deliver sustainable, optimised and scalable data centre solutions to customers across Sabey's portfolio.

Seguente specialises in deploying IT hardware with a passive direct-to-chip liquid cooling technology through its Coldware product line, offering an energy-efficient and environmentally friendly solution for data centres housing high-density IT equipment. The system uses a passive two-phase liquid cooling method with ultra-low global warming potential dielectric fluids; it is waterless and pumpless, allowing for seamless integration into both new and existing data centre infrastructures. This advanced, low-maintenance cooling method significantly reduces power usage and provides greater flexibility and reliability in high-demand computing environments, including artificial intelligence (AI), high-performance computing (HPC) and 5G.

“This partnership with Sabey Data Centers represents a significant milestone in our vision to deliver fully configured IT and heat rejection products that are scalable and sustainable to support the global digital infrastructure evolution,” says Dr. Raffaele Luca Amalfi, CEO and co-founder of Seguente. “By combining our innovative Coldware products with Sabey's vast expertise in data centre operations, we are equipped to offer flexible, high-performance solutions that also reduce the carbon footprint, power consumption and water usage of data centre operations.”

Rob Rockwood, President at Sabey Data Centers, adds, “Seguente's advanced liquid cooling technology aligns perfectly with our mission to deliver state-of-the-art, energy-efficient data centres. This partnership will enable us to meet the growing demand for sustainable solutions in high-performance computing and AI environments.”

The partnership emphasises the shared commitment of both companies to innovation, sustainability and operational excellence, ensuring that their clients can continue to expect the most efficient and forward-looking solutions available in the industry. For more from Sabey Data Centers, click here.

Iceotope launches new precision liquid-cooled server
Iceotope, a Precision Liquid Cooling (PLC) specialist, has announced the launch of KUL AI, a new solution designed to deliver on the promise of AI everywhere, offering significant operational advantages where enhanced thermal management and maximum server performance are critical. KUL AI features an 8-GPU Gigabyte G293 data centre server-based solution integrated with Iceotope's Precision Liquid Cooling and powered by Intel Xeon Scalable processors – the most powerful server integrated by Iceotope to date. Designed to support dense GPU compute, the 8-GPU G293 carries NVIDIA-Certified Solutions accreditation and is optimised by design for liquid cooling with dielectric fluids. KUL AI ensures uninterrupted, reliable compute performance by maintaining optimal temperatures, protecting critical IT components and minimising failure rates, even during sustained GPU operations.

The surge in power consumption and sheer volume of data produced by new technologies, including artificial intelligence (AI), high-performance computing (HPC) and machine learning, poses significant challenges for data centres. To achieve maximum server performance without throttling, Iceotope's KUL AI uses an advanced precision cooling solution for faster processing, more accurate results and sustained GPU execution, even for demanding workloads. KUL AI is highly scalable and proven to achieve up to four times compaction, handling growing data and model complexity without sacrificing performance. Its specifications make KUL AI suitable for a range of industries where AI is becoming increasingly essential: from AI research and development centres, HPC labs and cloud service providers (CSPs), to media production and visual effects (VFX) studios, and financial services and quantitative trading firms.

Fitting seamlessly into the KUL family of Iceotope technologies, KUL AI uses Iceotope's Precision Liquid Cooling technology, which offers several advantages – from providing uniform cooling across all heat-generating server components to reducing hotspots and improving overall efficiency. Additionally, PLC eliminates the need for supplementary air cooling, leading to simpler deployments and lower overall energy consumption. Improving cost-effectiveness and operational efficiency are constant targets for Iceotope: KUL AI's advanced thermal management maximises server utilisation, boosting compute density, cutting energy costs and extending hardware lifespan for a lower total cost of ownership (TCO). Furthermore, KUL AI cuts energy use by up to 40% and water consumption by 96%, and minimises operational costs, while maintaining high thermal efficiency and meeting sustainability targets.

Built with scalability and adaptability in mind, KUL AI is deployable both in data centres and across edge IT installations. Precision Liquid Cooling removes noisy server fans from the cooling process, resulting in near-silent operation and making KUL AI well suited to busy, highly populated non-IT workspaces that nonetheless demand sustained GPU performance. Ideal for latency-sensitive edge deployments and environments with extreme conditions, KUL AI is sealed and protected at the server level, not only ensuring uniform cooling of all components on the GPU motherboard but also rendering it impervious to airborne contaminants and humidity for greater reliability. Crucially, PLC minimises the risk of leaks and system damage, making it a safe choice for critical environments.
Nathan Blom, Co-CEO of Iceotope, says, “The unprecedented volume of data being generated by new technologies demands a state-of-the-art solution which not only guarantees server performance, but delivers on all vectors of efficiency and sustainability. KUL AI is a pioneering product delivering more computational power and rack space. It offers a scalable system for data centres and is adaptable in non-IT environments, enabling AI everywhere.”

The launch will be showcased for the first time at Supercomputing 2024 (SC24), taking place in Atlanta from 17-22 November 2024. The Iceotope team will be welcoming interested parties at nVent Booth 1738. To schedule an introductory meeting, contact sales@iceotope.com. For more from Iceotope, click here.

atNorth announces data centre expansion in Iceland
atNorth, the Nordic colocation, high-performance computing and AI service provider, has announced the substantial expansion of two of its data centres in Iceland. The ICE02 campus near Keflavík will gain additional capacity of 35MW, while the ICE03 site in Akureyri (which opened last year) will gain additional capacity of 16MW. Both sites have surplus space for further expansion in line with future demand. Both data centre sites are highly energy efficient, operating at a maximum PUE of 1.2, and will also be able to accommodate the latest in air and liquid cooling technologies, depending on customer preference. The initial phase of ICE02's expansion became operational in Q3 2024, and all further phases for both sites are expected to be completed in the first half of 2025.

The innovative design of the data centres caters to data-intensive businesses that require high-density infrastructure for high-performance computing. The sites currently accommodate companies such as Crusoe, Advania, RVX, DNV, Opera, BNP Paribas and Tomorrow.io.

As part of atNorth's ongoing commitment to sustainability and collaboration, the business has also entered into a partnership with AgTech startup, Hringvarmi, to recycle excess heat for use in food production. As part of this agreement, Hringvarmi will place its Generation 1 prototype module within ICE03 to test the concept of transforming 'data into dinner' by utilising waste heat to grow microgreens in collaboration with the food producer, Rækta Microfarm.

“We are delighted to be part of atNorth's innovative data centre ecosystem”, says Justine Vanhalst, Co-Founder of Hringvarmi. “Our partnership aims to boost Iceland's agriculture industry to lessen the need for imported produce and contribute to Iceland's circular economy”.

The expansion plans reflect the huge demand, both domestically and internationally, for atNorth's sustainable data centre solutions. Data-intensive businesses, including hyperscalers and companies that run AI and high-performance computing workloads, recognise the quality of the digital infrastructure available and are attracted by Iceland's advantageous location. The country benefits from a consistently cool climate and an abundance of renewable energy, in addition to fully redundant connectivity and a highly skilled workforce.

“We are experiencing a considerable increase in interest in our highly energy efficient, sustainable data centres”, says Eyjólfur Magnús Kristinsson, CEO at atNorth. “We have power agreements and building permits in place and will meet this demand as part of our ongoing sustainable expansion strategy”.

atNorth operates seven data centres in four of the five Nordic countries and currently has four new data centre sites in development: two in Finland (FIN02 in Helsinki and FIN04 in Kouvola) and two in Denmark (DEN01 in the Ballerup region and DEN02 in Ølgod in Varde). For more from atNorth, click here.

Schneider Electric acquires liquid cooling company
Schneider Electric has announced that it has signed an agreement to acquire a controlling interest in Motivair Corporation, a company that specialises in liquid cooling and advanced thermal management solutions for high performance computing systems.

The advent of generative AI and the introduction of Large Language Models (LLMs) have been additional catalysts driving enhanced power needs to support increased digitisation across end markets. This shift to accelerated computing is resulting in new data centre architectures requiring more efficient cooling solutions, particularly liquid cooling, as traditional air cooling alone cannot mitigate the higher heat generated. As the compute within data centres becomes higher-density, the need for effective cooling will grow, with multiple market and analyst forecasts predicting growth in liquid cooling solutions in excess of 30% CAGR in the coming years.
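For a sense of what growth in excess of 30% CAGR compounds to, here is a one-off sketch; the horizons are arbitrary assumptions and the starting market size is normalised to 1.0.

```python
# What a 30% compound annual growth rate implies: market size relative
# to today after n years. The starting size is normalised to 1.0.

cagr = 0.30
for years in (1, 3, 5):
    multiple = (1 + cagr) ** years
    print(f"After {years} year(s): {multiple:.2f}x today's market")
```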
This transaction strengthens Schneider Electric's portfolio of direct-to-chip liquid cooling and high-capacity thermal solutions, enhancing existing offerings and furthering innovation in cooling technology.

Headquartered in Buffalo, New York, Motivair was founded in 1988 and currently has over 150 employees. Leveraging its strong engineering competency and deep domain expertise, Motivair has a range of offers including Coolant Distribution Units (CDUs), Rear Door Heat Exchangers (RDHx), Cold Plates and Heat Dissipation Units (HDUs), alongside Chillers for thermal management. Motivair provides its customers with a portfolio to meet the thermal challenges of modern computing technology. While liquid cooling is not a new technology, its specific application to the data centre and AI environment represents a nascent market set for strong growth in the coming years. Motivair has years of experience in cooling the world's fastest supercomputers with liquid cooling solutions. In recent quarters, the company has been tracking a strong double-digit growth trajectory, which is expected to continue as it pivots to provide end-to-end liquid cooling solutions to several of the largest data centre and AI customers.

Peter Herweck, CEO of Schneider Electric, comments, “The acquisition of Motivair represents an important step, furthering our world leading position across the data centre value chain. The unique liquid cooling portfolio of Motivair complements our value proposition in data centre cooling and further strengthens our prominent position in data centre build out, from grid to chip and from chip to chiller.”

Rich Whitmore, President & CEO of Motivair Corporation, who will continue to run the Motivair business out of Buffalo after the closing of the transaction, adds, “Schneider Electric shares our core values and commitment to innovation, sustainability and excellence. Joining forces with Schneider will enable us to further scale our operations and invest in new technologies that will drive our mission forward and solidify our position as an industry leader. We are thrilled to embark on this exciting journey together."

Under the terms of the transaction, Schneider Electric will acquire an initial 75% controlling interest in the equity of Motivair for an all-cash consideration of $850 million (£652m), which includes the value of a tax step-up, and values Motivair at a mid-single digit multiple of projected FY2025 revenue. The transaction is subject to customary closing conditions, including the receipt of required regulatory approvals, and is expected to close in the coming quarters. On completion, Motivair would be reported within the Energy Management business of Schneider Electric. The Group expects to acquire the remaining 25% of non-controlling interests in 2028. For more from Schneider Electric, click here.

Lenovo expands Neptune liquid cooling ecosystem
Lenovo has expanded its Neptune liquid-cooling technology to more servers with new ThinkSystem V4 designs that help businesses boost intelligence, consolidate IT and lower power consumption in the new era of AI. Powered by Intel Xeon 6 processors with P-cores, the new Lenovo ThinkSystem SC750 V4 supercomputing infrastructure (pictured above) combines peak performance with advanced efficiency to deliver faster insights in a space-optimised design for intensive HPC workloads. The full portfolio includes new Intel-based solutions optimised for rack density and massive transactional data, maximising processing performance in the data centre space for HPC and AI workloads.

“Lenovo is helping enterprises of every size and across every industry bring new AI use cases to life based on improvements in real-time computing, power efficiency and ease of deployment,” says Scott Tease, Vice President and General Manager of High-Performance Computing and AI at Lenovo. “The new Lenovo ThinkSystem V4 solutions powered by Intel will transform business intelligence and analytics by delivering AI-level compute in a smaller footprint that consumes less energy.”

As part of its ongoing investment in accelerating AI, Lenovo is pushing the envelope with the sixth generation of its Lenovo Neptune liquid-cooling technology, delivering it for mainstream use throughout its ThinkSystem V3 and V4 portfolios through compact design innovations that maximise computing performance while consuming less energy. Lenovo's proprietary direct water-cooling recycles loops of warm water to cool data centre systems, enabling up to a 40% reduction in power consumption.

The Lenovo ThinkSystem SC750 V4 helps support customers' sustainability goals in data centre operations with highly efficient direct water-cooling built directly into the solution, and accelerators that deliver even greater workload efficiency with exceptional performance per watt. Engineered for space-optimised computing, the infrastructure fits within less than a square metre of data centre space in industry-standard 19-inch racks, pushing the boundaries of compact general-purpose supercomputing. Leveraging the new infrastructure, organisations can achieve faster time-to-value by quickly and securely unlocking new insights from their data. The ThinkSystem SC750 V4 uses a next-generation MRDIMM memory solution to increase critical memory bandwidth by up to 40%. It is also designed for handling sensitive workloads, with enhanced security features for greater protection. Building on Lenovo's leadership in AI innovation, the new solutions achieve advanced performance, increased reliability and higher density to propel intelligence. For more from Lenovo, click here.

Park Place Technologies introduces liquid cooling solutions
Park Place Technologies, a global data centre and networking optimisation firm, has announced the expansion of its portfolio of IT infrastructure services with the introduction of two liquid cooling solutions for data centres: immersion liquid cooling and direct-to-chip cooling.

This announcement comes at a critical time for businesses that are seeing a dramatic increase in the compute power they require, driven by adoption of technologies like AI and IoT. This, in turn, is driving the need for more on-prem hardware, more space for that hardware, and more energy to run it all – presenting a significant financial and environmental challenge for businesses. Park Place Technologies says that its new liquid cooling solutions offer a compelling option for businesses looking to address these challenges, as the technology has the potential to deliver significant financial and environmental results.

Direct-to-chip is an advanced cooling method that applies coolant directly to the server components that generate the most heat, including CPUs and GPUs. Immersion cooling empowers data centre operators to do more with less: less space and less energy. Using these methods, the company claims, businesses can improve their Power Usage Effectiveness (PUE) by up to 18 times and increase rack density by up to 10 times. Ultimately, this can help deliver power savings of up to 50%, which in turn leads to lower operating costs.

From an environmental perspective, liquid cooling is significantly more efficient than traditional air cooling. At present, air cooling technology only captures 30% of the heat generated by the servers, compared to the 100% captured by immersion cooling, resulting in lower carbon emissions for businesses that opt for immersion cooling methods.
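As a rough illustration of that heat-capture comparison, the sketch below applies the quoted capture rates to a hypothetical 500kW IT load; the load figure is an assumption for illustration, not a Park Place figure.

```python
# Heat captured (and therefore potentially reusable or efficiently
# rejected) under the capture rates quoted above. The 500 kW IT load
# is a hypothetical example.

it_load_kw = 500.0
capture = {"air cooling": 0.30, "immersion cooling": 1.00}

for method, fraction in capture.items():
    print(f"{method:18s} captures {it_load_kw * fraction:6.1f} kW "
          f"of {it_load_kw:.0f} kW IT heat")
```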
Park Place Technologies can deliver a complete turnkey solution for organisations looking to implement liquid cooling technology, removing the complexity of adoption, which is a common barrier for businesses. It provides a single-vendor solution for the whole process, from procuring the hardware and converting the servers for liquid cooling, to installation, maintenance, monitoring and management of the hardware and the cooling technology.

“Our new liquid cooling offerings have the potential to have a significant impact on our customers' costs and carbon emissions, two of the key issues they face today,” says Chris Carreiro, Chief Technology Officer at Park Place Technologies. “Park Place Technologies is ideally positioned to help organisations cut their data centre operations costs, giving them the opportunity to re-invest in driving innovation across their businesses.

“The decision to invest in immersion cooling and direct-to-chip cooling depends on various factors, including the specific requirements of the data centre, budget constraints, the desired level of cooling efficiency, and infrastructure complexity. Park Place Technologies can work closely with customers to find the best solution for their business, and can guide them towards the best long-term strategy, while offering short-term results. This takes much of the complexity out of the process, which will enable more businesses to capitalise on this exciting new technology.”

New 1MW Coolant Distribution Unit launched
Airedale by Modine, a critical cooling specialist, has announced the launch of a coolant distribution unit (CDU) in response to increasing demand for high-performance, high-efficiency liquid and hybrid (air and liquid) cooling solutions in the global data centre industry. The Airedale by Modine CDU will be manufactured in the US and Europe and is suitable for both colocation and hyperscale data centre providers seeking to manage higher-density IT heat loads.

The increasing data processing power of next-generation central processing units (CPUs) and graphics processing units (GPUs), developed to support complex IT applications like AI, results in higher heat loads that are most efficiently served by liquid cooling solutions. The CDU is the key component of any liquid cooling system, isolating facility water systems from the IT equipment and precisely distributing coolant fluid to where it is needed in the server or rack.

Delivering up to 1MW of cooling capacity based on ASHRAE W2 or W3 facility water temperatures, Airedale's CDU offers the same quality and high energy efficiency associated with other Airedale by Modine cooling solutions. Developed with complete, intelligent cooling systems in mind, the CDU's integrated controls communicate with the site building management system (BMS) and system controls for optimal performance and reliability. The ability to network up to eight CDUs makes it a flexible and scalable solution, responsive to a wide range of high-density loads. Manufactured with the highest quality materials and components, and with N+1 pump redundancy, the Airedale CDU is engineered to perform in the uptime-dependent world of the hyperscale and colocation global data centre markets.
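As an illustration of how the 1MW building block scales, here is a minimal sketch that counts the CDUs a given liquid-cooled load would need within the eight-unit network limit described above. Adding a spare unit is an illustrative extension of the N+1 philosophy, which in the product itself applies to the pumps; the 5.5MW example load is an arbitrary assumption.

```python
# How many 1 MW CDUs a given liquid-cooled load needs, within the
# eight-unit network limit described above. The spare-unit policy and
# example load are illustrative assumptions, not Airedale guidance.

import math

CDU_CAPACITY_MW = 1.0
MAX_NETWORKED = 8

def cdus_required(load_mw: float, spare: int = 1) -> int:
    """Count duty CDUs for the load, plus spare units for redundancy."""
    needed = math.ceil(load_mw / CDU_CAPACITY_MW) + spare
    if needed > MAX_NETWORKED:
        raise ValueError(f"{needed} CDUs exceeds the 8-unit network limit")
    return needed

print(cdus_required(5.5))  # 6 duty + 1 spare = 7 CDUs
```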
Richard Burcher, Liquid Cooling Product Manager at Airedale by Modine, says, “Our investment in the liquid cooling market strengthens Airedale by Modine's position in the data centre industry. We are seeing an increasing number of enquiries for liquid cooling solutions, as providers move to a hybrid cooling approach to manage low to mid-density and high-density heat loads in the same space.

“Airedale by Modine is a complete system provider, encompassing air and liquid cooling, as well as control throughout the thermal chain, supported with in-territory aftersales. This expertise in all areas of data centre cooling affords our clients complete life-cycle assurance.” For more from Airedale, click here.

Portus Data Centers announces new Hamburg site
Portus Data Centers has announced the planned availability of its new data centre (IPHH4), located on the IPHH Internet Port Hamburg Wendenstrasse campus. Construction work will begin in the first quarter of 2025 and is due to be completed in the fourth quarter of 2026 (Phase 1).

The Tier III+ data centre is designed for a PUE of 1.2 or under and will have a total IT load of 12.8MW (two infrastructures of 6.4MW each). With a data centre area (white space) of 6,380m² (3,190m² per building), the grid connection capacity is 20.3MVA. The facility is already fully compliant with the new EnEfG requirements, and liquid cooling will be available for High Performance Computing (HPC).
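A quick sanity check of the published electrical figures: assuming the full 12.8MW IT load runs at the design PUE of 1.2, and assuming unity power factor so that MW and MVA compare directly (a simplification), the 20.3MVA grid connection leaves comfortable headroom.

```python
# Does the 20.3 MVA grid connection cover the design load?
# Assumes unity power factor (MW ~= MVA), a simplifying assumption.

it_load_mw = 12.8
design_pue = 1.2
grid_mva = 20.3

facility_mw = it_load_mw * design_pue
print(f"Facility draw at design PUE: {facility_mw:.2f} MW")   # 15.36 MW
print(f"Grid headroom:               {grid_mva - facility_mw:.2f} MVA")
```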
The IPHH data centre business was acquired by Arcus European Infrastructure Fund 3 SCSp on behalf of Portus last year. In addition to IPHH4, which is adjacent to the existing IPHH3 data centre at the main Wendenstrasse location, IPHH operates two other facilities in Hamburg.

Sascha E. Pollok, CEO of IPHH, comments, “With the benefit of significant investment by Arcus under the Portus Data Centers umbrella, I am excited to see the accelerated transformation and expansion of the IPHH operation. IPHH4 will be fully integrated into the virtual campus of IPHH2 and IPHH3, and will therefore be the ideal interconnect location in Hamburg. With around 50 carriers and network operators just a standard cross connect away, IPHH4 will be a major and highly accessible interconnection hub.”

Adriaan Oosthoek, Chairman of Portus Data Centers, adds, “The rapid expansion of IPHH is testament to our mission to establish Portus Data Centers as a major regional force in Germany and adjacent markets. Our buy-and-build regional data centre aggregation strategy is focused on ensuring our current and future locations are equipped with the capacity and connectivity required to meet customer demand.”

IPHH's growing customer base includes telecom carriers, global technology and social media companies, and content distribution networks that rely on IPHH's strong interconnection services. Aligning with Arcus' commitment to ensuring that its investments have a positive ESG impact, energy consumed by IPHH's data centres is certified as being 100% renewably sourced. IPHH's highly efficient facilities currently provide a power usage effectiveness of circa 1.3, which is subject to continuous improvement as part of Portus' ongoing commitment to optimising its ESG policies and practices. For more from Portus, click here.

