
Cooling


Forestry & Land Scotland embrace cloud technology
Nutanix has announced that Forestry & Land Scotland (FLS) has upgraded its data centre estate to hyperconverged infrastructure (HCI), selecting the Nutanix Cloud Platform to support a workload of 300 virtual machines. FLS opted for Nutanix Cloud Clusters (NC2) on Microsoft Azure, which has enabled it to migrate its whole data centre to Azure without the time, effort and expense of re-engineering applications for native deployment.

Founded in 2018 as part of the Scottish devolution process, FLS manages over 1.5 million acres of national forests and land. To meet the short-term IT needs of a newly devolved Scottish government agency while also supporting its move to the public cloud in line with a cloud-first government policy, the organisation needed to rapidly revamp its legacy on-premises data centre.

FLS was already using Microsoft Azure for disaster recovery of its on-premises data centre, so it naturally first looked at re-engineering its applications for native operation on that platform. It soon realised that NC2 on Azure would be a better, quicker and more cost-effective approach, enabling it to stretch its existing environment seamlessly into the cloud and migrate workloads at its own pace, without having to transform or re-engineer the code in any way. The migration also offered immediate benefits in terms of both performance and on-demand scalability, and resulted in a significantly smaller data centre footprint in terms of physical space as well as power and cooling requirements.

As with the original data centre project, Nutanix was able to help by arranging a proof of concept trial of Nutanix NC2 on Microsoft Azure involving actual FLS production workloads.

Schneider Electric delivers data centre project for Loughborough University
Schneider Electric has delivered a data centre modernisation project for Loughborough University in collaboration with its elite partner, on365. The project saw Schneider Electric and on365 modernise the university’s IT infrastructure with new energy-efficient technologies, including an EcoStruxure Row Data Center, InRow Cooling solution, Galaxy VS UPS and EcoStruxure IT software, enabling the university to harness resilient IT infrastructure, data analytics and digital services to support new breakthroughs in sporting research.

As Loughborough University is known for its sports-related subjects and is home to world-class sporting facilities, IT is fundamental to its operations, from the high-performance computing (HPC) servers that support analytical research projects to a highly virtualised data centre environment providing critical applications including finance, administration and security. To overcome a series of data centre challenges, including the need for a complete redesign, modernisation of legacy cooling systems, improved cooling efficiency and greater visibility of its distributed IT assets, the university undertook the project at its Haslegrave and Holywell Park data centres.

Delivered in two phases, the project first saw on365 modernise the Haslegrave facility by replacing an outdated raised floor and deploying an EcoStruxure Row Data Center solution. The deployment significantly improved the overall structure, enabling a more efficient data centre design. During the upgrade, the university also brought other parts of the infrastructure under the IT department’s control, using new InRow DX units to deliver improved cooling reliability and to better cope with extreme weather events such as heatwaves, which had adversely affected its IT and cooling operations in the past. The solution also created space for future IT expansion and extended a ‘no single points of failure’ design throughout the facility, making the environment more suitable for a new generation of compact and powerful servers; the approach was replicated at Holywell Park thereafter. Further improvements in resilience and efficiency were achieved with Schneider Electric’s Galaxy VS UPS with lithium-ion batteries.

“At the foundational level of everything which is data-driven at the university, the Haslegrave and Holywell data centres are the power behind a host of advancements in sports science, and our transition towards a more sustainable operation,” says Mark Newall, IT Specialist at Loughborough University. “Working with Schneider Electric and on365 has enabled our data centre to become more efficient, effective and resilient.”

The university has also upgraded the software used to manage and control its infrastructure, deploying the company’s EcoStruxure IT platform to gain enhanced visibility and data-driven insights that help identify and mitigate potential faults before they become critical. This, in conjunction with a new three-year Schneider Electric services agreement delivered via on365, has given the university 24x7 access to maintenance support. The university also runs a large distributed edge network environment protected by more than 60 APC Smart-UPS units. As part of its services agreement, all critical power systems are monitored and maintained via EcoStruxure IT, providing real-time visibility and helping IT personnel manage the campus network more efficiently.

Ongoing drought may stunt Spanish data centre market
Amid one of Spain’s driest springs, a sector expert has warned that a lack of free cooling capacity could hinder the burgeoning growth of the nation’s data centre market.

As hyperscalers and colocation facilities alike grapple with power-related challenges in the FLAP-D markets, data providers are gravitating towards Europe’s tier 2 markets of Zurich, Milan, Madrid and Berlin, with a 2022 projection from CBRE forecasting that these markets will triple in size by autumn 2023. Of these, Madrid was highlighted as the main beneficiary, with 47MW set to come online in 2022 and 2023. However, following reports that the Spanish water reserve fell below 50% in May, Aggreko has warned that interruptions to free cooling processes have the potential to stifle the market’s ongoing growth.

Billy Durie, Global Sector Head for Data Centres at Aggreko, says, “Spain, and Madrid in particular, is becoming an increasingly attractive location for data centre facilities. The Spanish government’s ‘Digital Spain 2026’ policy is a huge bonus for data providers, while the nation’s wider commitment to realising renewable energy means that energy shortages are less severe compared to other European nations.

“That said, Spain is currently enduring one of the worst droughts recorded this century. Without water, free cooling processes simply aren’t possible, which has the potential to stunt the wider development of the market if left unchecked. For this reason, it’s critical that data centre operators ensure that contingency plans are in place in the meantime to maintain business continuity.”

Aggreko recently published a report, Uptime on the Line, which explores the challenges facing European data providers, based on research insights from 700 European data centre consultants. Within the report, changing temperature demands are highlighted as a key area of concern, with extreme weather posing a threat to data centre cooling systems. The report points to on-site cooling packages as a potential solution, with connection points installed during the colder months of the year so that chillers can be quickly brought in to maintain uptime.

Billy concludes, “Right now, one of the main factors making Madrid such an attractive location for new data centres is the lack of power-related challenges associated with the FLAP-D markets. However, this unique position faces the threat of being undermined by a lack of free cooling capacity.

“Extreme weather is by no means a new phenomenon, and seems to only become more common year on year. For this reason, I’d strongly recommend operators to incorporate temporary chillers as a part of their contingency strategy going forwards, to allow the Spanish data centre market to continue to thrive.”

Iceotope Technologies appoints Simon Jesenko as CFO
Iceotope Technologies Ltd has announced the appointment of Simon Jesenko as Chief Financial Officer. Recognised for his track record in successfully scaling fast-growing international SMEs, Simon joins the company from predictive maintenance specialist Senseye, where he oversaw the company’s acquisition and successful integration into Siemens. While at Senseye, Simon fundamentally transformed the organisation’s finance function, focusing on SaaS-specific financial reporting and forecasting, changing pricing strategies to align with customer needs and market maturity, as well as setting up the structure required for rapid international expansion and leading various funding activities on the way towards an eventual exit.

David Craig, CEO of Iceotope Technologies, says, “Simon is an accomplished CFO with an impressive track record of preparing the ground for corporate growth. His appointment is most welcome. He joins us at a time when the market is turning to liquid cooling to solve a wide range of challenges. These challenges include increasing processor output and efficiency, delivering greater data centre space optimisation and reducing energy inefficiencies associated with air cooling to achieve greater data centre sustainability. Simon is a dynamic and well-respected CFO, with a clear understanding of how to optimise corporate structures and empower improved financial performance company-wide through the democratisation of fiscal data.”

Simon says, “We find ourselves at a pivotal moment in the market, where the pull towards liquid cooling solutions is accelerating as a result of two key factors: one, sustainability initiatives and regulation imposed by governments and two, increase in computing power to accommodate processing-intensive applications, such as AI and advanced analytics. Iceotope’s precision liquid cooling technology is at the forefront of existing liquid cooling technologies and therefore places the company in a unique position to seize this huge opportunity.

“My focus is going to be on delivering growth and financial performance that will increase shareholder value in the years to come as well as building a robust business structure to support this exponential growth along the way.”

Airedale by Modine expands its service offerings in the US
A five-megawatt testing laboratory has recently been commissioned at the Modine Rockbridge facility in Virginia, further expanding the services that Airedale by Modine can offer its data centre customers and helping it meet increasing demand from the data centre industry for validated and sustainable cooling solutions.

Airedale is a trusted brand of Modine and provides complete cooling solutions to industries where removing heat is mission critical. The Rockbridge facility opened in 2022 to manufacture chillers to meet the growing demand from US data centre customers. The new lab can test a complete range of air conditioning equipment, accommodating air-cooled chillers up to 2.1MW and water-cooled chillers up to 5MW. Crucially for data centre applications, the ambient temperature inside the chamber can be reduced to prove the chillers’ free cooling performance. Free cooling is the process of using the external ambient temperature to reject heat, rather than using the refrigeration process; used within an optimised system, it can help a data centre significantly reduce its energy consumption and carbon footprint. The lab can also facilitate quality witness tests, allowing customers to validate chiller performance in person.

In addition, the first US-based service team has been launched to provide ongoing support to data centre customers in the field. The team offers coverage for spare parts, planned maintenance and emergency response. The facility is also working with colleges in Northern Virginia to recruit and train service engineers, either as new graduates who will receive fast-tracked training or through apprenticeships. Apprentices will have a mix of college classes and on-site training, after which they will graduate with an associate’s degree in engineering.

Rob Bedard, General Manager of Modine’s North America data centre business, says, “Our ongoing investment in our people in the US and the launch of the service team and apprenticeship program, along with the opening of our 5MW chiller test centre allows us to better serve our customers and cement our continuing commitment to the US data centre industry.”
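As a rough illustration of how free cooling availability might be assessed for a site, the sketch below counts the hours in a year when an assumed ambient temperature profile falls below an assumed free-cooling changeover point. The threshold, the synthetic temperature data and the load figure are illustrative assumptions for this sketch, not Airedale or Modine specifications.

```python
# Illustrative estimate of annual free-cooling availability.
# All numbers are assumptions for the sketch, not vendor data.
import math

FREE_COOLING_THRESHOLD_C = 15.0   # assumed ambient temp below which the chiller
                                  # can reject heat without running compressors
IT_LOAD_KW = 1000.0               # assumed IT load served by the chillers

def synthetic_ambient_temps():
    """Generate a crude hourly temperature profile for one year (8760 values)."""
    for hour in range(8760):
        day = hour / 24.0
        seasonal = 10.0 * math.cos(2 * math.pi * (day - 200) / 365)   # +/-10 C over the year
        diurnal = 5.0 * math.cos(2 * math.pi * ((hour % 24) - 15) / 24)  # +/-5 C over the day
        yield 12.0 + seasonal + diurnal  # assumed 12 C annual mean

free_hours = sum(1 for t in synthetic_ambient_temps() if t < FREE_COOLING_THRESHOLD_C)
share = free_hours / 8760

print(f"Estimated free-cooling hours: {free_hours} ({share:.0%} of the year)")
print(f"Compressor energy avoided (very rough): "
      f"{IT_LOAD_KW * 0.2 * free_hours:,.0f} kWh/yr, assuming 0.2 kW of cooling per kW of IT")
```

In practice, a laboratory such as the one described above would characterise the actual changeover point and part-load behaviour of a specific chiller, and real site weather data would replace the synthetic profile.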

GRC introduces HashRaQ MAX to enhance crypto mining
GRC (Green Revolution Cooling) has announced its newest offering for blockchain applications, HashRaQ MAX. HashRaQ MAX is a next-generation, productivity-driven immersion cooling solution that tackles the extreme heat loads generated by crypto mining. The precisely engineered system features a high-performance cooling distribution unit (CDU) that supports high-density configurations and ensures maximum mining capability with minimal infrastructure costs, allowing for installation in nearly any location with access to power and water. The unit’s moulded design provides even coolant distribution, so each miner operates at peak capability.

HashRaQ MAX was developed using the experience and customer feedback GRC has accumulated over its 14 years of designing, building and deploying immersion cooling systems for the mining industry. The unit is capable of cooling 288kW with warm water when outfitted with 48 Bitmain S19 miners. Its space-saving, all-inclusive design consists of racks, frame, power distribution units (PDUs), coolant distribution unit (CDU) and monitoring, ensuring users can capitalise on the benefits of a comprehensive, validated and cost-effective cooling solution.

It is well established that cryptocurrency mining uses a significant amount of energy, with Bitcoin alone consuming a reported 127TWh a year. In the United States, mining operations are estimated to emit up to 50 million tons of CO2 annually. HashRaQ MAX is designed to reduce the carbon footprint of mining operations by minimising energy use while also enabling miners to optimise profitability. Additionally, the system is manufactured using post-industrial recycled materials and is flat-pack shipped to further reduce costs and carbon emissions. The unit is also fully recyclable at the end of its life.

“We are proud to present digital asset mining operators with a complete and reliable cooling solution that eliminates the time and complexity of piecing together an in-house system - and doesn’t break the bank,” says Peter Poulin, CEO of GRC. “We’ve been developing systems specifically for the blockchain industry since our inception in 2009, and our Hash family of products has been proven in installations around the world. It’s exciting to release this next generation HashRaQ MAX immersion cooling system in continuing support of cryptocurrency miners during this next era in digital asset mining.”
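For context on what rejecting 288kW with warm water implies, the short calculation below applies the standard sensible-heat relation Q = m·cp·ΔT to estimate the coolant flow a CDU of that duty would need. The 10°C loop temperature rise is an assumed figure for illustration only, not a GRC specification.

```python
# Rough sizing check: water flow needed to carry away 288 kW of heat.
# The loop delta-T is an assumed value for illustration only.

HEAT_LOAD_KW = 288.0        # stated HashRaQ MAX cooling capacity
DELTA_T_K = 10.0            # assumed water temperature rise across the rack loop
CP_WATER = 4.186            # kJ/(kg*K), specific heat of water
MINERS = 48                 # stated miner count

mass_flow = HEAT_LOAD_KW / (CP_WATER * DELTA_T_K)   # kg/s, since kW = kJ/s
litres_per_min = mass_flow * 60                      # roughly 1 kg of water per litre

print(f"Heat per miner:  {HEAT_LOAD_KW / MINERS:.1f} kW")
print(f"Water mass flow: {mass_flow:.1f} kg/s ({litres_per_min:.0f} L/min)")
```

A wider temperature rise would reduce the required flow proportionally, which is one reason warm-water immersion systems can run with modest pumping energy.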

Supermicro launches NVIDIA HGX H100 servers with liquid cooling
Supermicro continues to expand its data centre offering with liquid-cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies reduce lead times for a complete installation, increase performance and result in lower operating expenses, while significantly reducing the PUE of data centres. Power savings for a data centre using Supermicro liquid cooling solutions are estimated at 40% compared to an air-cooled data centre, and a reduction of up to 86% in direct cooling costs compared to existing data centres may be realised.

“Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centres worldwide,” says Charles Liang, President and CEO of Supermicro. “Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centres. With the amount of power required to enable today's rapidly evolving large scale AI models, optimising TCO and the Total Cost to Environment (TCE) is crucial to data centre operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time.”

AI-optimised racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customised based on the user's unique requirements. Supermicro continues to offer the industry's broadest product line with the highest-performing servers and storage systems to tackle complex compute-intensive projects. Rack scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network and become productive sooner than if they managed the technology themselves.

The top-of-the-line liquid-cooled GPU server contains dual Intel or AMD CPUs and four or eight interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centres by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU-equipped systems, providing up to 30 times the performance and efficiency for today's large transformer models, with faster GPU-GPU interconnect speed and PCIe 5.0 based networking and storage.

Supermicro's liquid cooling rack level solution includes a Coolant Distribution Unit (CDU) that provides up to 80kW of direct-to-chip (D2C) cooling for today's highest TDP CPUs and GPUs across a wide range of Supermicro servers. Redundant, hot-swappable power supplies and liquid cooling pumps ensure that the servers remain continuously cooled, even after a power supply or pump failure, while leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems.

Rack scale design and integration has become a critical service for systems suppliers. As AI and HPC become increasingly critical technologies within organisations, configurations from the server level to the entire data centre must be optimised and configured for maximum performance. Supermicro's system and rack scale experts work closely with customers to explore their requirements and have the knowledge and manufacturing capability to deliver significant numbers of racks to customers worldwide.
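To put the quoted 40% cooling power saving in context, the sketch below shows how such a reduction would flow through to facility PUE and annual cooling energy. The baseline IT load, cooling power and other-overhead figures are assumptions chosen for illustration; they are not Supermicro measurements.

```python
# Illustrative PUE comparison: air-cooled baseline vs. liquid cooling that cuts
# cooling power by 40%. All input figures are assumptions for the sketch.

IT_LOAD_KW = 2000.0            # assumed IT load
AIR_COOLING_KW = 700.0         # assumed cooling power for the air-cooled baseline
OTHER_OVERHEAD_KW = 150.0      # assumed UPS losses, lighting, etc.
LIQUID_SAVING = 0.40           # stated 40% reduction in cooling power
HOURS_PER_YEAR = 8760

def pue(cooling_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (IT_LOAD_KW + cooling_kw + OTHER_OVERHEAD_KW) / IT_LOAD_KW

liquid_cooling_kw = AIR_COOLING_KW * (1 - LIQUID_SAVING)
saved_kwh = (AIR_COOLING_KW - liquid_cooling_kw) * HOURS_PER_YEAR

print(f"Air-cooled PUE:    {pue(AIR_COOLING_KW):.2f}")
print(f"Liquid-cooled PUE: {pue(liquid_cooling_kw):.2f}")
print(f"Cooling energy saved: {saved_kwh:,.0f} kWh per year")
```

The absolute savings scale with IT load and local climate, so real figures will vary by site; the point of the sketch is simply that a large cut in cooling power translates directly into a lower PUE and lower annual energy spend.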

Carrier advances data centre sustainability with lifecycle solutions
Carrier is providing digital lifecycle solutions to support the unprecedented growth and criticality of data centres. More than 300 data centre owners and operators with over one million racks, spanning the enterprise, colocation and edge markets, benefit from Carrier’s optimisation solutions across their portfolios.

“Data centre operators have made great strides in power usage effectiveness over the past 15 years,” says Michel Grabon, Data Centre Solutions Director, Carrier. “Continual technology advances with higher powered server processors present power consumption and cooling challenges requiring the specialised solutions that Carrier provides.”

Carrier’s range of smart and connected solutions delivers upstream data from the data centre ecosystem to cool, monitor, maintain, analyse and protect the facility, helping operators meet green building standards and sustainability goals and comply with local greenhouse gas emission regulations. Carrier’s Nlyte DCIM tools share detailed information between the HVAC equipment, power systems and the servers and workloads that run within data centres, providing unprecedented transparency and control of the infrastructure for improved uptime. Carrier’s purpose-built solutions are integrated across its portfolio with HVAC equipment, data centre infrastructure management (DCIM) tools and building management systems to help data centre operators use less power and improve operating costs and profitability over many years.

Marquee projects around the world include:

• OneAsia’s data centre in Nantong Industrial Park. Carrier collaborated with the company to build its first data centre in China, equipped with a water-cooled chiller system. By optimising the energy efficiency of the entire cooling system, the high-efficiency chiller plant can reduce the annual electricity bill by approximately $180,000 (an illustrative estimate of how such a figure arises appears below).

• China’s Zhejiang Cloud Computing Centre, an example of how Carrier’s AquaEdge centrifugal chillers and integrated controls provide the required stability, reliability and efficiency for 200,000 servers. The integrated controls help reduce operating expenses and allow facility managers to monitor performance remotely and manage preventative maintenance to keep the chillers running according to operational needs.

• Iron Mountain’s growing underground data centre, in a former Pennsylvania limestone mine, which earned the industry’s top rating with the use of Carrier’s retrofit solution to control environmental heat and humidity. AquaEdge chillers with variable speed drives respond with efficient cooling, enabling the HVAC units to work under part- or full-load conditions.

Carrier’s Nlyte Asset Lifecycle Management and Capacity Planning software brings automation and efficiency to asset lifecycle management, capacity planning, and audit and compliance tracking. It simplifies space and energy planning, connecting easily to IT service management systems and all types of business intelligence applications, including Carrier’s Abound cloud-based digital platform and BluEdge service platform, to track and predict HVAC equipment health, enabling continuous operations.
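As an indication of how a saving on the order of the quoted $180,000 per year might arise, the sketch below compares the electricity drawn by a chiller plant before and after an efficiency (COP) improvement at a constant cooling load. The load, COP values and tariff are hypothetical figures for illustration, not OneAsia or Carrier project data.

```python
# Illustrative chiller-plant saving from a COP improvement.
# Every input below is a hypothetical figure, not project data.

COOLING_LOAD_KW = 2500.0    # assumed average heat rejection load
COP_BASELINE = 4.0          # assumed baseline chiller plant COP
COP_IMPROVED = 5.5          # assumed high-efficiency plant COP
TARIFF_USD_PER_KWH = 0.10   # assumed electricity tariff
HOURS = 8760

def annual_chiller_energy_kwh(cop: float) -> float:
    """Electrical energy = cooling load / COP, integrated over the year."""
    return COOLING_LOAD_KW / cop * HOURS

saving_kwh = annual_chiller_energy_kwh(COP_BASELINE) - annual_chiller_energy_kwh(COP_IMPROVED)
print(f"Energy saved: {saving_kwh:,.0f} kWh/yr")
print(f"Cost saved:   ${saving_kwh * TARIFF_USD_PER_KWH:,.0f} per year")
```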

The liquid future of data centre cooling
By Markus Gerber, Senior Business Development Manager, nVent Schroff

Demand for data services is growing and yet, there has never been greater pressure to deliver those services as efficiently and cleanly as possible. As every area of operation comes under greater scrutiny to meet these demands, one area in particular - cooling - has come into sharp focus. It is an area not only ripe for innovation, but one where progress has been made that shows a way forward to a greener future.

The number of internet users worldwide has more than doubled since 2010. Furthermore, as technologies emerge that are predicted to be the foundation of future digital economies, demand for digital services will rise not only in volume, but also in sophistication and distribution. This level of development brings challenges for energy consumption, efficiency and architecture.

The IEA estimates that data centres are responsible for nearly 1% of energy-related greenhouse gas (GHG) emissions. While it acknowledges that emissions have grown only modestly since 2010 despite rapidly growing demand - thanks to energy efficiency improvements, renewable energy purchases by ICT companies and broader decarbonisation of electricity grids - it also warns that to align with the net zero by 2050 scenario, emissions must halve by 2030.

This is a significant technical challenge. For the last several decades of ICT advancement, Moore's Law has been an ever-present effect: it states that compute power more or less doubles, with costs halving, every two years or so. As transistor densities become harder to increase at the single nanometre scale, the CEO of NVIDIA has asserted that Moore's Law is effectively dead. This means that in the short term, to meet demand, more equipment and infrastructure will have to be deployed, in greater density.

All changes will impact upon cooling infrastructure and cost

In this scenario of increasing demand, higher densities, larger deployments and greater individual energy demand, cooling capacity must be ramped up too. Air as a cooling medium was already reaching its limits, being as it is difficult to manage, imprecise and somewhat chaotic. As rack systems become more demanding, often mixing both CPU and GPU-based equipment, individual rack demands are approaching or exceeding 30kW each. Air-based systems also tend to demand a high level of water consumption, for which the industry has also received criticism in the current environment.

Liquid cooling technologies have developed as a means to meet the demands of both volume and density needed for tomorrow's data services. Liquid cooling takes many forms, but the three primary techniques currently are direct-to-chip, rear door heat exchangers and immersion cooling.

Direct-to-chip (DtC) cooling uses a metal plate that sits on the chip or component and allows liquid to circulate within enclosed chambers, carrying heat away. It is often used for specialist applications, such as high performance computing (HPC) environments.

Rear door heat exchangers are close-coupled indirect systems that circulate liquid through embedded coils to remove server heat before it is exhausted into the room. They have the advantage of keeping the entire room at the inlet air temperature, making hot and cold aisle cabinet configurations and air containment designs redundant, as the exhaust air is cooled to inlet temperature and can recirculate back to the servers.
Immersion technology employs a dielectric fluid that submerges equipment and carries away heat through direct contact. This enables operators to immerse standard servers with certain minor modifications, such as fan removal, as well as sealed spinning disk drives; solid-state equipment generally does not require modification. An advantage of this precision liquid cooling approach is that full immersion provides liquid thermal density, absorbing heat for several minutes after a power failure without the need for back-up pumps.

Cundall's liquid cooling findings

According to a study by engineering consultant Cundall, liquid cooling technology outperforms conventional air cooling. This is principally due to the higher operating temperature of the facility water system (FWS) compared to the cooling mediums used for air-cooled solutions. In all air-cooled cases, considerable energy and water are consumed to arrive at a supply air condition that falls within the required thermal envelope; with liquid cooling, this is avoided.

Consistent benefits were found, in terms of energy efficiency and consumption, water usage and space reduction, across multiple liquid cooling scenarios, from hybrid to full immersion, along with OpEx and CapEx benefits. In hyperscale, colocation and edge computing scenarios, Cundall found the total cost of cooling ITE per kW consumed was 13-21% lower for liquid cooling than for the base case of current air cooling technology. In terms of emissions, Cundall states that PUE and TUE are lower for the liquid cooling options in all tested scenarios. Expressed as kg CO2 per kW of ITE power per year, the reduction was more than 6% for colocation, rising to almost 40% for edge computing scenarios.

What does the future hold in terms of liquid cooling?

Combinations of liquid and air cooling techniques will be vital in providing a transition, especially for legacy facilities, to the kind of efficiency and emission-conscious cooling needed by current and future facilities. Though immersion techniques offer the greatest effect, hybrid cooling offers an improvement over air alone, with OpEx, performance and management advantages.

Even as the data infrastructure industry institutes initiatives to better understand, manage and report sustainability efforts, more can be done to make every aspect of implementation and operation sustainable. Developments in liquid cooling technologies are a step forward that will enable operators and service providers to meet demand while ensuring that sustainability obligations can be met. Initially, hybrid solutions will help legacy operators make the transition to more efficient and effective systems, while more advanced technologies will make new facilities more efficient, even as capacity is built out to meet rising demand. By working collaboratively with the broad spectrum of vendors and service providers, cooling technology providers can ensure that requirements are met, enabling the digital economy to develop to the benefit of all, while contributing towards a liveable future.
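The "kg CO2 per kW of ITE power per year" figure used in the Cundall discussion above can, in principle, be reproduced from a facility's PUE and the carbon intensity of its electricity supply. The sketch below shows that conversion for an assumed air-cooled and liquid-cooled PUE and an assumed grid intensity; the inputs are illustrative and are not Cundall's study data.

```python
# Illustrative conversion: PUE and grid carbon intensity into
# kg CO2 per kW of ITE power per year. Inputs are assumptions, not study data.

HOURS_PER_YEAR = 8760
GRID_KG_CO2_PER_KWH = 0.25   # assumed grid carbon intensity

def annual_kg_co2_per_kw_ite(pue: float) -> float:
    """Each kW of IT load draws pue kW at the utility meter, all year round."""
    return pue * HOURS_PER_YEAR * GRID_KG_CO2_PER_KWH

air_cooled = annual_kg_co2_per_kw_ite(pue=1.5)      # assumed air-cooled PUE
liquid_cooled = annual_kg_co2_per_kw_ite(pue=1.2)   # assumed liquid-cooled PUE

print(f"Air-cooled:    {air_cooled:,.0f} kg CO2 per kW ITE per year")
print(f"Liquid-cooled: {liquid_cooled:,.0f} kg CO2 per kW ITE per year")
print(f"Reduction:     {1 - liquid_cooled / air_cooled:.0%}")
```

With these assumed inputs the reduction tracks the PUE gap directly; the larger percentage reductions Cundall reports for edge scenarios reflect the bigger efficiency gains available in those deployments.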

Vertiv introduces new chilled water thermal wall
Vertiv has introduced the Vertiv Liebert CWA, a new generation of thermal management system for slab floor data centres. For decades, hyperscale and colocation providers have used raised floor environments to cool their IT equipment. Simplifying data centre design with slab floors enables new white space to be built more efficiently and cost-effectively, but it also introduces new cooling challenges. The Liebert CWA was designed to provide uniform air distribution across the larger surface area that comes with a slab floor application, while also allowing more space for rack installation and compute density.

Developed in the United States, the Liebert CWA chilled water thermal wall cooling unit is available in 250kW, 350kW and 500kW capacities across EMEA as well as the Americas. Liebert CWA technology utilises integrated state-of-the-art controls to facilitate improved airflow management and provide an efficient solution for infrastructures facing the challenges of modern IT applications. The Liebert CWA can also be integrated with the data centre’s chilled water system to improve the operating conditions of the entire cooling network. Furthermore, the Liebert CWA is installed outside the IT space to free up more floor space in the data centre, increase accessibility for maintenance personnel and increase the security of the IT space itself.

“The launch of the Liebert CWA reinforces our mission to provide innovative, state-of-the-art technologies for our customers that allow them to optimise the design and operation of their data centres,” says Roberto Felisi, Senior Global Director, Thermal Core Offering and EMEA Business Leader at Vertiv. “As the Liebert CWA can be quickly integrated with existing cooling systems, customers can leverage all the benefits of a slab floor layout, such as lower installation and maintenance costs, and a greater availability of white space.”

Air handling units have been used in the past to cool raised-floor data centres, but there is now an opportunity in the market to drive more innovative thermal management solutions for slab floor data centres. The Vertiv Liebert CWA provides customers with a standardised thermal wall built specifically for data centre applications, minimising the installation costs of custom-made solutions on site. The product’s layout is engineered to maximise cooling density and to meet the requirements for cooling continuity set by the most trusted and established certification authorities for data centre design and operation.

Vertiv has developed the Liebert CWA in close consultation with experienced data centre operators. With data centres having a myriad of layouts and equipment configurations, Vertiv has defined a strategic roadmap to enhance standardised thermal management solutions for slab floor applications. Vertiv also provides consulting and design expertise to create the right solution for its customers’ specific data centre white space requirements.


