
Cooling


Iceotope announces the retirement of CEO David Craig
Iceotope, the precision liquid cooling specialist, has announced the retirement of David Craig from the position of CEO, effective from 30 September 2024. The company will be led jointly by Chief Commercial Officer (CCO) Nathan Blom and Chief Financial Officer (CFO) Simon Jesenko until the appointment of David’s successor. David will continue to advise the company and provide assistance during the transition period.

Nathan has a leadership background driving revenue and strategy in Fortune 500 companies, including Lenovo and HP. Simon is a deep tech finance executive with experience supporting private equity and venture capital-backed companies as they achieve hypergrowth.

David joined Iceotope in June 2015 and, during his nine years at the helm, has successfully guided the company through a transformative period as it seeks to become recognised as the leader in precision liquid cooling. His achievements include building a strong team with a clear vision to engineer practical liquid cooling solutions to meet emerging challenges, such as AI, distributed telco edge, high power dense computing, and sustainable data centre operations.

The company’s cooling technology is critical in meeting today’s global data centre sustainability challenges. Its technology removes nearly 100% of the heat generated, reduces energy use by up to 40% and cuts water consumption by up to 100%. The strength of its technology has attracted an international consortium of investors that includes ABC Impact, British Patient Capital, Northern Gritstone, nVent and SDCL.

David comments, “The past nine years have been an amazing ride – we have built a fantastic team, developed a great IP portfolio and created the only liquid cooling solution that addresses the thermal and sustainability challenges facing the data centre industry today and tomorrow.

“I have enjoyed every moment and have nothing but pride in the team, company and product. However, it feels like now is an appropriate time for me to step aside, enjoy retirement, and focus on other passions in my life, particularly my charitable work in the UK and Africa. I look forward to seeing the future success of Iceotope and can’t wait to see what comes next.”

Iceotope Chairman, George Shaw, states, “On behalf of everyone at Iceotope, we thank David for his dedication and endless enthusiasm for the company, the technology and the people who make it all possible. We know he will be a tremendous brand ambassador for precision liquid cooling in the years to come. We wish him all the best in his well-deserved retirement.”

For more from Iceotope, click here.

IST completed at new data centre campus in Virginia
Corscale is moving closer to the opening of its initial 72MW data centre at Gainesville Crossing Data Campus in Virginia, US, following the successful integrated systems testing (IST) of the recently installed Airedale by Modine cooling solution. Advising on all areas of design, installation and operations, Airedale by Modine has worked closely with Corscale and its approved contractors to maximise system efficiencies.

Following the approval of all four phases of IST, the specialist US commissioning agent appointed by Corscale has commended Airedale, noting that it has gone above and beyond its remit to drive optimisation and efficiency gains. The independent commissioning agent was appointed to manage the testing and handover of this project, and it split the testing schedule into four 18MW data halls for increased scrutiny.

With full expectation of the equipment to perform in all conditions, it put the fan wall units and chillers to the test in emergency simulations (for example, fast-start and sequencer testing). For a fast-start test, the power is switched off and reinstated after 30 seconds by a generator. An uninterruptible power supply (UPS) restores the 18MW power feed in the data hall, whilst the chiller system has to return to full load and remove the build-up of heat that occurred during the 30 seconds of downtime. Sequencer testing involves deliberately ‘failing’ a chiller to ensure the next chiller in the sequence handles the heat load.

Other critical scenarios are also tested and reported back to both Corscale and the unnamed hyperscale client who will eventually lease the data centre space. The feedback from these reports has been exceptional, recognising Airedale’s expertise and willingness to drive efficiencies further.

Airedale was appointed by Corscale because of its innovative chiller economiser technology, paired with its in-depth knowledge and understanding of the data centre industry. The order for the first data centre at Gainesville Crossing Data Campus includes 56 OptiChill chillers, 256 AireWall fan wall units, and 8 SmartCool computer room air handling units, providing 72MW of cooling. Phase one of testing started at the back end of 2023 and, with all four phases now completed and signed off, the data centre will soon be handed over to Corscale.

Nic Bustamante, Chief Technology Officer for Corscale, says, “We have been consistently reassured by Airedale’s technical expertise and commitment, seeing it go above and beyond, sharing its knowledge and experience with other specialist contractors to develop the most efficient and effective system for our clients.”

Rob Bedard, General Manager of Airedale by Modine North America, adds, “Working with Corscale is a privilege that allows us to form a collaborative working environment with its appointed agencies and end-user clients. Such transparency and ease of communication has afforded us all the opportunity to further enhance efficiencies and maximise opportunities for sustainability gains. We look forward to undertaking more great work with Corscale on this and future projects.”

For more from Airedale, click here.
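As an illustration of the sequencer testing described above, the short sketch below models the failover idea in Python. It is purely illustrative: the chiller names, capacities and load figures are hypothetical, and this is not Airedale's or the commissioning agent's actual control or test software.

```python
# Illustrative sketch only: a simplified sequencer check. Chiller names,
# capacities and the load figure are hypothetical.

def redistribute_load(chillers, heat_load_kw):
    """Assign the hall heat load to healthy chillers in sequence order."""
    healthy = [c for c in chillers if c["healthy"]]
    remaining = heat_load_kw
    plan = {}
    for c in healthy:
        take = min(c["capacity_kw"], remaining)
        plan[c["name"]] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise RuntimeError(f"{remaining} kW of heat load cannot be served")
    return plan

# Sequencer test: deliberately 'fail' the lead chiller and confirm the next
# chillers in the sequence pick up the remaining heat load.
chillers = [
    {"name": "CH-01", "capacity_kw": 1500, "healthy": False},  # failed on purpose
    {"name": "CH-02", "capacity_kw": 1500, "healthy": True},
    {"name": "CH-03", "capacity_kw": 1500, "healthy": True},
]

print(redistribute_load(chillers, 3000))  # {'CH-02': 1500, 'CH-03': 1500}
```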

Iceotope launches state-of-the-art liquid cooling lab
Iceotope Technologies, a global provider of precision liquid cooling technology, has announced the launch of Iceotope Labs, the first of its state-of-the-art liquid cooling lab facilities, in Sheffield. Designed to revolutionise high-density data centre research and testing capabilities for customers seeking to deploy liquid cooling solutions, Iceotope believes that Iceotope Labs will set new standards as the industry's most advanced liquid-cooled data centre test environment available today.

Amid the exponential growth of AI and machine learning, liquid cooling is rapidly becoming an enabling technology for AI workloads. As operators evolve their data centre facilities to meet this market demand, validating liquid cooling technology is key to future-proofing infrastructure decisions. By leveraging advanced monitoring capabilities, data analysis tools, and a specialist team of test engineers, Iceotope Labs will provide quantitative data and a state-of-the-art research and development (R&D) environment to demonstrate the benefits of liquid cooling to customers and partners seeking to utilise the latest advancements in high-density infrastructure and GPU-powered computing.

Examples of recent research conducted by Iceotope Labs include groundbreaking testing for next-gen chip-level cooling at both 1500W and 1000W. These tests demonstrated precision liquid cooling’s ability to meet the thermal demands of future computing architectures needed for AI compute.

Working in partnership with EfficiencyIT, a UK specialist in data centres, IT and critical communications environments, the first of Iceotope’s bespoke labs showcases the adaptability and flexibility of leveraging liquid cooling in a host of data centre settings, including HPC, supercomputing and edge environments. The fully functional, small-scale liquid cooled data centre includes two temperature-controlled test rooms and dedicated space for thermal, mechanical and electronic testing for everything from next-generation CPUs and GPUs to racks and manifolds.

Iceotope Labs also features a facility water system (FWS) loop, a technology cooling system (TCS) loop with heat exchangers, as well as an outside dry cooler – demonstrating key technologies for a complete liquid cooled facility. The two flexible, secondary loops are independent of each other and have a large temperature band to stress-test the efficiency and resiliency of a customer's IT equipment if and when required. Additionally, the flexible test space considers all ASHRAE guidelines and best practices to ensure optimal conditions for a range of test setups, with enhanced control and monitoring, all while maximising efficiency and safety.

“We are investing in our research and innovation capabilities to offer customers an unparalleled opportunity,” says David Craig, CEO of Iceotope. “Iceotope Labs not only serves as a blueprint for what a liquid cooled data centre should be, but is also a collaborative hub for clients to explore liquid cooling solutions without the need for their own lab space. It's a transformative offering within the data centre industry.”

David continues, “We’d like to thank EfficiencyIT for its role in bringing Iceotope Labs to fruition. Its design expertise has empowered us with the flexibility needed to create a cutting-edge facility that exceeds industry standards.”

“With new advancements in GPU, CPU and AI workloads having a transformative impact on both data centre design and cooling architectures, it’s clear to see that liquid cooling will play a significant role in improving the resiliency, energy and environmental impact of data centres,” adds Nick Ewing, MD, EfficiencyIT. “We’re delighted to have supported Iceotope throughout the design, development and installation of its industry-first Iceotope Lab, and look forward to building on our collaboration as, together, we develop a new customer roadmap for high-density, liquid-cooled data centre solutions.”

Located at Iceotope's global headquarters in Sheffield, UK, Iceotope Labs further establishes the site as a hub for technology innovation and enables Iceotope to continue to deliver the highest level of customer experience.

For more from Iceotope, click here.
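As a side note on the ASHRAE guidelines mentioned above, the sketch below shows how a test engineer might check a TCS loop supply temperature against facility-water temperature classes. The class limits used here are assumptions for illustration; the current ASHRAE guidelines remain the authoritative reference, and this is not Iceotope tooling.

```python
# Illustrative sketch only: checking a secondary (TCS) loop supply temperature
# against ASHRAE liquid-cooling facility water classes. The limits below are
# assumptions for illustration; consult current ASHRAE guidance for
# authoritative values.

ASHRAE_W_CLASS_MAX_SUPPLY_C = {
    "W17": 17.0,
    "W27": 27.0,
    "W32": 32.0,
    "W40": 40.0,
    "W45": 45.0,
}

def compliant_classes(supply_temp_c: float) -> list[str]:
    """Return the W classes whose maximum facility supply temperature the
    measured TCS supply temperature stays within."""
    return [w for w, limit in ASHRAE_W_CLASS_MAX_SUPPLY_C.items()
            if supply_temp_c <= limit]

print(compliant_classes(30.0))  # ['W32', 'W40', 'W45']
```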

atNorth recognised at RAC Cooling Industry Awards
atNorth, the Nordic colocation, high-performance computing, and AI service provider, has announced its shortlisting in the ‘Customer Initiative of the Year’ category at RAC’s 21st annual Cooling Industry Awards. The awards aim to celebrate innovation, best practice and business excellence across the refrigeration and air conditioning sectors.

With global awareness growing of the amount of energy needed to power and cool data centres, especially those built to cater for AI and other data-intensive industries, it is important to atNorth to showcase the benefits of its cool Nordic locations, which allow for highly energy-efficient cooling technologies.

atNorth was recognised for its work with Shearwater Geoservices, a business that initially had some apprehension over moving digital infrastructure away from the UK to one of atNorth’s data centres in Iceland. atNorth liaised extensively with the Shearwater team, enabling access to its Gompute HPCaaS Platform in order to conduct comprehensive testing before migrating to its own servers. Shearwater successfully moved its UK HPC to atNorth’s ICE02 site, resulting in a 92% reduction in CO2 output and an 85% reduction in cost.

“We are delighted to be acknowledged at RAC’s Annual Cooling Awards,” says Anna Kristín Pálsdóttir, CDO at atNorth. “The cooling of digital infrastructure is becoming a fundamental factor in choosing a data centre partner and we are committed to raising awareness of more sustainable options in the industry.”

The news follows atNorth’s win in the ‘Digital Infrastructure Project of the Year’ category at the Tech Capital Awards. Additionally, the business has achieved considerable recognition from multiple other awarding bodies, including TechRound’s Sustainability60 campaign, the Data Cloud Global Awards, the Energy Awards, the DCS Awards and the UK Green Business Awards.

For more from atNorth, click here.

New Danfoss connector for liquid cooling applications
Danfoss Power Solutions has launched a Blind Mate Quick Connector (BMQC) for data centre liquid cooling applications. Compliant with the soon-to-be-released Open Compute Project (OCP) Open Rack V3 specifications, the Danfoss Hansen BMQC simplifies installation and maintenance of inner rack servers while increasing reliability and efficiency.

The BMQC enables blind connection of the server chassis to the manifold at the rear of the rack, providing faster and easier installation and maintenance in inaccessible or non-visible locations. With its patented self-alignment design, the BMQC compensates for angular and radial misalignment of up to 5mm and 2.7 degrees, enabling simple and secure connections.

The Danfoss Hansen BMQC also offers a highly reliable design, the company states. The coupling is manufactured from corrosion-resistant 303 stainless steel and the seal material is EPDM rubber, providing broad fluid compatibility and a long lifetime with minimal maintenance requirements. In addition, Danfoss performs helium leak testing on every BMQC to ensure 100% leak-free operation.

Amanda Bryant, Product Manager at Danfoss Power Solutions, comments, “As a member of the Open Compute Project community, Danfoss is helping set the industry standard for data centre liquid cooling. Our rigorous product design and testing capabilities are raising the bar for component performance, quality, and reliability. Highly critical applications like data centre liquid cooling require 100% uptime and leak-free operation, and our complete liquid cooling portfolio is designed to meet this demand, making Danfoss a strong system solution partner for data centre owners.”

With its high flow rate and low pressure drop, the BMQC improves system efficiency. This reduces the power consumption of the data centre rack, thereby reducing operational costs. Furthermore, the BMQC can be connected and disconnected under pressure without the risk of air entering the system. This eliminates the need to depressurise the entire system, minimising downtime.

The Danfoss Hansen BMQC features a working pressure of 2.4 bar (35 psi), a rated flow of 6 litres per minute (1.6 gallons per minute), and a maximum flow rate of 10 lpm (2.6 gpm). It has a pressure drop of 0.15 bar (2.3 psi) at 6 lpm (1.6 gpm). It is available in a 5mm size and is interchangeable with other OCP Open Rack V3 blind mate quick couplings.

For more from Danfoss, click here.
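For readers wanting to relate the published figures, the sketch below records the quoted BMQC ratings and flags a proposed operating point that falls outside them. It is a minimal illustration based only on the numbers above, not a Danfoss tool, and it does not model how pressure drop changes away from the rated point.

```python
# Minimal sketch: checking a proposed operating point against the BMQC
# ratings quoted in the article. Purely illustrative; not a Danfoss tool.

BMQC_SPEC = {
    "working_pressure_bar": 2.4,          # 35 psi
    "rated_flow_lpm": 6.0,                # 1.6 gpm
    "max_flow_lpm": 10.0,                 # 2.6 gpm
    "pressure_drop_bar_at_rated": 0.15,   # 2.3 psi at 6 lpm
}

def check_operating_point(flow_lpm: float, loop_pressure_bar: float) -> list[str]:
    """Return any spec limits the proposed operating point exceeds."""
    issues = []
    if loop_pressure_bar > BMQC_SPEC["working_pressure_bar"]:
        issues.append("loop pressure exceeds 2.4 bar working pressure")
    if flow_lpm > BMQC_SPEC["max_flow_lpm"]:
        issues.append("flow exceeds 10 lpm maximum")
    elif flow_lpm > BMQC_SPEC["rated_flow_lpm"]:
        issues.append("flow above 6 lpm rated point; expect a higher pressure drop")
    return issues

print(check_operating_point(flow_lpm=8.0, loop_pressure_bar=2.0))
```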

Schneider experts explore liquid cooling for AI data centres
Schneider Electric has released its latest white paper, Navigating Liquid Cooling Architectures for Data Centres with AI Workloads. The paper provides a thorough examination of liquid cooling technologies and their applications in modern data centres, particularly those handling high-density AI workloads.

The demand for AI is growing at an exponential rate. As a result, the data centres required to enable AI technology are generating substantial heat, particularly those containing AI servers with accelerators used for training large language models and inference workloads. This heat output is increasing the necessity for liquid cooling to maintain optimal performance, sustainability, and reliability. Schneider Electric’s latest white paper guides data centre operators and IT managers through the complexities of liquid cooling, offering clear answers to critical questions about system design, implementation, and operation.

Over the 12 pages, authors Paul Lin, Robert Bunger, and Victor Avelar identify two main categories of liquid cooling for AI servers: direct-to-chip and immersion cooling. They describe the components and functions of a coolant distribution unit (CDU), which are essential for managing temperature, flow, pressure, and heat exchange within the cooling system.

“AI workloads present unique cooling challenges that air cooling alone cannot address,” says Robert Bunger, Innovation Product Owner, CTO Office, Data Centre Segment, Schneider Electric. “Our white paper aims to demystify liquid cooling architectures, providing data centre operators with the knowledge to make informed decisions when planning liquid cooling deployments. Our goal is to equip data centre professionals with practical insights to optimise their cooling systems. By understanding the trade-offs and benefits of each architecture, operators can enhance their data centres’ performance and efficiency.”

The white paper outlines three key elements of liquid cooling architectures:
Heat capture within the server: Utilising a liquid medium (e.g. dielectric oil, water) to absorb heat from IT components.
CDU type: Selecting the appropriate CDU based on heat exchange methods (liquid-to-air, liquid-to-liquid) and form factors (rack-mounted, floor-mounted).
Heat rejection method: Determining how to effectively transfer heat to the outdoors, either through existing facility systems or dedicated setups.

The paper details six common liquid cooling architectures, combining different CDU types and heat rejection methods, and provides guidance on selecting the best option based on factors such as existing infrastructure, deployment size, speed, and energy efficiency. With the increasing demand for AI processing power and the corresponding rise in thermal loads, liquid cooling is becoming a critical component of data centre design. The white paper also addresses industry trends such as the need for greater energy efficiency, compliance with environmental regulations, and the shift towards sustainable operations.

“As AI continues to drive the need for advanced cooling solutions, our white paper provides a valuable resource for navigating these changes,” Robert adds. “We are committed to helping our customers achieve their high-performance goals while improving sustainability and reliability.”

This white paper is particularly timely and relevant in light of Schneider Electric's recent collaboration with NVIDIA to optimise data centre infrastructure for AI applications. This partnership introduced the first publicly available AI data centre reference designs, leveraging NVIDIA's advanced AI technologies and Schneider Electric's expertise in data centre infrastructure. Schneider claims that the reference designs set new standards for AI deployment and operation, providing data centre operators with innovative solutions to manage high-density AI workloads efficiently.

For more information and to download the white paper, click here. For more from Schneider Electric, click here.
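To give a feel for the kind of selection guidance the white paper describes, the sketch below encodes a simple decision helper mapping a few site factors to a CDU type and heat rejection method. The rules and thresholds are illustrative assumptions, not Schneider Electric's published guidance or the paper's six architectures.

```python
# Minimal sketch of the kind of selection logic the white paper walks through:
# choosing a CDU type and heat rejection method from a few site factors.
# The rules and thresholds here are illustrative assumptions only.

def suggest_architecture(has_facility_water: bool, racks: int, fast_deployment: bool) -> str:
    if not has_facility_water and racks <= 2:
        return "rack-mounted liquid-to-air CDU rejecting heat to existing room cooling"
    if has_facility_water and fast_deployment:
        return "rack-mounted liquid-to-liquid CDU on the existing facility water system"
    if has_facility_water:
        return "floor-mounted liquid-to-liquid CDU on the existing facility water system"
    return "floor-mounted liquid-to-liquid CDU with a dedicated heat-rejection loop"

print(suggest_architecture(has_facility_water=True, racks=20, fast_deployment=False))
```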

Vertiv cooling unit seeks to lower carbon footprint
Vertiv, a global provider of critical digital infrastructure and continuity solutions, has introduced new, highly efficient Vertiv Liebert PDX-PAM direct expansion perimeter units with low global warming potential (GWP) and non-flammable R513A refrigerant. Available now in the EMEA region, the system is designed to operate with an eco-friendly refrigerant (as compared to legacy refrigerants) to enable increased efficiency, reliability and maximum flexibility of installation.

Liebert PDX-PAM allows data centre owners to comply with the EU F-Gas Regulation 2024/573 and supports their pressing sustainability goals. The non-flammable R513A refrigerant provides up to a 70% GWP reduction when compared to the traditional R410A, without compromising safety or reliability. No additional safety devices are required, as is the case for units using flammable refrigerants, enabling reduced installation costs and CAPEX.

“In an era where efficiency and reliability are paramount, we recognise the urgent need for eco-friendly alternatives to stay ahead of regulatory requirements and provide our customers with state-of-the-art innovations,” states Karsten Winther, President for Vertiv in Europe, Middle East and Africa. “With this new solution, we're not just addressing our customers' current sustainability objectives; we're actively innovating and advancing the future of cooling technology and setting new heights for efficiency and reliability.”

Liebert PDX-PAM is available from 10kW to 80kW with a wide range of airflow configurations, options and accessories, making the unit easily adaptable to various installation needs, from small to medium data centres, including edge computing applications, UPS and battery rooms. In conjunction with the Liebert PDX-PAM units, a wide choice of cooling solutions is available for managing heat rejection externally, depending on the specific system configuration.

Vertiv is seeking to raise the technology threshold with Liebert PDX-PAM: a low-GWP, non-flammable R513A refrigerant solution with inverter-driven brushless motor compressors, a staged coil design with an innovative patent-pending filter, electronic expansion valves and state-of-the-art electronically commutated (EC) fans, all included as standard features. The integrated Vertiv Liebert iCOM controller enables seamless synchronisation of these components, allowing complete modulation of performance. This way, the Liebert PDX-PAM unit can adapt to changing operating conditions and heat load efficiently and reliably. The full continuous modulation capability significantly reduces annual power consumption, resulting in a more cost-effective solution thanks to the enhanced part-load efficiency. Precise monitoring of the machine's operation also facilitates performance tracking and more timely and effective maintenance, creating opportunities for predictive maintenance actions.

“The introduction of low GWP refrigerants for direct expansion systems marks a significant advancement in sustainable air-cooling technology,” says Lucas Beran, Research Director at Dell’Oro Group. “By utilising low-GWP and non-flammable refrigerants, Vertiv complies with EU F-Gas Regulation requirements and aims to reduce carbon footprints without compromising on safety or efficiency. This innovation is significant for data centre operators aiming to achieve their sustainability goals while maintaining high operational standards.”

For more from Vertiv, click here.
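The arithmetic behind the 'up to 70%' GWP claim can be checked quickly, as in the sketch below. The GWP100 values used are commonly cited figures and are assumptions here rather than numbers taken from Vertiv's announcement.

```python
# Quick arithmetic behind the 'up to 70% GWP reduction' claim. The GWP100
# values below are commonly cited figures and are assumptions here, not
# numbers from Vertiv's announcement.

GWP = {"R410A": 2088, "R513A": 631}

reduction = 1 - GWP["R513A"] / GWP["R410A"]
print(f"GWP reduction switching R410A -> R513A: {reduction:.0%}")  # ~70%
```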

New white paper published on liquid cooling for AI data centres
Schneider Electric has released white paper 133, titled Navigating Liquid Cooling Architectures for Data Centres with AI Workloads. The paper provides a thorough examination of liquid cooling technologies and their applications in modern data centres, particularly those handling high-density AI workloads.

The demand for AI is growing at an exponential rate. As a result, the data centres required to enable AI technology are generating substantial heat, particularly those containing AI servers with accelerators used for training large language models and inference workloads. This heat output is increasing the necessity for liquid cooling to maintain optimal performance, sustainability, and reliability. Schneider Electric’s latest white paper guides data centre operators and IT managers through the complexities of liquid cooling, offering clear answers to critical questions about system design, implementation, and operation.

Understanding liquid cooling architectures

Over the 12 pages, authors Paul Lin, Robert Bunger and Victor Avelar identify two main categories of liquid cooling for AI servers: direct-to-chip and immersion cooling. They describe the components and functions of a coolant distribution unit (CDU), which are essential for managing temperature, flow, pressure, and heat exchange within the cooling system.

“AI workloads present unique cooling challenges that air cooling alone cannot address,” says Robert Bunger, Innovation Product Owner, CTO Office, Data Centre Segment, Schneider Electric. “Our white paper aims to demystify liquid cooling architectures, providing data centre operators with the knowledge to make informed decisions when planning liquid cooling deployments. Our goal is to equip data centre professionals with practical insights to optimise their cooling systems. By understanding the trade-offs and benefits of each architecture, operators can enhance their data centres’ performance and efficiency.”

The white paper outlines three key elements of liquid cooling architectures:
Heat capture within the server: Utilising a liquid medium (e.g. dielectric oil, water) to absorb heat from IT components.
CDU type: Selecting the appropriate CDU based on heat exchange methods (liquid-to-air, liquid-to-liquid) and form factors (rack-mounted, floor-mounted).
Heat rejection method: Determining how to effectively transfer heat to the outdoors, either through existing facility systems or dedicated setups.

Choosing the right architecture

The paper details six common liquid cooling architectures, combining different CDU types and heat rejection methods, and provides guidance on selecting the best option based on factors such as existing infrastructure, deployment size, speed and energy efficiency. With the increasing demand for AI processing power and the corresponding rise in thermal loads, liquid cooling is becoming a critical component of data centre design. The white paper also addresses industry trends such as the need for greater energy efficiency, compliance with environmental regulations, and the shift towards sustainable operations.

“As AI continues to drive the need for advanced cooling solutions, our white paper provides a valuable resource for navigating these changes,” adds Robert. “We are committed to helping our customers achieve their high-performance goals while improving sustainability and reliability.”

For more from Schneider Electric, click here.
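As a brief illustration of the heat-exchange role a CDU plays, the sketch below applies the standard Q = ṁ·cp·ΔT relationship to a water-based technology cooling system (TCS) loop. The flow rate and temperatures are illustrative assumptions, not figures from the white paper.

```python
# Minimal sketch of the heat-exchange arithmetic a CDU manages on its
# technology cooling system (TCS) loop: Q = m_dot * cp * dT. The values in
# the example call are illustrative assumptions.

def heat_removed_kw(flow_lpm: float, supply_c: float, return_c: float,
                    density_kg_per_l: float = 0.99, cp_kj_per_kg_k: float = 4.18) -> float:
    """Heat carried away by a water-based coolant loop, in kW."""
    mass_flow_kg_s = flow_lpm * density_kg_per_l / 60.0
    return mass_flow_kg_s * cp_kj_per_kg_k * (return_c - supply_c)

# e.g. 120 l/min with a 10 K temperature rise across the IT equipment:
print(round(heat_removed_kw(flow_lpm=120, supply_c=30, return_c=40), 1))  # ~82.8 kW
```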

Data centre cooling market to reach £13.2bn in 2028
According to new research from global analyst Omdia, the data centre thermal management market has surged to a staggering $7.67bn (£6bn), outpacing previous forecasts. This unprecedented growth is poised to continue with a robust CAGR of 18.4% until 2028. The surge will largely be fuelled by AI-driven demands and innovations in high-density infrastructure, marking a pivotal moment for the industry.

As AI computing becomes ubiquitous, the demand for liquid cooling has surged dramatically. Key trends include the rapid adoption of Rear Door Heat Exchangers (RDHx) combined with single-phase (1-P) direct-to-chip cooling, achieving an impressive 65% year-over-year growth and frequently integrating heat reuse applications. This period also sees a strategic blend of air and liquid cooling technologies, creating a balanced and efficient approach to thermal management.

Omdia’s Principal Analyst, Shen Wang, explains, “In 2023, the global data centre cooling market experienced increased consolidation, with Top 5 and Top 10 concentration ratios rising by 5% from the previous year. Omdia expanded vendor coverage in its report to include 49 companies, up from 40, adding Chinese OEMs and direct liquid cooling component suppliers. Vertiv, Johnson Controls, and Stulz retained their top three positions, with Vertiv notably gaining 6% market share due to strong North American demand and cloud partnerships.”

Market growth for data centre cooling was primarily constrained by production capacity, particularly for components like coolant distribution units (CDUs), rather than a lack of demand. Numerous supply chain players struggled to satisfy the soaring market needs, causing component shortages. However, improvements forecast for 2024 are expected to alleviate this issue, unlocking orders delayed from the previous year due to supply chain bottlenecks.

During this time, liquid cooling adoption witnessed robust growth, particularly in North America and China, with new vendors entering the scene and tracked companies exhibiting significant expansion. In this near-$1bn (£785m) liquid cooling market, direct-to-chip vendor CoolIT remains the leader, followed by immersion cooling leader Sugon and server vendor Lenovo.

The data centre thermal management market is advancing due to AI's growing influence and sustainability requirements. Despite strong growth prospects, the industry faces challenges with supply chain constraints in liquid cooling and in embracing sustainable practices. Moving forward, the integration of AI-optimised cooling systems, strategic vendor partnerships, and a continued push for energy-efficient and environmentally friendly solutions will shape the industry's evolution. Successfully addressing these challenges will ensure growth and establish thermal management as a cornerstone of sustainable and efficient data centre operations, aligning technology with environmental stewardship.

Shen adds, “Data centre cooling is projected to be a $16.8bn (£13.2bn) market by 2028, fuelled by digitalisation, high power capacity demand, and a shift towards eco-friendly infrastructure, with liquid cooling emerging as the biggest technology in the sector.”
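The growth figures quoted above can be sanity-checked with simple compounding, as sketched below. The five-year horizon is an assumption, and the small gap to Omdia's $16.8bn forecast presumably reflects rounding and year-by-year modelling rather than flat compounding.

```python
# Quick check of the growth arithmetic: compounding the 2023 market size at
# the stated CAGR. Five-year horizon (2024-2028) is an assumption here.

market_bn = 7.67   # 2023 baseline, $bn
cagr = 0.184

for year in range(2024, 2029):
    market_bn *= 1 + cagr

print(f"Implied 2028 market size: ${market_bn:.1f}bn")  # ~$17.8bn
```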

Schneider reveals data centre White Space portfolio
Schneider Electric, the leader in digital transformation of energy management and automation, today unveiled its revamped data centre White Space portfolio, covering the space where racks and IT equipment sit within a data centre. The new portfolio includes the second generation of NetShelter SX Enclosures (NetShelter SX Gen2), new NetShelter Aisle Containment, and a future update to the NetShelter Rack PDU Advanced, designed to meet the evolving needs of modern data centres, particularly those handling high-density applications and AI workloads, as well as regulatory requirements like the European Energy Efficiency Directive (EED).

The NetShelter SX Gen2 enclosures are specifically engineered to support the demands of contemporary data centres. These new racks can support up to 25% more weight than previous models, handling approximately 4,000 pounds (1,814 kilograms), which is essential for accommodating the heavier, denser equipment associated with AI and high-performance computing. Enhanced perforation in the doors increases airflow, vital for cooling high-density server configurations, and the racks offer more space and better cable management options for larger, more complex server setups. With security of physical equipment remaining an important requirement, the enclosures feature all-steel construction and three-point locking systems to improve data centre protection.

The NetShelter SX Gen2 racks reduce their overall climate change impact by around 3.3% per rack and are designed to be highly recyclable, with approximately 97% of the rack being recyclable. These racks are available in standard sizes of 42U, 45U, and 48U, along with wide, extra-wide and deep models.

“Our NetShelter SX Gen2 enclosures are a leap forward in addressing the critical requirements of high-density applications,” says Elliott Turek, Director of Category Management, Secure Power Division, Schneider Electric. “With enhanced weight support, airflow management, and physical security, we are enabling our customers to optimise their data centre operations while also advancing sustainability.”

Advanced cooling and flexibility with NetShelter Aisle Containment

The latest NetShelter Aisle Containment can achieve up to 20% more cooling capacity. This is crucial for managing the heat generated by AI servers and other high-density applications. The system incorporates an airflow controller that automates fan speed, reducing fan energy consumption by up to 40% compared to traditional passive cooling systems. The vendor-neutral containment systems provide greater flexibility and speed of setup for data centre operators, allowing for easier integration and adaptation to existing builds. The new design also simplifies installation and field modifications, while reducing energy expenses by between 5 and 10%.

“Containment remains paramount in today's high-density data centres,” Elliott notes. “Even in liquid cooled applications, air heat rejection plays a critical role. Our NetShelter Aisle Containment solutions not only enhance cooling capacity but also offer significant energy savings, aligning with our commitment to sustainability.”

Security and management with NetShelter Rack PDU Advanced and Secure NMC3

The NetShelter Rack PDU Advanced with Secure NMC3 is an updated power distribution unit equipped with advanced security features and enhanced management capabilities. The Secure NMC3 network management card provides robust cybersecurity measures and enables third-party validation of firmware updates for consistent compliance. Its support for mass firmware updates significantly reduces the manual effort required to keep the PDUs secure and up to date, which is crucial for maintaining security across large deployments. The PDU is suitable for a range of applications, including those with power requirements up to and including 70kW per rack, making it a versatile solution for various data centre configurations. It includes features that enhance energy efficiency and operational reliability, contributing to the overall sustainability of the data centre.

“Security and efficiency are at the forefront of our advanced PDUs,” Elliott explains. “By integrating expanded security and management features, we are ensuring that our customers can maintain secure and efficient operations with ease.”

All products in Schneider Electric’s revamped White Space portfolio are available for quotation and order (Secure NMC3 coming in Q4). For more from Schneider Electric, click here.
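As context for the fan-energy figure above, the sketch below illustrates the fan affinity law, under which fan power falls with the cube of fan speed. This is a general physics illustration of why automated fan-speed control saves energy, not Schneider Electric's published basis for the 'up to 40%' number.

```python
# Illustrative sketch of why modest fan-speed reductions yield large energy
# savings (fan affinity law: power scales with the cube of speed).

def fan_power_fraction(speed_fraction: float) -> float:
    """Fan power relative to full speed, per the cube-law approximation."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.84, 0.8):
    saving = 1 - fan_power_fraction(speed)
    print(f"speed {speed:.0%} -> ~{saving:.0%} fan energy saving")
```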


