
Liquid Cooling


Inspur Information and JD Cloud launch liquid-cooled server
Inspur Information and JD Cloud have announced the joint launch of the liquid-cooled rack server ORS3000S. The server uses cold-plate liquid cooling to cut data centre power consumption by 45% compared with traditional air-cooled rack servers, making it a green solution that dramatically reduces total cost of ownership (TCO). Cold-plate liquid cooling allows the ORS3000S to improve heat dissipation efficiency by 40%.

The server adopts a centralised power supply design with N+N redundancy, capable of meeting the power demands of a whole rack, and power balance optimisation keeps it operating at peak efficiency throughout, resulting in an overall efficiency gain of 10% compared with a distributed power supply. Pre-installation at the factory, plus efficient operations and maintenance (O&M), allows for 5-10x faster delivery and deployment.

The ORS3000S has been widely deployed in JD Cloud data centres, providing computing power support for JD during major shopping events. It delivers a performance increase of 34-56% while minimising power usage effectiveness (PUE), carbon emissions and energy consumption.

Inspur Information has been a pioneer in direct and indirect cooling. With new heat conduction technologies such as phase-change temperature uniformity, micro-channel cooling and immersion cooling, Inspur achieves a 30-50% improvement in the overall energy efficiency of the cooling system. This is achieved via cooling improvements throughout the server design, including micro/nano-cavity, phase-change and uniform-temperature designs for high-power components such as the CPU and GPU, which improve heat dissipation performance by 150% compared with traditional air cooling technologies.

Experienced in the industrial application of liquid cooling, Inspur has built one of the world's largest liquid-cooled data centre production facilities, with an annual manufacturing capacity of 100,000 servers. This includes a full-chain liquid-cooling smart manufacturing solution covering R&D, testing and delivery for the mass production of cold-plate liquid-cooled rack servers. As a result, data centre PUE is kept below 1.1 and the entire delivery cycle takes five to seven days.

Inspur Information's cold-plate, heat-pipe and immersion liquid-cooled products have been deployed at large scale. In addition, Inspur offers complete solutions for liquid-cooled data centres, including primary and secondary liquid cooling circulation and the coolant distribution unit (CDU). This total solution enables full-path liquid cooling circulation for data centres, with overall PUE reaching the design limit of less than 1.1. Inspur holds more than 100 core patents in liquid cooling and has participated in the formulation of technical standards and test specifications for cold-plate and immersion liquid-cooled products in data centres. The company is committed to leading the rapid development of the liquid cooling industry and the large-scale application of innovative liquid cooling technology.
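The article quotes a PUE below 1.1 for these liquid-cooled deployments. As a rough illustration only (not Inspur's methodology), the sketch below shows how PUE and the implied facility overhead can be computed from IT load and total facility power; all the numbers used are hypothetical.

```python
# Rough PUE illustration (hypothetical numbers, not vendor data).
# PUE = total facility energy / IT equipment energy.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return power usage effectiveness for a snapshot of facility and IT load."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0                      # assumed IT load
air_cooled_total_kw = it_load_kw * 1.5   # assumed legacy air-cooled PUE of 1.5
liquid_cooled_total_kw = it_load_kw * 1.08  # assumed liquid-cooled PUE of ~1.1

print(f"Air-cooled PUE:    {pue(air_cooled_total_kw, it_load_kw):.2f}")
print(f"Liquid-cooled PUE: {pue(liquid_cooled_total_kw, it_load_kw):.2f}")

# Overhead (non-IT) power saved at the same IT load:
saving_kw = air_cooled_total_kw - liquid_cooled_total_kw
print(f"Facility power saved at the same IT load: {saving_kw:.0f} kW")
```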

DCNN Exclusive: Making sustainability gains with liquid cooling
This piece was written by Stuart Crump, Director of Sales at Iceotope Technologies Limited, on how liquid cooling could be vital in the race to net zero.

Environmental, Social and Governance (ESG) objectives have started to drive data centre business goals as the world transitions to a low carbon economy. Sustainability is no longer viewed as a cost on business; indeed, many customers now use sustainability as a criterion for vendor selection. Positive action to reduce emissions is not only good for the planet, it is also good for business, and it signposts efficient data centres to an enlightened market.

New developments in liquid cooling can support data centre sustainability targets by significantly reducing facility energy consumption for mechanical services, decreasing water use, and providing a platform for high-grade reusable heat. Together, these characteristics add up to bottom-line benefits as well as ecological advantages for data centre operators, helping deliver competitive advantage in this highly commercialised sector.

According to the IEA, data centres account for around 1% of global electricity demand. While data centre workloads and internet traffic have multiplied dramatically since 2015, energy use has remained relatively flat. However, demand for digital services is growing at an astounding rate: for every bit of data that travels the network, a further five bits are transmitted within and among data centres. Immersion liquid cooling can greatly benefit data centre sustainability by reducing overall cooling energy requirements by up to 80%.

Data centre operators and customers now understand that air-cooled ITE environments are reaching the limits of their effectiveness. As compute densities increase, the energy demands of individual servers and racks spiral upwards. Legacy air-cooled data halls cannot move the volume of cool air through the racks that the latest CPU and GPU systems require to maintain operating temperature. Such sites must therefore have a plan that includes liquid cooling if they are to remain viable.

Liquid cooling techniques such as precision immersion cooling circulate small volumes of a harmless dielectric compound across the surface of the server, removing almost 100% of the heat generated by the electronic components. The latest solutions use a sealed chassis that enables IT equipment, including servers and storage devices, to be easily added to or removed from racks with minimal disruption and no mess.

Precision liquid cooling removes the requirement for server fans by eliminating the need to blow cool air over the IT components. Removing air cooling infrastructure from data centres also removes the capital expense of some cooling plant, as well as the operational costs of installation, power, servicing and maintenance. Removing fans and plant not only produces an immediate benefit in terms of reduced noise in the technical area, it also frees up useful space in racks and cabinets as well as in plant rooms. Space efficiency equates either to facilities with a smaller physical footprint or to the ability to host a larger number of high-density racks. Importantly, precision liquid cooling provides futureproof, scalable infrastructure to meet the provisioning requirements of tomorrow's workloads and storage needs.

Precision cooling and data centre water use

The media reports widely on the lack of clean water for irrigation and consumption in drought-hit areas around the world.
However, what has sometimes been called the data centre's 'dirty little secret' is the volume of potable water required to operate certain facilities. Many air-cooled data centres need water, and lots of it: a small 1MW data centre using a conventional air-cooling process can use around 25.5 million litres of water every year. With mainly air-cooled processes, the data centre industry is currently consuming billions of litres of water each year.

Precision immersion liquid cooling, on the other hand, consumes no water in most cases and can be installed anywhere, including in many existing data centres. Allowing for maintenance and water-loop refreshes in the cooling system, it can easily reduce data centre water use by more than 95%.

The benefit of all this hot air

Turning a cost item on the balance sheet into a revenue generator is the ultimate dream. Currently, air-cooled data centres eject heat into the atmosphere in the vast majority of cases. Liquid cooling techniques that capture and remove high-grade heat from the servers offer the capability to redirect this heat to district heating networks, industrial applications and other schemes. Using well-established techniques, this revenue stream, or sustainability project, could help to heat industrial sites and local facilities such as schools and hospitals.

Climate change, government intervention with emission standards, and public and investor pressure have helped drive change in the wider data centre business outlook. Savings and new revenue streams that benefit an organisation's sustainability credentials warrant a critical review of their cost/benefit. Data centres now have the opportunity to move away from previous notions of how they operate towards much greater efficiency and more sustainable operations.
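The water figures above lend themselves to a quick back-of-the-envelope check. The sketch below is a minimal illustration using the article's 25.5 million litres per year per MW for conventional air cooling and the quoted 95% reduction; the estate size is a hypothetical input, not an Iceotope figure.

```python
# Back-of-the-envelope water saving estimate using the article's figures:
# ~25.5 million litres/year per MW for air cooling, and a ~95% reduction
# with precision liquid cooling. The estate size below is hypothetical.

AIR_COOLED_LITRES_PER_MW_YEAR = 25_500_000
LIQUID_COOLING_REDUCTION = 0.95

def annual_water_use_litres(it_capacity_mw: float, liquid_cooled: bool) -> float:
    """Estimate annual cooling water use for a given IT capacity."""
    baseline = it_capacity_mw * AIR_COOLED_LITRES_PER_MW_YEAR
    return baseline * (1 - LIQUID_COOLING_REDUCTION) if liquid_cooled else baseline

capacity_mw = 10.0  # hypothetical 10 MW estate
air = annual_water_use_litres(capacity_mw, liquid_cooled=False)
liquid = annual_water_use_litres(capacity_mw, liquid_cooled=True)
print(f"Air-cooled estate:    {air / 1e6:.1f} million litres/year")
print(f"Liquid-cooled estate: {liquid / 1e6:.1f} million litres/year")
print(f"Water saved:          {(air - liquid) / 1e6:.1f} million litres/year")
```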

EuroEXA reports European Exascale programme innovation
At launch one of the largest projects ever funded by EU Horizon 2020, EuroEXA aimed to develop technologies to meet the demands of Exascale high-performance computing (HPC) and to provide a ground-breaking platform for breakthrough processor-intensive applications. EuroEXA brought together expertise from a range of disciplines across Europe, from leading technologists to end-user companies and academic organisations, to design a solution capable of scaling to a peak performance of 400 PetaFLOPS within a peak system power envelope of 30MW, approaching PUE parity by using renewables and chassis-level precision liquid cooling.

Dr Georgios Goumas, EuroEXA Project Coordinator, says, "Today, High-Performance Computing is ubiquitous and touches every aspect of human life. The need for massively scalable platforms to support AI-driven technology is critical to facilitate advances in every sector, from enabling more predictive medical diagnoses, treatment and outcomes to providing more accurate weather modelling so that, for example, agriculture can manage the effects of climate change on food production."

EuroEXA demonstrates EU innovation on an equal footing with the rest of the world

Meeting the need for a platform that answers the call for increased sustainability and a lower operational carbon footprint, the 16-partner coalition delivered an energy-efficient solution. To do so, the partners overcame challenges throughout the development stack, including energy efficiency, resilience, performance and scalability, programmability, and practicality. The resulting platform is more compact and cooler, reducing both the cost per PetaFLOPS and its environmental impact; it is robust and resilient across every component and manages faults without extended downtime; it remains manageable and will continue to provide Exascale performance as it grows in size and complexity; and it harnesses open-source systems to ensure the widest possible range of applications, keeping it relevant and able to impact real-world workloads.

The project extended and matured leading European software stack components and productive programming model support for FPGA and Exascale platforms, with advances in Maxeler MaxJ, OmpSs, GPI and BeeGFS. It built expertise in state-of-the-art FPGA programming through the porting and optimisation of 13 FPGA-accelerated applications, in the HPC domains of Climate and Weather, Physics and Energy, and Life Sciences and Bioinformatics, at multiple centres across Europe.

EuroEXA innovation being applied today at ECMWF and Neurasmus

A prototype of a weather prediction model component extracted from ECMWF's IFS suite demonstrated significantly better energy-to-solution than current HPC nodes, achieving a 3x improvement over an optimised GPU version running on an NVIDIA Volta GPU. Such an improvement in execution efficiency provides an exciting avenue for more power-efficient weather prediction in the future. Further successful outcomes came through the healthcare partnership with the Neurasmus programme at Amsterdam UMC, where brain activity is being investigated. The platform was used to generate more accurate neuron simulations than previously possible, helping to predict healthcare outcomes for patients more accurately.
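For context, the headline targets of 400 PetaFLOPS within a 30MW envelope imply a system-level efficiency that follows from a one-line calculation. The sketch below is purely illustrative arithmetic based on those two published targets, not a measured EuroEXA result.

```python
# Illustrative efficiency arithmetic from the EuroEXA design targets:
# 400 PetaFLOPS peak within a 30 MW peak system power envelope.

peak_flops = 400e15        # 400 PetaFLOPS
peak_power_watts = 30e6    # 30 MW

gflops_per_watt = (peak_flops / 1e9) / peak_power_watts
print(f"Design-target efficiency: {gflops_per_watt:.1f} GFLOPS/W")
# => roughly 13.3 GFLOPS per watt at the stated peak targets
```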
EuroEXA legacy – extensive FPGA testbed an aid to further developments

Outcomes include the deployment of what is believed to be the world's largest network cluster of FPGA (Field Programmable Gate Array) testbeds, configured to drive high-speed multi-protocol interconnect, with Ethernet switches providing low latency and high switching bandwidth. The original proposal was for three FPGA clusters across the European partnership; however, COVID-19 travel restrictions necessitated an increased resource of 22 testbeds, developed in various partner locations. This benefited the project by accelerating work through the massively increased permutations and iterations available, and has also provided a blueprint for several partners to develop high-performance FPGA-based technologies. Partners in the programme have committed to further technology developments that build on the advances made by the EuroEXA project and are now targeted at other applications.

Leading partners join forces with Equinix to test sustainable data centre innovations
Equinix has announced the opening of its first Co-Innovation Facility (CIF), located in its DC15 International Business Exchange (IBX) data centre at the Equinix Ashburn Campus in the Washington, D.C. area. A component of Equinix's Data Centre of the Future initiative, the CIF is a new capability that enables partners to work with Equinix on trialling and developing innovations. These innovations, such as identifying a path to clean hydrogen-enabled fuel cells or deploying more capable battery solutions, will be used to help define the future of sustainable digital infrastructure and services globally.

Sustainable innovations, including liquid cooling, high-density cooling, intelligent power management and on-site prime power generation, will be incubated in the CIF in partnership with leading data centre technology innovators including Bloom Energy, ZutaCore, Virtual Power Systems (VPS) and Natron Energy. In collaboration with Equinix, these partners will test core and edge technologies with a focus on proving reliability, efficiency and cost to build. These include:

Generator-less and UPS-less data centres (Bloom Energy) – utilising on-site solid oxide fuel cells enables the data centre to generate redundant, cleaner energy on-grid, and potentially eliminates the need for fossil fuel-powered generators and power-consuming Uninterruptible Power Supply (UPS) systems.

High-Density Liquid Cooling (ZutaCore) – highly efficient, direct-on-chip, waterless, two-phase liquid-cooled rack systems capable of cooling upwards of 100kW per rack in a light, compact design. This eliminates the risk of IT meltdown, minimises the use of scarce resources including energy, land, construction and water, and dramatically shrinks the data centre footprint.

Software-Defined Power (VPS) with cabinet-mounted Battery Energy Storage (Natron Energy) – cabinet power management and battery energy storage that manages power draw and minimises power stranding to near zero per cent, leading to a potential 30-50% improvement in power efficiency.

"ZutaCore is honoured to be featured at the CIF and to partner with Equinix to advance the proliferation of liquid cooling on a global scale," says Udi Paret, President of ZutaCore. "Together we aim to prove that liquid cooling is an essential technology in realising fundamental business objectives for data centres of today and into the future. HyperCool liquid cooling solutions deliver unparalleled performance and sustainability benefits to directly address sustainability imperatives. With little to no infrastructure change, it consistently provides easy-to-deploy and easy-to-maintain, environmentally friendly, economically attractive liquid cooling to support the highest core-count, highest-power and most dense requirements for a range of customer needs from the cloud to the edge."

The adoption of alternative data centre cooling to keep climate change in check
DataQube, together with Primaria, is championing the adoption of alternative data centre cooling refrigerants in response to European regulations to phase out greenhouse gases. Field trials are currently underway to establish the feasibility of replacing legacy HFC (hydrofluorocarbon) coolants with a next-generation refrigerant that carries heat efficiently and has a lower environmental impact.

The two main refrigerants currently used in data centre cooling systems are R134a and, especially, R410a. Whilst both have an ozone depletion potential (ODP) of zero, their global warming potential (GWP) ratings of 1430 and 2088 respectively are over a thousand times higher than that of carbon dioxide. R-32, on the other hand, conveys heat efficiently enough to reduce total energy usage by up to 10% and, due to its chemical structure, has a GWP rating of just 675, up to 68% lower.

"The environmental impact of the data centre industry is significant, estimated at between 5-9% of global electricity usage and more than 2% of all CO2 emissions," says David Keegan, CEO of DataQube. "In light of COP26 targets, the industry as a whole needs to rethink its overall energy usage if it is to become climate neutral by 2030, and our novel system is set to play a major part in green initiatives."

"For data centre service providers it's important that their operations are state of the art when it comes to energy efficiency and the GWP of the refrigerants used, since it impacts both their balance sheet and their sustainability," comments Henrik Abrink, Managing Director of Primaria. "With the development and implementation of R-32 in the DataQube cooling units we have taken a step further to deliver high added value on both counts, in a solution that is already proving to be the most energy efficient edge data centre system on the market."

Unlike conventional data centre infrastructure, DataQube's unique person-free layout reduces power consumption, and with it CO2 emissions, by as much as 56%, because the energy transfer is primarily dedicated to powering computers. Exploiting next-generation cooling products such as R-32, together with immersive cooling in its core infrastructure, offers the potential to reduce these figures further. DataQube's efficient use of space, combined with optimised IT capacity, makes for a smaller physical footprint because less land, fewer raw materials and less power are needed from the outset. Moreover, any surplus energy may be reused for district heating, making the system truly sustainable.
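The GWP comparison in this piece reduces to simple arithmetic. The sketch below is a minimal illustration using only the GWP figures quoted in the article (R134a: 1430, R410a: 2088, R-32: 675), showing how the "up to 68% lower" figure is obtained.

```python
# GWP comparison using the figures quoted in the article
# (relative to CO2, which has a GWP of 1 by definition).

gwp = {"R134a": 1430, "R410a": 2088, "R-32": 675}

for legacy in ("R134a", "R410a"):
    reduction = 1 - gwp["R-32"] / gwp[legacy]
    print(f"R-32 vs {legacy}: GWP is {reduction:.0%} lower")

# R-32 vs R410a works out at roughly 68% lower, matching the article's
# "up to 68% lower" claim; vs R134a the reduction is about 53%.
```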

The latest trends and developments in cooling solutions
This article was contributed to DCNN by nVent, on the latest trends and developments in cooling.

For facility planners, thermal engineers, architects and managers responsible for implementing powerful IT equipment (ITE), the goal is to maintain high availability at minimal operational cost while minimising energy consumption. To do so, all equipment must be kept below a specified temperature range. However, legacy cooling in data centres, server rooms and other IT environments uses technology based on traditional air conditioning systems that alone struggle to keep up with the rising heat load demands of today's high-density, high-performance connected technologies. Consequently, sustainability gets sacrificed, and facilities can experience equipment failures, unplanned downtime and soaring energy costs.

To run the latest and greatest IT equipment, liquid cooling has become the standard, and high-performance servers are designed with liquid cooling installed. In the right applications, these systems offer a strong return on investment and total cost of ownership when compared with non-liquid approaches. In recent years, the range of liquid cooling solutions has advanced to help meet the unique protection needs of applications in virtually any environment. Whether for smaller decentralised edge computing, harsh environments or large data centre installations, no one-size-fits-all approach to thermal management exists. This article shares key considerations for choosing among the comprehensive range of standard and customised air, indirect and direct water-cooling solutions. It also provides guidance on identifying which solution will best protect your ITE assets and your bottom line.

Understanding the range of cooling solutions

The advanced thermal management solutions that exist today offer the breadth, flexibility and modularity needed to meet unique application needs and address a range of ITE heat-load challenges. For reference, the primary cooling approaches include:

Air cooled – heat is transferred directly to the room air and cooled via traditional data centre cooling.

Indirect water-cooled – heat is transferred indirectly to water through an air-to-water heat exchanger located within the row or a single cabinet.

Direct water-cooled – heat is transferred directly to an attached heat transfer component, such as a cold plate.

Hybrid direct and indirect water-cooled – the highest energy-consuming components are cooled selectively with direct-contact liquid cooling, and the balance of the cabinet is cooled via a secondary air-to-water cooling device, such as a Rear Door Cooler (RDC).

Air cooling is becoming less feasible in high-density data centres as heat loads increase and server racks become so densely configured that air circulation is impeded. Today's air cooling solutions can generally manage 10 kilowatts or less cost-effectively. Data centres that try to cope by increasing air velocity can quickly become a wind-tunnel-like environment that is difficult to work in. With this in mind, as energy needs increase, so does the likelihood of needing a liquid cooling component in your thermal management strategy. Liquid cooling systems offer effective solutions for achieving required temperature parameters and lowering the energy consumption of the cooling system, thus lowering operating costs. Liquid provides a much greater heat transfer capacity – around 3,500 times higher than that of air – because it is far denser.
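As an illustration of why liquids carry heat so much more effectively, the sketch below compares the volumetric flow of air and water needed to remove the same heat load using the basic sensible-heat relation Q = rho * V * cp * dT. The fluid properties are approximate textbook values, and the 10kW load and 10K temperature rise are assumptions for illustration, not nVent figures.

```python
# Rough comparison of air vs water flow needed to remove the same heat load,
# using the sensible-heat relation Q = rho * V_dot * cp * dT.
# Property values are approximate textbook figures; the 10 kW load and 10 K
# temperature rise are assumed for illustration only.

heat_load_w = 10_000.0   # 10 kW rack (assumed)
delta_t_k = 10.0         # allowed coolant temperature rise (assumed)

fluids = {
    # name: (density kg/m^3, specific heat J/(kg*K))
    "air":   (1.2, 1005.0),
    "water": (998.0, 4186.0),
}

flows = {}
for name, (rho, cp) in fluids.items():
    v_dot = heat_load_w / (rho * cp * delta_t_k)   # volumetric flow, m^3/s
    flows[name] = v_dot
    print(f"{name:>5}: {v_dot * 1000:.2f} L/s to move "
          f"{heat_load_w / 1000:.0f} kW at dT = {delta_t_k:.0f} K")

print(f"Air needs roughly {flows['air'] / flows['water']:.0f}x "
      f"the volumetric flow of water")
```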
Consequently, direct-contact liquid cooling (direct-to-chip), in which a cold plate is placed directly on the processors inside the server, offers the highest efficiencies. The cold plate has internal micro-channels and an inlet and outlet through which liquid is circulated to carry away heat. Low-profile cold plates used in direct-contact liquid cooling also have the advantage of taking up much less rack space than traditional heat sinks. However, while liquid cooling offers huge advantages in moving heat, managing the entire heat load of the rack with liquid cooling can be unnecessary and cost-prohibitive for some applications. In many cases, a hybrid solution combining both liquid and air cooling is a more accessible and scalable deployment option that still leverages the highly efficient heat transfer of liquids. For example, more current cooling designs include aisle containment and rack-based cooling. These models increase efficiency and often incorporate air-to-liquid heat transfer to leverage the higher heat transfer qualities of liquids.

Choosing the right solution for your application

To determine the most efficient and effective cooling technologies and layouts for your specific IT hardware, it is important to evaluate the current and future thermal profile of the environment and model the necessary infrastructure modifications or layout changes. As you do, also consider these important factors:

Existing equipment capabilities. The placement of IT equipment, air handlers, close-coupled cooling and direct liquid cooling technologies within the data centre, server room or other ITE environment is critical to the efficient use of available space and cooling capacity.

Current cooling needs and anticipated future ones, taking into account potential technological advances, business growth and other objectives.

Current, pending and trending global standards, as well as relevant regulations.

Resources versus return on investment and total cost of ownership. Lead times for facility upgrades and capital planning requirements mandate comprehensive planning. For example, a university or research facility may only have 6-10 racks but still require liquid cooling because of its high-performance computing power.

In one recent case, a global IT original equipment manufacturer needed robust cooling for its data centre. In comparing an air cooling solution and a liquid one, it found some critical differences. Both solutions work, but the liquid cooling solution offered significant footprint and energy efficiency advantages. It also set the OEM up for future upgrades as advances in technology increase performance, power and density needs.

Depending on your specific application and environmental needs, myriad solutions exist, so you need to weigh the options to determine the most appropriate cooling solution. Once you have determined the right cooling solution for your application, you will need to gather relevant data on your facility requirements to help ensure a smooth, streamlined installation. This will entail providing detailed information on the equipment, configuration, thermal attributes and design needs of your primary loop, secondary loop and architecture, as well as any unique requirements.
You will also want to create a routine maintenance plan. For liquid cooling solutions, this will include scheduling required inspections and ordering replacement parts in advance to make sure the fluid in the system stays within its safe operating range.
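To make the trade-offs described above concrete, here is a minimal, illustrative selection helper. The roughly 10kW air-cooling limit comes from the article; the other thresholds and category labels are assumptions made for this sketch, not nVent guidance, and any real selection would also weigh the facility, regulatory and cost factors listed earlier.

```python
# Minimal, illustrative cooling-approach selector. The ~10 kW air-cooling
# limit is taken from the article; the other thresholds are assumptions
# made for this sketch, not vendor guidance.

def suggest_cooling_approach(rack_load_kw: float) -> str:
    """Suggest a starting-point cooling approach for a given rack heat load."""
    if rack_load_kw <= 10:
        return "air cooled (traditional data centre cooling)"
    elif rack_load_kw <= 30:          # assumed threshold
        return "indirect water-cooled (in-row or rear-door heat exchanger)"
    elif rack_load_kw <= 60:          # assumed threshold
        return "hybrid direct + indirect water-cooled"
    else:
        return "direct water-cooled (cold plates on CPUs/GPUs)"

for load in (5, 18, 45, 80):
    print(f"{load:>3} kW/rack -> {suggest_cooling_approach(load)}")
```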

GF Piping Systems presents solutions for liquid cooling in data centres
Creating the winning formula for energy efficiency, time to market and carbon neutrality is imperative for today's data centre solutions. At this year's Data Centre World (DCW), held in Frankfurt from 8-9 December, GF Piping Systems will showcase how prefabrication, sustainable technologies and engineering transform the planning, building and cooling of mission-critical facilities.

Demands for higher capacity in every new data centre are increasing in parallel with energy efficiency and sustainability requirements, and the demands on the mission-critical cooling plant increase accordingly: around 50% of the power usage, and therefore energy costs, in a data centre originates from the cooling plant. Mission-critical facility owners are highly focused on reducing these costs while working towards their common goal, the net-zero data centre. GF Piping Systems will present its pioneering prefabrication solutions for increased project deployment quality and efficiency, and can help optimise the energy efficiency of the complete cooling plant with prefabrication, plastic piping systems and Non-Destructive Testing (NDT).

"Our engineered plastic piping solutions for cooling applications are the result of years of pioneering innovation," says Mark Bulmer, Global Market Development Data Centers. "Combined with our global network of prefabrication shops, GF Piping Systems provides owners and operators of data centres with a quicker set-up and more efficient and reliable operation during the entire service life of their projects, reducing energy usage for life." Mark Bulmer will deep-dive into 'Plastic pipes for liquid cooling' at the Critical Infrastructure Theatre on 8 December at 11:20 CET.

Planners and installation technicians are under considerable time pressure when installing new plants and modernising existing ones. The construction sites, often located in geographically remote areas, must adhere to local regulations covering areas such as energy efficiency and water protection. Project delays often incur high contractual penalties and must therefore be strictly avoided. GF Piping Systems shortens the time from the planning stage to commissioning by employing offsite prefabrication of framed modules that are simple to install on-site, meaning projects can be executed cost-efficiently and on schedule.


