Liquid Cooling Technologies Driving Data Centre Efficiency


Omdia: The data centre market is healthy and ready for AI demand
The recent explosion of high-profile AI successes and investment announcements has captured the attention and imagination of the business world. In light of the latest AI media frenzy, new research from Omdia reveals that the data centre market has a heightened awareness of practical applications for AI that promise to improve productivity and lower costs. The collective evidence so far suggests this will not be just another flash in the pan.

Colocation businesses, including both multi-tenant and single-tenant data centre providers, are expected to ride this wave of new AI growth. Some of these companies have adapted their data centre designs to enable higher rack power density. The power consumption of servers configured for AI training is akin to that of high-performance computing (HPC) clusters for scientific research. “The colocation providers able to provide the highest rack densities and access to liquid cooling will now have the upper hand in the market for data centre space,” says Alan Howard, Principal Analyst at Omdia.

Research from Omdia projects continued strong growth in the colocation market, and the proliferation of AI hardware is likely to be an added tailwind. The colocation industry is healthy and is expected to reach $65.2bn in 2027, a five-year CAGR of 9.4%, according to Omdia’s Colocation Services Tracker - 2023. Depending on how the acceleration in AI hardware deployments materialises, colocation data centre revenue could get a significant boost over the next few years.

The top three colocation service providers in the world are Equinix, Digital Realty, and NTT Global Data Centres (NTT GDC). Between them, they operate over 700 data centres and have over 100 construction projects underway, as covered in Omdia’s Data Centre Building Tracker – 1H23. These three companies represent 33% of the total 2022 revenue of $41.6bn, according to Omdia’s Colocation Services Tracker - 2023.

Not all data centres can handle AI or HPC equipment, but these companies and numerous other noteworthy colocation service providers have been anticipating this emerging growth trend. Data centres built over the last couple of years, and many of those under construction, have been designed and architected to accommodate these high-power-density equipment racks. These design and architecture properties include high-density power distribution management and precision cooling for thermal management to protect servers. In some cases, colocation customers require direct-to-chip liquid cooling, which calls for special data centre plumbing designs to give customers access to a liquid cooling loop, or the option to install immersion cooling tanks in which the hottest servers are sunk into a bath of non-conductive fluid.

Alan concludes, “Achieving these advanced data centre operating characteristics is not for the faint of heart or for companies with an aversion to high capital expenditure (capex). Colocation companies like Equinix, Digital Realty, NTT GDC, Flexential, DataBank, Compass, Aligned, Iron Mountain, and a host of others are in the business of taking that capital risk to build data centres so that enterprises and cloud service providers don’t have to.”
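As a rough consistency check on the figures quoted above (an illustrative calculation, not part of Omdia’s research), growing the 2022 market of $41.6bn at a 9.4% compound annual rate for five years lands very close to the 2027 forecast:

$$41.6 \times (1 + 0.094)^{5} \approx 41.6 \times 1.567 \approx 65.2 \ \text{(\$bn)}$$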

Why hybrid cooling is the future for data centres
Gordon Johnson, Senior CFD Manager, Subzero Engineering

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores one of the most fundamental factors potentially hindering adoption: many data centre applications will continue to use air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, and by using air cooling, containment, and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.

Today, many data centres are seeing power density per IT rack rise to levels that just a few years ago seemed extreme and out of reach, but that today are considered common and typical, while still deploying air cooling. In 2020, for example, the Uptime Institute found that due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres. This increase has left data centre stakeholders wondering whether air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits, and whether liquid cooling is the long-term solution. However, the answer is not a simple yes or no.

Moving forward, it’s expected that data centres will transition from 100% air cooling to a hybrid model encompassing air- and liquid-cooled solutions, with all new and existing air-cooled data centres requiring containment to improve efficiency, performance, and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed. One might ask why the debate of air versus liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what’s driving the need for liquid cooling, what the other options are, and how we can evaluate these options while continuing to use air as the primary cooling mechanism.

Can air and liquid cooling coexist?

For those who are newer to the industry, this is a position we’ve been in before, with air and liquid cooling successfully coexisting while removing substantial amounts of heat via intra-board air-to-water heat exchangers. This continued until the industry shifted primarily to CMOS technology in the 1990s, and we’ve been using air cooling in our data centres ever since. With air being the primary means used to cool data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, it has published a common set of criteria for cooling IT servers, developed with the participation of ITE and cooling system manufacturers, entitled ‘TC9.9 Thermal Guidelines for Data Processing Environments’. ASHRAE has focused on the efficiency and reliability of cooling the ITE in the data centre. Several revisions have been published, with the latest released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses more on cooling high-density servers and racks, with a trade-off in energy efficiency due to the lower cooling supply air temperatures recommended to cool the ITE.
As to the question of whether air and liquid cooling can coexist in the data centre white space, they have done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What do server power trends reveal?

It’s easy to assume that, when it comes to power and cooling, one size will fit all, both now and in the future, but that’s not accurate. It’s more important to focus on the actual workload of the data centre we’re designing or operating. In the past, a common assumption with air cooling was that once you went above 25kW per rack, it was time to transition to liquid cooling. But the industry has made advances that enable data centres to cool up to, and even beyond, 35kW per rack with traditional air cooling.

Scientific data centres, which include largely GPU-driven applications like machine learning, AI, and heavy analytics such as crypto mining, are the areas of the industry typically transitioning towards liquid cooling. But if you look at other workloads, such as cloud and most business applications, power density is rising, yet air cooling still makes sense in terms of cost. The key is to look at the issue from a business perspective: what are we trying to accomplish with each data centre?

What’s driving server power growth?

Up to around 2010, businesses used single-core processors; once available, they transitioned to multi-core processors, yet power consumption remained relatively flat with these dual- and quad-core parts. This enabled server manufacturers to concentrate on lower airflow rates for cooling ITE, which resulted in better overall efficiency. Around 2018, with processor geometries continually shrinking, higher core-count processors became the norm, and with these reaching their performance limits, the only way to continue delivering the performance demanded by compute-intensive applications has been to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, in some cases data centres were having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid.

Server manufacturers have also been increasing the temperature delta across servers for several years, which again has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. However, server manufacturers are, in turn, reaching their limits, resulting in data centre operators having to increase airflow to cool high-density servers and keep up with rising power consumption.

Additional options for air cooling

Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to, and even greater than, 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.

What about lowering temperatures?

In 2021, ASHRAE released its 5th-generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that will need to use more restrictive supply temperatures than the previous classes of servers.
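To make the airflow-versus-temperature-delta relationship above concrete, the sketch below applies the standard sensible-heat equation to some assumed rack loads and air temperature rises. The rack powers, delta-T values and helper function are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: airflow needed to remove a rack heat load at a given
# air temperature rise (delta-T), using Q = rho * V_dot * c_p * dT.
# All figures are assumptions for illustration only.

RHO_AIR = 1.2    # kg/m^3, approximate air density at ~20 C (assumed)
CP_AIR = 1005.0  # J/(kg*K), approximate specific heat of air (assumed)

def airflow_m3_per_s(rack_power_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) required to carry rack_power_kw away at delta_t_k."""
    return (rack_power_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)

for kw in (5, 20, 35):
    for dt in (10, 15, 20):
        flow = airflow_m3_per_s(kw, dt)
        cfm = flow * 2118.88  # 1 m^3/s is roughly 2,118.88 CFM
        print(f"{kw:>2} kW rack, dT = {dt:>2} K: {flow:5.2f} m^3/s (~{cfm:6.0f} CFM)")
```

Doubling the delta-T halves the required airflow, which is why wider server temperature deltas have helped air cooling keep pace, and why the approach runs out of headroom once per-rack power climbs towards the densities discussed above.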
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor in the next few years. But this transition is not automatic and isn’t going to be for everyone. Liquid cooling is not going to be the ideal solution or remedy for all future cooling requirements. Instead, the choice of liquid cooling over air cooling depends on a variety of factors, including specific location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse, and the physical space available. This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not, and should not, be an approach where only air or only liquid cooling is considered moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.

Supermicro launches NVIDIA HGX H100 servers with liquid cooling
Supermicro continues to expand its data centre offering with liquid-cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies reduce the lead times for a complete installation, increase performance, and result in lower operating expenses, while significantly reducing the PUE of data centres. Savings for a data centre are estimated at 40% for power when using Supermicro liquid cooling solutions compared to an air-cooled data centre. In addition, up to an 86% reduction in direct cooling costs compared to existing data centres may be realised.

“Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centres worldwide,” says Charles Liang, President and CEO of Supermicro. “Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centres. With the amount of power required to enable today's rapidly evolving large-scale AI models, optimising TCO and the Total Cost to Environment (TCE) is crucial to data centre operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time.”

AI-optimised racks with the latest Supermicro product families, including the Intel and AMD server product lines, can be quickly delivered from standard engineering templates or easily customised based on the user's unique requirements. Supermicro continues to offer the industry's broadest product line with the highest-performing servers and storage systems to tackle complex compute-intensive projects. Rack scale integrated solutions give customers the confidence and ability to plug the racks in, connect to the network and become productive sooner than if they managed the technology themselves.

The top-of-the-line liquid-cooled GPU server contains dual Intel or AMD CPUs and eight or four interconnected NVIDIA HGX H100 Tensor Core GPUs. Using liquid cooling reduces the power consumption of data centres by up to 40%, resulting in lower operating costs. In addition, both systems significantly surpass the previous generation of NVIDIA HGX GPU-equipped systems, providing up to 30 times the performance and efficiency for today's large transformer models, with faster GPU-GPU interconnect speed and PCIe 5.0-based networking and storage.

Supermicro's liquid cooling rack-level solution includes a Coolant Distribution Unit (CDU) that provides up to 80kW of direct-to-chip (D2C) cooling for today's highest-TDP CPUs and GPUs across a wide range of Supermicro servers. The redundant and hot-swappable power supplies and liquid cooling pumps ensure that the servers are continuously cooled, even with a power supply or pump failure. The leak-proof connectors give customers the added confidence of uninterrupted liquid cooling for all systems.

Rack scale design and integration has become a critical service for systems suppliers. As AI and HPC have become increasingly critical technologies within organisations, configurations from the server level to the entire data centre must be optimised and configured for maximum performance. Supermicro's system and rack scale experts work closely with customers to explore their requirements, and have the knowledge and manufacturing capacity to deliver significant numbers of racks to customers worldwide.
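As a rough illustration of how lower cooling overhead feeds into PUE (illustrative figures only; these are not Supermicro's published numbers), PUE is total facility power divided by IT power, so shrinking the cooling share pulls the ratio towards 1.0:

```python
# Rough PUE illustration with assumed loads; not Supermicro's published data.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it_load = 1000.0  # kW of IT load (assumed)

# Assumed overheads: 400 kW of cooling plant and 100 kW of other facility load.
air_cooled_pue = pue(it_load, cooling_kw=400.0, other_kw=100.0)

# If direct cooling energy fell by ~86%, in line with the figure quoted above:
liquid_cooled_pue = pue(it_load, cooling_kw=400.0 * (1 - 0.86), other_kw=100.0)

print(f"Air-cooled PUE    ~ {air_cooled_pue:.2f}")
print(f"Liquid-cooled PUE ~ {liquid_cooled_pue:.2f}")
```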

Dr Kelley Mullick joins Iceotope Technologies
Iceotope Technologies has announced the appointment of Kelley A. Mullick, PhD, as Vice President, Technology Advancement and Alliances. Recognised for her expertise in immersion and cold plate liquid cooling, Kelley joins the company from Intel Corporation, where she worked in product management and strategy for the data centre and AI group and developed Intel’s first immersion cooling warranty, announced at Open Compute Project (OCP) 2022. Kelley also holds a BSc in Chemistry and Biology from Walsh University, an MSc in Chemical Engineering from the University of Akron and a PhD in Chemical Engineering from Ohio State University.

David Craig, CEO, Iceotope Technologies, says, “Kelley is a welcome addition to the Iceotope team. She joins us as the market is turning increasingly to liquid cooling to solve a range of challenges, from increasing processor output and efficiency to delivering greater data centre space optimisation and reducing the energy waste and inefficiencies associated with air cooling for greater data centre sustainability. Kelley is a dynamic and results-oriented problem solver who brings solid systems engineering know-how. With many industry accolades, she is also a champion for diversity and inclusion, having personally developed initiatives for women and under-represented minorities.”

Kelley says, “As a systems engineer, I fixate on technical requirements in tandem with business requirements to drive solutions. Today, the existing challenge of mitigating the climate emergency is joined by the technological demands of AI applications such as ChatGPT. These compute-intensive operations need the support of compute-intensive infrastructure. The limitations and inefficiencies of air cooling are well known. Only precision immersion liquid cooling can meet the environmental needs of all processor board components in a familiar form factor that fits with the way we design data centres and carry out moves, adds and changes.

“With the focus on sustainability at Intel, I became familiar with all types of liquid cooling. When I appraised Iceotope’s technology, I saw complete differentiation from anything else in the market. In addition to all the benefits of liquid cooling, it offers high levels of heat reuse, almost completely eliminates the use of water, and offers greater compute density and scalability than other solutions like cold plate and tank immersion. It is the technology of the future that I want to invest my calories in.”

Kelley to build out Iceotope’s ecosystem

With responsibility for building and maintaining alliances with OEMs and technology partners, Kelley’s role will also make Iceotope technology more accessible to the wider market. The company currently has alliances with leading global vendors, including the IT giants HPE and Lenovo, physical infrastructure manufacturers nVent and Schneider Electric, and technology supply chain specialist Avnet. As things stand, Iceotope precision liquid cooling solutions can be supplied with a warranty almost anywhere around the globe. By augmenting its ecosystem with additional technology and channel partners, Iceotope can build upon its aptitude for ease of installation and use, to make precision liquid cooling the first choice for new data centre developments as well as for upgrading existing facilities, as operators strive for greater cooling efficiency and reliability, and increased operational sustainability.
Engineered to cool the whole IT stack from hyperscale to the extreme edge, Iceotope’s patented chassis-level precision liquid cooling offers up to 96% water reduction, up to 40% power reduction, and up to 40% carbon emissions reduction per kW of ITE.

Kelley, a champion for minorities in tech

Kelley is passionate about diversity and inclusion. She has worked throughout her career to help prepare and resource women, as well as other under-represented minorities, to be confident and successful in their own careers. In addition to creating programmes in the workplace, she has also invested her personal time in developing free-to-access online materials in support of greater equality in the workforce.

Showcase the next generation in modular data centres
Mission Critical Facilities International (MCFI) is collaborating with Iceotope and nVent at Supercomputing 2022 (SC22), held November 13-18 in Dallas at the Kay Bailey Hutchison Convention Center. Together, the companies will showcase the features and benefits of their prefabricated all-in-one data centre solutions, highlighting fully integrated, precision immersion liquid cooling. MCFI’s liquid-cooled containers allow precision immersion liquid cooling to be deployed as a stand-alone solution in any location and climate - even at the far edge.

MCFI is leading the next generation of modular/prefabricated data centres with its customisable GENIUS solutions, as well as MicroGENIUS, a sustainable microgrid communications shelter that delivers efficient, grid-independent energy. Both solutions provide reduced CapEx and OpEx, enhanced speed to market, global repeatability and scale, sustainable designs, reduced-carbon building materials and zero-emission technology.

MCFI’s energy-efficient, scalable and cost-effective containerised/prefabricated data centre solution features innovative integrations with Iceotope’s precision immersion technology and nVent’s electrical connection and protection solutions. The MCFI solution allows for high-density computing anywhere, combining high-density loads alongside standard IT loads. It also eliminates mechanical cooling in the data centre while maximising free cooling to reduce energy consumption and cost, by applying a hybrid water cooling technique that uses the return water from the rear door heat exchangers to feed the Iceotope precision immersion technology.

The alliance is beneficial to enterprise data centres, high-performance and edge computing, smart manufacturing, content delivery, telemedicine, AI and virtual reality. The combined solution reduces execution complexities, lowers costs and eases the implementation of liquid cooling in retrofit and new-build environments. “The MCFI, Iceotope and nVent relationships further exemplify the importance of collaborative commitments in developing innovative and sustainable solutions for the future of digital infrastructure and our planet,” says Patrick Giangrosso, Vice President at MCFI.

Visit MCFI, Iceotope and nVent at SC22, Booth 427, for a deeper dive into modular data centre solutions and the latest innovations in liquid cooling technologies.

Rising temperatures highlight need for liquid cooling systems
The rising frequency of extreme weather in Europe necessitates a move towards liquid cooling systems, suggests a sector expert. This warning follows record-breaking temperatures in the UK last month, with some locations exceeding 40°C. As a result, a number of high-profile service providers in the country experienced outages that impacted customer services, the effects of which were felt as far away as the US. One operator attributed the failure to ‘unseasonal temperatures’.

However, with the UK Met Office warning that heatwaves are set to become more frequent, more intense and longer-lasting, Gemma Reeves, Data Centre Specialist at Alfa Laval, believes that data centres will need to transition to liquid cooling systems in order to cope. She says: “The temperatures observed last month are a sign of what is to come. Summers are continuing to get hotter by the year, so it’s important that data centres are able to manage the heat effectively.

“Mechanical cooling methods have long been growing unfit for the needs of the modern data centre, with last month’s weather only serving to highlight this. As both outside temperatures and rack densities continue to rise, more efficient approaches to cooling will clearly be necessary.”

Traditional mechanical cooling systems make use of an electrically powered chiller, which creates cold air to be distributed by a ventilation system. However, most mechanical cooling systems in the UK are designed for a maximum outdoor temperature of 32°C - a figure which is now regularly exceeded. Gemma believes that liquid cooling can solve this challenge. Cooling with dielectric fluid rather than air means that the cooling systems may be run at much higher temperatures. Liquid cooling approaches such as direct-to-chip, single-phase immersive IT chassis, or single-phase immersive tub allow the servers to remain cool despite much higher outdoor air temperatures, while maintaining lower energy consumption and providing options for onward heat reuse. In studies, this has also been shown to increase the lifetime of servers by maintaining a stable thermal environment.

Gemma concludes: “The data centre sector remains in an era of air-based cooling. That said, July’s heatwave may be the stark reminder the sector needs that these systems are not sustainable in the long term.

“Liquid cooling is truly the future of data centres. This technique allows us to cool quicker and more efficiently than ever before, which will be a key consideration with temperatures on the rise.”

DataQube Global has the rights to market products of LiquidCool Solutions
DataQube Global has announced that it has obtained exclusive rights to market products of LiquidCool Solutions (LCS) in various markets around the globe. In addition, DataQube has agreed to make an investment in LCS.

The agreement covers LCS’ ZPServer and also the newly launched miniNODE, a next-generation sealed liquid cooling solution for harsh environments, developed using eco-friendly dielectric fluids and intended for mission-critical infrastructure where reliability, low maintenance and equipment longevity are key. DataQube Global is planning to deploy LCS’ miniNODE across its portfolio of edge data centre solutions by the end of 2022, to assist clients in deploying edge technology.

Incorporating LCS’ immersion cooling technology into the design architecture of DataQube Global’s edge data centre products delivers a range of operational and performance advantages, including low maintenance, reduced downtime and extended component life, along with the ability to deliver 1,400 times more cooling power than air. The LCS technology fully supports DataQube in its mission to deploy edge data centre systems that are eco-friendly. Unlike other solutions, DataQube Global’s unique person-free layout reduces power consumption and CO2 emissions by up to 50%, as the energy transfer is primarily dedicated to powering computers. Exploiting next-generation cooling technologies such as those developed by LCS offers the potential to reduce these figures further.

“We have already secured a major deal in the US to augment our presence in North America,” says David Keegan, Group CEO of DataQube Global. “Investing in LiquidCool Solutions cements our position as a serious player in the data centre industry and a force to be reckoned with.”

“We are extremely happy to formalise our relationship with DataQube Global. Their rapidly expanding presence in edge computing and harsh environment markets provides LCS with new opportunities and complements the growth plans of DataQube. The relationship with DataQube is a key element for introducing our patented chassis-based single-phase immersion technology to the burgeoning edge and data centre markets,” concludes Ken Krei, CEO of LiquidCool Solutions.

Inspur Information and JD Cloud launch liquid-cooled server
Inspur Information and JD Cloud have announced they have jointly launched the liquid-cooled rack server ORS3000S. The server utilises cold-plate liquid cooling to reduce data centre power consumption by 45% compared to traditional air-cooled rack servers, making it a green solution that dramatically reduces total cost of ownership (TCO).

Cold-plate liquid cooling technology allows the ORS3000S to improve heat dissipation efficiency by 40%. It adopts a centralised power supply design with N+N redundancy that is capable of meeting the demands of whole-rack power supply, and can function at the highest efficiency throughout operation due to power balance optimisation. This results in an overall efficiency increase of 10% compared to a distributed power supply. Pre-installation at the factory, plus efficient operations and maintenance (O&M), allow for 5-10x faster delivery and deployment. The ORS3000S has been widely deployed in JD Cloud data centres, providing computing power support for JD during major shopping events. It brings a performance increase of 34–56% while minimising power usage effectiveness (PUE), carbon emissions and energy consumption.

Inspur Information has been a pioneer in direct and indirect cooling. With new heat conduction technologies such as phase-change temperature uniformity, micro-channel cooling and immersion cooling, Inspur achieves a 30–50% optimisation in the comprehensive energy efficiency of the cooling system. This is achieved via cooling improvements throughout the server design, including micro/nano-cavity, phase-change, and uniform-temperature designs for high-power components such as the CPU and GPU. This improves heat dissipation performance by 150% compared to traditional air cooling technologies.

Experienced in the industrial application of liquid cooling, Inspur has built one of the world’s largest liquid-cooled data centre production facilities, with an annual manufacturing capacity of 100,000 servers. This includes a full-chain liquid-cooling smart manufacturing solution covering R&D, testing, and delivery for the mass production of cold-plate liquid-cooled rack servers. As a result, the PUE for data centres is less than 1.1, and the entire delivery cycle takes five to seven days.

Inspur Information’s cold-plate, heat-pipe, and immersion liquid-cooled products have been deployed at large scale. In addition, Inspur offers complete solutions for liquid-cooled data centres, including primary and secondary liquid cooling circulation and the coolant distribution unit (CDU). This total solution enables full-path liquid cooling circulation for data centres, with overall PUE reaching the design limit of less than 1.1. Inspur holds more than 100 core patents in liquid cooling, and has participated in the formulation of technical standards and test specifications for cold-plate and immersion liquid-cooled products in data centres. The company is committed to, and will continue to lead, the rapid development of the liquid cooling industry and the large-scale application of innovative liquid cooling technology.

DCNN Exclusive: Making sustainability gains with liquid cooling
This piece was written by Stuart Crump, Director of Sales at Iceotope Technologies Limited, on how liquid cooling could be vital in the race to net zero.

Environmental, Social and Governance (ESG) objectives have started to drive data centre business goals as the world transitions to a low-carbon economy. Sustainability is no longer viewed as a cost to business; indeed, many customers are now using sustainability as a criterion for vendor selection. Positive action to reduce emissions is not only good for the planet, it’s also good for business. It will also signpost efficient data centres to an enlightened market.

New developments in liquid cooling can assist data centre sustainability targets by significantly reducing facility energy consumption for mechanical services, decreasing water use, and providing a platform for high-grade reusable heat. Together, the characteristics of liquid cooling add up to bottom-line benefits as well as ecological advantages for data centre operators, helping deliver competitive advantage in this highly commercialised sector.

According to the IEA, data centres account for around 1% of global electricity demand. While data centre workloads and internet traffic have multiplied dramatically since 2015, energy use has remained relatively flat. However, demand for more digital services is growing at an astounding rate. For every bit of data that travels the network, a further five bits are transmitted within and among data centres. Immersion liquid cooling can greatly benefit data centre sustainability by reducing overall cooling energy requirements by up to 80%.

Data centre operators and customers now understand that air-cooled ITE environments are reaching the limits of their effectiveness. As compute densities increase, the energy demands of individual servers and racks spiral upwards. Legacy air-cooled data halls cannot move the volume of cool air through the racks required by the latest CPU and GPU systems to maintain operating temperature. This means operators must have a plan that includes liquid cooling if these sites are to remain viable.

Liquid cooling techniques such as precision immersion cooling circulate small volumes of a harmless dielectric compound across the surface of the server, removing almost 100% of the heat generated by the electronic components. The latest solutions use a sealed chassis that enables IT equipment, including servers and storage devices, to be easily added to or removed from racks with minimal disruption and no mess. Precision liquid cooling removes the requirement for server fans by eliminating the need to blow cool air over the IT components. Removing air cooling infrastructure from data centres also removes the capital expense of some cooling plant, as well as the operational costs of installation, power, servicing and maintenance. Removal of fans and plant not only produces an immediate benefit in terms of reducing noise in the technical area, it also frees up useful space in racks and cabinets as well as in plant rooms. Space efficiency equates either to facilities with a smaller physical footprint, or to the ability to host larger numbers of high-density racks. Importantly, precision liquid cooling provides futureproof, scalable infrastructure to meet the provisioning requirements of tomorrow’s workloads and storage needs.

Precision cooling and data centre water use

The media reports widely on the lack of clean water for irrigation and consumption in drought-hit areas around the world.
However, what has sometimes been called the data centre’s ‘dirty little secret’ is the volume of potable water required to operate certain data centres. Many air-cooled data centres need water, and lots of it. A small 1MW data centre using a conventional air-cooling process can use around 25.5 million litres of water every year. With mainly air-cooled processes, the data centre industry is currently consuming billions of litres of water each year. On the other hand, precision immersion liquid cooling consumes zero water in most cases and can be installed anywhere - including many existing data centres. Allowing for maintenance and water loop refreshes, the water in the cooling system can easily reduce data centre water use by more than 95%.

The benefit of all this hot air…

Creating a revenue generator from a cost item on the balance sheet is the ultimate dream come true. Currently, air-cooled data centres eject heat into the atmosphere in the vast majority of cases. Liquid cooling techniques which capture and remove high-grade heat from the servers offer the capability to redirect this heat to district heating networks, industrial applications and other schemes. Using well-established techniques, this revenue stream, or sustainability project, could help to heat industrial sites and local facilities, such as schools and hospitals.

Climate change, government intervention with emission standards, and public and investor pressure have helped drive change in the wider data centre business outlook. Savings and new revenue streams that benefit an organisation’s sustainability credentials warrant a critical review of their cost/benefit. There is an opportunity for data centres to move away from previous notions of how they operate, towards much greater efficiency and more sustainable operations.
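As a rough illustration of the water figures quoted above (an indicative calculation, not from the article): 25.5 million litres per year for a fully utilised 1MW air-cooled site corresponds to a water usage effectiveness of roughly

$$\mathrm{WUE} \approx \frac{25.5 \times 10^{6}\ \text{L}}{1{,}000\ \text{kW} \times 8{,}760\ \text{h}} \approx 2.9\ \text{L/kWh},$$

and a reduction of more than 95% would leave under about 1.3 million litres per year for the same site.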

EuroEXA reports European Exascale programme innovation
At launch, one of the largest projects ever funded under EU Horizon 2020, EuroEXA aimed to develop technologies to meet the demands of Exascale high-performance computing (HPC) and provide a ground-breaking platform for breakthrough processor-intensive applications. EuroEXA brought together expertise from a range of disciplines across Europe, from leading technologists to end-user companies and academic organisations, to design a solution capable of scaling to a peak performance of 400 PetaFLOPS, with a peak system power envelope of 30MW that approaches PUE parity using renewables and chassis-level precision liquid cooling.

Dr Georgios Goumas, EuroEXA Project Coordinator, says, “Today, high-performance computing is ubiquitous and touches every aspect of human life. The need for massively scalable platforms to support AI-driven technology is critical to facilitate advances in every sector, from enabling more predictive medical diagnoses, treatment and outcomes to providing more accurate weather modelling so that, for example, agriculture can manage the effects of climate change on food production.”

EuroEXA Demonstrates EU Innovation on an equal footing with RoW

Meeting the need for a platform that answers the call for increased sustainability and a lower operational carbon footprint, the 16-partner coalition delivered an energy-efficient solution. To do so, the partners overcame challenges throughout the development stack, including energy efficiency, resilience, performance and scalability, programmability, and practicality. The resulting innovations enable a system that is more compact and cooler, reducing both the cost per PetaFLOPS and its environmental impact; that is robust and resilient across every component and manages faults without extended downtime; that provides a manageable platform which will continue to deliver Exascale performance as it grows in size and complexity; and that harnesses open-source systems to ensure the widest possible range of applications, keeping it relevant and able to impact real-world applications.

The project extended and matured leading European software stack components and productive programming model support for FPGA and Exascale platforms, with advances in Maxeler MaxJ, OmpSs, GPI and BeeGFS. It built expertise in state-of-the-art FPGA programming through the porting and optimisation of 13 FPGA-accelerated applications, in the HPC domains of Climate and Weather, Physics and Energy, and Life Sciences and Bioinformatics, at multiple centres across Europe.

EuroEXA innovation being applied today at ECMWF and Neurasmus

A prototype of a weather prediction model component extracted from ECMWF’s IFS suite demonstrated significantly better energy-to-solution than current HPC nodes, achieving a 3x improvement over an optimised GPU version running on an NVIDIA Volta GPU. Such an improvement in execution efficiency provides an exciting avenue for more power-efficient weather prediction in the future. Further successful outcomes were achieved through the healthcare partnership with the Neurasmus programme at the Amsterdam UMC, where brain activity is being investigated. The platform was used to generate more accurate neuron simulations than was previously possible, helping to predict healthcare outcomes for patients more accurately.
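As an illustrative check of the headline target above (our own calculation, not a figure published by the project): a peak of 400 PetaFLOPS within a 30MW system envelope corresponds to an energy efficiency of roughly

$$\frac{400 \times 10^{15}\ \text{FLOPS}}{30 \times 10^{6}\ \text{W}} \approx 13.3\ \text{GFLOPS/W}.$$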
EuroEXA legacy – an extensive FPGA testbed to aid further developments

Outcomes generated include deploying what is believed to be the world’s largest network cluster of FPGA (Field Programmable Gate Array) testbeds, configured to drive high-speed multi-protocol interconnect, with Ethernet switches providing low latency and high switching bandwidth. The original proposal was for three FPGA clusters across the European partnership. However, COVID-19 travel restrictions necessitated an increased resource of 22 testbeds, developed in various partner locations. This benefited the project by massively increasing the permutations and iterations available, and it has also provided a blueprint for several partners to develop high-performance FPGA-based technologies. Partners in the programme have committed to further technology developments to support the advances made by the EuroEXA project, now targeted at other applications.


