Thursday, April 24, 2025

Cooling


Castrol and Schneider Electric launch liquid cooling lab in Shanghai
Castrol and Schneider Electric have opened a new liquid cooling technology co-laboratory in Shanghai under a strategic partnership agreement. The collaboration aims to offer customers new innovations in data centre cooling technology. The co-laboratory will support the development of benchmark liquid cooling projects for data centres and will also serve as a jointly branded customer demonstration centre, showcasing significant breakthroughs in liquid cooling technology to the data centre industry.

Castrol and Schneider Electric will work together on in-depth product development and on projects that address the practical technical challenges faced by customers, such as compatibility between the cooling liquid and devices and improving heat dissipation, among other issues. Through joint research and development, technology sharing and other approaches, both companies aim to expand the adoption of liquid cooling technology across various scenarios.

Castrol's high-performance cooling liquids will be integrated with Schneider Electric's data centre solutions, including infrastructure such as the Cooling Distribution Unit (CDU), power supplies, server racks and intelligent power distribution equipment. In the future, both companies will collaborate on deeper integration by conducting compatibility tests of data centre liquid cooling fluids and infrastructure. This will help ensure the stability and safety of the combined Castrol and Schneider Electric products and provide one-stop liquid cooling solutions for more customers.

At the opening of the co-laboratory, Peter Huang, Vice President, Thermal Management at Castrol, said, "In the era of AI, the construction of liquid cooling infrastructure in data centres is developing rapidly. Through Castrol's strategic partnership with Schneider Electric, we will jointly provide end-to-end solutions for the construction, operation and maintenance of data centres, ranging from the hardware in server rooms to liquid cooling fluids."

Castrol and Schneider Electric are committed to providing higher-quality data centre liquid cooling services and promoting the safe and energy-efficient development of data centres that are fit for the future.

Redefining liquid cooling from the server to the switch
By Nathan Blom, CCO, Iceotope

Liquid cooling has long been a focal point in discussions surrounding data centres, and rightfully so, as these facilities are at the epicentre of an unprecedented data explosion. The explosive growth of the internet, cloud services, IoT devices, social media and AI has fuelled an unparalleled surge in data generation, intensifying the strain on rack densities and placing substantial demands on data centre cooling systems. In fact, cooling power alone accounts for a staggering 40% of a data centre's total energy consumption.

However, the need for efficient IT infrastructure cooling extends beyond data centres. Enterprise organisations are also looking for ways to reduce costs, maximise revenue and accelerate sustainability objectives. Reducing energy consumption is also rapidly becoming a top priority for telcos, whose thousands of sites in remote locations make cutting maintenance costs key as well.

Liquid cooling technologies have emerged as a highly efficient solution for dissipating heat from IT equipment, regardless of the setting. Whether it's within a data centre, an on-premises data hall, a cloud environment or at the edge, liquid cooling is proving its versatility. While most applications have centred on cooling server components, new applications are rapidly materialising across the entire IT infrastructure spectrum.

BT Group, in a ground-breaking move, initiated trials of liquid cooling technologies across its networks to enhance energy efficiency and reduce consumption as part of its commitment to achieving net zero status. BT kicked off the trials with a network switch cooled using Iceotope's Precision Liquid Cooling technology and Juniper Networks QFX Series Switches. With 90% of BT's overall energy consumption coming from its networks, it's easy to see why reducing that consumption is such a high priority.

In a similar vein, Meta released a study last year confirming the practicality, efficiency and effectiveness of precision liquid cooling in meeting the cooling requirements of high-density storage disks. Global data storage is growing at such a rate that there is an increased need for improved thermal cooling solutions. Liquid cooling for high-density storage is proving to be a viable alternative, as it can mitigate variances between drives and improve consistency. Ultimately, it lowers overall power consumption and improves ESG compliance.

Liquid cooling technologies are changing the game when it comes to removing heat from the IT stack. While each of the technologies on the market today has its time and place, there is a reason we are seeing precision liquid cooling in trials that are broadening the use case for liquid cooling. It ensures maximum efficiency and reliability by using a small amount of dielectric coolant to precisely target and remove heat from the hottest components of the server. This approach not only eliminates the need for traditional air-cooling systems, but also allows for greater flexibility in designing IT solutions than any other solution on the market today. There are no hotspots that can slow down performance, no wasted physical space on unnecessary cooling infrastructure, and minimal need for water consumption.

As the demand for data increases, the importance of efficient and sustainable IT infrastructure cooling cannot be overstated. Liquid cooling, and precision liquid cooling in particular, is at the forefront of this journey.
Whether it's reducing the environmental footprint of data centres, enhancing the energy efficiency of telecommunication networks, or meeting the ever-increasing demands of high-density storage, liquid cooling offers versatile and effective solutions. These trials and applications are not just milestones, they represent a pivotal shift toward a future where cooling is smarter, greener, and more adaptable, empowering businesses to meet their evolving IT demands while contributing to a more sustainable world.
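For a rough sense of what the 40% cooling figure cited above means in PUE terms, the short Python sketch below converts a cooling-energy fraction into an approximate PUE and estimates the facility-level saving if that fraction were cut. Only the 40% share comes from the article; the 80% reduction used in the example is an illustrative assumption, not a claim by Iceotope.

# Illustrative arithmetic only. Converts a cooling-energy fraction of total
# facility power into an approximate PUE (total energy / IT energy), then
# estimates the saving if cooling energy were reduced.

def pue_from_cooling_fraction(cooling_fraction):
    """Approximate PUE when cooling is treated as the only overhead."""
    it_fraction = 1.0 - cooling_fraction
    return 1.0 / it_fraction

baseline = pue_from_cooling_fraction(0.40)        # 40% cooling share, per the article
improved = pue_from_cooling_fraction(0.40 * 0.2)  # hypothetical 80% cut in cooling energy

saving = 1 - (improved / baseline)                # same IT load, lower total draw
print(f"Baseline PUE ~{baseline:.2f}, improved PUE ~{improved:.2f}")
print(f"Facility energy saved: ~{saving * 100:.0f}% for the same IT load")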

Why hybrid cooling is the future for data centres
Gordon Johnson, Senior CFD Manager, Subzero Engineering

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores one of the most fundamental factors potentially hindering adoption: many data centre applications will continue to utilise air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, and by using air cooling, containment and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.

Today, many data centres are experiencing increasing power density per IT rack, rising to levels that just a few years ago seemed extreme and out of reach, but that today are considered both common and typical while still being air cooled. In 2020, for example, the Uptime Institute found that, due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres. This increase has left data centre stakeholders wondering whether air-cooled IT equipment (ITE), along with the containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits, and whether liquid cooling is the long-term solution. However, the answer is not a simple yes or no.

Moving forward, it's expected that data centres will transition from 100% air cooling to a hybrid model encompassing air- and liquid-cooled solutions, with all new and existing air-cooled data centres requiring containment to improve efficiency, performance and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed.

Why is the debate of air versus liquid cooling such a hot topic in the industry right now? To answer this question, we need to understand what's driving the need for liquid cooling, what the other options are, and how we can evaluate these options while continuing to utilise air as the primary cooling mechanism.

Can air and liquid cooling coexist?

For those who are newer to the industry, this is a position we've been in before, with air and liquid cooling successfully coexisting while removing substantial amounts of heat via intra-board air-to-water heat exchangers. This continued until the industry shifted primarily to CMOS technology in the 1990s, and we've been using air cooling in our data centres ever since.

With air being the primary means of cooling data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, with the participation of ITE and cooling system manufacturers, it has published a common set of criteria for cooling IT servers entitled 'TC9.9 Thermal Guidelines for Data Processing Environments', focused on the efficiency and reliability of cooling ITE in the data centre. Several revisions have been published, the latest being released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses on cooling high-density servers and racks, with a trade-off in energy efficiency due to the lower cooling supply air temperatures recommended to cool the ITE.
As to the question of whether air and liquid cooling can coexist in the data centre white space, they have done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What do server power trends reveal?

It's easy to assume that, when it comes to power and cooling consumption, one size will fit all, both now and in the future, but that's not accurate. It's more important to focus on the actual workload of the data centre we're designing or operating. In the past, a common assumption with air cooling was that once you went above 25kW per rack, it was time to transition to liquid cooling. But the industry has made advances that enable data centres to cool up to, and even beyond, 35kW per rack with traditional air cooling.

Scientific data centres, which include largely GPU-driven applications like machine learning, AI and intensive analytics such as crypto mining, are the areas of the industry typically transitioning or moving towards liquid cooling. But for other workloads, like the cloud and most businesses, the growth rate is rising yet air cooling still makes sense in terms of cost. The key is to look at the issue from a business perspective: what are we trying to accomplish with each data centre?

What's driving server power growth?

Up to around 2010, businesses utilised single-core processors; once available, they transitioned to multi-core processors. However, power consumption remained relatively flat with these dual- and quad-core processors, which enabled server manufacturers to concentrate on lower airflow rates for cooling ITE and resulted in better overall efficiency. Around 2018, with processor geometries continually shrinking, higher-core-count processors became the norm, and with these reaching their performance limits, the only way for compute-intensive applications to achieve new levels of performance is by increasing power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, some data centres were having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid.

Server manufacturers have also been increasing the temperature delta across servers for several years, which has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. However, server manufacturers are in turn reaching their limits, resulting in data centre operators having to increase airflow to cool high-density servers and keep up with rising power consumption.

Additional options for air cooling

Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to, and even greater than, 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculation and hot spots.

What about lowering temperatures?

In 2021, ASHRAE released its fifth-generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that needs to use more restrictive supply temperatures than the previous classes of servers.
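The relationship described above between server temperature delta and airflow follows directly from the sensible heat equation Q = m_dot x cp x delta-T. The minimal Python sketch below uses standard air properties and the 35kW rack figure mentioned in the article; the delta-T values are illustrative assumptions, not vendor data.

# Rough airflow estimate for an air-cooled rack using Q = m_dot * cp * dT.
# Air properties are textbook values; the delta-T range is an assumption
# chosen only to illustrate the trend discussed in the article.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly 20 C
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def airflow_m3_per_s(heat_load_kw, delta_t_c):
    """Volumetric airflow needed to carry away heat_load_kw at a given server delta-T."""
    mass_flow_kg_s = (heat_load_kw * 1000.0) / (AIR_CP * delta_t_c)
    return mass_flow_kg_s / AIR_DENSITY

for delta_t in (10, 15, 20):   # a wider server delta-T needs less airflow per kW
    flow = airflow_m3_per_s(35, delta_t)
    print(f"35 kW rack, delta-T {delta_t} C: ~{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")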
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor in the next few years. But this transition is not automatic and isn't going to be for everyone. Liquid cooling is not going to be the ideal solution or remedy for all future cooling requirements. Instead, the choice of liquid cooling over air cooling depends on a variety of factors, including specific location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse and the physical space available. This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not, and should not, be an approach where only air or only liquid cooling is considered moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.
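As a purely illustrative way of framing the trade-offs listed above, the toy helper below screens a workload against a few of the factors the author names (rack density, climate, heat reuse, space). The thresholds are assumptions for the sketch only and are not Subzero Engineering's guidance.

# Toy screening helper; thresholds are illustrative assumptions, not industry rules.

def suggest_cooling(rack_kw, hot_humid_climate, wants_heat_reuse, space_constrained):
    """Return a rough cooling-approach suggestion for a single workload."""
    if rack_kw > 35 or wants_heat_reuse or space_constrained:
        return "liquid cooling (or hybrid air + liquid)"
    if rack_kw > 20 or hot_humid_climate:
        return "air cooling with containment, plan a hybrid upgrade path"
    return "air cooling with containment"

print(suggest_cooling(rack_kw=12, hot_humid_climate=False, wants_heat_reuse=False, space_constrained=False))
print(suggest_cooling(rack_kw=50, hot_humid_climate=True, wants_heat_reuse=True, space_constrained=True))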

Paying attention to data centre storage cooling
Authored by Neil Edmunds, Director of Innovation, Iceotope

With constant streams of data emerging from the IoT, video, AI and more, it is no surprise we are expected to generate 463EB of data each day by 2025. How we access and interact with data is constantly changing, and that will have a real impact on the processing and storage of that data. In just a few years, it's predicted that global data storage will exceed 200ZB, with half of that stored in the cloud. This presents a unique challenge for hyperscale data centres and their storage infrastructure.

According to Seagate, cloud data centres choose mass-capacity hard disk drives (HDDs) to store 90% of their exabytes. HDDs are a tried and tested technology, typically found in a 3.5in form factor, and they continue to offer data centre operators cost-effective storage at scale. The current top-of-the-range HDD features 20TB of capacity; by the end of the decade that is expected to reach 120TB or more, all within the existing 3.5in form factor.

The practical implications of this point to a need for improved thermal cooling solutions. More data storage means more spinning of the disks, higher-speed motors and more actuators, all of which translates to more power being used. As disks go up in power, so does the amount of heat they produce. In addition, with the introduction of helium into hard drives in the last decade, performance has improved thanks to less drag on the disks, but the units are now sealed. There is also ESG compliance to consider. With data centres consuming 1% of global electricity demand and cooling accounting for more than 35% of a data centre's total energy consumption, the pressure is on data centre owners to reduce this consumption.

Comparison of cooling technologies

Traditionally, data centre environments use air cooling. The primary way of removing heat with air cooling is by pulling increasing volumes of airflow through the chassis of the equipment. Typically, there is a hot aisle behind the racks and a cold aisle in front of them, and heat is dissipated by exchanging warm air with cooler air. Air cooling is widely deployed, well understood and well engrained in nearly every data centre around the world. However, as the volume of data grows, it is becoming increasingly likely that air cooling will no longer be able to ensure an appropriate operating environment for energy-dense IT equipment.

Technologies like liquid cooling are proving to be a much more efficient way to remove heat from IT equipment. Precision liquid cooling, for example, circulates small volumes of dielectric fluid across the surface of the server, removing almost 100% of the heat generated by the electronic components. There are no performance-throttling hotspots, and none of the front-to-back air-cooling or bottom-to-top immersion constraints present in tank solutions. While initial applications of precision liquid cooling have used a sealed chassis to cool server components, given the increased power demands of HDDs, storage devices are also an ideal application.

High-density storage demands

With high-density HDD systems, traditional air cooling pulls air through the system from front to back. What typically occurs in this environment is that disks at the front run much cooler than those at the back: as the cold air travels through the JBOD device, it gets hotter.
Depending on the capacity of the hard drive, this can result in a temperature differential of 20°C or more between the disks at the front and the back of the unit. For any data centre operator, consistency is key. When disks vary by nearly 20°C from front to back, there is inconsistent wear and tear on the drives, leading to unpredictable failure. The same goes for variance across the height of the rack, as lower devices tend to consume the cooler airflow coming up from the floor tiles.

Liquid cooling for storage

While there will always be variances and different tolerances within any data centre environment, liquid cooling can mitigate these variances and improve consistency. In 2022, Meta published a study showcasing how an air-cooled, high-density storage system was re-engineered to utilise single-phase liquid cooling. The study found that precision liquid cooling was a more efficient means of cooling the HDD racks, with the following results:

- The variance in temperature of all HDDs was just 3°C, regardless of location inside the JBODs.
- HDD systems could operate reliably at rack water inlet temperatures of up to 40°C.
- System-level cooling power was less than 5% of total power consumption.
- Acoustic vibrational issues were mitigated.

While consistency is a key benefit, cooling all disks at a higher water temperature is important too, as it means data centre operators do not need to provide chilled water to the unit. Reduced resource consumption – electrical, water, space, audible noise – leads to a greater reduction in TCO and improved ESG compliance, both of which are key benefits for today's data centre operators.

As demand for data storage continues to escalate, so will the solutions needed by hyperscale data centre providers to cool the equipment efficiently. Liquid cooling for high-density storage is proving to be a viable alternative, as it cools the drives at a more consistent temperature and removes the vibration from fans, with lower overall end-to-end power consumption and improved ESG compliance. At a time when data centre operators are under increasing pressure to reduce energy consumption and improve sustainability metrics, this technology may not only be good for the planet, but also good for business.

Enabling innovation in storage systems

Today's HDDs are designed with forced air cooling in mind, so it stands to reason that air cooling will continue to play a role in the short term. For storage manufacturers to embrace new alternatives, demonstrations of liquid cooling technology, like the one Meta conducted, are key to ensuring adoption. Looking at technology trends, constantly increasing fan power in a rack will not be a sustainable long-term solution. Data halls are not getting any larger, the cost to cool a rack is increasing, and the need for more data storage capacity at greater density is growing exponentially.

Storage designed for precision liquid cooling will be smaller, use fewer precious materials and components, perform faster and fail less often. The ability to deliver a more cost-effective HDD storage solution in the same cubic footprint delivers not only a TCO benefit but contributes to greater ESG value as well. Making today's technology more efficient and removing limiting factors for new and game-changing data storage methods can help us meet the global challenges we face and is a step towards enabling a better future.
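To put the two percentages quoted in this article side by side (cooling at roughly 35% of total facility energy for conventional air cooling versus under 5% of total power in Meta's liquid-cooled result), the following sketch runs a back-of-envelope annual comparison. The 1MW IT load and the electricity price are illustrative assumptions, not figures from the study.

# Back-of-envelope annual cooling energy for a fixed IT load, comparing the
# ~35% (air) and <5% (liquid) cooling shares quoted in the article.
# The IT load and tariff below are assumptions for illustration only.

IT_LOAD_KW = 1000        # assumed IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.12     # assumed tariff, USD

def annual_cooling_kwh(it_load_kw, cooling_fraction_of_total):
    """Annual cooling energy if cooling is the given fraction of total facility power."""
    total_kw = it_load_kw / (1.0 - cooling_fraction_of_total)
    return total_kw * cooling_fraction_of_total * HOURS_PER_YEAR

air_kwh = annual_cooling_kwh(IT_LOAD_KW, 0.35)
liquid_kwh = annual_cooling_kwh(IT_LOAD_KW, 0.05)
print(f"Air-cooled:    ~{air_kwh / 1e6:.2f} GWh/yr on cooling (~${air_kwh * PRICE_PER_KWH:,.0f})")
print(f"Liquid-cooled: ~{liquid_kwh / 1e6:.2f} GWh/yr on cooling (~${liquid_kwh * PRICE_PER_KWH:,.0f})")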

Castrol and Hypertec accelerate immersion cooling technology
Castrol has announced a collaboration with Hypertec to accelerate the widespread adoption of Hypertec's immersion cooling solutions for data centres, supported by Castrol's fluid technology. Both companies will work together to develop and test the immersion cooling technology at Castrol's global headquarters in Pangbourne, UK. Castrol announced in 2022 that it would invest up to £50m in the Pangbourne site, and it is pleased to have the first systems in place and fully functional so that research can begin on furthering immersion cooling technologies across systems, servers and fluids, providing world-class, integrated solutions to customers.

Hypertec is the first server OEM to join Castrol in its drive to accelerate immersion cooling technology. The two will leverage Castrol's existing collaboration with Submer, a leader in immersion cooling technology, which has provided its SmartPod and MicroPod tank systems to the Pangbourne facility; these have been modified to test new fluids and new server technologies. Working together, Castrol will be able to continue to develop its offers for data centre customers and accelerate the adoption of immersion cooling as a path to more sustainable and more efficient data centre operations. With immersion cooling, the water usage and power consumption needed to operate and cool server equipment can be significantly reduced.

Leading partners join forces with Equinix to test sustainable data centre innovations
Equinix has announced the opening of its first Co-Innovation Facility (CIF), located in its DC15 International Business Exchange (IBX) data centre at the Equinix Ashburn Campus in the Washington, D.C. area. A component of Equinix's Data Centre of the Future initiative, the CIF is a new capability that enables partners to work with Equinix on trialling and developing innovations. These innovations, such as identifying a path to clean hydrogen-enabled fuel cells or deploying more capable battery solutions, will be used to help define the future of sustainable digital infrastructure and services globally.

Sustainable innovations, including liquid cooling, high-density cooling, intelligent power management and on-site prime power generation, will be incubated in the CIF in partnership with leading data centre technology innovators including Bloom Energy, ZutaCore, Virtual Power Systems (VPS) and Natron. In collaboration with Equinix, these partners will test core and edge technologies with a focus on proving reliability, efficiency and cost to build. These include:

- Generator-less and UPS-less data centres (Bloom Energy) – on-site solid oxide fuel cells enable the data centre to generate redundant, cleaner energy on-grid, potentially eliminating the need for fossil fuel-powered generators and power-consuming Uninterruptible Power Supply (UPS) systems.
- High-density liquid cooling (ZutaCore) – highly efficient, direct-on-chip, waterless, two-phase liquid-cooled rack systems, capable of cooling upwards of 100kW per rack in a light, compact design. This eliminates the risk of IT meltdown, minimises the use of scarce resources including energy, land, construction and water, and dramatically shrinks the data centre footprint.
- Software-defined power (VPS) with cabinet-mounted battery energy storage (Natron Energy) – cabinet power management and a battery energy storage system manage power draw and minimise power stranding to near zero per cent, leading to a potential 30-50% improvement in power efficiency.

"ZutaCore is honoured to be featured at the CIF and to partner with Equinix to advance the proliferation of liquid cooling on a global scale," says Udi Paret, President of ZutaCore. "Together we aim to prove that liquid cooling is an essential technology in realising fundamental business objectives for data centres of today and into the future. HyperCool liquid cooling solutions deliver unparalleled performance and sustainability benefits to directly address sustainability imperatives. With little to no infrastructure change, it consistently provides easy to deploy and maintain, environmentally friendly, economically attractive liquid cooling to support the highest core-count, high power and most dense requirements for a range of customer needs from the cloud to the edge."
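The "power stranding" that the VPS and Natron pairing targets is simply provisioned cabinet capacity that is never actually drawn. A minimal sketch (the cabinet figures are hypothetical, chosen only to show the calculation) illustrates how that percentage is typically worked out.

# Minimal illustration of power stranding: provisioned capacity minus peak draw.
# Cabinet figures below are hypothetical.

cabinets = [
    {"provisioned_kw": 10.0, "peak_draw_kw": 6.5},
    {"provisioned_kw": 10.0, "peak_draw_kw": 7.0},
    {"provisioned_kw": 10.0, "peak_draw_kw": 5.5},
]

provisioned = sum(c["provisioned_kw"] for c in cabinets)
peak = sum(c["peak_draw_kw"] for c in cabinets)
stranded_pct = (provisioned - peak) / provisioned * 100

# Software-defined power plus cabinet batteries aims to push this figure towards
# zero by peak-shaving and safely oversubscribing the same upstream feed.
print(f"Provisioned {provisioned:.0f} kW, peak draw {peak:.0f} kW -> {stranded_pct:.0f}% stranded")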

World’s first power generating data centre cooling system
Infinidium has announced the launch of its proprietary next-generation data centre cooling and power supply infrastructure, which can reduce both operating and capital costs by as much as 50%. The independently verified modular technology, named the Vortex Vacuum Chamber, is believed to be the most efficient data centre cooling system ever designed, and is coupled with an extra-low-voltage direct current smart nanogrid. The PCT patent-pending configuration utilises biomimicry for heat management and decentralised active battery storage, which enhances adjacent renewable output while drastically lowering electrical conversion losses.

The system eliminates complex cooling infrastructure, generators and other auxiliary equipment, which individually can offset all Infinidium capital costs. The systems can be rapidly placed in existing facilities with minimal retrofitting and permitting requirements. The inner chamber's open-air configuration will also enable robotic assembly and operation, amongst other innovative enhancements being developed by the company.

Traditional data centre energy consumption is a major and growing global concern, as COVID-19 and the expansion of the cloud have created unsustainable new demand. Infinidium's technology can potentially achieve the highest FLOPS-per-watt output and create the smallest environmental footprint to date. The company is actively seeking strategic alliances and capital partners for the mass deployment of the technology.

Rittal’s DCiB provides data solutions for Oxford University’s GLAM Division
This article was written by Andrew Wreford, Rittal's Product Manager for IT Systems, on its recent project providing solutions for Oxford University's GLAM Division.

Oxford University's Gardens, Libraries and Museums division (GLAM) forms one of the greatest concentrations of university collections in the world. GLAM holds over 21 million objects, specimens and printed items, constituting one of the largest and most significant collections in the world. Faced with the challenges of increased data demand, the Museum of Natural History – one of the museums within GLAM – wanted to upgrade its IT infrastructure to house the core network switches responsible for running its services. A major rewiring project was undertaken with the aim of significantly improving the data connectivity for computers, phones and next-generation devices. The wiring presented a challenge in itself, as the historically significant listed building was not designed to accommodate the space for conventional hardware. This required ingenious methods of working with the fabric of the building.

Faced with these challenges, Anjanesh Babu, the technical project lead in the Gardens, Libraries and Museums IT team, researched the options available. The traditional approach would have been for the designated network core of a building to be stripped bare and rebuilt with air conditioning and electrics to meet the requirements of the equipment. However, given the nature of the building, this would present a number of challenges, including space and cooling loss through the surfaces. The design approach was led by GLAM's sustainability strategy.

Image: Oxford University Museum of Natural History, by Ian Wallman

Anjanesh Babu approached Rittal's IT team, who quickly identified the 'Data Centre in a Box' (DCiB) concept as a possible option. DCiB replicates key data centre capabilities on a smaller scale and has been developed to enable equipment to be deployed in non-traditional data centre environments. The turnkey package provides IT racks, demand-orientated climate control, PDUs, monitoring and fire suppression, and offers a complete solution from product selection through to installation and ongoing maintenance. When installed in the Museum of Natural History, the cooling footprint would be significantly lower than with traditional full-room air conditioning, and the absence of any work to the space to accommodate the system would mean that the building would remain relatively untouched.

A site visit by Rittal's Area Sales Manager for IT, Joel, was arranged and the requirements gathered. "The system was to be located in the museum's basement, which had restricted access with a very narrow staircase and doorways. In addition to this, the building's listed status would mean that any cooling equipment would have to be positioned cleverly and with the utmost consideration, not only for aesthetics but for any noise pollution emitted," recalls Joel.

Joel and members of the Rittal IT development team, Clive Partridge and Andrew Wreford, worked with Anjanesh Babu to identify the key requirements. "Given the kW loads and environment of the proposed location, it became clear that the DCiB's LCU option was the best way to go, and we quickly built up a package including racks, accessories, cooling, fire suppression, PDUs and monitoring.
To mitigate the access restrictions, we used the 'rack splitting / re-joining' service, which enabled us to resolve the space limitations of the project," says Rittal's Technical IT Manager, Clive Partridge.

Rittal provided an end-to-end solution, from the manufacture of the kit to installation, commissioning and handover. To overcome the issues with the building's listed status, Rittal's IT team worked in collaboration with Babu and the lead contractor, Monard Electrical, to find a suitable home for the condenser. Anjanesh Babu reflected on the options deployed: "Rittal's DCiB allowed the museum to utilise the proposed location without having to make costly building modifications, thus saving time, energy and effort."

By adopting 'in-rack' precision cooling instead of 'in-room' cooling, the installation is more environmentally efficient and keeps operational expenditure under control. Cooling via the high-performance LCU option provides temperature consistency, allows better care of the equipment and offers nearly silent operation. Not only is the installation providing energy efficiency and longevity for the museum, there is the added benefit of reduced noise in the room compared to an existing server room using in-room cooling.

Haas Ezzet, Head of IT, Gardens and Museums (GLAM) at the University of Oxford, contextualises this piece of work as being part of the "Museum's drive towards greater environmental sustainability. The approach piloted here, of focussing climate control specifically to the area needed – the data cabinet – rather than the entire space in which it is housed, will optimise energy consumption and afford a blueprint for other spaces within GLAM and beyond."

The adoption of alternative data centre cooling to keep climate change in check
DataQube, together with Primaria, is championing the adoption of alternative data centre cooling refrigerants in response to European regulations to phase out greenhouse gases. Field trials are currently underway to establish the feasibility of replacing legacy HFC (fluorinated hydrocarbon) coolants with a next-generation refrigerant that carries heat efficiently and delivers a lower environmental impact.

The two main refrigerants currently used in data centre cooling systems are R134a and, especially, R410a. Whilst both have an ozone depletion potential (ODP) of zero, their global warming potential (GWP) ratings of 1430 and 2088 respectively are over a thousand times higher than that of carbon dioxide. R-32, on the other hand, has efficient heat-conveying capabilities that can reduce total energy usage by up to 10%, and thanks to its chemical structure its GWP rating is up to 68% lower, at just 675.

"The environmental impact of the data centre industry is significant, estimated at between 5-9% of global electricity usage and more than 2% of all CO2 emissions," says David Keegan, CEO of DataQube. "In light of COP26 targets, the industry as a whole needs to rethink its overall energy usage if it is to become climate neutral by 2030, and our novel system is set to play a major part in green initiatives."

"For data centre service providers it's important that their operations are state of the art when it comes to energy efficiency and the GWP of the refrigerants used, since it impacts both their balance sheet and their sustainability," comments Henrik Abrink, Managing Director of Primaria. "With the development and implementation of R-32 in the DataQube cooling units, we have taken a step further to deliver high added value on both counts, in a solution that is already proving to be the most energy-efficient edge data centre system on the market."

Unlike conventional data centre infrastructure, DataQube's unique person-free layout reduces power consumption and CO2 emissions by as much as 56%, as the energy transfer is primarily dedicated to powering computers. Exploiting next-generation cooling products such as R-32, together with immersive cooling in its core infrastructure, offers the potential to reduce these figures further. DataQube's efficient use of space, combined with optimised IT capacity, makes for a smaller physical footprint, because less land, fewer raw materials and less power are needed from the outset. Moreover, any surplus energy may be reused for district heating, making the system truly sustainable.
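The GWP figures quoted above can be sanity-checked with a few lines of arithmetic (values as cited in the article; CO2 has a GWP of 1 by definition).

# Quick check of the quoted GWP comparison. Values as cited in the article.

gwp = {"R134a": 1430, "R410a": 2088, "R-32": 675}  # CO2 = 1 by definition

reduction_vs_r410a = (1 - gwp["R-32"] / gwp["R410a"]) * 100
reduction_vs_r134a = (1 - gwp["R-32"] / gwp["R134a"]) * 100

print(f"R-32 vs R410a: ~{reduction_vs_r410a:.0f}% lower GWP")  # ~68%, matching the article
print(f"R-32 vs R134a: ~{reduction_vs_r134a:.0f}% lower GWP")  # ~53%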

Data centre cooling market size to hit $21.51bn by 2028
Brandessence Market Research has published a new report titled Global Data Centre Cooling Market Size, Trends, Competitive, Historical & Forecast Analysis, 2021-2028. The report says that the growing need to optimise infrastructure budgets to achieve business goals will drive the growth of the data centre cooling market. The market was valued at $10.42bn in 2021 and is expected to reach $21.51bn by 2028, a CAGR of 10.9% over the forecast period.

Data centre operators use cooling solutions to keep the temperature in data centres within permissible limits. Data centres have to work efficiently around the clock to process large volumes of data, and in doing so the equipment dissipates heat energy, creating a major need for cooling to prevent the damage that overheating can cause. There are two kinds of systems: air-based and water-based. Air-based cooling circulates air through the data centre to maintain temperatures. Water-based cooling uses water and is further segmented into immersion cooling and water-cooled racks, where liquid flows across the hot components to maintain temperature.

The telecoms and IT segments dominate the global data centre cooling market because of the rising penetration and digitalisation of technologies such as cloud and big data, which create major demand for data storage and availability. These enterprises are demanding better storage, IT facilities and connectivity to cater for these demands efficiently. Furthermore, the proliferation of smart devices and consumer demand for the safeguarding of personal and financial information is expected to propel demand for cooling equipment. Industries have been adopting solutions that are both highly efficient and cost-effective. The number of data centres has increased, as has the use of the latest technology, and with this comes demand for data centre cooling solutions, which is expected to boost the growth of the global market to unprecedented levels over the forecast period.

Room-based cooling held the largest share of the data centre cooling market because it delivers effective cooling at a lower cost, and it is estimated to maintain a larger market share because it requires fewer ducts and pipes than other types of cooling. Air-based cooling and air conditioners keep temperatures within permissible limits, and room-based cooling has been gaining ground because of its energy efficiency.

In terms of regions, North America has dominated the overall market because of its advantages in technological advancement and the recent developments in this market. Furthermore, companies there have been focusing on implementing environmentally friendly and cost-effective cooling, which has fuelled market growth. The Asia Pacific region is expected to see strong growth because of the penetration of technology across its population.
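The report's headline figures are internally consistent, as a quick compound-growth check shows.

# Sanity check: $10.42bn (2021) to $21.51bn (2028) implies roughly a 10.9% CAGR.

start_value, end_value = 10.42, 21.51   # USD billions, from the report
years = 2028 - 2021

implied_cagr = (end_value / start_value) ** (1 / years) - 1
projected_2028 = start_value * (1 + 0.109) ** years

print(f"Implied CAGR: {implied_cagr * 100:.1f}%")                             # ~10.9%
print(f"$10.42bn grown at 10.9% for {years} years: ${projected_2028:.2f}bn")  # ~$21.5bn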


