Cooling


BT takes the plunge with new liquid cooling trials
BT Group has announced that it is trialling several liquid cooling technologies that could substantially improve the energy consumption and efficiency of its networks and IT infrastructure, in pursuit of its commitment to becoming a net zero business by the end of March 2031. The group will trial precision liquid cooled network switches using a solution provided by Iceotope and Juniper Networks QFX Series switches, which are widely used in existing network cloud architectures. Ahead of the trial, the partners demonstrated a replica set-up using an HP x86 server at BT’s Sustainability Festival, showing how the power used to cool a network switch typically deployed in a data centre could be significantly reduced.

All electronic and electrical systems generate heat during operation that must be dissipated to maintain working capability. Like most large data centres, BT currently cools the network and IT equipment across its estate using air-based systems. As network capacity and demands increase, next generation IT and network hardware will have to work harder and will run hotter. Consequently, the power needed to cool it will increase, driving up energy consumption and operational cost. BT Group is therefore exploring numerous alternative cooling techniques and, in addition to its trial with Iceotope and Juniper, will trial the following liquid cooling systems:

- Precision liquid cooled networking servers and data centre equipment, with Iceotope and Juniper
- Full immersion of networking servers in an immersion tank, with Immersion4
- Liquid-cooled cold plates on networking equipment in a cooling enclosure, with Nexalus
- Cooling using sprayed-on partial immersion of data centre equipment, with Airsys

Typically, these techniques bring several benefits, including a 40-50% reduction in the power needed to cool systems versus air cooling; higher equipment density, which saves on real estate footprint and therefore drives further power reductions; and reduced material usage, which lowers carbon footprint. Further, rather than dissipating heat into the air, liquid cooling systems can channel exhausted heat for reuse in other parts of a building. Liquid cooling also enables equipment to be deployed in more environmentally challenging settings, such as areas with more contaminants.

Maria Cuevas, Networks Research Director, BT Group, says, “As the UK’s largest provider of fixed-line broadband and mobile services in the UK, it isn’t a surprise that over 90% of our overall energy consumption – and nearly 95% of our electricity – comes from our networks. In a world of advancing technology and growing data demands, it’s critical that we continue to innovate for energy efficiency solutions. Liquid cooling for network and IT infrastructure is one part of a much bigger jigsaw but is an area we’re very excited to explore with our technology partners.”
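As a rough illustration of what the quoted 40-50% cooling power reduction could mean at facility level, the short Python sketch below multiplies an assumed cooling share of total energy by the reduction. The 10GWh annual figure and 35% cooling share are illustrative assumptions for the sketch, not BT data.

```python
# Illustrative back-of-envelope estimate (not BT figures): how a 40-50% cut in
# cooling power translates into facility-level energy savings, given an assumed
# cooling share of total facility energy.

def facility_savings_kwh(total_kwh: float, cooling_share: float, cooling_cut: float) -> float:
    """Annual energy saved when cooling power is reduced by cooling_cut.

    total_kwh     -- annual facility energy use
    cooling_share -- fraction of total energy spent on cooling (assumed, e.g. 0.35)
    cooling_cut   -- fractional reduction in cooling power (e.g. 0.4 to 0.5)
    """
    return total_kwh * cooling_share * cooling_cut

if __name__ == "__main__":
    total = 10_000_000  # assumed 10 GWh/year facility, purely illustrative
    for cut in (0.40, 0.50):
        saved = facility_savings_kwh(total, cooling_share=0.35, cooling_cut=cut)
        print(f"{cut:.0%} cooling reduction -> {saved / total:.1%} of total energy "
              f"({saved / 1e6:.2f} GWh/year saved)")
```

Under these assumptions, a 40-50% cooling reduction would trim roughly 14-17% off total facility energy; the real figure depends entirely on the actual cooling share of the site.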

Iceotope Technologies announced as 'Great Place to Work'
Iceotope Technologies has announced its recognition as one of the 2023 Best Workplaces in the UK by Great Place to Work. The accolade underscores its commitment to fostering a culture of excellence and creating an environment that attracts and retains top-tier talent. The 'Great Place to Work' certification acknowledges Iceotope's dedication to providing an outstanding workplace experience for its team. After a thorough evaluation of its workplace practices, policies and employee feedback, it has been recognised as a company that values collaboration, growth and employee wellbeing.

"We are absolutely thrilled to receive the ‘Great Place To Work’ certification, a recognition of Iceotope’s commitment to enabling our team to thrive professionally and personally," says David Craig, CEO of Iceotope. "This recognition echoes our core values and our dedication to cultivating a workplace that celebrates innovation and fosters the growth of every individual. We believe that by nurturing a culture of excellence, we can continue to attract and empower exceptional talent."

The journey towards this certification mirrors the company's core principles: a hunger for knowledge, curiosity and a commitment to solving real-world problems with innovative solutions. "Iceotope's emphasis on engineering excellence is at the heart of its achievements. Purpose-driven engineering, aligned with customer needs and global sustainability goals, reflects the company's dedication to creating progress that benefits both its clients and the environment," says David.

As Iceotope celebrates this achievement, the company sets its sights on raising standards of employee engagement and satisfaction even further. The commitment to maintaining and enhancing its 'Great Place to Work' status reflects its determination to cultivate a culture characterised by growth, collaboration and outstanding accomplishments.

Spirotech offers cooling solutions for data centres
The complex nature of data centres means that customer information needs to be secure and protected from viruses and hackers, otherwise millions of accounts could be accessed and breached. There is another potential danger that can have a far-reaching impact on these huge, linked computer systems: overheating. It can cause untold damage and result in a major breakdown in service delivery. Key to preventing this is the design of reliable cooling systems with back-up structures in place to ensure continuity of service. Rob Jacques, Spirotech’s Business Director UK, provides an insight into the complexities surrounding data centres and why only a handful of UK companies are geared up not only to specify cooling systems for this niche sector, but to provide a cradle-to-grave service.

In an age when we want everything instantly, stored data holds the key to so much of our everyday life, and at the heart of this is the smooth operation of data centres. Keeping them running around the clock requires meticulous design of the cooling systems from the outset, and within the blueprint there needs to be a back-up and failsafe plan covering such elements as the chiller, pumps and pressurisation. There are additional complexities to be taken into consideration and incorporated into the design, especially in terms of the communication between the plant itself and critical equipment. Quite simply, the bigger the computer, the more it has to be cooled. To put this in perspective, the rise of cloud computing means that a huge amount of energy is required to manage and maintain all the data, with tens of megawatts of computing power needed and computers occupying thousands of square metres of space. If the chillers or the cooling programmes were to fail, data could be lost on a large scale. Of course, all equipment is subject to failure at some point, and that is when the back-up measures need to kick in and ensure continuity of an effective cooling system.

Spirotech's control systems feed back data from pumps, valves, pressurisation units and degassers. The vacuum degasser, for example, can report how much air has been removed over a given period and when, as well as provide valuable information revealing trends within the system. The same applies to the pressurisation units: information gathered over their operational lifespan reveals what the pressure has been, reports any leaks and flags when more water needs to be brought in. The pressurisation units and vacuum degassers are linked, and any faults can be signalled and sent to whoever needs the data. A poorly designed, installed and maintained pressurisation system can lead to negative pressures around the circuit, with air drawn in through automatic air vents, gaskets and micro leaks. High-pressure situations can lead to water being emitted through the safety valves and the subsequent frequent addition of further raw refill water. The control unit has the electronic capability to manage pressurisation within the system effectively and can be programmed to work in parallel with the back-up system. Air and dirt separators are another key component in maintaining the ongoing health of any heating and ventilation system, and keeping pipework clean is essential.

Within this sector, there is a much smaller community serving data centres. It’s an area not every company wants to be in, or is geared up to serve.
Spirotech has a depth of knowledge built through working with data centre installations, allowing it to provide an all-encompassing service. It listens closely to the needs of the client, designs a bespoke system and, as a leading manufacturer, can supply the right equipment for the project. Customers also have the peace of mind that it can supply spares to a tight deadline. It’s not just about getting the design right from the outset; it’s also about providing ongoing technical and maintenance support for the project going forward.
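To make the feedback loop described above more concrete, here is a minimal, purely illustrative Python sketch of how readings from a vacuum degasser and a pressurisation unit might be collected and checked against pressure limits. The field names, thresholds and alerting logic are assumptions for illustration only, not Spirotech's actual control interface.

```python
# Illustrative sketch only: field names, thresholds and alerting are assumed,
# not taken from Spirotech's actual control systems.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    timestamp: datetime
    unit: str             # e.g. "vacuum_degasser" or "pressurisation_unit"
    pressure_bar: float   # system pressure at the time of the reading
    air_removed_l: float  # cumulative air removed (degasser only), litres

def check_pressure(readings: list[Reading], low: float = 0.5, high: float = 3.0) -> list[str]:
    """Flag readings outside an assumed safe pressure band.

    Very low pressure risks air being drawn in through vents and gaskets;
    high pressure risks discharge through safety valves and frequent raw-water
    top-ups, as described in the article.
    """
    alerts = []
    for r in readings:
        if r.pressure_bar < low:
            alerts.append(f"{r.timestamp} {r.unit}: pressure {r.pressure_bar} bar below {low} bar")
        elif r.pressure_bar > high:
            alerts.append(f"{r.timestamp} {r.unit}: pressure {r.pressure_bar} bar above {high} bar")
    return alerts
```

In practice such alerts would be routed to whoever needs the data, as the article notes, and trended over the lifespan of the units rather than checked one reading at a time.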

Why hybrid cooling is the future for data centres
Gordon Johnson, Senior CFD Manager, Subzero Engineering

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores one of the most fundamental factors potentially hindering adoption: many data centre applications will continue to use air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, and by using air cooling, containment and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.

Today, many data centres are experiencing increasing power density per IT rack, rising to levels that just a few years ago seemed extreme and out of reach but are now considered both common and typical while still deploying air cooling. In 2020, for example, the Uptime Institute found that due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres. This increase has left data centre stakeholders wondering whether air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits and whether liquid cooling is the long-term solution. However, the answer is not a simple yes or no. Moving forward, it’s expected that data centres will transition from 100% air cooling to a hybrid model encompassing air and liquid-cooled solutions, with all new and existing air-cooled data centres requiring containment to improve efficiency, performance and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed. One might ask why the debate over air versus liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what’s driving the need for liquid cooling, what the other options are, and how we can evaluate these options while continuing to use air as the primary cooling mechanism.

Can air and liquid cooling coexist?

For those who are newer to the industry, this is a position we’ve been in before, with air and liquid cooling successfully coexisting while removing substantial amounts of heat via intra-board air-to-water heat exchangers. This continued until the industry shifted primarily to CMOS technology in the 1990s, and we’ve been using air cooling in our data centres ever since. With air being the primary means of cooling data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, it has published a common set of criteria for cooling IT servers, with the participation of ITE and cooling system manufacturers, entitled ‘TC9.9 Thermal Guidelines for Data Processing Environments’. ASHRAE has focused on the efficiency and reliability of cooling the ITE in the data centre. Several revisions have been published, with the latest released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses more on cooling high-density servers and racks with a trade-off in energy efficiency, due to the lower cooling supply air temperatures recommended to cool the ITE.
As to the question of whether air and liquid cooling can coexist in the data centre white space, they have done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What do server power trends reveal?

It’s easy to assume that when it comes to cooling, one size will fit all in terms of power and cooling consumption, both now and in the future, but that’s not accurate. It’s more important to focus on the actual workload of the data centre we’re designing or operating. In the past, a common assumption with air cooling was that once you went above 25kW per rack, it was time to transition to liquid cooling. But the industry has made advances in this regard, enabling data centres to cool up to and even beyond 35kW per rack with traditional air cooling. Scientific data centres, which include largely GPU-driven applications like machine learning, AI and intensive analytics like crypto mining, are the areas of the industry typically transitioning or moving towards liquid cooling. But if you look at other workloads, such as the cloud and most businesses, the growth rate is rising but air cooling still makes sense in terms of cost. The key is to look at this issue from a business perspective: what are we trying to accomplish with each data centre?

What’s driving server power growth?

Up to around 2010, businesses utilised single-core processors, but once available, they transitioned to multi-core processors; power consumption remained relatively flat with these dual and quad-core processors. This enabled server manufacturers to concentrate on lower airflow rates for cooling ITE, which resulted in better overall efficiency. Around 2018, with the size of these processors continually shrinking, higher multi-core processors became the norm, and with these reaching their performance limits, the only way to achieve the new levels of performance demanded by compute-intensive applications is to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, in some cases data centres were having difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid. Server manufacturers have also been increasing the temperature delta across servers for several years now, which again has been great for efficiency, since the higher the temperature delta, the less airflow is needed to remove the heat. However, server manufacturers are, in turn, reaching their limits, resulting in data centre operators having to increase the airflow to cool high-density servers and keep up with increasing power consumption.

Additional options for air cooling

Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to and even greater than 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.

What about lowering temperatures?

In 2021, ASHRAE released its 5th generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that will need to use more restrictive supply temperatures than the previous class of servers.
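The airflow arithmetic behind both the temperature-delta and rack-density points above can be illustrated with a generic sensible-heat estimate. The Python sketch below is not Subzero Engineering's methodology; the rack powers and temperature rises in the example are assumed values chosen only to show how required airflow scales.

```python
# Generic sensible-heat estimate of the airflow needed to remove a rack's heat
# load; rack powers and temperature deltas below are illustrative assumptions.
RHO_AIR = 1.2       # kg/m^3, approximate density of air at data centre conditions
CP_AIR = 1005.0     # J/(kg*K), specific heat capacity of air
M3S_TO_CFM = 2118.88

def required_airflow_m3s(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away rack_power_w at a given delta-T."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)

if __name__ == "__main__":
    for power_kw in (5, 20, 35):
        for dt in (10, 15, 20):  # server inlet-to-outlet temperature rise, K
            flow = required_airflow_m3s(power_kw * 1000, dt)
            print(f"{power_kw} kW rack, dT={dt} K: "
                  f"{flow:.2f} m^3/s (~{flow * M3S_TO_CFM:.0f} CFM)")
```

The output makes the article's point numerically: for a fixed rack power, a larger server temperature delta cuts the airflow required, while pushing rack power from 5kW towards 35kW multiplies the airflow the room must deliver.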
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor in the next few years. But this transition is not automatic and isn’t going to suit everyone. Liquid cooling is not going to be the ideal solution or remedy for every future cooling requirement. Instead, the choice of liquid cooling over air cooling depends on a variety of factors, including specific location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse and the physical space available. This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not, and should not, be an approach where only air or only liquid cooling is considered moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.

Paying attention to data centre storage cooling
Authored by Neil Edmunds, Director of Innovation, Iceotope

With constant streams of data emerging from the IoT, video, AI and more, it is no surprise that we are expected to generate 463EB of data each day by 2025. How we access and interact with data is constantly changing and will have a real impact on the processing and storage of that data. In just a few years, global data storage is predicted to exceed 200ZB, with half of that stored in the cloud. This presents a unique challenge for hyperscale data centres and their storage infrastructure. According to Seagate, cloud data centres choose mass capacity hard disk drives (HDDs) to store 90% of their exabytes. HDDs are tried and tested technology, typically found in a 3.5in form factor, and they continue to offer data centre operators cost effective storage at scale. The current top-of-the-range HDD offers 20TB of capacity; by the end of the decade that is expected to reach 120TB+, all within the existing 3.5in form factor. The practical implications of this point to a need for improved thermal cooling solutions. More data storage means more spinning of the disks, higher speed motors and more actuators, all of which translates to more power being used. As disks go up in power, so does the amount of heat they produce. In addition, with the introduction of helium into hard drives in the last decade, performance has improved thanks to less drag on the disks, and the units are now sealed. There is also ESG compliance to consider. With data centres consuming 1% of global electricity demand, and cooling accounting for more than 35% of a data centre’s total energy consumption, pressure is on data centre owners to reduce this consumption.

Comparison of cooling technologies

Traditionally, data centre environments use air cooling technology. The primary way of removing heat with air cooling is by pulling increasing volumes of airflow through the chassis of the equipment. Typically, there is a hot aisle behind the racks and a cold aisle in front of the racks, which dissipates the heat by exchanging warm air with cooler air. Air cooling is widely deployed, well understood and well engrained into nearly every data centre around the world. However, as the volume of data grows, it is becoming increasingly likely that air cooling will no longer be able to ensure an appropriate operating environment for energy dense IT equipment. Technologies like liquid cooling are proving to be a much more efficient way to remove heat from IT equipment. Precision liquid cooling, for example, circulates small volumes of dielectric fluid across the surface of the server, removing almost 100% of the heat generated by the electronic components. There are no performance-throttling hotspots, and none of the front-to-back air cooling or bottom-to-top immersion constraints present in tank solutions. While initial applications of precision liquid cooling have been in a sealed chassis for cooling server components, given the increased power demands of HDDs, storage devices are also an ideal application.

High density storage demands

With high density HDDs, traditional air cooling pulls air through the system from front to back. As the cold air travels through the JBOD device it heats up, so the disks at the front run much cooler than those at the back.
This can result in a temperature differential of 20°C or more between the disks at the front and back of the unit, depending on the capacity of the hard drive. For any data centre operator, consistency is key. When disks vary by nearly 20°C from front to back, there is inconsistent wear and tear on the drives, leading to unpredictable failure. The same goes for variance across the height of the rack, as lower devices tend to consume the cooler airflow coming up from the floor tiles.

Liquid cooling for storage

While there will always be variances and different tolerances within any data centre environment, liquid cooling can mitigate these variances and improve consistency. In 2022, Meta published a study showcasing how an air cooled, high density storage system was reengineered to utilise single phase liquid cooling. The study found that precision liquid cooling was a more efficient means of cooling the HDD racks, with the following results:

- The variance in temperature of all HDDs was just 3°C, regardless of location inside the JBODs
- HDD systems could operate reliably with rack water inlet temperatures up to 40°C
- System-level cooling power was less than 5% of the total power consumption
- Acoustic vibrational issues were mitigated

While consistency is a key benefit, cooling all disks at a higher water temperature is important too, as it means data centre operators do not need to provide chilled water to the unit. Reduced resource consumption – electrical, water, space, audible noise – leads to a greater reduction in TCO and improved ESG compliance, both of which are key benefits for today’s data centre operators. As demand for data storage continues to escalate, so will the solutions needed by hyperscale data centre providers to efficiently cool the equipment. Liquid cooling for high density storage is proving to be a viable alternative as it cools the drives at a more consistent temperature and removes vibration from fans, with lower overall end-to-end power consumption and improved ESG compliance. At a time when data centre operators are under increasing pressure to reduce energy consumption and improve sustainability metrics, this technology may not only be good for the planet, but also good for business.

Enabling innovation in storage systems

Today’s HDDs are designed with forced air cooling in mind, so it stands to reason that air cooling will continue to play a role in the short term. For storage manufacturers to embrace new alternatives, demonstrations of liquid cooling technology like the one Meta conducted are key to ensuring adoption. Looking at technology trends, constantly increasing fan power in a rack will not be a sustainable long-term solution. Data halls are not getting any larger, costs to cool a rack are increasing, and the need for more data storage capacity at greater density is growing exponentially. Storage designed for precision liquid cooling will be smaller, use fewer precious materials and components, perform faster and fail less often. The ability to deliver a more cost effective HDD storage solution in the same cubic footprint delivers not only a TCO benefit but also contributes to greater ESG value. Making today's technology more efficient and removing limiting factors for new and game-changing data storage methods can help us meet the global challenges we face and is a step towards enabling a better future.
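A rough sense of why the front-to-back differential described above arises can be had from a simple sensible-heat estimate of air warming as it passes successive rows of drives. The Python sketch below is purely illustrative: the per-drive power, drive layout and airflow figures are assumptions, not values from the Meta study or from Iceotope.

```python
# Illustrative estimate of cumulative air temperature rise across a JBOD;
# per-drive power, drive count and airflow are assumed values, not measured data.
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)

def air_temp_rise(drive_power_w: float, drives_per_row: int, rows: int,
                  airflow_m3s: float) -> list[float]:
    """Return cumulative air temperature rise (K) after each row of drives."""
    mass_flow = RHO_AIR * airflow_m3s  # kg/s of air moving front to back
    rise_per_row = (drive_power_w * drives_per_row) / (mass_flow * CP_AIR)
    return [rise_per_row * (i + 1) for i in range(rows)]

if __name__ == "__main__":
    # Assume 8 W per drive, 6 drives per row, 15 rows, 0.05 m^3/s of airflow.
    rises = air_temp_rise(8.0, 6, 15, 0.05)
    print(f"Rise after first row: {rises[0]:.1f} K, after last row: {rises[-1]:.1f} K")
```

With these assumed numbers the air has warmed by roughly 12K by the last row, which shows how a double-digit front-to-back spread of the kind the article describes can build up, and why a circulated coolant held at a near-uniform temperature removes most of that gradient.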

Castrol and Hypertec accelerate immersion cooling technology
Castrol has announced a collaboration with Hypertec to accelerate the widespread adoption of Hypertec’s immersion cooling solutions for data centres, supported by Castrol’s fluid technology. The two companies will develop and test the immersion cooling technology at Castrol’s global headquarters in Pangbourne, UK. Castrol announced in 2022 that it would invest up to £50m in its Pangbourne headquarters, and it is pleased to have the first systems in place and fully functional, allowing research to begin on furthering immersion cooling technologies across systems, servers and fluids to provide world class, integrated solutions to customers. Hypertec is the first server OEM to join Castrol in its drive to accelerate immersion cooling technology. The two will leverage Castrol’s existing collaboration with Submer, a leader in immersion cooling technology, which has provided its SmartPod and MicroPod tank systems to the Pangbourne facility; these have been modified to test new fluids and new server technologies. Working together, Castrol will be able to continue developing its offer for data centre customers and accelerate the adoption of immersion cooling as a path to more sustainable and more efficient data centre operations. With immersion cooling, water usage and the power consumption needed to operate and cool server equipment can be significantly reduced.

Vertiv's guidance on data centres during extreme heat
Summer in the northern hemisphere has just started, but devastating heatwaves have already washed over much of the US, Mexico, Canada, Europe and Asia. Widespread wildfires in Canada have triggered air quality alerts across that country and much of the eastern half of the US, similar extreme heat events across Asia have caused widespread power outages, and Europe continues to break heat records as the fastest warming continent. The data centre cooling experts at Vertiv have issued updated guidance for managing the extreme heat. Climate change has made the past eight years the hottest on record, but with an El Niño weather pattern compounding the issue this year, many forecasts anticipate record-breaking temperatures in 2023. The sizzling outdoor temperatures and their aftermath create significant challenges for data centre operators, who already wage a daily battle with the heat produced within their facilities. There are steps organisations can take to mitigate the risks associated with extreme heat, including:

Clean or change air filters: The eerie orange haze that engulfed New York was a powerful visual representation of one of the most immediate and severe impacts of climate change. For data centre operators, it should serve as a reminder to clean or change the air filters in their data centre thermal management and HVAC systems. Those filters help to protect sensitive electronics from particulates in the air, including smoke from faraway wildfires.

Accelerate planned maintenance and service: Extreme heat and poor air quality tax more than data centre infrastructure systems. Electricity providers often struggle to meet the surge in demand that comes with higher temperatures, and outages are common. Such events are not the time to learn about problems with a UPS system or cooling unit. Cleaning condenser coils and maintaining refrigerant charge levels are examples of proactive maintenance that can help to prevent unexpected failures.

Activate available efficiency tools: Many modern UPS systems are equipped with high efficiency eco-modes that can reduce the amount of power the system draws from the grid. Heatwaves like those seen recently push the grid to its limits, meaning any reduction in demand can be the difference between uninterrupted service and a devastating outage.

Leverage alternative energy sources: Not all data centres have access to viable alternative energy, but those that do should leverage off-grid power sources. These could include on-site or off-site solar arrays or other alternative sources, such as off-site wind farms and lithium-ion batteries, to enable peak shifting or shaving. Use of generators is discouraged during heatwaves unless an outage occurs, as diesel generators produce more of the greenhouse gas emissions associated with climate change than backup options that use alternative energy. In fact, organisations should postpone planned generator testing when temperatures are spiking.

“These heatwaves are becoming more common and more extreme, placing intense pressure on utility providers and data centre operators globally,” says John Niemann, Senior Vice President for the Global Thermal Management Business at Vertiv. “Organisations must match that intensity with their response, proactively preparing for the associated strain not just on their own power and cooling systems, but on the grid as well.
Prioritising preventive maintenance service and collaborating with electricity providers to manage demand can help reduce the likelihood of any sort of heat-related equipment failure.”

“Again this year, parts of Europe are experiencing record-setting heat, and in our business we specifically see the impact on data centres. Prioritising thermal redundancy and partnering with a service provider with a widespread local presence and first-class restoration capabilities can make the difference in data centre availability,” says Flora Cavinato, Global Service Portfolio Director. “Swift response times and proactive maintenance programmes can help organisations sustain their business operations while effectively optimising their critical infrastructure.”
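To put the eco-mode point above into rough numbers, the short Python sketch below compares grid draw for the same IT load under two assumed UPS efficiencies. The 94% and 99% figures are typical published values for double-conversion and eco-mode operation in general, not Vertiv specifications, and the 500kW load is an arbitrary example.

```python
# Illustrative comparison of UPS grid draw in double-conversion vs eco-mode;
# efficiency figures are typical published values, not Vertiv specifications.
def grid_draw_kw(it_load_kw: float, ups_efficiency: float) -> float:
    """Power drawn from the grid to supply the IT load through the UPS."""
    return it_load_kw / ups_efficiency

if __name__ == "__main__":
    it_load = 500.0  # kW, assumed IT load
    normal = grid_draw_kw(it_load, 0.94)  # assumed double-conversion efficiency
    eco = grid_draw_kw(it_load, 0.99)     # assumed eco-mode efficiency
    print(f"Double-conversion: {normal:.1f} kW, eco-mode: {eco:.1f} kW, "
          f"saving {normal - eco:.1f} kW of grid demand during a heatwave peak")
```

Even this modest per-site saving, multiplied across a utility's data centre customers, illustrates why reducing demand during grid stress can matter.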

Forestry & Land Scotland embrace cloud technology
Nutanix has announced that Forestry & Land Scotland (FLS) has upgraded its data centre infrastructure to a hyperconverged infrastructure (HCI), selecting the Nutanix Cloud Platform to support a workload of 300 virtual machines. FLS opted for Nutanix Cloud Clusters (NC2) on Microsoft Azure. With NC2, it has been able to migrate its whole data centre to Azure without the time, effort and expense of re-engineering applications for native deployment. Founded in 2018 as part of the Scottish devolution process, FLS manages over 1.5 million acres of national forests and land. To meet the short-term IT needs of a newly devolved Scottish government agency while supporting its move to the public cloud in line with a cloud-first government policy, FLS needed to rapidly revamp its legacy on-premises data centre. FLS was already using Microsoft Azure for disaster recovery of its on-premises data centre, so the organisation naturally first looked at re-engineering its applications for native operation on that platform. It soon realised that NC2 on Azure would be a better, quicker and more cost-effective approach, enabling it to stretch its existing environment seamlessly into the cloud and migrate workloads at its own pace, without having to transform or re-engineer the code in any way. The migration also offered immediate benefits in terms of both performance and on-demand scalability, and resulted in a significantly smaller data centre footprint in terms of both physical space and power and cooling requirements. As with the original data centre project, Nutanix was able to help by arranging a proof of concept trial of NC2 on Microsoft Azure involving actual FLS production workloads.

Schneider Electric delivers data centre project for Loughborough University
Schneider Electric has delivered a new data centre modernisation project for Loughborough University, in collaboration with its elite partner, on365. The project saw Schneider Electric and on365 modernise the university’s IT infrastructure with new energy efficient technologies, including an EcoStruxure Row Data Center, InRow Cooling solution, Galaxy VS UPS and EcoStruxure IT software, enabling the university to harness resilient IT infrastructure, data analytics and digital services to support new breakthroughs in sporting research. As Loughborough University is known for its sports-related subjects and is home to world-class sporting facilities, IT is fundamental to its operations, from its high-performance computing (HPC) servers, which support analytical research projects, to a highly virtualised data centre environment that provides critical applications including finance, administration and security. To overcome a series of data centre challenges, including the need for a complete redesign, modernisation of legacy cooling systems, improved cooling efficiency and greater visibility of its distributed IT assets, the university undertook the project at its Haslegrave and Holywell Park data centres. Delivered in two phases, the project first saw on365 modernise the Haslegrave facility by replacing an outdated raised floor and deploying an EcoStruxure Row Data Center solution. This deployment significantly improved the overall structure, enabling a more efficient data centre design. During the upgrade, it also brought other parts of the infrastructure under the IT department’s control, using new InRow DX units to deliver improved cooling reliability and a greater ability to cope with extreme weather such as heatwaves, which had adversely affected its IT and cooling operations in the past. The solution also created new space for future IT expansion and extended a ‘no single points of failure’ design throughout the facility. This made the environment more suitable for a new generation of compact and powerful servers, and the solution was replicated at Holywell Park thereafter. Further improvements in resilience and efficiency were achieved with Schneider Electric’s Galaxy VS UPS with lithium-ion batteries. “At the foundational level of everything which is data-driven at the university, the Haslegrave and Holywell data centres are the power behind a host of advancements in sports science, and our transition towards a more sustainable operation,” says Mark Newall, IT Specialist at Loughborough University. “Working with Schneider Electric and on365 has enabled our data centre to become more efficient, effective and resilient.” The university has also upgraded the software used to manage and control its infrastructure, deploying the company’s EcoStruxure IT platform to gain enhanced visibility and data-driven insights that help identify and mitigate potential faults before they become critical. This, in conjunction with a new three-year Schneider Electric services agreement delivered via on365, has given the university 24x7 access to maintenance support. The university also operates a large distributed edge network environment protected by more than 60 APC Smart-UPS units. As part of its services agreement, all critical power systems are monitored and maintained via EcoStruxure IT, providing real-time visibility and helping IT personnel to manage the campus network more efficiently.

Ongoing drought could stunt Spanish data centre market
Amid one of the driest springs in Spain, a sector expert has warned that a lack of free cooling capacity could hinder the burgeoning growth of the nation’s data centre market. As hyperscalers and colocation facilities alike grapple with power-related challenges in the FLAP-D markets, data providers are gravitating towards Europe’s tier 2 markets of Zurich, Milan, Madrid and Berlin, with a 2022 projection from CBRE forecasting that these will triple in size by autumn 2023. Of these, Madrid was highlighted as the main beneficiary, with 47MW set to come online in 2022 and 2023. However, following reports that the Spanish water reserve fell below 50% in May, Aggreko has warned that interruptions to free cooling processes have the potential to stifle the market’s ongoing growth.

Billy Durie, Global Sector Head for Data Centres at Aggreko, says, “Spain, and Madrid in particular, is becoming an increasingly attractive location for data centre facilities. The Spanish government’s ‘Digital Spain 2026’ policy is a huge bonus for data providers, while the nation’s wider commitment to realising renewable energy means that energy shortages are less severe compared to other European nations.

“That said, Spain is currently enduring one of the worst droughts recorded this century. Without water, free cooling processes simply aren’t possible, which has the potential to stunt the wider development of the market if left unchecked. For this reason, it’s critical that data centre operators ensure contingency plans are in place in the meantime to maintain business continuity.”

Aggreko recently published a report, Uptime on the Line, which explores the challenges facing European data providers, based on research insights from 700 European data centre consultants. Within the report, changing temperature demands are highlighted as a key area of concern, with extreme weather posing a threat to data centre cooling systems. The report highlights on-site cooling packages as a potential solution, with connection points installed during the colder months of the year, allowing chillers to be quickly brought in to maintain uptime.

Billy concludes, “Right now, one of the main factors making Madrid such an attractive location for new data centres is the lack of power-related challenges associated with the FLAP-D markets. However, this unique position faces the threat of being undermined by a lack of free cooling capacity.

“Extreme weather is by no means a new phenomenon, and seems only to become more common year on year. For this reason, I’d strongly recommend operators to incorporate temporary chillers as a part of their contingency strategy going forwards, to allow the Spanish data centre market to continue to thrive.”


