Thursday, April 24, 2025

Features


Logpoint appoints Michael Haldbo as CFO
Logpoint has announced the appointment of Michael Haldbo as Chief Financial Officer (CFO). Reporting to Logpoint CEO Jesper Zerlang, Michael will be responsible for taking the company through the next step of its journey to become a European cyber security powerhouse.

“We’re excited that Michael is joining the Logpoint team as we grow beyond scaleup and into an established cyber security company. Michael has extensive experience in taking leadership over transformation projects and M&A,” says Jesper Zerlang. “With our recent acquisition by Summa Equity, we have proven that Logpoint has the capabilities and critical mass to take us to the next level, and as we mature the business model, he is an evident choice to support and protect the business financially.”

Michael Haldbo has 20 years of international and Nordic experience in financial planning, analysis and strategy execution. He served as CFO at Signicat, Europe’s leading provider of digital identity solutions, and has also held financial executive roles at other companies in the IT and payments sector, including Nets and Unwire.

“Logpoint has such a strong value proposition with world-class cyber security solutions, competitive pricing models, and the agility and flexibility that enable us to challenge the big mastodons in the SIEM market and become the number one vendor in Europe with a global range,” says Michael Haldbo. “From my perspective, Logpoint ticks all the boxes: scaleup, growth market, a strong business model, transitioning into SaaS and private equity owned. The frosting on the cake is that Logpoint solutions address a major societal challenge, namely the ever-growing cyber threat in the wake of COVID-19 and the war in Ukraine.”

Data centres’ net zero plans blown off track by the energy crisis
According to research published by Schneider Electric, 81% of business leaders at UK and Irish data centres say the energy crisis will impact their organisation’s ability to meet its emissions reduction plans. Of that figure, around half (49%) say they are delaying planned investment in sustainability and net zero programmes. Four in ten (40%) say they now have more immediate business challenges to meet, while 43% claim that emissions reduction targets are no longer an issue for their stakeholders. More than one in five (22%) claim that taking practical action to meet targets is difficult. Yet decarbonisation helps businesses reduce energy use and lower energy costs at a time when energy prices remain volatile.

Crucially, the survey of more than 1,500 large organisations reveals that business leaders still recognise the importance of working to emissions reduction targets: nearly one third (32%) of data centre business leaders believe that climate change and net zero ambitions will become more of a priority over the next three years, and only a small minority (11%) believe that national net zero commitments will be diluted in that time.

“Business leaders tell us that the energy crisis should be seen alongside the many other challenges they have faced over the last twelve months, including economic pressures, cyber security and skills shortages. Yet our research suggests that some of the UK and Ireland’s data centres are ‘kicking the carbon emissions can down the road’ as a result of the energy crisis,” says Mark Yeeles, Vice President, Secure Power Division, Schneider Electric UK and Ireland. “As fears grow about progress against global commitments made under the Paris Agreement, and the UK’s Climate Change Committee warns of a lack of progress on emissions cuts, the UK and Ireland need data centres to play their part and stick to their net zero and emissions reduction targets.”

The survey also reveals that 32% of data centre managers believe energy prices will fall over the next three years, while more than seven out of ten (71%) think their organisation will still be addressing the energy crisis in 12 months’ time.

Presenting the survey findings, Mark Yeeles urged data centres to re-engage with their emissions reduction ambitions: “It’s not all doom and gloom. As our research shows, business leaders still believe in their climate change ambitions – they simply need to push the subject back up the corporate agenda.

“The technology required to help businesses decarbonise is already available – and the return on investment for these solutions has never been more attractive, with payback periods measured in months rather than years. Organisations still have time to meet their net zero commitments by understanding and addressing energy use, investing in renewable energy and energy saving technology, and embedding sustainability and carbon reduction targets in their business plans.

“What’s more, those that invest in green skills and green jobs will reap the rewards of a diverse workforce for decades to come. At Schneider Electric, we’ve seen this for ourselves through our apprenticeship and graduate programmes.”

Nasuni and Presidio expand partnership and sign multi-year agreement
Presidio has announced an expanded partnership with Nasuni. Nasuni is optimising its AWS Cloud use and reducing OpEx through Presidio’s proactive recapture into savings management (PRISM) program, and has also signed a multi-year business agreement to simplify how companies store, protect and manage file data in hybrid cloud environments.

According to industry analysts, cost optimisation is a top concern of CIOs. To better monitor cloud spending and reduce financial risk and operational burden for its cloud and finance teams, Nasuni is leveraging Presidio’s fully managed PRISM program. Presidio manages cost optimisation and uses proprietary data science models to automatically scale cloud commitments up or down on behalf of customers, at no risk to them. With Presidio’s managed services taking care of the operational management of Nasuni’s file data cloud environment, the Nasuni team is saving time and can focus on enhancing the Nasuni product and new innovative features.

Organisations are looking to move their legacy file storage infrastructure to the cloud to centralise control of files and make them easily accessible, on premises or in the cloud, anywhere in the world, so they can use data strategically and optimise productivity. With the Nasuni File Data Platform’s intelligent edge caching, customers can leverage the power of cloud object storage while maintaining local performance. This can translate into a 60% reduction in storage costs compared with legacy storage, as well as the ability to recover from ransomware attacks in minutes. Presidio’s team of technical experts can help customers better manage their file data environment with Nasuni in a hybrid cloud setting, through any one or several cloud providers.

AirTrunk releases report on powering a clean energy future  
As corporations and governments pursue the challenge of achieving a low-carbon future in Asia Pacific & Japan (APJ), AirTrunk has released its ‘Powering a Clean Energy Future’ report, which identifies hyperscale data centres as key drivers in APJ’s energy transition to 24/7 clean energy (CE). The report highlights how a hyperscale data centre’s size, electricity demand profile, innovation capabilities and proven experience in procuring renewable energy put these operators in a prime position to form partnerships that accelerate the transition. Through energy system modelling, the report also determines the most effective technology pathways and costs for reaching 24/7 CE, providing a holistic analysis of what is required.

AirTrunk Head of Energy and Climate, Joscha Schmitz, says, “24/7 clean energy is crucial to achieving climate targets by fully decarbonising power grids. As the major hyperscale data centre provider in APJ, we released this report with the intention to build momentum towards achieving 24/7 clean energy in the region.

“24/7 clean energy is more advanced in the European and North American markets due to resource availability and market maturity. The report outlines opportunities to successfully deliver clean energy technology in APJ, which is the fastest growing region, but the one experiencing the most difficulty in managing the energy transition,” says Joscha.

The report also recognises the need for more industry collaboration and highlights six steps key industry players and governments must take to fully realise the potential of 24/7 CE in APJ:

- Increase and strengthen grid interconnection between markets
- Accelerate ‘green molecules’ and other new firming and storage technologies
- Diversify renewables portfolios with local firming solutions
- Leverage on-site infrastructure to support local grids and power markets
- Shift non-latency-sensitive loads to lower cost markets
- Start the discussion on achieving 24/7 clean energy in a cost-optimal way

AirTrunk Chief Technology Officer, Damien Spillane, says, “Major corporations and governments in APJ have made significant emissions reduction commitments, however in the current climate, it remains challenging to achieve these. That’s why we are calling on energy providers, sustainability groups, corporations and governments to work together, and with us, to facilitate a clean energy future for all.

“We take our responsibility as a key enabler of the transition seriously and will continue to focus our efforts on decarbonisation as we progress toward net zero emissions by 2030,” says Damien.

Why hybrid cooling is the future for data centres
Gordon Johnson, Senior CFD Manager, Subzero Engineering

Rising rack and power densities are driving significant interest in liquid cooling for many reasons. Yet the suggestion that one size fits all ignores a fundamental factor that could hinder adoption: many data centre applications will continue to use air as the most efficient and cost-effective solution for their cooling requirements. The future is undoubtedly hybrid, and by using air cooling, containment and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.

Today, many data centres are experiencing increasing power density per IT rack, rising to levels that just a few years ago seemed extreme and out of reach but are now common and typical, even with air cooling. In 2020, for example, the Uptime Institute found that due to compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres. This increase has left data centre stakeholders wondering whether air-cooled IT equipment (ITE), along with the containment used to separate cold supply air from hot exhaust air, has finally reached its limits, and whether liquid cooling is the long-term solution. However, the answer is not a simple yes or no.

Moving forward, it is expected that data centres will transition from 100% air cooling to a hybrid model encompassing air- and liquid-cooled solutions, with all new and existing air-cooled data centres requiring containment to improve efficiency, performance and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed. One might ask why the debate of air versus liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what is driving the need for liquid cooling, what the other options are, and how we can evaluate these options while continuing to use air as the primary cooling mechanism.

Can air and liquid cooling coexist?

For those who are newer to the industry, this is a position we have been in before: air and liquid cooling successfully coexisted, with substantial amounts of heat removed via intra-board air-to-water heat exchangers. This continued until the industry shifted primarily to CMOS technology in the 1990s, and we have been using air cooling in our data centres ever since.

With air being the primary means of cooling data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked towards making this technology as efficient and sustainable as possible. Since 2004, it has published a common set of criteria for cooling IT servers, developed with the participation of ITE and cooling system manufacturers, entitled ‘TC9.9 Thermal Guidelines for Data Processing Environments’. ASHRAE has focused on the efficiency and reliability of cooling the ITE in the data centre, and several revisions have been published, the latest released in 2021 (revision 5). This latest generation of TC9.9 highlights a new class of high-density air-cooled ITE (the H1 class), which focuses on cooling high-density servers and racks with a trade-off in energy efficiency, due to the lower cooling supply air temperatures recommended to cool the ITE.
As to whether air and liquid cooling can coexist in the data centre white space, they have done so for decades already, and moving forward, many experts expect to see these two cooling technologies coexisting for years to come.

What do server power trends reveal?

It is easy to assume that, when it comes to power and cooling, one size will fit all, both now and in the future, but that is not accurate. It is more important to focus on the actual workload of the data centre we are designing or operating. In the past, a common assumption with air cooling was that once you went above 25kW per rack, it was time to transition to liquid cooling. But the industry has made advances in this regard, enabling data centres to cool up to and even beyond 35kW per rack with traditional air cooling.

Scientific data centres, which include largely GPU-driven applications like machine learning, AI and compute-heavy analytics like crypto mining, are the areas of the industry typically transitioning towards liquid cooling. But for other workloads, such as cloud and most business applications, densities are rising yet air cooling still makes sense in terms of cost. The key is to look at this issue from a business perspective: what are we trying to accomplish with each data centre?

What's driving server power growth?

Up to around 2010, businesses used single-core processors; once available, they transitioned to multi-core processors, yet power consumption remained relatively flat with these dual- and quad-core parts. This enabled server manufacturers to concentrate on lower airflow rates for cooling ITE, which resulted in better overall efficiency. Around 2018, with processor geometries continually shrinking, processors with ever higher core counts became the norm, and with these chips reaching their performance limits, the only way to achieve the new levels of performance demanded by compute-intensive applications was to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of CPU power consumption, some data centres have had difficulty removing the heat with air cooling, creating a need for alternative cooling solutions such as liquid.

Server manufacturers have also been increasing the temperature delta across servers for several years, which has been great for efficiency: the higher the temperature delta, the less airflow is needed to remove the heat. However, server manufacturers are in turn reaching their limits, leaving data centre operators having to increase airflow to cool high-density servers and keep up with rising power consumption.

Additional options for air cooling

Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to and even greater than 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW per rack, with additional supply airflow needed to compensate for recirculated air and hot spots.

What about lowering temperatures?

In 2021, ASHRAE released its fifth-generation TC9.9, which highlighted a new class of high-density air-cooled IT equipment that needs more restrictive supply temperatures than the previous class of servers.
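The relationship between rack power, airflow and temperature delta described above follows directly from the sensible-heat equation, and the numbers make the trade-off concrete. The short Python sketch below estimates the volumetric airflow needed to remove a given rack load at a given delta-T; the air density (1.2 kg/m³) and specific heat (1005 J/kg·K) are standard sea-level values, while the example rack powers and delta-Ts are illustrative assumptions rather than figures from this article.

```python
# Airflow required to remove rack heat with air cooling (sensible heat only).
# Q = m_dot * c_p * dT  =>  V_dot = Q / (rho * c_p * dT)

RHO_AIR = 1.2      # kg/m^3, air density near sea level
CP_AIR = 1005.0    # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_power_w: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb rack_power_w at delta_t_c."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_c)

def m3s_to_cfm(v: float) -> float:
    """Convert m^3/s to cubic feet per minute."""
    return v * 2118.88

if __name__ == "__main__":
    # Illustrative scenarios: a 10kW and a 35kW rack at two temperature deltas.
    for power_kw in (10, 35):
        for dt in (11, 17):  # plausible server delta-T values, in K
            v = airflow_m3_per_s(power_kw * 1000, dt)
            print(f"{power_kw}kW rack, dT={dt}C: {v:.2f} m^3/s ({m3s_to_cfm(v):,.0f} CFM)")
```

Doubling the delta-T halves the required airflow, which is why the rising server temperature deltas mentioned above have been so valuable for efficiency, and why racks approaching 35kW demand either very aggressive airflow or a move to liquid.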
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor in the next few years. But this transition is not automatic, and it will not be right for everyone. Liquid cooling is not going to be the ideal solution or remedy for all future cooling requirements. Instead, the choice of liquid cooling over air cooling depends on a variety of factors, including location, climate (temperature and humidity), power densities, workloads, efficiency, performance, heat reuse and the physical space available.

This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. It will not, and should not, be an approach where only air or only liquid cooling is considered moving forward. Instead, the key is to understand the trade-offs of each cooling technology and deploy only what makes the most sense for the application.

Consult Red celebrates 20 years of innovation
Consult Red has marked its 20th anniversary of continuous success, growth and innovation. Over the past two decades, it has helped its clients transform the media, telecommunications and IoT technology landscapes. Since its inception in 2003, the company has remained steadfast in its commitment to delivering trusted consultancy and high-quality engineering services while embracing technological advancements and industry trends. Throughout the years, it has built an enviable reputation for its dedication to excellence, customer satisfaction and innovative solutions.

“We’ve reached this significant milestone thanks to our valued clients and the work of our talented and dedicated team,” says Raghu Venkatesam, CEO at Consult Red. “We are grateful for the long-term trust and support of our clients, partners and stakeholders, who have been instrumental in our continued growth over the past two decades.”

Over the past 20 years, the company has achieved numerous milestones, including:

- Contributing to innovative product launches for its key customers, including set-tops, connected TV devices and embedded software services for media and connectivity operators across Europe, the US and Asia
- Delivering connected devices and systems for industrial and IoT applications, including vehicle charging, industrial vision, telehealth, power management, consumer devices and wireless connectivity
- Establishing itself as an employee-owned company, giving employees a stake in the business and ensuring long-term stability for clients
- Nurturing a talented and diverse global workforce that drives innovation and fosters a culture of collaboration and excellence

Paying attention to data centre storage cooling
Authored by Neil Edmunds, Director of Innovation, Iceotope

With constant streams of data emerging from the IoT, video, AI and more, it is no surprise we are expected to generate 463EB of data each day by 2025. How we access and interact with data is constantly changing, and this will have a real impact on the processing and storage of that data. In just a few years, global data storage is predicted to exceed 200ZB, with half of that stored in the cloud. This presents a unique challenge for hyperscale data centres and their storage infrastructure.

According to Seagate, cloud data centres choose mass-capacity hard disk drives (HDDs) to store 90% of their exabytes. HDDs are tried and tested technology, typically found in a 3.5in form factor, and they continue to offer data centre operators cost-effective storage at scale. The current top-of-the-range HDD features 20TB of capacity; by the end of the decade, that is expected to reach 120TB+, all within the existing 3.5in form factor.

The practical implications point to a need for improved thermal cooling solutions. More data storage means more spinning of the disks, higher-speed motors and more actuators, all of which translates into more power being used. As disks go up in power, so does the amount of heat they produce. In addition, the introduction of helium into hard drives over the last decade has not only improved performance, thanks to less drag on the disks, but also means the units are now sealed.

There is also ESG compliance to consider. With data centres consuming 1% of global electricity demand, and cooling accounting for more than 35% of a data centre’s total energy consumption, pressure is on data centre owners to reduce this consumption.

Comparison of cooling technologies

Traditionally, data centre environments use air cooling technology. The primary way of removing heat with air cooling is by pulling increasing volumes of airflow through the chassis of the equipment. Typically, there is a hot aisle behind the racks and a cold aisle in front of them, and heat is dissipated by exchanging warm air with cooler air. Air cooling is widely deployed, well understood and well ingrained into nearly every data centre around the world. However, as data volumes grow, it is becoming increasingly likely that air cooling will no longer be able to ensure an appropriate operating environment for energy-dense IT equipment.

Technologies like liquid cooling are proving to be a much more efficient way to remove heat from IT equipment. Precision liquid cooling, for example, circulates small volumes of dielectric fluid across the surface of the server, removing almost 100% of the heat generated by the electronic components. There are no performance-throttling hotspots, and none of the front-to-back air cooling or bottom-to-top immersion constraints present in tank solutions. While initial applications of precision liquid cooling have been in a sealed chassis for cooling server components, given the increased power demands of HDDs, storage devices are also an ideal application.

High density storage demands

With high-density HDDs, traditional air cooling pulls air through the system from front to back. As the cold air travels through the JBOD device it heats up, so the disks at the front run much cooler than those at the back.
This can result in a temperature differential of 20°C or more between the disks at the front and back of the unit, depending on the capacity of the hard drives. For any data centre operator, consistency is key. When disks vary by nearly 20°C from front to back, there is inconsistent wear and tear on the drives, leading to unpredictable failure. The same goes for variance across the height of the rack, as lower devices tend to consume the cooler airflow coming up from the floor tiles.

Liquid cooling for storage

While there will always be variances and different tolerances within any data centre environment, liquid cooling can mitigate these variances and improve consistency. In 2022, Meta published a study showcasing how an air-cooled, high-density storage system was re-engineered to use single-phase liquid cooling. The study found that precision liquid cooling was a more efficient means of cooling the HDD racks, with the following results:

- The variance in temperature of all HDDs was just 3°C, regardless of location inside the JBODs
- HDD systems could operate reliably with rack water inlet temperatures of up to 40°C
- System-level cooling power was less than 5% of total power consumption
- Acoustic vibration issues were mitigated

While consistency is a key benefit, cooling all disks at a higher water temperature is important too, as it means data centre operators do not need to provide chilled water to the unit. Reduced resource consumption – electrical, water, space, audible noise – leads to a greater reduction in TCO and improved ESG compliance, both of which are key benefits for today’s data centre operators.

As demand for data storage continues to escalate, so will the solutions hyperscale data centre providers need to cool the equipment efficiently. Liquid cooling for high-density storage is proving to be a viable alternative, as it cools the drives at a more consistent temperature and removes the vibration from fans, with lower overall end-to-end power consumption and improved ESG compliance. At a time when data centre operators are under increasing pressure to reduce energy consumption and improve sustainability metrics, this technology may not only be good for the planet, but also good for business.

Enabling innovation in storage systems

Today’s HDDs are designed with forced air cooling in mind, so it stands to reason that air cooling will continue to play a role in the short term. For storage manufacturers to embrace new alternatives, demonstrations of liquid cooling technology like the one Meta conducted are key to ensuring adoption. Looking at technology trends, constantly increasing fan power in a rack will not be a sustainable long-term solution: data halls are not getting any larger, costs to cool a rack are increasing, and the need for more data storage capacity at greater density is growing exponentially.

Storage designed for precision liquid cooling will be smaller, use fewer precious materials and components, perform faster and fail less often. The ability to deliver a more cost-effective HDD storage solution in the same cubic footprint delivers not only a TCO benefit but also greater ESG value. Making today’s technology more efficient and removing the limiting factors for new, game-changing data storage methods can help us meet the global challenges we face and is a step towards enabling a better future.
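The front-to-back differential described above can be approximated with simple sensible-heat arithmetic: each row of drives adds its heat to the airstream, so the air arriving at each successive row is warmer. The sketch below models this for a hypothetical JBOD; the drive count, per-drive power and airflow figures are illustrative assumptions, not numbers from the Meta study.

```python
# Approximate front-to-back air temperature rise inside an air-cooled JBOD.
# Each row heats the airstream: dT_row = P_row / (rho * V_dot * c_p)

RHO_AIR = 1.2     # kg/m^3
CP_AIR = 1005.0   # J/(kg*K)

def row_temperatures(inlet_c, rows, drives_per_row, watts_per_drive, airflow_m3s):
    """Return the approximate air temperature reaching each row of drives."""
    temps = []
    t = inlet_c
    for _ in range(rows):
        temps.append(t)  # temperature of air arriving at this row
        row_power = drives_per_row * watts_per_drive
        t += row_power / (RHO_AIR * airflow_m3s * CP_AIR)
    return temps

if __name__ == "__main__":
    # Hypothetical 60-drive JBOD: 6 rows of 10 drives at 12W, 0.025 m^3/s airflow.
    temps = row_temperatures(inlet_c=25.0, rows=6, drives_per_row=10,
                             watts_per_drive=12.0, airflow_m3s=0.025)
    for i, t in enumerate(temps, 1):
        print(f"row {i}: {t:.1f} C")
    print(f"front-to-back spread: {temps[-1] - temps[0]:.1f} C")
```

Under these assumed values, the last row sees air nearly 20°C warmer than the first, the same scale of differential the article describes; a liquid-cooled design removes this dependence on position in the air path.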

GovAssure, cyber security and NDR
By Ashley Nurcombe, Senior Systems Engineer UK&I, Corelight

We live in a world of escalating digital threats to government IT systems. The public sector has recorded more global incidents and data breaches than any other sector over the past year, according to a recent Verizon study. That is why it is heartening to see the launch of the new GovAssure scheme, which mandates stringent annual cyber security audits of all government departments, based on a National Cyber Security Centre (NCSC) framework. Now the hard work starts. As government IT and security leads begin to work through the strict requirements of the Cyber Assessment Framework (CAF), they will find network detection and response (NDR) increasingly critical to these compliance efforts.

Why we need GovAssure

GovAssure is the government’s response to surging threat levels in the public sector. It is not hard to see why the sector is such an attractive target. Government entities hold a vast range of lucrative citizen data that could be used to carry out follow-on identity fraud. Government services are also a big target for extortionists looking to hold departments hostage with disruptive ransomware. And there is plenty of classified information for foreign powers to go after in pursuit of geopolitical advantage.

Contrary to popular belief, most attacks are financially motivated (68%) rather than nation-state attempts at espionage (30%). That means external organised crime gangs are the biggest threat to government security. However, internal actors account for nearly a third (30%) of breaches, and collaboration between external parties and government employees or partners accounts for 16% of data breaches. When the cause of insider risk is malicious intent rather than negligence, it can be challenging to spot, because staff may be using legitimate access rights and going to great lengths to achieve their goals without being noticed.

Phishing and social engineering are still among threat actors’ most popular attack techniques. They target distracted and/or poorly trained employees to harvest government logins and personal information. Credentials are gathered in an estimated third of government breaches, while personal information is taken in nearly two-fifths (38%). Arguably, the shift to hybrid working has created more risk here, as staff admit to being more distracted when working from home (WFH), and personal devices and home networks may be less well protected than their corporate counterparts.

The growing cyber attack surface

Several other threat vectors are frequently probed by malicious actors, including software vulnerabilities. New Freedom of Information data reveals that a worrying number of government assets are running outdated software that vendors no longer support. Connected Internet of Things (IoT) devices are an increasingly popular target, especially those with unpatched firmware or factory-default or easy-to-guess passwords. Such devices can be targeted to gain a foothold in government networks and/or to sabotage smart city services.

Finally, the government has a significant supply chain risk management challenge. Third-party suppliers and partners are critical to efficiently delivering government services, but they also expand the attack surface and introduce additional risk, especially if third parties are not properly and continuously vetted for security risks. Take the recent ransomware breach at Capita, an outsourcing giant with billions of pounds of government contracts.
Although investigations are still ongoing, as many as 90 of the firm’s clients have already reported data breaches due to the attack.

What the CAF demands

In this context, GovAssure is a long-overdue attempt to enhance government resilience to cyber risk. In fact, Government Chief Security Officer, Vincent Devine, describes it as a “transformative change” in the government’s approach to cyber that will deliver better visibility of the challenges, set clear expectations for departments and empower security professionals to strengthen the investment case. Yet delivering assurance will not be easy. The CAF lists 14 cyber security and resilience principles, plus guidance on using and applying them. These range from risk and asset management to data, supply chain and system security, network resilience, security monitoring and much more. One thing is clear: visibility into network activity is a critical foundational capability on which to build CAF compliance programmes.

How NDR can help

NDR tools provide that visibility. It enables teams to map assets better, ensure the integrity of data exchanges with third parties, monitor compliance and detect threats before they have a chance to impact the organisation. Although the CAF primarily focuses on finding known threats, government IT leaders should consider going further, with NDR tooling designed to go beyond signature-based detection to spot unknown but potentially malicious behaviour. Such tools might use machine learning algorithms to learn what regular activity looks like, the better to spot the signs of compromise. If they do, IT leaders should avoid purchasing black-box tools that do not allow flexible querying, or that provide results without showing their rationale: such tools can add opacity and create assurance and compliance headaches. Open-source tools based on Zeek may offer a better and more reasonably priced alternative.

Ultimately, there are plenty of challenges for departments looking to drive GovAssure programmes. Limited budgets, scarce in-house skills, complex cyber threats and a growing compliance burden will all take their toll. But by reaching out to private sector security experts, there is a way forward. For many, that journey will begin with NDR, to safeguard sensitive information and critical infrastructure.
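As a concrete illustration of the baseline-driven visibility described above, the sketch below parses a Zeek conn.log (in Zeek’s default tab-separated format, whose column names are declared on the `#fields` header line) and flags connections to rarely seen destinations, along with unusually long-lived sessions. It is a minimal teaching example of the idea, not a substitute for an NDR product; the thresholds and the `conn.log` path are assumptions.

```python
# Minimal sketch: flag rare destinations and long-lived connections in a Zeek conn.log.
# Assumes Zeek's default TSV output, where a "#fields" line names the columns.
from collections import Counter

LOG_PATH = "conn.log"        # assumed path to a Zeek connection log
RARE_THRESHOLD = 2           # destinations contacted fewer times than this are "rare"
LONG_DURATION_SECS = 3600.0  # flag sessions longer than an hour

def read_conn_log(path):
    """Yield each record as a dict keyed by the #fields header names."""
    fields = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]
            elif line and not line.startswith("#"):
                yield dict(zip(fields, line.split("\t")))

records = list(read_conn_log(LOG_PATH))

# Build a simple baseline: how often each responder host is contacted.
dest_counts = Counter(r["id.resp_h"] for r in records)

for r in records:
    dest = r["id.resp_h"]
    # Zeek writes "-" when a field is unset.
    duration = float(r["duration"]) if r.get("duration", "-") != "-" else 0.0
    if dest_counts[dest] < RARE_THRESHOLD:
        print(f"rare destination: {r['id.orig_h']} -> {dest}:{r['id.resp_p']}")
    if duration > LONG_DURATION_SECS:
        print(f"long-lived session: {r['uid']} lasted {duration:.0f}s to {dest}")
```

A real NDR platform builds far richer baselines than this, but the principle, deriving “normal” from the organisation’s own traffic and surfacing deviations with an inspectable rationale, is exactly what the article argues CAF compliance rewards.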

Castrol and Hypertec accelerate immersion cooling technology
Castrol has announced a collaboration with Hypertec to accelerate the widespread adoption of Hypertec’s immersion cooling solutions for data centres, supported by Castrol’s fluid technology. The two companies will develop and test the immersion cooling technology at Castrol’s global headquarters in Pangbourne, UK.

Castrol announced in 2022 that it will invest up to £50m in its Pangbourne headquarters. It is pleased to have the first systems in place and fully functional, so research can begin on furthering immersion cooling technologies across systems, servers and fluids to provide world-class, integrated solutions to customers.

Hypertec is the first server OEM to join Castrol in its drive to accelerate immersion cooling technology. The two will leverage Castrol’s existing collaboration with Submer, a leader in immersion cooling technology, which has provided its SmartPod and MicroPod tank systems to the Pangbourne facility; these have been modified to test new fluids and new server technologies. Working together, Castrol will continue to develop its offers for data centre customers and look to accelerate the adoption of immersion cooling as a path towards more sustainable and more efficient data centre operations. With immersion cooling, water usage and the power consumption needed to operate and cool server equipment can be significantly reduced.

Vertiv's guidance on data centres during extreme heat
Summer in the northern hemisphere has just started, but devastating heatwaves have already washed over much of the US, Mexico, Canada, Europe and Asia. Widespread wildfires in Canada have triggered air quality alerts across that country and much of the eastern half of the US; similar extreme heat events across Asia have caused widespread power outages; and Europe, the fastest-warming continent, continues to break heat records. The data centre cooling experts at Vertiv have issued updated guidance for managing the extreme heat.

Climate change has made the past eight years the hottest on record, but with an El Niño weather pattern compounding the issue this year, many forecasts anticipate record-breaking temperatures in 2023. The sizzling outdoor temperatures and their aftermath create significant challenges for data centre operators, who already wage a daily battle with the heat produced within their facilities. There are steps organisations can take to mitigate the risks associated with extreme heat. These include:

- Clean or change air filters: The eerie orange haze that engulfed New York was a powerful visual representation of one of the most immediate and severe impacts of climate change. For data centre operators, it should serve as a reminder to clean or change the air filters in their data centre thermal management and HVAC systems. Those filters help to protect sensitive electronics from particulates in the air, including smoke from faraway wildfires.
- Accelerate planned maintenance and service: Extreme heat and poor air quality tax more than data centre infrastructure systems. Electricity providers often struggle to meet the surge in demand that comes with higher temperatures, and outages are common. Such events are not the time to learn about problems with a UPS system or cooling unit. Cleaning condenser coils and maintaining refrigerant charge levels are examples of proactive maintenance that can help prevent unexpected failures.
- Activate available efficiency tools: Many modern UPS systems are equipped with high-efficiency eco-modes that can reduce the amount of power the system draws from the grid. Heatwaves like those seen recently push the grid to its limits, meaning any reduction in demand can be the difference between uninterrupted service and a devastating outage.
- Leverage alternative energy sources: Not all data centres have access to viable alternative energy, but those that do should leverage off-grid power sources. These could include on- or off-site solar arrays or other alternative sources, such as off-site wind farms and lithium-ion batteries, to enable peak shifting or shaving. Use of generators is discouraged during heatwaves unless an outage occurs, as diesel generators produce more of the greenhouse gases and emissions associated with climate change than backup options that use alternative energy. In fact, organisations should postpone planned generator testing when temperatures are spiking.

“These heatwaves are becoming more common and more extreme, placing intense pressure on utility providers and data centre operators globally,” says John Niemann, Senior Vice President for the Global Thermal Management Business at Vertiv. “Organisations must match that intensity with their response, proactively preparing for the associated strain not just on their own power and cooling systems, but on the grid as well.
Prioritising preventive maintenance and collaborating with electricity providers to manage demand can help reduce the likelihood of any sort of heat-related equipment failure.”

“Again this year, parts of Europe are experiencing record-setting heat, and in our business we specifically see the impact on data centres. Prioritising thermal redundancy and partnering with a service provider with a widespread local presence and first-class restoration capabilities can make the difference in data centre availability,” says Flora Cavinato, Global Service Portfolio Director. “Swift response times and proactive maintenance programmes can help organisations sustain their business operations while effectively optimising their critical infrastructure.”
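The eco-mode advice above can be roughly quantified: running a double-conversion UPS in eco-mode typically lifts efficiency by a few percentage points, and the grid-power saving follows from the efficiency ratio. The sketch below works through that arithmetic; the 94% and 99% efficiency figures and the 500kW load are illustrative assumptions, not Vertiv specifications.

```python
# Rough estimate of the grid-power reduction from running a UPS in eco-mode.
# Input power = IT load / efficiency, so the saving comes from the efficiency gain.

def ups_input_kw(load_kw: float, efficiency: float) -> float:
    """Power drawn from the grid to serve load_kw at a given efficiency."""
    return load_kw / efficiency

# Assumed values for illustration only.
LOAD_KW = 500.0          # IT load carried by the UPS
EFF_DOUBLE_CONV = 0.94   # assumed double-conversion efficiency
EFF_ECO = 0.99           # assumed eco-mode efficiency

normal = ups_input_kw(LOAD_KW, EFF_DOUBLE_CONV)
eco = ups_input_kw(LOAD_KW, EFF_ECO)
saving_kw = normal - eco

print(f"normal mode draw: {normal:.1f} kW")
print(f"eco mode draw:    {eco:.1f} kW")
print(f"grid relief:      {saving_kw:.1f} kW "
      f"({100 * saving_kw / normal:.1f}% less demand during a heatwave peak)")
```

Under these assumptions a single 500kW UPS sheds roughly 27kW of grid demand, which illustrates why utilities value even modest demand reductions when a heatwave pushes the grid to its limits.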


