Infrastructure Management


atNorth opens sixth data centre in the Nordics
atNorth has formally announced that its third Iceland data centre, ICE03, is now fully operational with an initial capacity of 10MW, following a swift 11-month build. This brings its total number of operational data centres to six, with one additional site, FIN02, under construction in Finland. The new site is a milestone in the company’s overarching goal to scale ahead of increasing demand for high-performance computing, at a time when cost-efficient, sustainable infrastructure is in more demand than ever.
The site occupies a strategic position. Iceland is ranked in the top 10 markets for data centre location, and atNorth’s Iceland entry recently won the ‘location’ category at the Tech Capital Global Awards, which recognises a geography for its attractiveness and investor-friendly climate for digital infrastructure investors. The country is an ideal location for data centres, largely due to its access to a highly skilled workforce and its cool climate, which is crucial for cost-effective cooling of data centre infrastructure. It also has an energy supply run on a closed grid powered by 100% renewable hydro and geothermal energy sources. Iceland benefits from fully redundant connectivity too, and now boasts multiple undersea fibre optic cables connecting the country to the UK, Ireland, North America, and mainland Scandinavia.
The ICE03 site offers expansion possibilities of up to 50MW and is located 250km north of Reykjavík, where other data centres are predominantly located. This geographical separation offers advantages in terms of disaster recovery and enhanced security. By diversifying the location of its data centres, atNorth reinforces its commitment to ensuring a high level of data protection and business continuity for its clients. Additionally, the town of Akureyri is a thriving technology hub, and the new centre will offer job opportunities to an already highly skilled workforce.
Iceland’s cool climate and abundance of renewable energy allow businesses to tap into an infrastructure with green power and great connectivity, resulting in significant cost efficiencies. ICE03’s accessibility, being only 10 minutes from an international airport, presents a further opportunity for atNorth to deliver high-precision services to European businesses as they look to decarbonise and migrate IT operations cost-efficiently.
“We are delighted to be expanding our presence in the Nordics once again with a third site in Iceland,” says Eyjólfur Magnús Kristinsson, CEO, atNorth. “With six operational sites across three Nordic countries and another in development, our commitment to meeting increasing demand in the industry through continued expansion is evident. Furthermore, our dedication to sustainable best practice supports our goal to become the service provider of choice for eco-friendly high-performance infrastructure.”

Macquarie Data Centres upgrades Sydney and Canberra campuses
Macquarie Data Centres has announced that it has completed major upgrades to its data centre campuses in Sydney and Canberra, helping government and enterprise customers expand their capacity and improve their security posture and compliance.
The multimillion-dollar project includes the addition of two further ultra-secure zones across the Sydney and Canberra campuses, plus significant power upgrades and increased operational efficiency to support new and existing customer growth. The zones meet the requirements of the Australian Federal Government’s Protective Security Policy Framework, which covers both the physical and cyber security standards that underpin Macquarie Data Centres’ operations.
The upgrades also increased rack capacity and expanded other secure zones in its campuses across the two cities, all of which are ready for occupancy. This will help customers plan for additional capacity as they run more data-intensive workloads such as artificial intelligence.
“These upgrades give our local and international customers the capacity they need to scale their businesses and expand their Australian footprint,” says David Hirst, Group Executive, Macquarie Data Centres. Current data centre customers include 42% of Commonwealth Government agencies, the world’s big hyperscalers and large multinationals.
“Capacity planning is one of the key issues organisations face when making data centre investments - whether they’ll have sufficient runway to scale for the data demands that will impact them over time,” says David. “They need expert colocation partners that understand not just capacity, but the related security, compliance, and sovereignty considerations. This investment is a testament to our ability to be that trusted partner.”
These major projects were undertaken in operational data centre halls and were completed within six months without any outages or disruption to the company’s existing customers. The project was completed ahead of time, under budget and, importantly, with zero lost time injuries (LTIs) or medical treatment injuries (MTIs), continuing Macquarie Data Centres’ record for safety.
“Anyone who works in the data centre industry will know the level of planning, expertise and collaboration needed to undertake a project of this magnitude,” says Gavin Kawalsky, Head of Projects, Macquarie Data Centres. “The project’s success is down to our team’s tireless work, expertise and experience.”

Aruba and Namex to create a new PoP at the Hyper Cloud Data Centre
Aruba has announced an agreement with Namex to activate a new point of presence at Aruba’s Hyper Cloud Data Centre (IT4) in Rome. The campus, the largest data centre campus under construction in Rome, will extend over an area of 74,000m². When fully operational, it will comprise five independent data centre buildings with a total of 30MW of IT power, delivered with a redundancy level of 2N or higher. The campus will count on the presence of major national telecommunications infrastructure to guarantee high-performance network interconnections.
The Hyper Cloud Data Centre has a twofold objective: to meet the needs of the private sector and public administration by offering them customised hyperscale technology solutions, and to serve central and southern Italy’s demand for digital services. Hosting Namex’s point of presence is part of the broader ‘carrier neutral’ strategy adopted by Aruba for its data centres, conceived not only to allow customers to benefit from extremely reliable and high-performance internet connection solutions, but also to favour the development of interconnections between network operators to the benefit of the entire ecosystem. The ample availability of space and power, combined with a wide and unrestricted choice of connectivity options, will make the Aruba campus an ideal infrastructure to host the systems of customers of any size, from SMEs to hyperscalers, cloud service providers and the public administration.
“We are very pleased with the agreement reached with Namex for the opening of its point of presence at the new data centre campus in Rome, which we will inaugurate within a few months. The increase in the quantity and capacity of domestic connections and the ever-increasing importance of services provided in the cloud require adequate development of telecommunications networks and their interconnection with the main data centres and interchange points throughout the country,” comments Stefano Cecconi, CEO of Aruba. “Ponte San Pietro, Arezzo and Rome, the sites of our Italian data centres, represent three strategic locations, connected to the country’s main traffic nodes and hosting tens of thousands of servers providing services of all kinds. They thus stand as ideal infrastructures where telcos and service providers, both national and international, can deliver their services and exchange traffic efficiently and reliably, to the great benefit of Italian end users.”
“The opening of this new data centre campus is very important for us - a further sign of Rome’s growth as an internet interconnection hub at the centre of Italy and the Mediterranean. The new point of presence will allow Namex to continue to grow at the high pace of recent years, which have seen strong growth in traffic and a tripling of the networks connected at our interchange point, reaching over 200 connected providers, both national and international,” comments Maurizio Goretti, CEO of Namex. “The presence of our IXP within a large data centre such as Aruba’s will allow us to expand the offer of the Roman hub and respond to providers requiring installations in the order of tens of racks and megawatts of power.”

Data centre construction oversight could cause costly downtime
Data centre operators are being warned about an oversight that could lead to downtime and costly remedial construction work, according to a new sector report. Concern is being raised around incorrectly designed and installed fluing for backup generators, which can lead to overheating and system failure during grid outages. Downtime caused by critical infrastructure malfunction may not only lead to steep penalties for data centre operators, but also cause reputational damage to the construction and design teams involved. With research from the Mediterranean Center of Social and Educational Research (MCSER) claiming that around 30% of data centre construction is remedial work, incorrect design and installation of building services is a disruptive and costly issue that could be avoided.
To support the industry with best practices for flue specification and installation in data centre design and construction, chimney and flue manufacturer Schiedel has detailed its recommended ‘Critical Path’ process in its new report. Dean Moffatt, Technical Sector Expert at Schiedel, explains, “This paper aims to address the chimney blind spot in the industry by promoting a 'flue first' mindset, using insights from Schiedel's team of experts. It discusses the various factors that make correct specification crucial, outlines key considerations for contractors and architects, and provides a better understanding of what a successful installation entails.”
With Savills research citing the need for data centre construction to more than double by 2025, ensuring all critical infrastructure is installed correctly first time is essential. With pressure on operators to deliver capacity to meet growing demand, risking the financial penalties and reputational damage that remedial works could pose is not an option.
Dean adds, “At Schiedel, we are committed to providing excellent assistance to those who are part of the data centre development process, especially during the critical construction phases. Our goal is to set a standard for the use of flues in data centres, which is crucial as the industry continues to expand.”

How to prioritise efficiency without compromising on performance
By Martin James, VP EMEA at Aerospike
With the energy sector in turmoil and the energy price cap now set to be in place only until April 2023, financial uncertainty is a thread that runs through all businesses, not least those running data centres filled with power-hungry servers. Data centre operators have, however, been addressing the issue of energy for years now. Their priority is no longer to pack in as many active servers as possible, but instead to design facilities that deliver efficient, resilient services that can minimise not only their own carbon footprint, but their customers’ too.
At one point, the idea of reducing server count would have been seen as a bad proposition for cloud providers and data centre operators: it would decrease consumption and lower profits. But with increasing concerns about climate change, attitudes have changed. Many operators today have ESG strategies and goals, and minimising the impact of escalating energy prices is just part of a broader drive towards reaching net zero carbon.
Among the hyperscalers, for example, Amazon announced earlier this year that it had increased the capacity of its renewable energy portfolio by nearly 30%, bringing the total number of its projects to 310 across 19 countries. These help to power its data centres and have made Amazon the world’s largest corporate buyer of renewable energy. As internet usage grows globally, Google has also set its sights on moving to carbon-free energy by 2030. In a blog post last autumn, Google’s CEO Sundar Pichai outlined how, in the newest building at its California HQ, the lumber is all responsibly sourced and the exterior is covered in solar panels that are set to generate about 40% of the energy the building uses.
Using solutions to drive down PUE
While opting for greener energy sources, data centres of all sizes can also lower their carbon footprint by reducing their ‘power usage effectiveness’ (PUE) - a measure of how much of the energy entering a facility actually reaches the computing equipment within it. TechTarget describes it as dividing the total amount of power entering a data centre by the power used to run the IT equipment within it; the closer the result is to 1.0, the more efficient the facility. PUE ratings have become increasingly important, not just to data centre operators who want to be seen to be efficiently managing their facilities, but also to their customers, who benefit from the improved efficiency.
With the drive towards carbon neutrality filtering through the entire ecosystem of data centres and the companies that work in partnership with them, a range of both hardware and software solutions are being used to drive down PUE. A recent IEEE paper focused on CO2 emissions efficiency as a non-functional requirement for IT systems. It compared the emissions efficiency of two databases, one of which was ours, and their costs. It concluded that their ability to reduce emissions would not only have a positive impact on the environment but would also reduce expenditure for IT decision-makers. This makes sense as efficiency starts to take priority over scaling resources to deliver performance. Adding extra servers is no longer environmentally sustainable, which is why data centres are now focused on how they can use fewer resources and achieve a lower carbon footprint without compromising on either scalability or performance.
Cut server counts without a performance trade-off
A proven method for doing this in data centres is deploying real-time data platforms. These allow organisations to take advantage of billions of real-time transactions using massive parallelism and a hybrid memory model. This results in a tiny server footprint - our own database requires up to 80% less infrastructure - allowing data centres to drastically cut their server count, lower cloud costs for customers, improve energy efficiency and free up resources that can be used for additional projects.
Modern real-time data platforms provide unlimited scale by ingesting and acting on streaming data at the edge. They can combine this with data from systems of record, third-party sources, data warehouses, and data lakes. Our own database delivers predictable high performance at any scale, from gigabytes to petabytes of data, at the lowest latency. This is why, earlier this month, we announced that our latest database running on AWS Graviton2 - the processors housed in data centres that support huge cloud workloads - has been proven to deliver price-performance benefits of up to an 18% increase in throughput alongside up to a 27% reduction in cost, not to mention vastly improved energy efficiency.
The fiercely competitive global economy demands computing capacity, speed and power, but this can no longer come at the price of our planet. The concerns for data centres are not just to combat escalating energy prices, but to turn the tide on energy usage. Meeting ESG goals means becoming more energy efficient, reducing server footprints and taking advantage of a real-time data platform to ensure performance is unaffected. Every step that data centres take towards net zero is another indicator to customers that they have not just their best interests at heart, but those of the environment too.
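For readers who want to see the PUE arithmetic laid out, here is a minimal sketch in Python of the TechTarget definition described above; the kW figures are purely illustrative, not drawn from any facility mentioned in this piece.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A value of 1.0 would mean every watt entering the facility reaches
    the IT equipment; real facilities sit somewhere above that.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: a facility drawing 1,500 kW in total,
# of which the IT equipment uses 1,200 kW.
print(f"PUE = {pue(1500, 1200):.2f}")  # PUE = 1.25
```

Lowering cooling and power-distribution overhead shrinks the numerator, pulling the ratio towards the ideal of 1.0 - the direction every efficiency measure described above is pushing.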

Edge ecosystems, 5G and hybrid cloud to drive change in 2023
If data is the fuel powering the global economy, then data centres are the backbone on which it sits. Despite economic slowdowns, the upward trend in cloud infrastructure and mobile connectivity has continued, and with latency expectations increasing, the distance that data travels has never been more critical. Looking ahead to 2023, Pulsant has identified six predictions for the rapidly changing world of infrastructure.
Ecosystems will power UK business to the edge
As the year progresses, organisations engaged in major projects such as smart cities or industrial IoT implementations will seek out ecosystems over a single-vendor approach. Their dependence on data means these projects will require edge computing for advances such as intelligent, interactive transport systems, remote AI-powered live video analysis, or highly automated, complex manufacturing. In each case, organisations will want access to more than just a data centre. They will want edge expertise and an ecosystem of companies with specialist understanding of use cases and specific types of connectivity and backhaul. And they will want to avoid being totally reliant on a single vendor: more than one view and multiple options are necessary so organisations can maximise performance and resilience while keeping a lid on cost. Mobile edge computing ecosystems, for example, will facilitate faster and more flexible deployment of location-based services, along with content delivery applications.
Connectivity will be about more than a mast
2023 will be the year when connectivity comes more sharply into focus. In an EY survey of UK businesses, 43% admitted they struggle to understand how 5G connectivity relates to emerging technologies such as edge computing - worse than the global average of 39%. Yet 5G will continue to grow in the UK. Applications focused on real-time and aggregated data analytics need connectivity that has either low jitter, loss and lag, or dedicated high bandwidth. The telcos have been first movers in this market with 5G, but carrier fibre delivers waves (dedicated wavelength services) that are more dependable. MECs (multi-access edge computing environments) provide IT services, compute and cloud access, but these will soon give way to sliced radio networks or shared services at the metropolitan level. There are already live use cases in the transport and energy sectors, but large-scale adoption will follow once edge infrastructure platforms have fully developed their low-latency connectivity, high-speed backhaul to the public cloud and local computing capabilities. Businesses looking to implement these technologies will want to benefit from direct connectivity to the world’s top cloud providers while processing data locally to achieve the right level of latency and cost-optimisation. Organisations will seek simplicity in cloud connectivity partnerships, to avoid the complexity of using different exchanges and third-party networks.
Regional data centres will hog more of the limelight
Regional data centres will continue their significant growth. ResearchAndMarkets said this year that regional data centres outside the M25 and Slough are adding 20,000m² annually, with overall data centre revenue growth of 36% up to 2025. The drivers behind these figures include the global explosion of SaaS applications and the demand for edge infrastructure. Increasing numbers of regional businesses want low-latency, high-bandwidth connectivity so they can implement AI technologies and reap the benefit of SaaS applications. SaaS companies want to deliver those applications from edge data centres, and for most of the UK this is only possible from data centres strategically sited in the regions. A route-diverse network of edge data centres connected by high-speed fibre, with backhaul to all the hyperscaler hubs, will become increasingly essential. The development of UK-wide edge computing platforms will continue to shift the way businesses operate and will improve the quality of life for millions of people living outside the main metropolitan areas. It is already starting to transform content delivery, virtual reality, real-time advertising, and even remote healthcare.
Streaming will continue to outrun smart cars for now
Smart vehicles are an exciting and massive use case for edge computing, but 2023 is unlikely to be their break-out year. For the foreseeable future, the explosion of video streaming services will outrun smart vehicles. Autonomous vehicles are expected to account for only about 12% of registrations by 2030. The video streaming market, by contrast, could grow by more than 20% between now and 2030, thanks to the ability of edge computing infrastructure to support data-intensive content delivery and high levels of personalisation through AI. But as the development of smart cars continues, edge computing will be at the centre of collaborations between the designers and implementers of the many technologies and systems required. Edge data centres will process and filter the masses of data that smart cars and their infrastructure generate and depend on.
UK business gets the tools to make hybrid cloud kick
Hybrid cloud is going to grow even faster in 2023. The global hybrid cloud market was valued at $85 billion in 2021, and Statista forecasts it to grow to $262 billion by 2027. Hybrid architectures can be notoriously complex and costly to operate, but the advent of next-generation cloud management platforms is removing many of these drawbacks. Organisations choosing a hybrid cloud architecture combine the best of public and private clouds and on-premises data centres. They can benefit from greater cost control, faster application deployment, and the ability to manage all their workloads centrally while extending advanced orchestration capabilities all the way to the edge. With the new toolsets, they can manage all their environments from a single interface and gain a full understanding of performance. Instead of watching costs rack up with no gain in performance, this is the year when more organisations will switch between environments according to their own requirements - not the cloud provider’s.
Massive data processing needs to de-couple from climate and ecological harms
The COP27 climate change conference in Egypt heard that data science will play a major role in helping reduce carbon emissions, yet the data centres that process all that information will remain under heavy pressure to reduce energy consumption and move to renewables. Having authentic green credentials is likely to make a significant difference for data centre networks as 2023 unfolds. Potential customers will be looking for adoption of valid emissions-reduction frameworks such as the Science Based Targets initiative’s Net-Zero Standard. This is the kind of robust and credible approach that enterprises will want to see so they are not accused of ignoring the need to reduce greenhouse gas emissions. Data centre operators will be under intense scrutiny, needing to demonstrate that they are using valid reporting methodologies covering everything from facilities to vehicles and the energy they purchase. Data centres demonstrating high levels of renewable energy use will clearly be at a major advantage, yet there may well be a push for more information about upstream and downstream climate effects. Operators will need to continue to develop their understanding of these effects to ensure responsible and informed choices.

Data centre outages are costing more, with power failure the culprit
By Paul Brickman, Commercial Director at Crestchic Loadbanks
One of the more persistent problems the data centre industry faces is the issue of outages. While it is well known that data centre outages can cause critical work problems for an enterprise of any size, what is becoming more and more apparent is that these outages are becoming increasingly costly too.
Recent findings from Uptime Institute’s 2022 Global Data Centre Survey revealed that the data centre industry is growing immensely, becoming more dynamic and resilient, with a renewed focus on sustainability despite persistent staffing shortages, supply chain delays and other obstacles. The report indicated, however, that light was not at the end of every tunnel, as it also highlighted the fact that downtime in the data centre industry is becoming ever more expensive. Indeed, power failures have been identified as the main cause of this increase in cost.
The Global Data Centre Survey draws on responses from more than 800 owners and operators of data centres, including those responsible for managing infrastructure at the world’s largest IT organisations. While sustainability, efficiency gains, staff shortages and supply chain issues also dominated the report, the issue of power resiliency remained a persistent and dominant theme throughout.
Backup power failure - a growing concern
Further analysis in related research from the Uptime Institute identifies the biggest cause of power-related outages as the failure of uninterruptible power supplies, followed by transfer switch and generator failures. Although this data shows a trend towards improved outage rates, the frequency of these outages remains much too high and, with costs also on the rise, the consequences of an outage are getting much more severe. Data centre operators are well aware of the impact that a power outage can have, and many have put measures in place to mitigate these risks. However, with backup power failures identified as the primary cause of power outages, alongside external issues around grid reliability, energy shortfalls, and the transition to more sustainable power sources, it has never been more important for operators to test their backup power systems.
£1m failures are becoming increasingly common
The data highlighted in the report indicates that the costs of outages are on the rise. This is likely down to several factors, such as industry changes, the increasing cost per minute of downtime, and the prevalence of technology that is susceptible to outages. In fact, a quarter of the respondents interviewed reported that their most recent outage cost them more than £1 million in direct and indirect costs. This 25% represents a significant increase from 2021 and continues an upward trend over the last five years. The report states, ‘Uptime’s 2022 annual survey findings are remarkably consistent with previous years. They show that on-site power problems remain the single biggest cause of significant site outages by a large margin.’ Considering that data centre equipment vendors are caught between high demand and lingering supply chain problems, and that attracting - and, moreover, retaining - qualified staff remains highly problematic for many operators, it is becoming increasingly clear that a load bank is an essential cost-saving tool.
Using a load bank to commission or regularly test the backup power system not only tests the prime movers and the batteries (UPS), but also ensures that the other critical components of the system, such as the alternator and, crucially, the transfer switches, are tested as well. These load bank tests not only prove that the UPS or generators will start, operate, and run efficiently in the case of a power outage, but also that the sets can be safely turned off without interruption when mains power is restored. Put simply, in a data centre environment, the business case for using a load bank is clear-cut - not testing is an extremely costly risk to take.
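As a back-of-the-envelope illustration of that business case, the following Python sketch compares the expected annual loss with and without regular testing. Every figure in it is a labelled assumption made for the sake of the arithmetic - only the £1m outage cost echoes the survey finding above.

```python
# Illustrative cost-benefit sketch; the probabilities and testing budget
# are hypothetical assumptions, not figures from Uptime Institute or
# Crestchic. Only the outage cost echoes the £1m+ figure cited above.
outage_cost = 1_000_000        # GBP, in line with the £1m+ outages in the survey
p_fail_untested = 0.10         # assumed annual chance an untested backup fails when called upon
p_fail_tested = 0.01           # assumed residual failure chance with regular load bank testing
annual_testing_cost = 25_000   # assumed load bank hire, labour and fuel per year

expected_loss_untested = p_fail_untested * outage_cost
expected_loss_tested = p_fail_tested * outage_cost + annual_testing_cost

print(f"Expected annual loss, untested: £{expected_loss_untested:,.0f}")  # £100,000
print(f"Expected annual loss, tested:   £{expected_loss_tested:,.0f}")    # £35,000
# Under these assumptions, testing cuts the expected annual loss by
# roughly two-thirds.
```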

Neterra launches a new fibre metro network in Sofia
Neterra has built and launched a new, fast and secure fibre metro network in Sofia, the capital of Bulgaria. It covers the entire capital, including all important business centres, central streets and boulevards. Through it, the company offers business internet, protection from DDoS attacks, other connectivity services and media streaming. Qualified engineers are responsible for maintenance and provide technical support 24/7.
Neterra’s new network has several major advantages over the networks of other operators. It is the only fibre network that reaches all the data centres in Sofia, entering and connecting them - both Neterra’s own data centres and those of other operators. Another benefit is that the cables run deeper underground and in protected conduits, reducing the risk of outages. For the Sofia fibre metro network, the company uses modern, high-quality equipment throughout - from cables to optical distribution frames (ODFs) and connectors. As a result, the connection is of exceptional quality.
In the Bulgarian capital, Neterra maintains over 550 active business services and consciously invests in reliable components. Thanks to the large capacity built in from the outset, Neterra’s metro network is expected to meet the needs of businesses in Sofia for years to come. At the same time, it is connected to the company’s Bulgarian core fibre network, which links all major Bulgarian cities, including Varna, Veliko Tarnovo, Burgas, Plovdiv and Ruse.

Swindon data centre goes carbon neutral in sustainability push
A data centre in Swindon, Carbon-Z, has become one of the first in the UK to be fully carbon neutral, following an overhaul of its site and work practices. This includes the submersion of all hardware components in cooling liquid and sourcing electricity from green energy providers. Plans are also in place to install solar panels on the site’s roof.
The site was previously known as SilverEdge and is rebranding to reflect the change of direction in how it operates and the services it provides to clients. It now hopes to inspire a wider shift towards sustainability within the data centre industry, which accounts for more greenhouse gas emissions annually than commercial flights.
Jon Clark, Commercial and Operations Director at Carbon-Z, comments, “As the UK and the world move towards achieving net zero emissions by 2050, our industry is responsible for making data centres greener and more efficient. At Carbon-Z, we continually look for new ways to improve our sustainability, with the goal being to get our data centres to carbon neutral, then carbon zero and then carbon negative. We believe this is possible and hope to see a wider movement among our peers in the same direction over the coming years.”
Playing it cool
The growing intensity of computing power, as well as high performance demands, has resulted in rapidly rising temperatures within data centres and a negative cycle of energy usage: more computing means more power, more power means more heat, more heat demands more cooling, and traditional air-cooling systems consume massive amounts of power, which in turn contributes to the heating up of sites. To get around this, Carbon-Z operates using liquid immersion cooling, a technology in which hardware components are submerged in dielectric liquid (which does not conduct electricity) that conveys heat away from the heat source. This greatly reduces the need for cooling infrastructure and costs less than traditional air cooling. The smaller amount of energy now needed to power the Swindon site can be sourced through Carbon-Z’s Green Energy Sourcing.
While it’s clear that immersion cooling is quickly catching on - it is predicted to grow from $243 million this year to $700 million by 2026 - the great majority of the UK’s more than 600 data centres are not making use of it, and continue to operate in a way that is highly energy intensive and carbon emitting.
Riding the wave
As part of its rebrand, Carbon-Z has also updated the services it offers to customers to make sure that they are financially, as well as environmentally, sustainable. Its new service, Ocean Cloud, has been designed with this in mind, providing customers with dedicated servers and a flat-fee approach to financing. Having a dedicated server within a data centre means that spikes in demand from other tenants have no effect at all on your own, avoiding the ‘noisy neighbour’ problem associated with the multi-tenant model favoured by many large operators. This makes the performance of the server more reliable and energy efficient. Ocean Cloud also solves one of the other major problems with cloud services - overspend - through its flat-fee approach. Customers are charged a fixed fee that covers the dedicated server and associated storage, as well as hosting and remote support of the hardware infrastructure, to reduce maintenance overheads.
Jon comments, “We are very proud of Ocean Cloud, as it allows us to offer clients a service that is not only better for the ocean, the planet and our local communities than other hosted services, but also brings clear operational and cost-related benefits. Striking this balance is crucial to ensure customers are on board with the transition to more sustainable data centre operations, especially at times like these, when many companies are feeling the financial pinch off the back of rising inflation.”
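For context on the market figures quoted above, they imply a compound annual growth rate of roughly 30%. A quick Python sketch of that arithmetic, on the assumption that ‘this year’ means 2022 and therefore a four-year span to 2026:

```python
# Implied compound annual growth rate (CAGR) for the immersion cooling
# market figures cited above: $243M growing to $700M.
# The four-year span is an assumption ("this year" taken as 2022).
start_value = 243  # $ million
end_value = 700    # $ million
years = 4

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 30% per year
```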

How reliable is your backup power?
By Paul Brickman, Commercial Director at Crestchic
What does good look like?
It’s no surprise that the data centre sector’s reliance on UPS is on the up, and the onus is often on the site manager or maintenance teams to ensure the equipment that provides this power is reliable, well-maintained and fit for purpose. The maintenance and regular testing of a UPS primary power source is considered best practice, and any business that runs this sort of system will likely have a programme of maintenance in place. But this is only half a job done. There remains an astonishing number of data centres that fail to regularly test their backup power system, despite it lying dormant for the majority of the year. Instead, these data centres are putting their trust in fate, hoping that the backup system will activate without fail - a fool’s game given the increasing cost of downtime.
Why factory testing is not enough
UPS systems and backup generators are typically tested at the factory as part of the manufacturing and quality testing process. Some businesses mistakenly think that this will be sufficient to ensure the equipment will operate effectively after installation. The reality is that on-site climatic conditions such as temperature and humidity vary between locations. These variations in environment, combined with the impact of lifting, moving and transporting sensitive equipment, can mean that manufacturer-verified testing is thrown off kilter by on-site conditions or even by human intervention during installation. For this reason, it is absolutely critical that backup power systems are commissioned accurately and tested in-situ, in actual site conditions, using a load bank.
Where unplanned downtime is likely to be costly, or even devastating to a business’s financial stability, having backup power such as a generator is crucial. And wherever power is generated, there is also a need for a load bank - a device used to create an electrical load that imitates the operational or ‘real’ load a generator would carry in normal operational conditions. In short, the load bank is used to test, support or protect a critical backup power source and ensure that it is fit for purpose in the event that it is called upon.
Backup power testing best practice
A robust and proactive approach to the maintenance and testing of the power system is crucial to mitigate the risk of failure. However, a testing regime that validates the reliability and performance of backup power must be implemented under the types of loads found in real operational conditions. So what would be considered best practice for testing a backup power system? Ideally, all generators should be tested annually under real-world emergency conditions using a resistive-reactive 0.8 power factor (pf) load bank. Best practice dictates that all gensets (where there are multiple) should be run in a synchronised state, ideally for eight hours but for a minimum of three. Where a resistive-only load bank is used, testing should be increased to four times per year at three hours per test. In carrying out this testing and maintenance, the fuel, exhaust and cooling systems and the alternator insulation resistance are effectively tested, and system issues can be uncovered in a safe, controlled manner without the cost of major failure or unplanned downtime.
Why is resistive-reactive the best approach?
Capable of testing both resistive and reactive loads, a resistive-reactive load bank provides a much clearer picture of how well an entire system will withstand changes in load pattern while experiencing the level of power that would typically be encountered under real operational conditions. Furthermore, the inductive loads used in resistive-reactive testing will show how a system will cope with a voltage drop in its regulator. This is particularly important in any application that requires generators to be operated in parallel (prevalent in larger infrastructures such as hospitals and data centres), where a problem with one generator could prevent other generators in the system from working properly, or even cause them to fail entirely. This is something that is simply not achievable with resistive-only testing.
Secure your power source
The importance of testing is being clearly recognised in many new data centres, with the installation of load banks often specified at the design stage rather than added retrospectively. Given that the cost of a load bank is typically only a fraction of that of the systems it supports, this makes sound commercial sense and enables a preventative maintenance regime, based on regular and rigorous testing and reporting, to be put in place from day one. While testing of power systems is not yet a condition of insurance, some experts believe it is only a matter of time before this becomes the case. At the very least, by adopting a proactive testing regime, data centres can take preventative action towards mitigating the catastrophic risk associated with power loss.
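To make the 0.8pf figure concrete, here is a minimal Python sketch of the standard power-triangle arithmetic behind a resistive-reactive test; the 2,000kVA genset rating is a hypothetical example, not a figure from Crestchic.

```python
import math

def load_bank_settings(rating_kva: float, power_factor: float):
    """Real (kW) and reactive (kVAr) load for a test at a given power factor.

    Standard power-triangle arithmetic: P = S * pf, Q = S * sqrt(1 - pf^2).
    """
    real_kw = rating_kva * power_factor
    reactive_kvar = rating_kva * math.sqrt(1 - power_factor ** 2)
    return real_kw, reactive_kvar

# Hypothetical 2,000kVA genset tested at the 0.8 pf cited above.
kw, kvar = load_bank_settings(2000, 0.8)
print(f"Resistive load: {kw:.0f} kW, reactive load: {kvar:.0f} kVAr")
# -> 1600 kW and 1200 kVAr. A resistive-only test at 1600 kW would
#    leave the 1200 kVAr reactive component of the load unexercised.
```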


