Power & Cooling


Critical cooling specialists launch Cloud Diagnostics
Airedale has launched Cloud Diagnostics, an advanced HVAC performance management tool available on a phone, tablet or laptop, in response to the pressure operators and facility managers are under when working with cooling equipment that is increasingly critical to business operations. Airedale has worked with data science experts to develop a new family of cloud-enabled products which can be installed in new and existing equipment such as air handling units, chillers and precision air conditioners, allowing them to be connected, monitored and analysed via a secured communication channel to the Airedale Cloud Diagnostics servers. Cloud Diagnostics has been developed to be retrofitted with no disruption to service and offers several key benefits that can lift the pressure off people tasked with keeping HVAC systems running safely and efficiently.

Leak detection
An emerging feature of the predictive maintenance capability of Cloud Diagnostics is a leak detection algorithm. By recognising the operational features and patterns that signal a leak, detection can occur at very low levels, saving a client significant costs and environmental damage. Most leaks today are detected at around 20%, when a drop in performance becomes more obvious to facilities personnel, but in tests Airedale Cloud Diagnostics was able to report a suspected leak at 5%. This early detection has huge implications not only for cost savings, but also for safety and environmental targets.

Live dashboard
The information gathered by Cloud Diagnostics is reported on a live dashboard, available on any internet-connected device, and alerts can be delivered immediately via SMS or email. This allows any issues to be recognised early and responded to immediately, avoiding disruption and expensive call-outs. Data aggregation can be configured to report the latest data received as well as extracting KPIs for a comprehensive visual analysis of the unit’s performance (e.g. average chiller supply water temperature over the last seven rolling days).

Predictive maintenance
Predictive maintenance is the key to optimising asset management for any critical equipment. HVAC units connected to Cloud Diagnostics are analysed for performance using a range of algorithms and machine learning techniques: the unit’s performance is measured on a variety of relevant factors, all of which are analysed for deviations against ‘normalised’ behaviour, both instantaneously and over time. If a drop in performance against operating conditions is detected, this acts as an early warning for the customer or maintenance team to investigate further. Being able to identify threats and faults before they happen has a huge cost saving benefit, in terms of emergency repairs, call-out fees and downtime costs to the business.

Security
Airedale’s products are part of the critical infrastructure of a building or a data centre, so it is imperative that they are secured against both physical and network access. Airedale Cloud Diagnostics has been designed with security in mind, utilising the latest technology and security practices. Access is via a web-based portal with a valid SSL certificate, using the same technology as internet banking and other secure portals.

Reece Thomas, controls product manager at Airedale, says: “Airedale Cloud Diagnostics is something I am incredibly excited about, given the huge cost and environmental benefits it can offer our clients. All that is required to connect a piece of equipment to the service is a gateway into the unit for the system to collect and transfer data, some form of internet connection and a 24Vdc power supply.”

Reece continues: “The ability for connected units to learn from and compare against each other, utilising intelligent unit modelling, means that the performance analysis techniques continually improve and get stronger over time, making things like leak detection a much simpler and more efficient process.”

Reece concludes: “Another benefit of sharing data anonymously is that Airedale can use the data collected to analyse and determine how to better improve our products based on actual customer usage profiles. The benefits of this are endless and our clients can be absolutely assured of security and anonymity.”
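The kind of KPI aggregation and deviation analysis described above can be sketched in a few lines. Airedale has not published its algorithms, so the rolling-average KPI, the z-score check, the thresholds and the example values below are illustrative assumptions only, not the Cloud Diagnostics implementation.

```python
# Illustrative sketch only: the baseline model, threshold and values below
# are assumptions, not Airedale's published algorithm.
from statistics import mean, stdev

def rolling_kpi(readings, window=7):
    """Average of the most recent `window` daily supply-water temperatures."""
    return mean(readings[-window:])

def deviation_alert(readings, baseline, window=7, z_threshold=3.0):
    """Flag a unit whose rolling KPI drifts well outside 'normalised' behaviour.

    `baseline` is a list of KPI values recorded while the unit was known healthy.
    """
    kpi = rolling_kpi(readings, window)
    mu, sigma = mean(baseline), stdev(baseline)
    z = (kpi - mu) / sigma if sigma else 0.0
    return abs(z) > z_threshold, kpi, z

# Example: a healthy baseline around 14 C supply water, then a slow upward drift
baseline = [14.0, 13.8, 14.1, 14.2, 13.9, 14.0, 14.1, 13.9]
readings = [14.0, 14.1, 14.3, 14.6, 15.0, 15.4, 15.9]
alert, kpi, z = deviation_alert(readings, baseline)
print(f"7-day KPI = {kpi:.2f} C, z = {z:.1f}, alert = {alert}")
```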

The hidden cost of data
Data underpins every aspect of modern life, with more information generated now than ever before. Keeping data centres cool is crucial for their safe and effective function, but because of the large amounts of waste heat they generate, this requires significant power consumption. To tackle this issue, Katrick Technologies has developed and patented a unique passive cooling system that removes waste heat without requiring external power. Here, Katrick Co-CEO Vijay Madlani examines the costs of data centre cooling and how new systems can revolutionise efficiency.

We generate more data than ever before, with 44 zettabytes of data in storage as of 2020 and this expected to increase to over 200 zettabytes by 2025. To put this into perspective, a single zettabyte is equivalent to one trillion gigabytes. Much of this data is stored in data centres: dedicated facilities containing servers that store large amounts of data. Data centres are an integral part of the global economy, storing everything from our personal information to business and infrastructure data. Given the nature of the data stored in these centres, and the extreme sensitivity of some content, they require their own infrastructure, security, networks and backup power supplies to limit the damage of potential problems. Environmental conditions are also highly important: keeping data centres at an appropriate temperature around the clock prevents overheating and failure of critical equipment, especially as they produce large amounts of heat as a by-product.

In the UK there are approximately 400-450 data centre facilities, and TechUK estimates they consume 6TWh annually to run, not including the 3-4TWh required for server rooms. This figure is set to rise exponentially as the number of data centres increases, with a 2018 Nature study estimating that they will be responsible for 8,000TWh of consumption by 2030. Keeping data centres cool uses a significant amount of this energy, with the air conditioning and air handling units used by 90% of the UK data centre market estimated to consume 26% to 41% of the total energy. These figures highlight why it is so crucial to find more efficient solutions for data centres. As the need for these facilities increases, the amount of power required to run them while minimising the risk of failures will also rise. This is the motivation behind the Katrick Technologies passive cooling system.

Solutions in technology
Katrick’s bespoke end-to-end solution removes excess heat without the need for any external power, keeping centres at a constant ideal temperature and reducing energy consumption. The patented technology offers an innovative zero-carbon alternative to traditional cooling units, while being cost effective and kinder to the environment. The passive cooling system uses a Thermal Vibration Bell (TVB) heat engine to maintain consistently cool temperatures in a data centre environment. The TVB has a chamber containing bi-fluids of different densities and expansion rates. The base fluid is high-density with a lower boiling point, and the fluid above is lower-density with a higher boiling point. When these fluids are exposed to a heat source, a dynamic movement is created as the lower fluid boils more rapidly, creating bubbles which move through the fluid above. This converts heat energy into fluid vibrations. These vibrations are then captured by an array of fins in the TVB, which protrude both internally and externally. This occurs through a range of different effects, including density change, bubble velocity and the generation of convection currents within the fluids as they interact with variable temperature levels. The energy from the fluid vibrations is captured by the fins and transferred into mechanical vibrations, causing the fins to oscillate. This movement dissipates the unwanted heat to the environment, keeping temperatures low enough for the servers to work and avoiding overheating.

The technology has been trialled at iomart’s data centre in Glasgow since October 2021, where a 120kW capacity TVB system was installed. Initial results from this trial indicate that implementing Katrick’s TVB engine can reduce the power consumed by the site’s cooling system by up to 50% and may even reduce a data centre’s total energy consumption by 25% overall. Alongside the benefits of energy efficiency and sustainability, the bi-fluids used within the passive cooling system have been certified 100% environmentally safe, with next to zero global warming potential and zero ozone depletion potential. They also pose no fire risks or health hazards, allowing employees to work safely with the system.

Katrick’s technology offers a cost-effective solution that is straightforward to implement and maintain long term. The system is designed to be modular and scalable, tailored to the end-user based on their specific requirements and the size of their facility. The reduced requirement for chillers will also lead to reduced maintenance, prolonging the overall life of the site. The energy saved by the system can even be redirected to additional server capacity where supply is limited, making it a profitable option that increases revenue and margin overall. Katrick Technologies’ passive cooling system represents an opportunity for businesses to run data centres more efficiently and sustainably. As the industry grows and evolves, it is becoming increasingly complex to meet the requirements for powering, running and securing these centres. Having systems in place to ensure that data centres can be an effective and reliable platform for storing vast amounts of often personal and confidential data is now vital, and developing and investing in new technologies to enable this is crucial.
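As a rough illustration of what those percentages could mean in practice, the sketch below applies the cooling-share and reduction figures quoted above to a hypothetical facility; the site size and the simple multiplication are assumptions for clarity, not Katrick’s or iomart’s published data.

```python
# Back-of-the-envelope sketch using the figures quoted in the article; the
# facility size is an illustrative assumption only.
def annual_saving_mwh(total_annual_mwh, cooling_fraction, cooling_reduction):
    """Energy saved per year if the cooling share of consumption is cut."""
    return total_annual_mwh * cooling_fraction * cooling_reduction

total_mwh = 10_000          # hypothetical site consuming 10 GWh a year
for frac in (0.26, 0.41):   # cooling share range cited for the UK market
    saved = annual_saving_mwh(total_mwh, frac, 0.50)  # up to 50% cooling cut
    print(f"cooling share {frac:.0%}: ~{saved:,.0f} MWh saved per year")
```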

Delta to expand presence through partnership with DATABOX - Informática
Delta has announced a new partnership with DATABOX - Informática to provide Delta’s wide range of energy-efficient Uninterruptible Power Supplies (UPS) and data centre infrastructure solutions to IT resellers and system integrators throughout Portugal. By leveraging their close collaboration, as well as Delta’s core competencies in energy-efficient ICT infrastructure and DATABOX - Informática’s deep expertise in the local market, this partnership is expected to meet the demanding requirements for edge computing in Portugal.

Commenting on the partnership with Delta, João Pedro Reis, CEO of DATABOX - Informática, states: “Delta’s smart energy-efficient solutions are world-renowned for their energy savings and reliability which, combined with the company’s commitment to sustainability, means that we are proud to partner with such an established and recognised brand that complements our values, products and services. By working closely with Delta, we will be able to help our IT resellers and system integrators to deliver highly reliable UPS and data centre solutions to their customers.”

Jaime Palma, channel manager of Mission Critical Infrastructure Solutions (MCIS) in Portugal for the Delta Electronics EMEA region, adds: “DATABOX - Informática is a well-established national IT distributor with an excellent reputation and long-lasting relationships with its suppliers and customers. Its superior local stockholding, sales force and customer capabilities, combined with Delta’s high-quality products, can offer IT resellers and system integrators the ideal solution for their needs. Delta looks forward to expanding the relationship into other Delta portfolio solutions to help Portuguese corporations enhance their competitive edge through higher energy efficiency and higher productivity.”

The award-winning UPSs designed by Delta act as advanced power managers, ensuring the availability of an uninterrupted power supply to protect hardware and mission-critical applications. High-quality UPSs function as an essential safeguard against many potential energy issues, including voltage surges and spikes, voltage sags, total power failure and frequency differences. With the rise of edge computing, Delta also offers its InfraSuite Datacenter Infrastructure Solutions to support its customers in building an optimal data centre with fully integrated infrastructure solutions.

Smart technology to address the data centre energy drain
In this piece, we spoke to Matthew Margetts, Director at Smarter Technologies, to find out why data centres require so much energy and what can be done to reduce this consumption while retaining the data centres we rely on.

Inside vast factories bigger than aircraft carriers, tens of thousands of circuit boards are racked row upon row. They stretch down windowless halls so long that staff ride through the corridors on scooters. In an increasingly digitalised world, data centres are the information backbone, with demand continuing to grow along with data-intensive technologies. Estimated to account for as much as 1% of worldwide electricity use, data centres are energy-intensive enterprises. In Ireland, data centres could account for about 25% of the country’s electricity usage by 2030, potentially leading to electricity supply challenges. Fearing the pressure data centres place on the national grid, countries such as the Netherlands and Singapore have gone so far as to stop issuing building permits for data centres.

Why do data centres require so much energy?
- To provide a constant power supply with minimum disruptions
- Electricity used by IT devices such as servers, storage drives and network devices is converted into heat, which must be removed from the data centre by cooling equipment that also runs on electricity
- Facilities must be kept at the appropriate temperature
- Additional equipment such as humidifiers and monitors is also required

The energy impact of data centres is undeniable, but so is the need for these facilities to handle the world’s ever-increasing data demands. What can’t be ignored is the energy efficiency trends that have developed in parallel. The IEA reports that although workloads and internet traffic have nearly tripled, data centre energy consumption has flatlined for the past three years. Here’s what can be done to improve data centre energy efficiency and sustainability:

High-efficiency equipment
The use of server virtualisation and ARM-based processors can help reduce the energy consumption of IT devices. These processors are designed to perform fewer types of computer instructions, allowing them to operate at higher speeds and deliver better performance at a fraction of the power. The servers of today are more powerful and efficient than ever before, and the technology continues to improve.

Renewable energy
One of the best ways to match the rise in ICT workload energy is to ensure a corresponding increase in the usage of renewable energy sources. By moving part of their high-intensity computing hardware to alternative locations using renewable energy, companies can benefit from a more sustainable energy source while taking energy off the national grid. A location like Iceland boasts reliable, low-cost renewable energy. Big data centre operators such as Google are establishing solar generation plants to offset their data centre usage on the grid, using small panels coupled with battery storage to offset non-critical functions such as engine heaters, office air-conditioning, fuel polishing and lighting.

Intelligent power distribution management
The key to better energy efficiency in data centres is managing power load and distribution: for example, reducing the number of servers needed during low-traffic hours. Rather than leaving all servers idle, some servers can be turned off when not needed while others run at full throttle. Matching server capacity to real-time demand is made possible through smart monitoring and management tools. It’s also important to remove “zombie servers”: servers that have become redundant and are no longer in use, yet are still powered on and consuming energy. Research shows that 25% of physical servers are zombies, along with 30% of virtual servers. In general, these servers haven’t been shut down because operators don’t know what they contain or what they are used for. To deal with this problem proactively, every server and function must be documented and monitored appropriately using asset management software.

Optimised cooling
In conventional data centres, standard air conditioning accounts for a significant proportion of the centre’s energy bill. All IT equipment must remain at safe temperatures, which is why proper ventilation and cooling is so important. Measures managers can take to optimise cooling include the following:
- Proper insulation can help maintain temperatures within the room.
- Strategic equipment layout and streamlined airflow can also improve cooling efficiencies.
- A popular solution is to locate data centres in cool climates and use the outside air to cool the inside. This is known as “free cooling”.
- Piped water is a good conductor of heat. Warm water can be used as a less energy-intensive way to cool data centres.
- Cleaning up workloads and eliminating unnecessary equipment.
- Replacing older cooling systems with new technology to improve efficiencies.
Machine learning and automation in data centres can also be used to optimise cooling system setpoints for variable outside conditions, which provides a number of marginal energy gains.

Heat transfer technology
Using the heat coming off the servers is like taking advantage of a free resource. For example, an IBM data centre in Switzerland warms a nearby swimming pool with its waste heat. However, because heat doesn’t travel well, the use of waste heat is generally limited to data centres that can supply nearby customers or cities that already use piped hot water to heat homes.

Energy offsets
The information age is making buildings smarter and more energy efficient. With fairly simple automations such as occupancy sensors that turn off lights and HVAC when no one is in a room, along with informed decision-making enabled by access to real-time utility consumption data, building managers can use smart technology and building management systems to reduce their carbon footprints. This infrastructure is facilitated by data centres, so one could argue that some of the energy used by data centres is offset by the lower consumption of the smart buildings they service.

Policy making and planning
Decision-makers need to be able to confidently and accurately evaluate future efficiency and mitigation options. Policymakers and energy planners need to be able to:
- Monitor future data centre energy use trends
- Understand key energy use drivers
- Assess the effectiveness of various policy interventions
To do this, data analysts need access to reliable data sources on the energy consumption characteristics of IT devices and cooling/power systems. Smart metering technology is just the start: along with the data from smart meters, energy managers need a platform with data analytics, artificial intelligence and machine learning capabilities in order to make the most of the data they are presented with.

Data centre operations require a safe, efficient and dependable power supply. There’s no doubt that sustainability is going to be the overriding trend that remains front and centre within the data centre industry for the foreseeable future. Fortunately, the very same smart technology that is necessitating the growth of data centres is also helping to make them more energy efficient and future-fit.
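To make the “zombie server” point concrete, here is a minimal sketch of how redundant-but-powered servers might be flagged from monitoring data; the thresholds, record fields and server names are hypothetical and not taken from any specific asset management product.

```python
# Hypothetical sketch of flagging candidate 'zombie' servers from monitoring
# data; the thresholds and record layout are assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class ServerStats:
    name: str
    avg_cpu_pct: float          # average CPU utilisation over the review period
    network_gb_per_day: float   # average traffic over the review period
    owner_documented: bool      # is there a named owner / documented function?

def zombie_candidates(fleet, cpu_threshold=2.0, net_threshold=0.1):
    """Servers that are powered on but show almost no work and no documented owner."""
    return [s.name for s in fleet
            if s.avg_cpu_pct < cpu_threshold
            and s.network_gb_per_day < net_threshold
            and not s.owner_documented]

fleet = [
    ServerStats("web-01", 35.0, 120.0, True),
    ServerStats("legacy-07", 0.4, 0.01, False),   # likely zombie
    ServerStats("batch-03", 1.1, 0.05, True),     # idle but documented: review, don't kill
]
print(zombie_candidates(fleet))   # ['legacy-07']
```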

Asperitas and Shell immerse themselves in keeping IT and the planet cool
As governments stumble to get a firm footing in battling climate change, global industry leaders are stepping in to take the lead. In the world of power-hungry data centres, companies big and small are coming together as part of the Open Compute Project (OCP) to build consensus and paint the industry green. In collaboration, Shell and Asperitas believe they have a sustainable solution to keep your IT equipment cool: immersing it in a specialised cooling fluid to drastically reduce energy consumption, while simultaneously harnessing waste heat energy for reuse.

Ready or not, the energy transition is coming, but maybe not the way you expect. This part of the transition isn’t coming from government regulations that try to cap emissions. Rather, it is stemming from industry leaders and technological innovators putting their heads together to lessen our carbon footprint and mitigate our collective impact. For the last two years, Shell and Netherlands-based Asperitas have been doing just that. The duo has set its crosshairs on energy-guzzling data centres, which devour huge amounts of power and contribute significantly to CO2 emissions. The solution: immersion-based cooling of IT. In other words, sinking electronics and IT equipment into baths of a specially formulated dielectric liquid that can effectively, and very efficiently, cool the components.

OCP
A couple of years ago, while Asperitas was still in its R&D phase, the data centre cooling specialist recognised its potential to have a drastic effect on the industry. It was at this time, in 2018, that the Amsterdam-based startup reached out to begin its first collaborations with Shell, and in the same year looked to take a leading role in shaping the immersion cooling industry by joining ranks in the Open Compute Project (OCP), a community-based foundation aimed at elevating the IT industry by sharing IP, ideas and best practices in a quest to evolve the industry. The idea is that through shared experiences and collaboration, the group can establish new hardware designs that are optimised and tailored to specific needs, offering end users high efficiency and scalability. Almost two years ago OCP launched the Advanced Cooling Solutions (ACS) initiative within the Rack & Power work group, where Rolf Brink, founder and CEO of Asperitas, became the project leader for the immersion-cooling pillar of the initiative. According to Mr Brink, there wasn’t much consensus within immersion cooling, as there was no common frame of reference, and requirements within the realm were non-existent. In his role at ACS, however, he got the opportunity to work with a community of global industry leaders that together discussed and formulated projects to help the industry move forward. “Up to this point, the dunk and pray strategy was the most common practice in the domain. People would go buy an off-the-shelf server, make some small modifications through the thermal interface material on the CPU and then dunk it in the liquid and hope it kept working,” laughs Brink. “But eventually, through collaborative efforts, we were able to publish a white paper on the minimum requirements for immersion cooling. For the first time, we had established a basic frame of reference for the domain, and we had a good starting point.”

Compatibility
Soon after, the ACS group kicked off a new project focused on liquid compatibility. “There are two main families of liquids that can be optimised for the immersion cooling sector, hydrocarbons and fluorocarbons. OEMs and hyperscalers don’t have the expertise, the time, or the interest to go and test hundreds of different liquids and setups to see whether they are a viable option.”

Disrupting
Asperitas has really been pushing to make a name for itself and shed light on the new possibilities in the data centre cooling market. For a few years now, the scale-up has been on the cusp of breaking in and drastically disrupting the industry with its technology. Now it seems it has caught the attention of the industry, from OEMs to integrators and even leading enterprises such as telecommunications specialists and hyperscale cloud providers. Not only has it been named one of the energy sector’s top global innovations of the decade by the World Economic Forum, but recently OCP officially recognised Asperitas’ Open Cassette as an OCP Accepted Product Accessory, a real milestone for the company. The system, which requires little more than a standard power outlet and a water line for operation, immerses the IT equipment deep in a fluid bath, all contained within its own housing. The IT equipment is then cooled through natural convection, where the liquid can absorb up to 1500 times more heat energy than traditional air-cooling solutions, potentially slashing the energy footprint of data centres in half. More interesting still is the fact that nearly all the waste heat energy is captured in the liquid and can be transferred and reused, making it an even more sustainable option.

Shell immersion cooling fluid based on GTL technology
For its part in the collaboration, Shell has taken on the responsibility of engineering the specialised fluid used in the immersion cooling process. Shell Immersion Cooling Fluid S5 X is a synthetic, single-phase fluid developed specifically for immersion-cooled data servers. The fluid uses Shell’s unique gas-to-liquids technology and has been optimised for Asperitas’ natural-convection-driven immersion cooling servers, but can also be used in servers with pumps. The fluid is designed to reduce energy costs and emissions through its high cooling efficiency, excellent flow behaviour and thermodynamic properties. Shell Immersion Cooling Fluid S5 X is compatible with most commonly used server materials and, being non-corrosive and virtually free from sulphur, nitrogen and aromatics, ensures high server reliability and lifetime.
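To give a feel for the physics behind the heat-absorption claim above, the short sketch below compares the volumetric heat capacity of air with that of a generic single-phase dielectric oil. The property values are textbook approximations chosen for illustration, not the published specification of Shell Immersion Cooling Fluid S5 X.

```python
# Rough illustration of why immersion works: volumetric heat capacity of a
# generic dielectric fluid vs air. Property values are textbook approximations,
# not the specification of any particular cooling fluid.
def volumetric_heat_capacity(density_kg_m3, cp_j_per_kg_k):
    """Heat absorbed per cubic metre per kelvin of temperature rise (J/(m3.K))."""
    return density_kg_m3 * cp_j_per_kg_k

air   = volumetric_heat_capacity(1.2, 1005)   # air at roughly 20 C
fluid = volumetric_heat_capacity(800, 2000)   # generic single-phase dielectric oil
print(f"air:   {air / 1e3:.1f} kJ/(m3.K)")
print(f"fluid: {fluid / 1e6:.2f} MJ/(m3.K)")
print(f"ratio: ~{fluid / air:.0f}x")          # same order as the 'up to 1500x' figure
```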

Mitigating risk to UPS installation
Uninterruptible Power Supplies (UPS) are designed to protect critical loads and to mitigate risk to critical infrastructure, including data centres. However, care needs to be taken, as risk to the load can be re-introduced through the UPS itself, writes Tim Ng, Sales Engineer at Centiel UK. The three biggest risks are: purchasing a lower-quality UPS system without realising the implications of doing so, the use of unapproved maintenance procedures, and not replacing ageing equipment at the appropriate time. In each case, failure can have far-reaching consequences for the operation in terms of damaged reputation and lost business due to unexpected downtime.

Replacement before failure
Most (sensible!) people replace their cars before they get to the point where they keep breaking down. It’s the same with a UPS. Inevitably, as equipment ages, components become less reliable and less readily available, and the risk of failure increases. However, we regularly come across UPS systems that have significantly exceeded their recommended design life and should have been replaced years ago. To be blunt, by continuing to run and maintain an ageing UPS you are putting your trust in a system with a much higher probability of failure. Sometimes it’s down to a lack of technical guidance about when to make the decision to replace the UPS, but more often than not, it’s down to securing budget. However, in the event of a major outage where an ageing UPS fails, a business could lose millions for the sake of a small investment in new equipment. There are also significant gains to be made. For example, modern UPS systems are far more efficient and can slash operational expenditure dramatically. Picture a data centre with 1MW of critical load supported by a UPS system with an efficiency of 90%. Based on an average unit price of 14p/kWh, the annual cost of the energy lost in this UPS will be around £135,000. Now picture a new UPS supporting the same load at the same price per kWh but with an efficiency of 97%: the cost falls dramatically to around £38,000 per year. If you consider that the average commercial electricity price per kWh is projected to increase by as much as 40% in 2022, this makes a very strong case for replacing ageing, inefficient equipment. A new UPS could pay for itself in just a few years.

Quality counts
When selecting a UPS, it is important to make the right choice. If a quote appears to be too good to be true, maybe it is! Quality equipment and components can cost more, but there is a reason for this. Manufacturers invest heavily in research and development to ensure that the components selected for use in their products meet a strict set of performance standards. This means that they can deliver the most robust and reliable systems to their clients. Ultimately, the aim of any quality UPS manufacturer is to produce a system that has the highest availability, with reduced running costs and minimised risk of system downtime.

Maintenance matters
As with any new car, using an unapproved technician to service your UPS will likely invalidate the warranty. A new UPS should always be maintained by an approved, factory-trained engineer or a manufacturer-recommended maintenance technician. A UPS requires regular preventative maintenance. An essential part of this is deploying the correct software updates, which ensures optimal functionality. Sometimes unapproved engineers will take on a maintenance contract that they are unable to fully support. As well as invalidating the warranty, this means essential software updates will be missed, which can impact the UPS’s functionality. The consequences of using unapproved engineers can be serious. For example, putting a UPS into bypass for maintenance using an incorrect switching sequence can introduce a fault into the system, causing catastrophic failure. Correct preventative maintenance will also enable ageing components to be identified and replaced early. Environmental factors, such as temperature and dust, can also invalidate warranties and reduce the lifespan of the UPS. Trained maintenance engineers will monitor these issues and take corrective action where necessary. A further benefit of using approved, factory-trained engineers is that they can become trusted advisors. At Centiel, the company’s engineers work closely with clients to advise on what actions are required following preventative maintenance to maximise the performance of their systems.

A UPS will protect the critical load of a data centre for many years. However, to mitigate the risk of failure of the UPS itself, select a quality solution, ensure it is maintained correctly and replace the equipment before it reaches the end of its design life.
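The running-cost figures quoted earlier in this article can be reproduced with a few lines of arithmetic; the sketch below assumes those figures refer to the cost of the energy dissipated as losses in the UPS at a constant 1MW load and 14p/kWh.

```python
# Reproducing the article's running-cost comparison; assumes the quoted figures
# are the cost of energy lost in the UPS at constant load, 24/7 operation.
def annual_loss_cost(load_kw, efficiency, pence_per_kwh, hours=8760):
    """Annual cost (GBP) of the energy dissipated as losses in the UPS."""
    input_kw = load_kw / efficiency
    losses_kw = input_kw - load_kw
    return losses_kw * hours * pence_per_kwh / 100

old = annual_loss_cost(1000, 0.90, 14)   # legacy UPS at 90% efficiency
new = annual_loss_cost(1000, 0.97, 14)   # modern UPS at 97% efficiency
print(f"90% efficient: ~£{old:,.0f}/yr, 97% efficient: ~£{new:,.0f}/yr, "
      f"saving ~£{old - new:,.0f}/yr")
```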


Six benefits of lithium-ion technology for UPS systems
While known for powering laptops and mobile phones, lithium-ion batteries are now changing the field of Uninterruptible Power Supply (UPS) systems for the better, says Nils Horstbrink, Director Offer Management Power at Vertiv. This rechargeable battery addresses the drawbacks of the traditional Valve-Regulated Lead Acid (VRLA) batteries commonly used for UPS systems. While VRLAs are less expensive, these heavier batteries are larger and need more frequent replacement. Let’s take a close look at the practical benefits of using lithium-ion batteries for UPS.

More compact and lightweight
Lithium-ion batteries are 40% to 60% lighter and have a 40% smaller footprint than their VRLA counterparts. This translates into a remarkable power density level, where less space is needed to deliver the same amount of power.

Lasts longer
Lithium-ion batteries have a significantly longer lifespan: around two to three times that of VRLA batteries on average. Compared to traditional VRLA battery technology, which typically lasts three to five years, lithium-ion technology can provide a battery service life of eight to 10 years (or longer), often outlasting the UPS itself. This essentially makes the UPS almost maintenance-free, with fewer or possibly no battery replacements throughout its lifespan. Unlike VRLA batteries, lithium-ion batteries offer a high cycle life, making them suitable for many applications where frequent charge and discharge cycles are expected.

Contributes to a lower total cost of ownership
Regarding the total cost of ownership (TCO), lithium-ion batteries can provide up to 50% savings over their life expectancy. This is primarily due to their longer lifespan, high-temperature resilience, reduced maintenance expenses (with fewer or no battery replacements) and reduced installation expenses. Although VRLA batteries can save you money upfront, think of the bigger picture and consider the TCO.

Faster to recharge
UPS batteries need to be recharged as quickly as possible to full capacity. While VRLA batteries can take over six hours to charge from 0% to 90% of full runtime capacity, lithium-ion batteries take only around two hours to recharge. That reduces the overall risk of experiencing an outage before your UPS batteries have been fully charged.

Resilient to higher temperatures
Lithium-ion batteries can operate normally at temperatures of up to 40°C without compromising performance. This is a clear advantage over VRLA batteries, which lose about half their lifespan for every 10°C temperature rise above 25°C. Since lithium-ion batteries have a wide operating temperature range, they are convenient for more extreme, non-traditional settings and facilities that don’t have enough cooling capacity.

Comes with an advanced integrated battery management system
Lithium-ion batteries come with an advanced integrated battery management system (BMS). This provides an accurate picture of the battery’s health and runtime and protects the battery cells against over-current, temperature extremes, and over- or under-charging. The BMS continuously adjusts battery charging to get the most out of performance and battery life.

Cutting-edge lithium-ion battery technology has already benefited many industries. With the many advantages it offers, a lithium-ion powered UPS is a must-have to secure your critical operations and lessen your operational costs.
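As a simple illustration of the derating rule mentioned above, the sketch below halves a nominal VRLA service life for every 10°C above 25°C and compares it with the eight-to-10-year figure quoted for lithium-ion. It is a rule-of-thumb calculation based on the article’s figures, not a manufacturer’s derating curve.

```python
# Illustration of the VRLA derating rule quoted above (life roughly halves for
# every 10 C above 25 C); the 5-year nominal lifespan is the article's figure.
def vrla_expected_life(nominal_years, ambient_c, reference_c=25.0):
    """Approximate VRLA service life after temperature derating."""
    return nominal_years * 0.5 ** max(0.0, (ambient_c - reference_c) / 10.0)

for temp in (25, 30, 35, 40):
    print(f"{temp} C ambient: VRLA ~{vrla_expected_life(5, temp):.1f} yrs "
          f"(vs 8-10 yrs quoted for lithium-ion up to 40 C)")
```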

How data centres will support renewable power adoption
By Janne Paananen, Technology Manager, Critical Power Systems, Eaton

Data centres are central to almost everything in people’s increasingly digitally led lives. From managing the transportation we rely on and powering the supply chains that keep our supermarkets stocked, to communicating with our colleagues and loved ones, everything is being made simpler, faster and more efficient by data centre connectivity. This connectivity underpins our digital, social and professional infrastructures and, as we saw with the COVID-19 pandemic, is robust in even the most challenging of circumstances. The increasing amount of work being done in data centres also means they are demanding more power than ever. A Swedish study on the global usage of electricity found that data centres and the networks associated with them may lead to information and communications technology (ICT) requiring up to 21% of our total electricity production by 2030. While data centre-based solutions may often be more energy efficient than the processes they replace, this growth is still a problem in the context of our urgent need to decarbonise power production to meet climate targets. As a result, there is an ongoing global effort to make data centres greener, doing more work with fewer emissions, and it becomes possible to foresee a world where digital energy demand can be met entirely by renewables, something that many ICT companies are aiming for. Investments in renewable energy and supporting technology have been driven by industry-leading environmental and sustainability targets.

The challenge of renewables
Renewable energy sources bring green electrical energy, but they also bring other issues and engineering challenges. While some renewable energy sources offer predictable production (hydro), we are in fact moving towards a grid dominated by wind and solar. These variable renewable energy (VRE) sources, by nature, fluctuate in their output. It’s easy to see how this leads to potential problems. An electrical grid system must constantly match consumption with electricity production; this is fundamental to grid and frequency stability. But if VRE has fluctuating output, periods of over- and under-supply seem inevitable. Also, as VRE replaces traditional turbine generators, it reduces system inertia, i.e. the energy stored in rotating mass, resulting in faster and larger frequency deviations when mismatches between production and consumption occur. Grid operators are developing ways to manage that potential mismatch, but consumers can help too. Consumer on-site electrical systems, especially backup power systems, can actually help in grid stabilisation and therefore enable the successful adoption of renewables on the grid. This help comes in the form of ancillary services that can be delivered ‘back’ to the grid operator.

Rethinking the data centre
On the one hand, we have an increasingly digitalised world requiring more and more power. On the other, we are seeing an enthusiastic uptake of renewable energy which, if we maintain that momentum, will require innovations in how we maintain security of supply. When thinking about how to build this future, it’s important to remember that these are not independent problems, and that the changes we’re heading towards should be more than a replacement of existing systems: as we transform power systems and digitalise everything from manufacturing to healthcare, we have an opportunity – and responsibility – not just to keep the lights on, but to rethink everything about how these essential services work.

Data centres, of course, cannot afford power instability: by necessity, they are ‘always-on’. The services we all rely on need data centres with near-constant uptime. To ensure continuous power, data centres are outfitted with uninterruptible power supplies (UPSs) with batteries and backup generators, which step in to keep everything running when grid supply fails. A UPS needs to respond instantly to changes in supply, deliver large amounts of power, and do so with the utmost reliability. In other words, the qualities it needs in order to support stable data centre operation also make it perfect for providing ancillary services to the grid, such as quickly adjusting its demand from the grid or feeding energy back in. These fast actions can stabilise a grid and contain grid frequency. Making this a reality requires some work: a data centre UPS will need to be aware of how the grid is operating, while the grid will need to be ready to receive supply from data centres as well as deliver power to them. Eaton’s recent research with Microsoft demonstrates that building the systems to make this work is possible, and shows how data centres can support the grid in real-world testing. As an example, Eaton’s headquarters in Dublin is now home to a new UPS that successfully provides fast frequency response services to the local grid by reducing the building’s demand when grid frequency drops.

This potential shift in how we use our data centre capabilities will mean a complete rethink of the role of power consumers on the grid. Before, electricity transmission was a one-way street from production to consumption; now, we are seeing how it can be bidirectional and interactive, and everyone has a role to play, from grid operators to consumers. Before, a system like a UPS was an operational necessity and a necessary expense; now we are seeing how it can be a source of revenue when ancillary services are sold back to the grid operator. As for data centres, we are seeing how their centrality to modern life is about more than just digital services. As they begin supporting the renewable energy grid, we may start to see them not just as data centres, but as energy centres helping to decarbonise electricity and creating a digital and sustainable future for all.
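The fast frequency response behaviour described above can be sketched as a simple control rule: do nothing while grid frequency is healthy, then ramp up the reduction in grid draw (or battery discharge) as frequency falls. This is not Eaton’s control algorithm; the thresholds and response size are illustrative assumptions only.

```python
# Simplified sketch of a fast-frequency-response rule; thresholds and response
# size are illustrative assumptions, not any operator's published parameters.
NOMINAL_HZ = 50.0
TRIGGER_HZ = 49.8        # assumed activation threshold
FULL_RESPONSE_HZ = 49.5  # frequency at which the full response is delivered

def grid_support_kw(frequency_hz, max_response_kw):
    """How much the UPS should cut its grid draw (or discharge) at a given frequency.

    Returns 0 when frequency is healthy, ramping linearly to max_response_kw
    as frequency falls from TRIGGER_HZ to FULL_RESPONSE_HZ.
    """
    if frequency_hz >= TRIGGER_HZ:
        return 0.0
    depth = (TRIGGER_HZ - frequency_hz) / (TRIGGER_HZ - FULL_RESPONSE_HZ)
    return min(1.0, depth) * max_response_kw

for f in (50.0, 49.75, 49.6, 49.4):
    print(f"{f:.2f} Hz -> reduce grid demand by {grid_support_kw(f, 500):.0f} kW")
```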


