
Features


Feature - Overcoming the DCI deluge
Tim Doiron, VP Solution Marketing, Infinera, looks at ways of overcoming the DCI deluge in the era of artificial intelligence and machine learning.

In the Merriam-Webster dictionary, deluge is defined as “an overwhelming amount.” In recent discussions with communication service providers and internet content providers (ICPs), data centre interconnect (DCI) traffic growth is running hot at 50% or more per annum. Any traffic that doubles in less than two years surely qualifies as a deluge, so yes, DCI traffic is a deluge.

But is DCI’s accelerated growth driven by artificial intelligence (AI) and machine learning (ML)? The short answer is yes, some of it is – but we are still in the early days of AI/ML, and in particular, generative AI. AI/ML’s contribution to DCI traffic will increase with time. With more applications, more people taking advantage of AI/ML capabilities (think about medical imaging analysis for disease detection), and generative AI creating new images and videos (consider collaboration with artists or marketing/branding), the north-south traffic to/from data centres will continue to grow.

And we know that data centres don’t exist in a vacuum. They need connectivity with other data centres – data centres that are increasingly modular and distributed to reduce their impact on local real estate and power grids and to be closer to end users for latency-sensitive applications. One estimate from several years ago holds that 9% of all data centre traffic is east-west – meaning DCI or connectivity to other data centres. Even if this percentage is high for today’s data centres, with more data centres coming online and more AI/ML traffic to/from data centres, AI/ML will help DCI sustain its hot growth rate for years to come.

So, how do we address increasing data centre modularity and distribution while also supporting the DCI deluge that is already here and will be sustained in part by accelerating AI/ML utilisation? The answer is threefold: speed up, spread out, and stack it.

Speed up with terabit waves

Today’s 800G embedded optical engines are moving into the terabit era with the development of 1.2+ Tb/s engines that can transmit 1.2 Tb/s wavelengths hundreds of kilometres and 800G waves up to 3,000 kilometres. Data centre operators that lease fibre can utilise embedded optical engines with high spectral efficiency to maximise data transmission over a single fibre pair and thus avoid the incremental costs associated with leasing additional fibres. While 400G ZR coherent pluggables are increasingly common in metro DCI applications, 800G coherent pluggables are under development for delivery in early 2025. This latest generation of coherent pluggables, based upon 3-nm digital signal processor technology, expands capacity-reach significantly with the ability to deliver 800G wavelengths up to 2,000 kilometres. With such capabilities in small QSFP-DD or OSFP packages, IP over DWDM (IPoDWDM) will continue to be realised in some DCI applications, with pluggables deployed directly into routers and switches.

Spread out with Super C

With advancements in optical line system components like amplifiers and wavelength-selective switches, we can now cost-effectively increase the optical fibre transmission spectrum from 4.8 THz to 6.1 THz to create Super C transmission. With Super C we can realise 27% incremental spectrum, enabling up to 50 Tb/s transmission capacity per band. A similar approach can be applied to creating a Super L transmission band.
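The two headline figures in this section are easy to verify: the Super C spectrum gain and the claim that traffic growing at 50% per year doubles in under two years. A minimal arithmetic sketch (Python, illustrative only):

```python
import math

# Illustrative arithmetic only: checks the figures quoted in the article.
C_BAND_THZ = 4.8        # conventional C-band transmission spectrum (THz)
SUPER_C_THZ = 6.1       # extended "Super C" spectrum (THz)

spectrum_gain = (SUPER_C_THZ - C_BAND_THZ) / C_BAND_THZ
print(f"Super C incremental spectrum: {spectrum_gain:.0%}")   # ~27%

# DCI traffic growing at 50% per annum: years to double = ln(2) / ln(1.5)
years_to_double = math.log(2) / math.log(1.5)
print(f"Years to double at 50% annual growth: {years_to_double:.2f}")  # ~1.7 years
```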
Super C and Super L transmission are cost-effective ways to squeeze more out of existing fibre and to keep up with DCI capacity demands.

(Image caption: Spread out with Super C-band spectrum evolution)

Super C expansion also benefits coherent pluggables in DCI deployments by reclaiming any reduction in total fibre capacity due to the lower spectral efficiency of pluggables. Combining coherent pluggables with Super C enables network operators to leverage the smaller space and power utilisation benefits of pluggables without sacrificing total fibre transmission capacity.

Stack it with compact modularity

Today’s next-generation compact modular optical platforms support mix-and-match optical line system functionality and embedded and pluggable optical engines. Network operators can start with a single 1RU, 2RU, or 3RU chassis and stack them as needed, matching cost to capacity while minimising complexity. By supporting both line system and optical engine functionality in a common platform, network operators can cost-effectively support small, medium, and large DCI capacity requirements while minimising the amount of equipment required versus dedicated per-function alternatives. As capacity demand grows, additional pluggables, sleds, and chassis can be easily added – all managed as a single network entity for operational simplicity.

(Image caption: Stack it with a compact modular platform)

Bringing it all together

While modest today, AI/ML-related DCI traffic will continue to grow – and help buoy an already hot DCI market. To accommodate the rapid connectivity growth between data centres, we will need to continue to innovate with pluggable and embedded optical engines that deliver more capacity with less power and a smaller footprint; with more transmission spectrum on the fibre; and with modular, stackable optical platforms. Commercially available generative AI applications like ChatGPT launched less than two years ago. We are literally just getting started. Hold on tight and grab an optical transmission partner that’s laser focused on what’s next.

For more from Infinera, click here.

Feature - Data centre growth requires sustainable thinking
The development of AI is having a huge impact on almost every industry, and none more so than within data centres. It is expected that global data centre electricity demand will have doubled by 2026 due to the growth in AI. So, how do we ensure our data centres are operating as efficiently as possible? Russell Dailey, Global Business Development Manager, Data Centres at Distech Controls, explains.

We are generating more and more data in all aspects of our lives, whether that’s through our business operations, the use of social media, or even our shopping habits with the growth of e-commerce. Our new dependence on web services and digital infrastructure requires a greater number of data centres, and we need them to operate more reliably and efficiently than ever before. According to the International Energy Agency (IEA), in 2022, data centres used 460 terawatt hours of electricity, and it expects this figure to double in just four years. Data centres could be using a total of 1,000 terawatt hours annually by 2026. This demand for electricity has a lot to do with the growth in AI technology. In a similar way to how the growth of e-commerce drove uptake of large industrial warehouses, AI is expected to more than double the need for global data centre storage capacity by 2027, according to JLL’s Data Centres 2024 Global Outlook.

As data centres contribute substantially to global electricity consumption, more facilities are seeking to adopt enhanced sustainability strategies. To achieve net zero emissions targets or other environmental objectives, data centre companies must invest heavily in energy efficiency measures. A Building Management System (BMS) can form the cornerstone of these efforts, providing insights into energy usage and helping to reduce unnecessary energy waste with enhanced operational efficiency. Data centres are unique buildings, and a BMS within this environment requires careful planning and implementation.

Let’s be open

In the past, building systems have traditionally been proprietary and inflexible compared with open systems. Proprietary systems speak different languages, resulting in incomplete visibility, data, and reliability, and leave you tied to one, often expensive, service provider. However, that is changing: open systems are becoming ever more popular in commercial buildings and have numerous benefits for data centres. Open systems offer monitoring and analytics at the local controller, reducing network complexity and increasing redundancy and security. With Distech Controls, operators can keep their facility at optimal performance through a proven IP-based solution that creates a more secure and flexible network, enabling easy integration of systems with a wide range of IT and business applications. Distech Controls’ commitment to open protocols and industry IT standards, combined with its best-in-breed technology, creates a sustainable foundation that supports and evolves with a building system’s life cycle.

Efficient and forward thinking

Open systems also have an effect on the sustainability of a data centre. They can bring everything together in a cohesive and centralised fashion, allowing users to visualise information, assess relationships, establish benchmarks and then optimise energy efficiency accordingly. Distech Controls’ solutions meet even the most demanding data centre control requirements (even remotely) via fully programmable controls and advanced graphical configuration capabilities.
They leverage technology such as RESTful API, BACnet/IP, connected controllers, and unified systems to help future-ready your data centre as technology continues to advance.

The importance of security

The smarter buildings become, the greater the importance of cyber security. There are some fundamentals that building owners and system integrators need to consider when it comes to the security of their BMS. As a starting point, the devices or operational technology (OT) should be on a different network to the IT system, as they have separate security requirements and different people need to access them. As an example, contractors overseeing BMS devices do not need access to HR information. Each device should be locked down securely so that it can only communicate in the ways that are required. There should be no unnecessary inbound or outbound traffic from the devices.

This links neatly to monitoring. It is vital to monitor the devices after installation and commissioning to ensure there is no untoward traffic to the devices that could threaten a building’s or company’s security. Some manufacturers, such as Distech Controls, are ensuring their products are secure straight out of the box. Security features such as TLS 256-bit encryption, a built-in HTTPS server and HTTPS certificates are built directly into hardware and software. For instance, the ECLYPSE APEX incorporates a secure boot and additional physical security measures to help overcome today’s security challenges.

Distech Controls’ solutions are specified by leading web service providers because of their high resiliency, flat IP system architecture and open protocol support. They also incorporate the right technologies to comply with the most stringent cybersecurity standards, as well as RESTful API / MQTT for OT/IT interoperability purposes. These attributes give data centre operators, integrators, and contractors the freedom to choose the best-in-class solutions for their data centre’s infrastructure management services. These advanced features enable significant operational efficiency improvements and energy cost reductions for data centre owners and managers.

AI technology is already having a revolutionary effect on business and our personal lives. At Distech Controls, we are utilising its capabilities, and it is clear that this revolution is going to require more data centres. We need to look at ways we can make these specialist buildings as efficient as possible. Utilising an intelligent and open BMS is essential.
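The "lock down and monitor" fundamentals described in this article can be illustrated with a minimal sketch: flag any OT/BMS device traffic whose destination is not on an approved allowlist. This is a generic illustration, not Distech Controls functionality; the device names, addresses and flow records are hypothetical.

```python
# Minimal illustration of the "no unnecessary traffic" principle described above.
# Device names, addresses and flow records are hypothetical, not from any real system.

ALLOWED_DESTINATIONS = {
    "bms-controller-01": {"10.20.0.5", "10.20.0.6"},   # BMS head-end servers only
    "chiller-plant-ctrl": {"10.20.0.5"},
}

observed_flows = [
    ("bms-controller-01", "10.20.0.5"),
    ("chiller-plant-ctrl", "203.0.113.44"),  # unexpected external destination
]

def flag_untoward_traffic(flows, allowlist):
    """Return flows whose destination is not explicitly permitted for that device."""
    return [(dev, dst) for dev, dst in flows if dst not in allowlist.get(dev, set())]

for device, destination in flag_untoward_traffic(observed_flows, ALLOWED_DESTINATIONS):
    print(f"ALERT: {device} contacted unapproved destination {destination}")
```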

A sustainable future for data centres
By Ruari Cairns, Director of Risk Management and European Operations, True (powered by Open Energy Market).

In recent years, the data centre industry has witnessed significant growth and innovation, with notable developments such as Google's £1 billion investment in a new data centre in Hertfordshire and Octopus Energy’s commitment to utilising energy from processing centres to heat swimming pools. These advancements underscore the industry’s critical role in supporting our increasingly digitalised world. Since 2010, demand for digital services has increased rapidly, with the number of global internet users more than doubling and internet traffic increasing 25-fold, according to the International Energy Agency (IEA). Data centres serve as the backbone of digital infrastructure, providing the necessary storage, processing, and connectivity not only to allow day-to-day activity, but also to enable businesses to innovate, improve efficiency, and stay competitive in an increasingly digital age.

The environmental impact of data centres

Data centres are undeniably energy-intensive operations. A report from the IEA reveals that data centres and data transmission networks collectively accounted for approximately 330 Mt CO2 equivalent in 2020. Despite their environmental footprint, data centres are indispensable. Therefore, innovation is vital to decarbonise and make this sector more energy efficient.

Challenges surrounding sustainability

Encouragingly, major corporations are placing greater emphasis on environmental credentials. A prime example is Google Cloud, which has committed to achieving a carbon-neutral footprint and transitioning to completely carbon-free energy across all its global data centres by 2030. This ambitious target highlights the growing demand for sustainable solutions in the sector. However, regardless of the growing emphasis, data centres face significant challenges in achieving these green objectives. Balancing energy procurement during operational ramp-up periods and navigating regulatory complexities pose strategic hurdles. The insatiable demand for data presents capacity issues and strains on energy availability, necessitating innovation and collaboration for a greener future. Looking ahead to the next 12-18 months, not only will the data centre industry face pressures around sustainability, but it may also encounter capacity constraints and energy availability challenges.

Power purchase agreements

To achieve sustainability goals, data centres will adopt various strategies in 2024, including securing long-term power purchase agreements (PPAs) for renewable energy procurement. With PPAs, data centres enter into agreements with renewable energy providers to ensure a consistent and sustainable source of electricity. According to a report by BloombergNEF, corporate PPAs for renewable energy reached a record 23.7 gigawatts in 2020, with data centres being one of the key sectors driving this growth. This trend is likely to continue, with more data centres looking for reliable green energy sources.

Energy audits

It is also important that data centres increase the regularity with which they perform energy audits. Energy audits identify areas of high energy consumption and support data centres in implementing energy-efficient measures. This can include optimising server utilisation, upgrading to energy-efficient hardware, and implementing advanced cooling technologies. According to a study commissioned by the US Department of Energy, energy audits can lead to energy savings of up to 30%.
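To put that "up to 30%" figure in context, here is a minimal sketch with hypothetical facility numbers; the load, tariff and savings rate are illustrative assumptions, not figures from the article.

```python
# Hypothetical illustration of audit-driven savings; all input figures are assumptions.

facility_load_mw = 2.0           # assumed average facility draw (MW)
hours_per_year = 8760
tariff_gbp_per_kwh = 0.20        # assumed electricity tariff
audit_savings_rate = 0.30        # upper bound cited in the article

annual_kwh = facility_load_mw * 1000 * hours_per_year
saved_kwh = annual_kwh * audit_savings_rate
saved_gbp = saved_kwh * tariff_gbp_per_kwh

print(f"Annual consumption: {annual_kwh:,.0f} kWh")
print(f"Potential saving:   {saved_kwh:,.0f} kWh (~£{saved_gbp:,.0f}/year)")
```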
Infrastructure design

Data centres will continue to prioritise energy efficiency in their infrastructure design into 2024. This includes using energy-efficient servers, cooling systems, and power distribution units. Advanced cooling technologies, such as liquid cooling, will also gain traction to improve energy efficiency. Advanced systems and infrastructure can support data centres in optimising cooling operations by adjusting cooling levels based on heat loads and demand. This minimises overcooling and ensures resources are used more efficiently. In some cases, cooling systems use up to 40% of the total energy a data centre needs. By implementing advanced, green cooling technologies, data centres can make substantial and critical energy and carbon savings.

Onsite generation

Due to growing concerns about grid capacity, data centres will increasingly invest in on-site energy generation technologies. This will primarily be through solar and wind turbines, as these are easily tailored to each location’s needs. By generating their own clean energy, data centres can reduce their dependence on the grid and minimise their carbon footprint. Highlighting this point, Google recently signed its first PPA in Ireland for a 58-megawatt solar site to help its offices and data centre in Ireland reach 60% carbon-free energy in 2025. The trend will continue this way as more data centres seek the security that comes with onsite generation.

Conclusion

As the digital revolution accelerates, the sustainability of data centres becomes key. By prioritising sustainability, navigating regulatory challenges, and adopting strategic energy procurement strategies, data centres can pave the way for a greener and more resilient future. Collaboration between industry stakeholders and government support will be pivotal in driving collective progress towards a sustainable data centre ecosystem.

Can AI limit the environmental damage it’s responsible for?
Data centres, expected to account for 6% of the world’s carbon footprint by 2030, are undergoing a period of transformation, driven by the rise of AI and the pressing need to combat climate change. With such rapid growth comes unforeseen environmental impacts, highlighting the significance of the application of AI technologies in optimising energy use. It is undeniable that the data-intensive workloads generated by AI will see power consumption soar to unprecedented levels. However, the technology itself can help develop the next generation of data centres that are both high-capacity and more sustainable. According to Julien Deconinck, Managing Director at DAI Magister, environmental concerns are driving the development of innovative AI solutions that optimise energy usage in data centres while reducing operating costs.

Julien explains, “Over the next five years, the amount of data generated will surpass the total produced in the past decade, necessitating a significant expansion of storage capacity in data centres worldwide. Another key factor contributing to this rising energy demand is the escalating computational power required for AI training, which is doubling every six months.

“Tech giants, recognising the scale of the problem and their significant contribution to it, are racing to mitigate the environmental impact of their operations. These companies face mounting pressure to reduce their carbon footprint and meet neutrality targets.

“Most data centres aim to operate in a ‘steady state’, striving to maintain consistent and predictable energy consumption over time to manage costs and ensure reliable performance. As a result, they’re dependent on the local electricity grid, where outputs can fluctuate significantly. AI-driven solutions offer enormous potential to address these challenges by optimising energy usage and predicting and managing demand more effectively.

“Integrating renewable energy sources like solar and wind into the grid can improve data centre sustainability, but this presents challenges due to their variable availability. AI addresses this by forecasting renewable energy availability using weather data and predictive analytics. This enables data centres to shift non-critical workloads to peak renewable energy production periods, maximising the use of clean energy and reducing reliance on fossil fuels.

“When assessing the efficiency of a facility, the power usage effectiveness (PUE) measure serves as a crucial metric. By monitoring and adjusting operational parameters in real time, AI sensors autonomously adjust power supply voltages, reducing consumption without compromising performance.

“AI algorithms analysing usage patterns and optimising workload distribution further reduce the energy waste associated with inadequate server management and inconsistent allocation. The optimisation of computing resources in data centres minimises the need for, and use of, excess capacity, both lowering operating costs and maximising performance capabilities.”

AI can also pre-empt system issues that can lead to breakdowns or long-term disruption. “AI sensors are facilitating predictive maintenance by analysing real-time data to detect anomalies or deviations in consumption patterns. Once identified, AI systems alert operators to the issue, preventing the activation of energy-intensive emergency cooling systems.
“Integration of AI sensors is further beneficial in thermal modelling, enabling dynamic adjustments to systems that account for high-intensity computing tasks and external temperature fluctuations by predicting potential hotspots within the facility, based on data collected.”

Julien concludes, “Together, AI and green technologies are set to revolutionise data centre operations by allowing them to manage larger capacities while reducing their carbon footprint. This not only supports sustainability objectives but also safeguards the transition to low-carbon, high-capacity data centres as the demand for data storage and processing, brought about by the rise of AI, continues to surge.”
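The workload-shifting idea Julien describes, moving deferrable jobs into hours when forecast renewable output is highest, can be sketched in a few lines. The forecast values and job names below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of shifting deferrable workloads into hours with the highest
# forecast renewable share, as described above. All figures are illustrative.

renewable_forecast = {          # hour of day -> forecast share of renewable generation
    0: 0.22, 3: 0.25, 6: 0.35, 9: 0.55, 12: 0.68, 15: 0.61, 18: 0.40, 21: 0.28,
}

deferrable_jobs = ["nightly-backup", "ml-batch-training", "report-aggregation"]

# Greedy assignment: place each deferrable job in the greenest remaining hour.
greenest_hours = sorted(renewable_forecast, key=renewable_forecast.get, reverse=True)
schedule = dict(zip(deferrable_jobs, greenest_hours))

for job, hour in schedule.items():
    share = renewable_forecast[hour]
    print(f"{job}: run at {hour:02d}:00 (forecast renewable share {share:.0%})")
```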

Comparing on-premises with cloud and colocation
Here, Pulsant provides an overview of 'on-premises', cloud and colocation to help businesses looking to expand their digital infrastructure in ways that will aid their success in years to come.

When it comes to digital infrastructure, the decision between 'on-premises' and cloud remains a divide amongst business decision makers - and both sides have their benefits, depending on which way you look at it. Cloud, for example, is often believed to be a runaway success. The UK regulator, Ofcom, produced a report into the domestic cloud service market and noted that the UK cloud infrastructure market is expanding, with overall revenues increasing at a rate of 35% to 40% annually. Despite that, Synergy analysts have found that by 2027, enterprise data centres will still account for more than a quarter of data centre capacity. A 2024 research project involving 350 UK IT leaders recorded that 93% of respondents had been involved with a cloud repatriation project in the last three years. Furthermore, 25% of those businesses have already migrated half or more of their cloud-based workloads back to an 'on-premises' infrastructure.

However, for many businesses, especially those experiencing growth, SMEs and medium-sized enterprises, the reality is that there are two types of 'on-premises' infrastructure. Some businesses (usually within technology) will host their own data internally within their own data centre, while others will host their entire IT infrastructure within their own office. The latter group of companies are often seeking solutions to manage their servers and treat them as a critical asset which they need to protect.

Migrating to serviced offices

Businesses that are experiencing growth, or are already at a medium-sized level, tend to be already discussing their next steps. Usually driven by increases in energy prices, higher borrowing, operational costs, and evolving work patterns, they tend to reassess their property portfolios. One focus has been the migration to serviced offices. Lambert Smith Hampton, a commercial property specialist, cites Mordor Intelligence forecasting that the UK market will grow by 8% per year during 2022-27. There has also been an "exceptional rise in demand" for flexible offices in the UK, according to Savills, which also highlights that the increased demand has pushed prices up by 15%, with enquiries up by 17%.

Despite that, issues have begun to arise with serviced offices. With critical business technology being operated within a shared space, restrictions have emerged. Firstly, serviced offices are rarely equipped with the necessary infrastructure and connectivity to host high-growth company needs. The power supply is shared throughout the entire building and can therefore be insufficient. Also, connectivity options and data management capabilities tend to be at a bare minimum. Shared offices also lack the necessary physical and cyber security assets. With the pressure to keep costs down, a serviced office typically has less physical security than a data centre, and on-site staff aren't comprehensively trained in cybersecurity. From a technological perspective, serviced offices are more vulnerable to cyber-attacks because they use less secure connections to keep costs low. Finally, serviced offices have limited space for businesses looking to grow.
Limits on room size can also restrict attempts to scale IT infrastructure, because office operators need to prioritise space for furniture and other facilities.

Colocation data centre

When it comes to digital infrastructure, serviced offices often bring challenges for high-growth businesses. It is difficult for them to operate their own, expensive property facility or carry their ever-expanding infrastructure with them. At the same time, businesses can experience a lack of control over their own technologies and are forced to cede control over systems and key processes. Then there is the fact that growth is not a 'one time deal' for businesses: it goes beyond legacy on-site strategies, expansions, and adapting infrastructure with every opening of a new facility.

With a colocation data centre, businesses can move their IT infrastructure while focusing on their growth and agility. Firstly, businesses can exploit a flexible space, optimising their capital expenditure (CAPEX) and operational expenditure (OPEX) by acquiring only the minimum they need. In practice, this is often a complete migration to OPEX as businesses seek to release themselves from expensive liabilities. The increased agility can fuel businesses to expand their operations. They also gain the reassurance of greater security, and as they evolve, their colocation partner can introduce a wider range of suppliers in connectivity and ecosystems. Business operations can become more resilient as the colocation provider manages connectivity, power and cooling, which can also ease sustainability concerns. The challenge for businesses is finding the right colocation partner: one that is local to them, available for support across the UK, and able to deliver transparency at a reasonable price. Once a business finds a suitable colocation supplier, it can split its IT infrastructure into a dedicated colocation space and benefit from the best of both worlds: an optimal operational focus with more control over costs and greater flexibility to grow.

Adopting an interconnected approach

To optimise any IT infrastructure migration, businesses need to clarify their approach within the decision-making process. They can do that in five simple steps:

1. Establish what the data is going to be used for and, thus, the primary attributes in successfully managing it. Is speed of access the top concern? Or security? Or real-time analysis? Consider scalability, business continuity and disaster recovery.
2. Define the technologies that will most effectively serve these key purposes. Technically, this means assessing space, power, cooling, and connectivity requirements and accounting for data volume, bandwidth, and downtime.
3. Find and connect with suppliers in those spaces that are prepared to become real partners. In a digital, data-driven age, the software and infrastructure a business is built on matters. Tour facilities and develop service level agreements (SLAs).
4. Develop a detailed migration plan that anticipates delays and establishes clear definitions of success. Install and configure hardware to achieve this, spanning routers, switches, firewalls, load balancers and, if applicable, virtualisation platforms and hypervisors.
5. Keep an eye on the future: embracing data and taking steps towards managing and optimising it typically accelerates growth for a business. As such, businesses must ensure that any strategy to put the data in the right place now does not mean it is in the wrong place tomorrow.

For more, visit pulsant.com/colocation. For more from Pulsant, click here.

Five tips to illuminate your data centre for peak performance
Data centres are among the most energy-intensive buildings in the world. With no windows, these buildings rely entirely on artificial lighting, making the quality and efficiency of the lighting systems crucial. The right lighting can help data centre facilities managers reduce energy usage and improve efficiency. LED lighting, for example, can reduce energy usage by up to 50%, and LED lighting with higher lumens per watt can further increase these energy savings. Ed Haslett, Divisional Director – Critical Facilities UK & Ireland at Zumtobel Group, shares five key points to consider to improve the efficiency of your data centre lighting.

1. Use a lighting track system for speed, flexibility, maintenance and sustainability

Lighting track systems, such as the Tecton track system, eliminate the need for traditional cable runs and junction boxes, making installation faster, easier, and more sustainable. The Tecton system allows the installer to connect a luminaire anywhere along the run, providing unparalleled flexibility compared to other systems. Tecton installations are typically 61% quicker to install than traditional hard-wired solutions and 30% faster than some competitors. The system's flexibility in layout and maintenance allows for easy reconfiguration and repairs, making it an ideal choice for dynamic data centre environments.

2. Use the right luminaire intensities and beam angles to maximise spacing of luminaire points

Selecting luminaires with appropriate intensities and beam angles ensures sufficient illumination while minimising the number of fixtures required. This not only reduces installation costs but also contributes to energy savings. However, it's important to note that the mounting height and intensity can significantly alter the spacing between luminaires. In some designs, it is possible to reduce from seven luminaires in a row to five, cutting commercial costs and significantly decreasing the size of central battery systems, lighting control systems, and wattage loads. However, this may not be feasible in all data centre environments, and careful planning and assessment are necessary.

3. Utilise dedicated emergency spots to reduce the number of emergency luminaires

Employing dedicated emergency spotlights instead of traditional emergency luminaires for emergency illumination can significantly reduce the number of emergency luminaires required. Their specially designed optics can deliver the required emergency illuminance without having to use more of the mains luminaires to meet the requirements in emergency mode. Often, three mains luminaires are required to meet requirements in an aisle, where two dedicated emergency spots will do the job more efficiently with a significantly lower wattage requirement. This optimisation not only conserves space by utilising a smaller central battery system footprint, but also enhances the overall efficiency of the lighting system. A central battery system gives the site team a single, easier-to-maintain point of maintenance, without the need to change out individual batteries in highly sensitive areas on a monthly basis for individual failures. The batteries for these systems can then be placed in an area not subject to the high temperatures found in data halls, so the systems can have much longer life expectancies than the less robust integral emergency packs commonly used in the UK.
Data centres are typically hot environments, so the equipment used to support these areas needs to be capable of working for sustained periods within these temperatures. Most integral battery luminaire solutions are not designed or tested to the levels seen in data centres, especially for buildings which are in operation 24/7. The central system also monitors the health of the emergency luminaires for reporting, allowing the site team to carry out maintenance only when required.

4. Implement dark sky compliant external lighting for the environment and nocturnal animal life

Data centres, as high-security critical environments, should not stand out as beacons in their surroundings. It's crucial to maintain a low-key presence in the built environment. Both humans and ecological systems rely on periods of darkness to thrive. By adhering to dark sky-compliant guidelines for exterior lighting, data centre facilities managers can ensure minimal light pollution, protecting the environment and contributing to the preservation of nocturnal animal life, whilst maintaining the level of security necessary for the site. This responsible approach to lighting underscores the data centre's commitment to environmental stewardship. Dark sky-compliant exterior lighting restricts the amount of upward-directed light, avoids glare and over-lighting, utilises dimming and other appropriate lighting controls, and minimises short-wavelength (bluish) light in the night-time environment. The International DarkSky Association offers the DarkSky Approved programme, providing third-party certification for products and projects that minimise glare, reduce light trespass, and protect the night sky.

5. Implement intelligent control systems to minimise energy consumption

Integrating intelligent lighting control systems allows for dynamic adjustment of lighting levels based on occupancy and required light conditions. This ensures optimal illumination while minimising energy consumption and maximising energy efficiency. DALI (Digital Addressable Lighting Interface) control systems can save energy and money by dimming luminaires when they are not required to work at full output, maintaining the light levels required for CCTV coverage, or switching off luminaires at specific times or when rooms are empty, potentially reducing the system's energy consumption by up to 82%.

By adopting these five key strategies, data centres can effectively harness the benefits of modern LED lighting, leading to significant energy savings, reduced environmental impact, and enhanced operational efficiency. For more information on innovative lighting solutions, visit Zumtobel's website and explore its range of products designed to optimise data centre performance. For more from Zumtobel, click here.
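The arithmetic behind points 2, 3 and 5 is straightforward. A minimal sketch with hypothetical wattages and hours; the fixture wattages and assumptions below are illustrative, not Zumtobel data.

```python
# Hypothetical wattages and hours to illustrate points 2, 3 and 5 above; all figures
# are illustrative assumptions, not Zumtobel data.

mains_luminaire_w = 40        # assumed wattage of one mains LED luminaire
emergency_spot_w = 6          # assumed wattage of one dedicated emergency spot
hours_per_year = 8760

# Point 2: appropriate intensities and beam angles cut a row from seven luminaires to five.
row_before_w, row_after_w = 7 * mains_luminaire_w, 5 * mains_luminaire_w

# Point 3: two dedicated emergency spots replace three mains luminaires in emergency mode.
emergency_before_w, emergency_after_w = 3 * mains_luminaire_w, 2 * emergency_spot_w

# Point 5: dimming/occupancy control, taken at the article's upper bound of 82% savings.
annual_kwh_full = row_after_w * hours_per_year / 1000
annual_kwh_controlled = annual_kwh_full * (1 - 0.82)

print(f"Row load:             {row_before_w} W -> {row_after_w} W")
print(f"Emergency aisle load: {emergency_before_w} W -> {emergency_after_w} W")
print(f"Annual row energy:    {annual_kwh_full:,.0f} kWh -> {annual_kwh_controlled:,.0f} kWh with controls")
```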

Simplifying higher education IT infrastructure complexity
Universities and higher education institutes often face complex challenges in providing the right services to students and staff, while meeting emissions goals. Modernisation of ICT offers numerous opportunities for efficiency, availability and reduced environmental impact, as Louisa Buckley of Schneider Electric explains.

There is no escaping the digital transformation of higher education. It has impacted everyone, forcing leaders to reform almost every element of the learning experience. At the forefront of the transformation is technology. But there's a problem - the paradox that while there's more pressure than ever to evolve and innovate, many institutions are behind the curve when it comes to IT infrastructure. Universities and colleges regularly experience challenges with space constraints, ageing infrastructure, and sustainability. The education sector can face significant challenges in supporting education and training across diverse, distributed campuses, over a wide range of disciplines, and often with an equally wide range of legacy, modern and cutting-edge infrastructure, while also protecting the institution and any intellectual property (IP) and confidential information for which it is responsible. Modernisation of IT infrastructure in this context is much more than improving availability or efficiency; it is an enabler of better management, reduced costs, and a clearer, accelerated path to net zero.

Unique and common challenges

The challenges mentioned are not entirely unique to the education sector, as the tech industry has been dealing with them for decades, and recent developments have allowed IT estates to become more visible, manageable, and optimisable. Advancements in areas such as the Internet of Things (IoT) and instrumentation have meant the term 'smart' can be applied to ever more categories, from uninterruptible power supplies (UPS) to cooling systems and buildings. Cloud-based and AI-enhanced management systems, such as data centre infrastructure management (DCIM), can span multiple environments, from on-premises to the cloud and beyond, gathering data, increasing visibility and showcasing insights for optimisation and efficiency.

Digital design and modelling

These insights begin at design, as digital design and modelling tools allow existing systems to be mapped and understood more extensively, with visualisations helping to highlight the impact of any development, protecting historical and architectural heritage. In operation, the design models go on to serve as digital twins: sandboxes for configuration, optimisation and change management. This level of digitalisation of sensors, equipment, infrastructure, and buildings means that building management systems (BMS) can be integrated with power and cooling systems, which in turn can be managed alongside onsite renewable energy source (RES) generation to provide a complete picture of consumption, operations, and emissions. Tracking this level of data over time with analytic tools can allow AI-enhanced systems to optimise within specific parameters, covering availability, resilience, energy consumption, user needs, and overall emissions. This level of data allows a more complete picture of entire operations for the whole organisation, facilitating meaningful comparisons with other similar organisations, locally or globally, as well as adjacent sectors. Best practice from other areas can be examined and applied. Full Scope 1-3 emissions reporting becomes possible, with a complete picture of environmental impact.
Common reporting standards and frameworks can then be adopted, or existing ones more easily applied.

Predictive and preventative maintenance

An additional benefit of this variety, richness, and depth of data is a greater scope for predictive and preventative maintenance, where anomalies are detected earlier, before they cause an outage, failure, or loss of service.

Cybersecurity

There are also benefits through modernised ICT infrastructure for cybersecurity, as vulnerabilities through the likes of peripheral devices or systems can be mitigated through network segmentation, whitelisting and traffic management, implemented through centrally managed policies.

Education experience

Schneider Electric has extensive experience in cutting-edge new designs, modernisation, and digital transformation projects, specifically within the education sector. University College Dublin's (UCD) heritage dates back more than 150 years. Its main Belfield campus has facilities from the 1960s onward, and it is one of Europe's leading research-intensive universities. Schneider Electric and partners successfully designed and delivered a new cooling system that provides greater data centre efficiency and has unlocked valuable real estate for redevelopment and new facilities. The Uniflair InRow Direct Expansion (DX) cooling solution is more scalable, efficient, and provides resilient cooling for IT infrastructure. UCD's solution is based on 10 independent InRow DX cooling units, rightsized to server load to optimise efficiency. The system is scalable to enable UCD's IT Services Group to add further HPC clusters and accommodate future innovations in technology, including the introduction of increasingly powerful CPUs and GPUs.

Similarly, Loughborough University, one of the world's leading sports-related universities, has undertaken a data centre modernisation project. The next-generation EcoStruxure for Data Centre solution has delivered increased resilience and efficiency, including a services agreement and EcoStruxure IT software to provide 24/7 data-driven insights with proactive maintenance and service support. The project was delivered in two phases with partners: firstly, modernising its Haslegrave facility by replacing an outdated raised floor design and deploying an EcoStruxure Row Data Centre solution, an integral part of Schneider's EcoStruxure for Data Centres architecture and IoT-enabled system. This deployment has significantly improved the overall structure, enabling an efficient data centre design.

The University of Lincoln also faced resilience challenges due to a lack of standby power generating capabilities, affecting its ability to carry out work without service interruption. In modernising its UPS estate, the university added resilience to its network, with 110 Schneider Electric APC Smart-UPS SRT units deployed across its distributed edge facilities, providing power protection and continuity in the event of disruptions or disturbances to the mains power supply. They are managed through APC NetBotz environmental monitoring devices, as well as EcoStruxure IT Expert and Data Centre Expert DCIM. This not only enables the IT team to prioritise ongoing remedial tasks and respond more quickly to unforeseen events and outages, but has also allowed cooling in the data centres and edge facilities to be optimised for greater operational efficiency and lower power consumption.
Expected standards

These various experiences have allowed each of these leading universities to achieve greater operational efficiency and visibility of overall consumption and impact, as well as operational insights and optimisations that feed into net zero targets and ambitions. As a coordinated strategy for modernisation, increased digitalisation and optimisation provide unparalleled opportunities for educational institutions to meet their unique challenges while improving services to students, faculties and researchers, and reaching net zero ambitions. For more from Schneider Electric, click here.

How can data centres cope with the AI explosion?
By Louis McGarry, Sales & Marketing Director at Centiel UK.

AI is at our doorstep. It's so big, it's unfathomable. It's also unregulated. Even the Jeremy Vine show on Radio 2 has recently covered consumer fear of AI and its existing ability to replicate people's voices for fraudulent purposes. This is not something which could happen in the future; it's already here! The capability of AI is mind-blowing and it is already starting to enter our lives in a small way, but this is set to mushroom very rapidly.

According to Forbes Advisor: "AI is expected to contribute a significant 21% net increase to the United States GDP by 2030, showcasing its impact on economic growth." As reported by Grand View Research: "AI continues to revolutionise various industries, with an expected annual growth rate of 37.3% between 2023 and 2030." Part of this growth is driven by investment. Digital currency providers and tech giants are ploughing billions of dollars into research and investment. These organisations are working to make AI more accessible for consumers and other businesses. In some ways, AI will become the monster we can't tame, but it will offer many applications for good too, even if we can't comprehend what these will be yet. From improving healthcare to automating transport, applications have the potential to make our lives easier. According to Forbes Advisor in 2024, "the most popular AI uses include responding to messages, answering financial questions, planning travel itineraries and crafting social media posts as its versatility transforms everyday tasks."

What we do know is that machine learning is power hungry, and the growth in AI and blockchain will need to be managed in terms of energy use within the data centre. More space will be needed, and so will more energy to manage the processing power required. The International Energy Agency has stated that data centres currently account for about 1% of global electricity demand. However, McKinsey estimates that by 2030, data centres' power consumption will almost double, reaching 35 gigawatts annually, up from 17 gigawatts in 2022.

Managing current capacity

Data centre managers will need to be open-minded about taking advantage of available capacity. Traditionally, UPS have been oversized and lightly loaded. This means that in the UK we currently have multi-megawatts of protected power which is underutilised. Instead of building new data centres, in the short term at least, can we look at how valuable space can be better occupied? There may be legal and moral questions about renting existing space to new AI customers; however, there are also opportunities. In the UK, we have the best of the best in terms of resilience and architecture for power infrastructure, and managing and renting existing space better could offer some answers to avoid adding further strain to the grid at this early stage.

Being future-ready

Data centres will also need to manage energy better. According to Electricity 2024, a report from the International Energy Agency, electricity consumption from data centres, artificial intelligence (AI) and the cryptocurrency sector could double within the next two years. I believe that in the future, data centres will need to generate their own energy through renewable sources purely to reduce their reliance on the grid and minimise costs. For the first time in history, pressure to adopt a sustainable approach using renewable energy sources and the need to save money go hand-in-hand.
However, any technology deployed must be flexible enough to adapt and accept different energy sources – some of which may not even have been invented yet. Here's where data centres and the manufacturers that supply them need to be open-minded, flexible and agile. Data centres will still need to be designed to maximise energy efficiency, but can they harness renewables? Flat roofs can be used to take advantage of solar energy, or is there space for wind turbines in large grounds?

From a UPS perspective, Centiel has already taken significant steps to support the need to reduce energy use. Its sustainable modular UPS, StratusPower, offers the highest levels of availability and has on-line efficiencies close to 98%. Uniquely, StratusPower has already been designed with the future use of renewable energy, such as solar and wind, in mind. Currently, mains AC power is rectified to create a DC bus that is used to charge batteries and provide an input to an inverter. But what about a future where the DC bus can deliver critical power protection using renewable energy sources? There is little doubt that future grid instability and unreliability will need to be corrected by the use of renewables, and StratusPower is ready to meet this future.

AI is a monster that we need to look straight in the face. We can't ignore it. It will change our world beyond our imagination, and so we need to be prepared to change too. As well as managing space and energy better, we will need to implement protocols to control AI. To avoid catastrophic consequences, a UPS controlled by AI must never be permitted. A UPS can be enabled to dial out but not to dial in. I also believe that making AI or blockchain more visible within data centres will allow it to be managed better for the common good, and not just by a small minority who could use it for harm. If understanding is increased and it is in the open, it has more chance of becoming regulated. Currently, understanding the universe, planets and space is easier than comprehending the capability of AI. There is a great deal of fear, but as an industry we must be open, discuss issues and potential solutions, and prepare for changes which will come. And they will come quickly.

The fact is that we don't know what the future holds in terms of innovation. With AI and the blockchain sector set to accelerate electricity use exponentially, data centres need to act now to reduce power consumption. Harnessing renewable energy sources will future-proof businesses, and Centiel's expert team of trusted advisors can work hand-in-hand with data centres to advise on how to achieve this using the most efficient UPS systems, while carefully managing total cost of ownership, avoiding risk and not compromising on availability. Only by working better together as human beings will we ensure that our data centres can cope with the AI explosion. For further information, visit www.centiel.com. For more from Centiel, click here.
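The value of a few points of UPS efficiency is simple to quantify. A minimal sketch comparing a roughly 98% efficient online UPS with a 94% unit at an assumed load; the load, tariff and the 94% comparison figure are illustrative assumptions, not Centiel data.

```python
# Hypothetical comparison of annual UPS losses; load, tariff and the 94% baseline
# are illustrative assumptions, not Centiel figures.

load_kw = 500                    # assumed protected IT load
hours_per_year = 8760
tariff_gbp_per_kwh = 0.20        # assumed electricity tariff

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy drawn beyond the load itself over one year, i.e. UPS losses."""
    return load_kw * hours_per_year * (1 / efficiency - 1)

loss_94 = annual_loss_kwh(load_kw, 0.94)
loss_98 = annual_loss_kwh(load_kw, 0.98)

saving_kwh = loss_94 - loss_98
print(f"Losses at 94%: {loss_94:,.0f} kWh/year")
print(f"Losses at 98%: {loss_98:,.0f} kWh/year")
print(f"Saving: {saving_kwh:,.0f} kWh/year (~£{saving_kwh * tariff_gbp_per_kwh:,.0f})")
```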

How to choose a network performance monitoring solution
by Paul Gray, Chief Product Officer at LiveAction.

Enterprise networks are undergoing huge changes. It's not uncommon for an enterprise to find that it has quickly outgrown its network performance monitoring (NPM) solution and that it simply cannot cover the true scope of the enterprise's new geography of activity. LiveAction's 2024 Network performance and monitoring trends report investigated the current state of NPM solutions, surveying hundreds of enterprises on their experiences. Nearly three-quarters (74%) of respondents managed on-premises networks, 70% said they manage cloud environments, 61% said they're running hybrid architectures, and 43% are managing WAN and SD-WAN networks. This kind of environment diversity signifies a huge change from previous generations' traditionally homogenous corporate networks.

However, their NPM solutions often cannot keep up with their growing networks, much less their future ambitions. Over half (57%) reported that they lack visibility into exactly the places they now manage - such as cloud, SD-WAN and WLAN, but also growing areas like voice and video. Nearly half (49%) say that their current NPMs cannot generate actionable insights that would allow them to solve problems quickly. Crucially, 43% of respondents also report that their NPMs struggle to scale as data volumes and network complexity increase. This isn't just a matter of simple network management, but of the long-term health of the larger organisation. To innovate, enterprises have to be able to understand their environments and make sure that they run efficiently and securely so that they can accommodate new transformations.

Searching questions

So what then? Enterprises will need to consider their options. Choosing a new NPM is more complex than picking it off the shelf. An organisation on the hunt for a new NPM will first need to ask itself several searching questions about what will fit with its goals and needs. It will first have to understand its current needs and how, if applicable, its current NPM is failing it. For example, are there new traffic types and loads the current infrastructure cannot handle? Are there new security threats that the NPM is repeatedly missing? Are attempts to decrypt traffic introducing compliance risks?

The true scope of the current environment and network activity must also be addressed. Whatever NPM solution a business chooses, it must accommodate its various environments, from data centres to multi-clouds. It's crucial for future intentions as well. If a business wants to hybridise its environment or roll out large IoT deployments, its choice of NPM provider may change significantly. Depending on future transformation goals, a business will need to consider how well a solution can scale to meet practical realities and future ambitions.

NPMs for today's landscape

Businesses must choose the solution that works for them; however, some characteristics are critical in today's landscape. Firstly, comprehensive end-to-end network visibility and performance management should be considered fundamental. That includes visibility across SD-WAN, WAN, LAN, public, hybrid and multi-cloud environments. This is especially important because problems might spring up in one part of the network but emanate from another. The ability to pinpoint the origin of such problems is an absolute necessity, especially across the diverse range of environments in today's modern networks.
Network traffic analysis is also crucial, because businesses need to be able to use deep packet capture, NetFlow, and analytics to monitor network traffic in real time and granularly inspect the contents of individual packets without decrypting them. Application performance and visibility monitoring are critical to correlate data across network devices, applications, cloud environments, and other places to maximise application performance. The ability to visualise network activity will also be important to unlock a strategic view, allowing issues to be quickly tracked to their root cause and relevant specialists to gain a holistic view of network activity.

Justifying new purchases to the board

There's also a crucial question here about how to justify the purchase of new NPM tools to management. C-suite executives and managers with control over budget are often not tech-savvy. From that point of view, they can view large purchases or infrastructural changes with scepticism. As such, the technical team must justify their budgets at a higher level. The first step on this path involves understanding the perceived risks of adopting a new NPM solution. Implementation may be complex, involve potential disruptions, and be costly. These concerns must be addressed head-on so that they can be rationally weighed against the benefits of a new solution. Equally, those benefits have to be stated clearly. Executives and managers need to know how a particular solution can proactively identify issues, optimise network performance, and contribute to further innovation, ROI, and competitive advantage.

Enterprises need not go into a new solution full-bore. Many vendors offer trial periods for their solutions so that customers can see how they run in their environment and otherwise meet their needs. Changes like these can be expensive, so no choice should be made lightly. However, by proactively addressing and demystifying the perceived risks, a new NPM solution can help adapt an organisation to growing network demands and lay a foundation for further innovation. For more from LiveAction, click here.
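At its simplest, the flow-based visibility described above amounts to aggregating per-flow records into per-application (or per-site) totals so that anomalies and top talkers stand out. A minimal, vendor-neutral sketch with hypothetical flow records; this is not LiveAction's data model or API.

```python
# Vendor-neutral illustration of flow-record aggregation; the records below are
# hypothetical and do not represent any real product's data model or API.

from collections import defaultdict

flow_records = [                     # (application, site, bytes transferred)
    ("voip",  "london",  1_200_000),
    ("video", "london", 48_000_000),
    ("saas",  "leeds",   6_500_000),
    ("video", "leeds",  95_000_000),
]

totals_by_app = defaultdict(int)
for app, site, nbytes in flow_records:
    totals_by_app[app] += nbytes

# Report top talkers by application, largest first.
for app, nbytes in sorted(totals_by_app.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app:>5}: {nbytes / 1e6:8.1f} MB")
```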

How data centres can prepare for 2024 CSRD reporting
by Jad Jebara, CEO of Hyperview.

The CEO of Britain's National Grid, John Pettigrew, recently highlighted the grim reality that data centre power consumption is on track to grow 500% over the next decade. The time to take collective action around implementing innovative and sustainable data centre initiatives is now - and new initiatives such as the Corporate Sustainability Reporting Directive (CSRD) are the perfect North Star to guide the future of data centre reporting. This new EU regulation will impact around 50,000 organisations, including over 10,000 non-EU entities with a significant presence in the region.

The CSRD requires businesses to report their sustainability efforts in more detail, starting this year. If your organisation is affected, you'll need reliable, innovative data collection and analysis systems to meet the strict reporting requirements. CSRD replaces older EU directives and provides more detailed and consistent data on corporate sustainability efforts. It will require thousands of companies that do business in the EU to file detailed reports on the environmental impact and climate-related risks of their operations. Numerous metrics being assessed under CSRD are also analysed within additional EU-wide initiatives. For instance, the Energy Efficiency Directive (EED) requires reporting on two information and communication technology (ICT) metrics that also sit within the CSRD – ITEEsv and ITEUsv – allowing for enhanced measurement of and insight into server utilisation, efficiency, and CO2 impact.

Given the anticipated explosion in energy consumption by data centres over the next decade, CSRD will shine a spotlight on the sustainability of these facilities. For example, the law will require organisations to provide accurate data for greenhouse gases and Scope 1, 2 and 3 emissions. The essential metrics that data centres will need to report on include:

- Power usage effectiveness (PUE) – measures the efficiency of a data centre's energy consumption
- Renewable energy factor (REF) – quantifies the proportion of renewable energy sources used to power data centres
- IT equipment energy efficiency for servers (ITEEsv) – evaluates server efficiency, focusing on reducing energy consumption per unit of computing power
- IT equipment utilisation for servers (ITEUsv) – measures the utilisation rate of IT equipment
- Energy reuse factor (ERF) – measures how much waste energy from data centre operations is reused or recycled
- Cooling efficiency ratio (CER) – evaluates the efficiency of data centre cooling systems
- Carbon usage effectiveness (CUE) – quantifies the carbon emissions generated per unit of IT workload
- Water usage effectiveness (WUE) – measures the efficiency of water consumption in data centre cooling

While power capacity effectiveness (PCE) isn't a mandatory requirement yet, it is a measure that data centres should track and report on, as it reveals the total power capacity consumed over the total power capacity built. If you have not already done so, now is the time to ensure you have processes and systems in place to capture, verify, and extract this information from your data centres. We also recommend conducting a comprehensive data gap analysis to ensure that all relevant data will be collected. It's important to understand where your value chain will fall within the scope of CSRD reporting and how that data can be utilised in reporting that's compliant with ESRS requirements.
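To make the ratio definitions above concrete, here is a minimal sketch that computes PUE, REF, CUE and WUE from sample meter readings. All figures are hypothetical and for illustration only.

```python
# Hypothetical meter readings for one reporting period; figures are illustrative only.

total_facility_energy_kwh = 1_450_000   # everything the site draws
it_equipment_energy_kwh   = 1_000_000   # energy delivered to IT equipment
renewable_energy_kwh      =   600_000   # renewable share of facility energy
carbon_emissions_kg       =   420_000   # emissions attributed to the period's energy use
water_consumed_litres     = 3_200_000   # cooling water consumed

pue = total_facility_energy_kwh / it_equipment_energy_kwh   # lower is better, 1.0 is ideal
ref = renewable_energy_kwh / total_facility_energy_kwh      # share of renewables
cue = carbon_emissions_kg / it_equipment_energy_kwh         # kg CO2e per IT kWh
wue = water_consumed_litres / it_equipment_energy_kwh       # litres per IT kWh

print(f"PUE: {pue:.2f}   REF: {ref:.0%}   CUE: {cue:.2f} kgCO2e/kWh   WUE: {wue:.2f} L/kWh")
```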
In terms of format, reports should be machine-readable, digitally tagged and separated into four sections – General, Environmental, Social and Governance. While the immediate impact of CSRD will be on reporting practices, the hope is that, over time, the new legislation will drive change in how businesses operate. The goal is that CSRD will incentivise organisations such as data centre operators to adopt sustainable practices and technologies, such as renewable energy sources and circular economy models.

Improving sustainability of data centres

Correctly selecting and leveraging a Data Centre Infrastructure Management (DCIM) platform that offers precise and comprehensive reports on energy usage is a paramount step in understanding and driving better sustainability in data centre operations. From modelling and predictive analytics to benchmarking energy performance, data centres that utilise innovative, comprehensive DCIM toolkits are perfectly primed to maintain a competitive operational advantage while prioritising a greener data centre future. DCIM modelling and predictive analytics tools can empower data centre managers to forecast future energy needs more accurately, in turn helping data centres to optimise operations for maximum efficiency. Modelling and predictive analytics also enable proactive planning, ensuring that energy consumption aligns with actual requirements, preventing unnecessary resource allocation and further supporting sustainability objectives.

Real-time visibility of energy usage gives data centre operators insight into usage patterns and instances of energy waste, allowing changes to be made immediately. Ultimately, eliminating inefficiencies faster means lower emissions and less energy waste. In addition to enhancing operational efficiency, leveraging these real-time insights aligns seamlessly with emission reduction goals, supporting a more sustainable and conscious data centre ecosystem. Utilising the right DCIM tools can also reduce energy consumption by driving higher efficiency in crucial areas such as cooling, power provisioning and asset utilisation. They can ensure critical components operate at optimal temperatures, reducing the risk of overheating and preventing energy wastage. In addition to mitigating overheating and subsequent critical failures, keeping equipment at optimal temperatures can also improve its lifespan and performance.

The right DCIM toolkit enables businesses to benchmark energy performance across multiple data centres and prioritise energy efficiency, while also verifying the compliance of data centres with key environmental standards and regulations like CSRD. Cutting-edge DCIM platforms also enable data centres to correctly assess their environmental impact by tracking metrics such as power usage effectiveness (PUE), carbon usage effectiveness (CUE) or water usage effectiveness (WUE). These tools facilitate the integration of renewable energy sources - such as solar panels or wind turbines - into the power supply and distribution of green data centres. As sustainability continues to move up the corporate agenda, expect to see greater integration of DCIM with AI and ML to collect and analyse vast quantities of data from sensors, devices, applications and users. In addition to enhancing the ease of data collection, this streamlined approach aligns seamlessly with CSRD emission reduction goals, making compliance with CSRD and similar regulations much easier for data centres.
Taking a proactive approach to the data gathering requirements of CSRD and implementing technologies to support better sustainability practice isn't just about compliance or reporting; it also moves data centre operators towards the adoption of sustainable practices and technologies. Ultimately, data centres that are prepared for CSRD will also be delivering greater value for their organisation while paving the way for a more sustainable future.


