Thursday, April 24, 2025

Features


COMSovereign expands 5G IP portfolio with additional enabling technologies
COMSovereign has outlined its ongoing efforts to expand the value of its intellectual property (IP) portfolio as part of its business transition. As an innovator in advanced wireless transmission technologies underlying both 4G and 5G wireless networks, it continues to pursue opportunities to monetise the value of its IP. To date, the company holds approximately 130 patents and approximately 25 patent applications pending. These patents and pending applications cover an array of critical wireless networking technologies supporting the latest 5G Mobile Broadband Standard. This includes meeting future wireless network system requirements for increased bandwidth through support for simultaneous radio transmission and reception, utilising approaches such as the company's Lextrum in-band full duplex (zero division duplex) technology. "As an early player in the 4G and 5G space, COMSovereign's business was built on a solid IP foundation, one that powers the market-leading performance of our DragonWave and Fastback products. As part of our ongoing review of the business, we believe our IP portfolio represents an untapped opportunity to create value for our stakeholders. That is why our Board of Directors and our leadership team are actively exploring ways to monetise our IP through multiple paths," says David Knight, interim CEO of COMSovereign.

Ensuring resilience during the energy crisis
By Billy Durie, Global Sector Head for Data Centres at Aggreko

It is no secret that Europe’s appetite for data is increasing year-on-year. According to Domo’s Data Never Sleeps 6.0, it is estimated that the average person creates 1.7MB of data per second. This rapid rate of digitalisation has in turn placed the onus upon service providers to ensure that this growth is supported, and that demand continues to be met. However, it is important to note that this has not been without consequence, as the cities housing Europe’s leading data centre markets - Frankfurt, London, Amsterdam, Paris and Dublin (FLAP-D) - are now wrestling with significant grid strain. While it would be unfair to say that this is solely the fault of the data centre sector, it has undoubtedly been a major contributor to this challenge. In the Republic of Ireland, for instance, data centre electricity consumption has spiked by 144% in five years, with grid operator EirGrid introducing stringent restrictions around applying for new grid connections as a result. Factoring in the effects of the energy crisis, stability of supply has now reached an all-time low for the data centre sector, reinforcing the need to ensure that facilities are able to effectively manage these challenges.

Assessing industry challenges

In an effort to assess how these issues are currently affecting data centre operators, Aggreko surveyed 253 industry professionals across the UK and Republic of Ireland as part of its latest report, The Power Struggle - Data Centres. Those surveyed occupied roles from junior manager up to C-suite executive, with the research taking place in April 2022. The headline findings illustrate the effect that the combination of grid strain and the energy crisis has had on stability of supply. Over 70% of UK businesses cited power security as either ‘a concern’ or ‘a major concern’, while only 6% said it was not. The former figure rises to 80% where the Republic of Ireland is concerned, illustrating the severity of grid strain in this market. Moreover, Aggreko’s survey also indicates that 65% and 60% of UK and Irish businesses respectively have experienced power outages in the past 18 months. With this in mind, it is clear that these concerns are not unfounded. Given that industry standards dictate that downtime is expected to be kept to under 28.8 hours per year, these challenges are evidently creating an unsustainable position for the data centre sector.

Comprehensive stress testing

With the consequences of a possible outage in mind, there has never been a more crucial time to ensure that facilities are able to function effectively, even under the effects of grid strain. Given that rising rack densities are driving electricity consumption in data centres ever higher, this consideration is more important than ever.
This is achieved through loadbank testing of equipment before facilities are brought into operation, which takes place over five key stages:

Factory acceptance testing: determining whether equipment has been built and operates in accordance with design specifications.
Site acceptance testing: ensuring equipment meets specification criteria and is inspected for damage before it enters the facility.
Pre-functional testing: verifying the functionality of the equipment, which includes determining whether each device is properly installed, wired, torqued and Megger tested prior to initial energisation.
Individual system testing: detecting hotspots or weak components in the equipment, allowing them to be replaced before the facility is put to work.
Integrated system testing: ensuring that all equipment responds appropriately to varying loads, staged machinery failures and any potential utility problems.

Through undertaking comprehensive loadbank testing in line with these steps during the commissioning phase, and then annually afterwards, operators can ensure that their facilities stay on top of rising demands while minimising power outage risks.

Exploring alternative approaches

Loadbank testing is the most effective method of assessing the capabilities of a data centre before the facility goes live. However, it is also important to address the long-term causes of the instability of supply in the first place. With this in mind, it may be time for data centre operators to look beyond their traditional grid connection for power procurement. A possible alternative here could be Energy as a Service (EaaS) or Power Purchase Agreements (PPAs), wherein users generate their own energy on site by way of decentralised energy solutions, paying their supplier per kWh. This allows facilities to reduce reliance on their increasingly unstable grid connection without the need to invest in new equipment. The EaaS concept has grown in popularity among data centre operators in recent years, with 51% and 49% of respondents to Aggreko’s survey in the UK and Ireland respectively considering generating their own energy. Yet drawbacks exist in the form of fixed-term pricing, with some operators being subject to one or two-year contracts. Given the volatile nature of the current energy market, it is easy to see how this could be counterproductive. This is especially concerning given that some suppliers even issue penalties for excessively high usage. www.aggreko.com
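As a rough illustration of the staged approach described above, the minimal sketch below models a simple stepped loadbank run and checks each step against pass/fail limits. It is a sketch only: the rated output, load steps, dwell time and voltage/frequency tolerances are assumed figures chosen for illustration, not values from Aggreko's report.

```python
# Illustrative stepped loadbank acceptance test for a standby generator.
# All figures (rating, steps, dwell time, tolerances) are assumptions.

RATED_KW = 2000
LOAD_STEPS = [0.25, 0.50, 0.75, 1.00]        # fraction of rated load at each stage
DWELL_MINUTES = 30                            # time held at each step before readings
VOLTAGE_NOMINAL, VOLTAGE_TOL = 400.0, 0.05    # 400 V +/- 5%
FREQ_NOMINAL, FREQ_TOL = 50.0, 0.01           # 50 Hz +/- 1%

def step_passes(voltage: float, frequency: float) -> bool:
    """Return True if measured voltage and frequency stay within tolerance."""
    v_ok = abs(voltage - VOLTAGE_NOMINAL) <= VOLTAGE_NOMINAL * VOLTAGE_TOL
    f_ok = abs(frequency - FREQ_NOMINAL) <= FREQ_NOMINAL * FREQ_TOL
    return v_ok and f_ok

def run_test(readings):
    """readings[i] is the (voltage, frequency) recorded at LOAD_STEPS[i]."""
    for fraction, (voltage, frequency) in zip(LOAD_STEPS, readings):
        applied_kw = fraction * RATED_KW
        result = "PASS" if step_passes(voltage, frequency) else "FAIL"
        print(f"{applied_kw:>6.0f} kW ({fraction:.0%}) held {DWELL_MINUTES} min: {result}")

# Example readings from a hypothetical test run
run_test([(399.2, 50.0), (398.5, 49.9), (396.8, 49.8), (394.0, 49.6)])
```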

Designing, planning and testing edge data centres
By Carsten Ludwig, Market Manager, Reichle & De-Massari

Edge data centres provide computing power on the periphery of cloud and Wide Area Networks, relieving these and improving performance. They are located as closely as possible to points where data is aggregated, analysed, or processed in real-time. Popular content and applications, for example, can be cached closer to less densely networked markets, improving performance and experience. Let’s examine some considerations when designing, planning and testing edge data centres.

Location

Edge providers may operate dozens or hundreds of edge data centres concurrently across urban and suburban locations, which can be hard to reach and work in, so edge data centres need to be exceptionally robust and secure. Proximity or direct connection to fibre optic links and network node points is imperative. Edge data centres need redundant, synchronous fibre hyperconnectivity in all directions: to the cloud, cellular phone networks, neighbouring data centres and users. These factors pose a significant challenge for planners and design engineers. For planning purposes, edge providers require a tool that reflects all the above-mentioned demands and preconditions. The quality of planning can be improved if drawings and data for further material sourcing come from a single tool. Testing at this stage would be recommended for the fibre links, to ensure these are working correctly and delivering the promised performance. The tested quality of the components used determines the performance and functional reliability of the optical links.

Technical requirements

Edge data centres often have to cope with a lack of space and harsh environmental conditions. They need to be positioned in protected, discreet, dry places and the following must be provided:

Interruption-free power supply
Fire protection
Air-conditioning and cooling
Sound, dust and vibration protection
Locking and access control

A professional approach to securing high performance from the outset would be using preconfigured and assembled modular systems. These could consist of pre-terminated panels, sub racks or complete racks, if logistics and site design allow. Preconfigured equipment can be delivered by the OEM with relevant test certificates, ensuring a high level of quality and vastly simplifying installation as no testing is required on site. This approach requires a professional installation performance and capability from the service team. Properly configured and tested modules increase quality, reduce the risk of failure significantly and reduce the workload on site.

High density and port capacity

AFCOM's ‘2022 State of the Data Centre’ study noted a significant density increase at the edge. In 2021, the typical respondent implementing or planning edge locations reported an estimated mean power density of 7kW. In 2022, this was 8.3kW. Edge data centre fibre hyperconnectivity requires space for high-count fibre cables under floors and in cable ducts. For edge networks moving content such as HDTV programmes closer to the end user, a density of more than 100 ports per rack unit is essential. Traditional 72-port-per-unit UHD solutions won’t suffice; current high-density fibre solutions for data centres generally offer up to 72 LC duplex ports per rack unit. However, such density can introduce management difficulties. Pretermination by the OEM would be ideal. Testing on site is possible, with adapters required on test equipment to serve new connectivity solutions such as the VSFF connector family.
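The fibre-link testing recommended above typically comes down to comparing measured insertion loss against a calculated link budget. The short sketch below shows that arithmetic; the per-kilometre attenuation, connector and splice allowances, and the example link are generic planning assumptions rather than R&M figures.

```python
# Minimal insertion-loss budget for a short edge data centre fibre link.
# Per-component loss values are typical planning assumptions, not vendor figures.

FIBRE_LOSS_DB_PER_KM = 0.35   # single-mode attenuation around 1310 nm
CONNECTOR_LOSS_DB = 0.5       # allowance per mated connector pair
SPLICE_LOSS_DB = 0.1          # allowance per fusion splice

def link_budget(length_km: float, connectors: int, splices: int) -> float:
    """Total allowed insertion loss for the link, in dB."""
    return (length_km * FIBRE_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

budget = link_budget(length_km=0.8, connectors=4, splices=2)
measured = 1.6  # dB, as reported by a light source/power meter or OTDR test
print(f"Budget {budget:.2f} dB, measured {measured:.2f} dB ->",
      "PASS" if measured <= budget else "FAIL")
```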
Connectivity can also be secured using intelligent AIM systems for monitoring layer one performance. Besides the connectivity check ‘outside of the data stream’, edge providers have an overview of what’s happening within connectivity ‘inside of the data stream’. There are several ways of realising this, from a low-budget approach using TAP modules to high-performing 24/7 signal analysers. Each edge location has a unique design and service to deliver, so the approach has to be selected accordingly.

Testing

To ensure quality and performance levels, testing is essential. In Reichle & De-Massari's experience, new data centre builds rarely go according to schedule. If part of the process is pulled forward or delayed, it introduces challenges related to component quality and performance. The installation of sensitive equipment such as fibre connectivity that needs to be 100% clean might have to take place in an environment insufficiently free of dust and moisture, for example. It’s important to determine what tests can be done up front to avoid hassle on site. Optical connectors and adapters can be checked for insertion loss and other standard KPIs before delivery by the OEM. Even if equipment has been preconfigured, testing on site in the event of schedule changes isn't just smart - it should be mandatory. That avoids issues, and therefore also delays and a lot of finger-pointing between involved parties.

Management

Cable management is key. Double-check measurements, make sure terminations are top quality, test wherever necessary, label and colour-code, watch out for cramped conduits and make absolutely sure no cables or bundles rest upon others. Bad cable management can result in signal interference and crosstalk, damage and failure, resulting in data transmission errors, performance issues and downtime. Introducing Operation Management systems provides a seamless 24/7 performance status for each location. As these locations are distributed in line with the nature of the new network architecture, the performance management should not only focus on standard applications such as power, cooling and access reports: every aspect of data connectivity needs to be covered. Solutions that monitor data flow (such as TAP modules) are mandatory. Because an edge provider’s service team doesn’t work on site, remote control of all relevant aspects at each location is mandatory and a precondition for customer-relevant performance. Remote control needs to cover all edge locations in one system. On one hand, this helps monitor the status of all relevant dimensions such as power supply, temperature conditions, data access, data flow, and security. On the other hand, current and upcoming installations at the edge site are monitored by a single system, giving insight for asset and capacity management and serving as a basis for further extensions and new or changing customers at each site. www.rdm.com
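As a minimal sketch of the single-system remote monitoring described above, the example below polls a set of edge sites for power, temperature and layer-one status and raises alerts. The site names, status fields and thresholds are invented for illustration and do not represent any particular AIM or DCIM product.

```python
# Minimal sketch of a central monitoring loop for distributed edge sites.
# Site names, thresholds and status fields are hypothetical examples only.

TEMP_LIMIT_C = 32.0

# In practice these readings would come from monitoring agents at each site;
# here they are hard-coded to keep the example self-contained.
site_status = {
    "edge-london-01":    {"power_ok": True,  "temperature_c": 27.5, "link_ok": True},
    "edge-frankfurt-02": {"power_ok": True,  "temperature_c": 33.1, "link_ok": True},
    "edge-dublin-03":    {"power_ok": False, "temperature_c": 24.0, "link_ok": False},
}

def check_site(name: str, status: dict) -> list:
    """Return a list of alert strings for a single edge site."""
    alerts = []
    if not status["power_ok"]:
        alerts.append(f"{name}: power supply fault")
    if status["temperature_c"] > TEMP_LIMIT_C:
        alerts.append(f"{name}: temperature {status['temperature_c']} degC above limit")
    if not status["link_ok"]:
        alerts.append(f"{name}: layer-one connectivity degraded")
    return alerts

for site, status in site_status.items():
    for alert in check_site(site, status):
        print(alert)
```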

Whitestar Solutions doubles productivity with TREND Networks
Whitestar Solutions has doubled productivity since replacing its existing cable certifier fleet with LanTEK IV testers from TREND Networks. In 2020, the company’s fleet of testers was due for renewal. Upon finding a superior and more cost-effective solution in LanTEK IV cable certifiers, Whitestar Solutions opted to update its fleet by partnering with TREND Networks over its previous supplier. “The first job we used the LanTEK for was for the London Business School,” says Gavin Atkins, Project Supervisor for Whitestar Solutions. “There were over 4000 data points to be installed, but with LanTEK on our side and helping us through, the job ran smoothly and was a complete success.” LanTEK IV is easy to use, with a responsive touchscreen and simple user interface. Moreover, it saves significant amounts of time as it enables the user to test and save a Cat6A link in just seven seconds. “We now have the LanTEK certifier, capable of testing within seven seconds, which is twice as quick as anything we have ever had before. That means our productivity on site has literally doubled overnight,” says John English, Managing Director for Whitestar Solutions. Gavin continues: “When you’re testing thousands of data points, every second counts and that combined with the VisiLINQ reduces the time spent on any job.” VisiLINQ Permanent Link Adapters enable technicians to work smarter, not harder. They make it possible to initiate testing and view the results, all without needing to carry or touch the tester. Users simply press the VisiLINQ test button and wait for the coloured light to indicate the result. “When you’re testing in a noisy environment you can see the green light flash and, without having to waste time looking down at the screen, you know that it has passed and you can go on to the next project,” says Gavin. Another productivity-boosting feature which has benefitted Whitestar Solutions is the TREND AnyWARE Cloud, the fastest cloud test management system in the world. Project managers can also pre-configure all project information in the TREND AnyWARE Cloud for field technicians to download, helping improve accuracy. “Before the cloud, our engineers would be responsible for inputting their own test IDs, which meant that, on occasion, a mistake could happen,” explains John. “One digit out could mean that a 1000-test project would need to be manually changed in the office, which would add time to the project.” John continues: “With our previous supplier, when it came to issuing test results, we either had to send the testers back to the office, or the engineers saved test results to the USB stick, both of which were very time-consuming from an admin point of view. But now with LanTEK, the results are seamlessly synced to the cloud, and our project managers can issue test reports to the clients on the day the project is completed.” Even if Whitestar Solutions’ technicians do not have access to Wi-Fi, they can easily connect the LanTEK cable certifier to their phones. This means that wherever they are working in the country, they can transfer the test results to the office to be signed off with minimal fuss. “Working in the education sector, time and speed are of the essence. With our previous supplier, when we had a tester go in for calibration, we could sometimes be a tester down for one to two weeks,” says John. “With TREND, the tester goes in one day, and we’ve got a loan unit the next.
Downtime is kept to an absolute minimum.” TREND Networks also provides a lifetime support promise, to offer calibration and repair for the LanTEK IV cable certifier for as long as it is in use. Technical support is also available globally, with a two-hour response time promise, to further help maximise productivity. “Our previous supplier wanted the large investment all in one go, whereas with TREND Networks they were buying into our beliefs, which was building a partnership and providing that financial flexibility,” concludes John. “We are planning to grow over the next three to five years and we’re confident that we can depend on TREND Networks to come on that journey with us.” www.whitestarsolutions.com www.trend-networks.com
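A rough calculation shows why per-link test time dominates on jobs of this size. The 4,000-point figure comes from the London Business School example and the seven-second Cat6A test time is quoted above; the 14-second time for the previous testers is inferred from the "twice as quick" claim and should be treated as an assumption.

```python
# Rough comparison of total certification time for a large cabling job.
# 4,000 links comes from the London Business School example; the 14 s
# per-link time for the older testers is inferred, not a quoted figure.

links = 4000
old_seconds_per_link = 14   # assumed: "twice as quick" implies roughly double
new_seconds_per_link = 7    # quoted Cat6A test-and-save time for LanTEK IV

old_hours = links * old_seconds_per_link / 3600
new_hours = links * new_seconds_per_link / 3600
print(f"Old fleet:  {old_hours:.1f} h of pure test time")
print(f"LanTEK IV:  {new_hours:.1f} h of pure test time")
print(f"Time saved: {old_hours - new_hours:.1f} h across the job")
```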

A plan to find your security vulnerability before hackers do
By Keith Bromley, Senior Network Visibility and Security Solutions Manager, Keysight Technologies

One of the top questions on the minds of network security personnel is "how do I reduce my security risk?" Even for smaller organisations this is important because every network has a weakness. But do you know where you are the most vulnerable? Wouldn't you like to fix the problem now, before a hacker exploits it? Here is a three-point plan that works to expose intrusions and decrease network security risk:

Prevention - block as many attacks from entering the network as possible
Detection - find and quickly remediate intrusions that are discovered within the network
Vigilance - periodically test your defences to make sure they are actually detecting and blocking threats

Network security - it all starts with prevention

Inline security solutions are a high-impact technique that businesses can deploy to address security threats. These solutions can eliminate 90% or more of incoming security threats before they even enter your network. While an inline security architecture will not create a foolproof defence against all incoming threats, it provides the crucial data access that security operations (SecOps) teams need to make the real-world security threat load manageable. It is important to note that an inline security solution is more than just adding a security appliance, like an intrusion prevention system (IPS) or a web application firewall (WAF). The solution requires external bypass switches and network packet brokers (NPBs) to access and deliver complete data visibility. This allows for the examination of all data for suspect network traffic.

Hunt down intrusions

While inline security solutions are absolutely necessary to lower your risk of a security intrusion, the truth is that something bad will make it into your network. This is why you need a second level of defence that helps you actively search for threats. To accomplish this task, you need complete visibility into all segments of your network. At the same time, not all visibility equipment is created equal. For instance, are your security tools seeing everything they need to? You could be missing more than 60% of your security threats and not even know it. This is because some of the vendors that make visibility equipment (like NPBs) drop packets (without alerting you) before the data reaches critical security tools, like an intrusion detection system (IDS). This missing data contributes significantly to the success of security threats. A combination of taps, bypass switches, and NPBs provides the visibility and confidence you need that you are seeing everything in your network - every bit, byte, and packet. Once you have this level of visibility, threat hunting tools and security information and event management (SIEM) systems can proactively look for indicators of compromise (IOC).

Stay vigilant and constantly validate your security architecture

The third level of defence is to periodically validate that your security architecture is working as designed. This means using a breach and attack simulation (BAS) solution to safely check your defences against real-world threats. Routine patch maintenance and annual penetration testing are security best practices, but they don't replace weekly or monthly BAS-type functions. For instance, maybe a patch wasn't applied or was applied incorrectly. How do you know? And penetration tests are only good for a specific point in time.
Once a few weeks or months have passed, new weaknesses will probably exist. And crucially, were the right fixes applied if a vulnerability was found? For these reasons and more, you need to use a BAS solution to determine the current strength of your defences. While updating your security tools is great, constant vigilance goes a long way towards securing your organisation. This three-point plan can help you ensure that you are getting the most from your security tools to protect your organisation now and in the future. www.keysight.com
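As a minimal illustration of the detection layer described above, the sketch below matches observed connections against a small list of indicators of compromise, in the spirit of a basic SIEM or threat-hunting rule. The indicator values and traffic records are made up for the example and are not from Keysight.

```python
# Minimal sketch of the 'detection' layer: matching observed connections
# against a list of indicators of compromise (IOCs). The indicator values
# and traffic records are invented for illustration.

KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}      # documentation-range addresses
KNOWN_BAD_DOMAINS = {"malware-update.example.com"}

observed_connections = [
    {"src": "10.0.4.12", "dst_ip": "93.184.216.34", "dst_domain": "example.org"},
    {"src": "10.0.7.3",  "dst_ip": "203.0.113.45",  "dst_domain": "malware-update.example.com"},
]

def matches_ioc(conn: dict) -> bool:
    """True if the connection touches a known-bad IP or domain."""
    return conn["dst_ip"] in KNOWN_BAD_IPS or conn["dst_domain"] in KNOWN_BAD_DOMAINS

for conn in observed_connections:
    if matches_ioc(conn):
        print(f"ALERT: {conn['src']} contacted {conn['dst_ip']} ({conn['dst_domain']})")
```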

Is loadbank testing necessary for containerised data centres?
By Paul Brickman, Commercial Director for Crestchic Loadbanks

Typically housed within shipping containers, containerised data centres can be deployed easily, powered up quickly and scaled without delay in line with changing requirements. For the data centre sector itself, containerised solutions are widely used to accommodate surplus demand for data centres that need to grow but cannot yet do so, and often deliver a continuity of performance when a primary data centre needs critical maintenance or refurbishment. In other sectors they are equally important. They provide 'pop up' IT and communication services for music festivals and sporting events, support office relocations and major construction sites, and are a staple resource for the military, as well as complex industries such as offshore oil and gas, where data and communications demands are often remote and temporary.

Temporary by name, essential by nature

The temporary nature of these data centres often results in them being commoditised and overlooked when it comes to the maintenance procedures and performance best practice that would be considered essential for a bricks-and-mortar data centre. But when in operation, these mobile data centres are just as much a necessity as their permanent counterparts, safeguarding the same valuable data, and preventing the same financially catastrophic losses engineers so dread when maintaining their primary data centres. It is important to remember that a data centre is a data centre, whether that is a purpose-built hyperscale campus, a colocation facility or a temporary solution in a shipping container, and that means the same risks apply, and the same preventative measures are required. No matter the type of data centre being used, the primary cause of unplanned downtime is power failure, something that the Uptime Institute calls “common, costly and preventable”. In fact, in its most recent Risk and Resilience Report, the Uptime Institute calculated that power failure accounts for around 36% of all outages. It is essential therefore that backup generators for containerised data centres are regularly tested, the same as permanent data centre facilities.

Critical applications require guaranteed resilience

Music festivals and sporting events aside, the vast majority of containerised data centre applications are critical. Military communications, major construction sites, data centre refurbishments and a temporary expansion of primary data centre capacity may all have a clear expiration date, but the situation is already fragile - risking a power outage in an already difficult environment could be catastrophic. Although mobile data centres are designed to provide facilities with the perfect mix of temporary generators, networking essentials, cooling equipment, servers and UPS, the fact remains that a single point of failure can immobilise the entire data centre. This is an important consideration when deciding which maintenance procedures to uphold, and which, if any, can be overlooked. With power outages proven to be the biggest point of failure, correct loadbank testing should be maintained at all times to provide reassurance that, if required, the backup power system is capable of accepting the required load and maintaining uptime in the event of a power failure.

Understand the possibilities, prevent downtime

If it is not the temporary nature of a containerised data centre that prevents the required maintenance, then it is often the location, and the assumption that access will be impossible.
After all, these small, highly portable data centres are often located in areas that have not been specifically constructed for such essential kit. That said, leading loadbank manufacturers have created backup generator testing equipment that can meet the testing demands of containerised data centres. One example is Crestchic’s trailer-mounted loadbank solution, which combines the powerful testing capability of its traditional resistive-only loadbanks with the flexibility of a heavy-duty trailer for applications that require exceptional levels of manoeuvrability. With loadbank testing still achievable using mobile loadbanks, the risk of downtime is, as the Uptime Institute puts it, preventable. www.loadbanks.com

Nokia selects Infovista to deliver cloud-based automated drive testing
Infovista has announced a global partnership with Nokia to accelerate the rollout and acceptance of 5G network deployments. The partnership will see Infovista deliver its cloud-based, automated testing solutions to support the verification, optimisation and benchmarking of new 5G and legacy networks deployed and managed by Nokia. The complexity of 5G, magnified by the environments in which networks are being deployed, makes multi-site testing a costly and time-intensive overhead. Infovista’s cloud-based testing solutions enable Nokia to automate and centralise the management of testing routines, significantly streamlining backend reporting and freeing Nokia engineers and service companies from manual drive testing to focus on high-value network optimisation tasks. As part of the global partnership, Nokia will now be able to leverage innovative network testing solutions from Infovista such as the Automated SSV solution. The solution not only predicts where to drive and what tests to perform, but also identifies which hot spots and critical areas should be tested, and then autonomously conducts the network testing routine. This automation of network testing processes removes trial and error and significantly impacts the time to market and efficiency of Nokia’s network deployments. Infovista is delivering its TEMS testing solutions to Nokia. The cloud-based automation of network testing routines enables Nokia to centralise and automate reporting and real-time validation of testing, reducing the number of failed drive tests that must be repeated. The solutions remove the need for Nokia to maintain dedicated servers and redundancy, allow licences to be reused globally and support commercial off-the-shelf devices, significantly increasing the efficiency of network testing. “5G network complexity combined with the proliferation of device types now requires a set of next generation testing solutions leveraging AI/ML, cloud and automation. For operators that are racing to deploy new 5G sites, while continuing to optimise and expand existing network coverage, this new network testing approach is a gamechanger,” says Faiq Khan, President Global Networks, Infovista. “We’re very pleased to be building on our long-standing partnership with Nokia, extending it from centralised reporting to deliver real innovation with cloud-based automation of critical 5G testing processes.” As part of the partnership, Nokia will have access to the latest AI/ML data-driven approaches and automation innovations in network testing, including Infovista’s patent-pending Precision Drive Testing solution, which powers use cases including network acceptance automation, fine-tuning 5G planning propagation models, geolocated troubleshooting and proactive network health monitoring. www.infovista.com www.nokia.com

Reducing the carbon footprint of data centre standby power
The IEA estimates that the energy consumption of data centres comprises around 1% of final electricity demand globally. This figure is understood to rise as high as 4% in the UK, and in Ireland to reach 15%. There is a clear consensus and impetus within the industry to move to carbon-free technology. For example, Google has stated its objective to operate on 24/7 100% carbon-free energy by 2030. Focus is given, understandably, to how energy demands for key applications such as cooling can be reduced and how supply from renewable sources such as solar or wind power can be increased. But if data centre operators are to truly run on 100% carbon-free energy, then all methods of energy consumption must be considered. This should include how standby power is generated, with a clear roadmap established to ensure a transition to low and carbon-free energy without compromising the critical role this plays in keeping assets running during energy outages or times of unpredictable supply. Finning has been working closely with a number of major industry players to establish the best solution to achieve this.

Balancing reliability with sustainability

Current reliance on diesel generators for standby power is well established, primarily because of their practical benefits. They are readily available, reliable as a mature technology and able to quickly ramp up to seamlessly cover issues with main power supplies. Given that data centres must mitigate any risks that may be posed to maintaining mission-critical services for the likes of hospitals and other public services, the advantages in terms of reliability that existing technology offers must be carefully considered as we transition away from fossil fuels. Indeed, in the understandably risk-averse data centre sector we are likely to see a phased transition that balances the benefits of existing technology with the importance of using carbon-free energy.

HVO providing a smart first step

A first step in this transition is the growing use of Hydrotreated Vegetable Oil (HVO). Produced from certified waste fats and oils, HVO is manufactured using a synthesis process with hydrogen to offer a more sustainable alternative to fossil fuels. Although not entirely carbon-free, HVO can eliminate up to 90% of the carbon emissions associated with the production and use of conventional diesel. Another key advantage is that it can be used as a drop-in replacement for diesel in many engines, as well as used with existing diesel or biodiesel stocks. For example, provided the fuel meets standard requirements, Caterpillar generators built after the year 2000 are able to run on HVO. Given the ease with which they can be integrated into existing assets, Finning has seen growing interest from data centre operators in the use of HVO in tandem with diesel to lower the carbon footprint of their standby generators. Whilst use of HVO with conventional diesel has been the most common approach, there have been high-profile cases where a move to 100% HVO has been taken - allowing even greater reductions in carbon emissions. Last year, Microsoft announced that all Cat generator sets at its new data centres being constructed in Sweden would run on HVO, providing the final element to obtaining all of its energy needs from renewable sources.

Longer-term benefits of hydrogen

Whilst HVO is a convenient and highly effective drop-in replacement for existing diesel equipment, it is not entirely carbon-free.
Furthermore, the sheer scale of demand for fuel means that it is unlikely to become a permanent solution, due to its production being limited by how much vegetable oil and fat is available globally. Hydrogen has been touted as a more likely long-term solution, with advances in blue (the splitting of natural gas into hydrogen and CO2, with the resulting carbon captured) and green (splitting water by electrolysis into hydrogen and oxygen) production set to help create the volumes needed to replace fossil fuels on a global scale. Whilst those production volumes are yet to come, plans are in action, and it is an area that data centre operators should explore when looking at the next generation of sustainable equipment. As with HVO, a blended approach may be the likeliest next step, with gas gensets configured to allow for a blended fuel containing up to 25% hydrogen. Manufacturers such as Caterpillar are rolling out gas generators configured out-of-the-factory to enable operation on natural gas blended with hydrogen. Adaptation of existing gensets can be done with the use of retrofit kits for some equipment, making it an appealing quick win as hydrogen availability increases. Rising industry interest and development of hydrogen supply infrastructure mean that 100% hydrogen gensets are in advanced stages of development, and we will likely see trials of this equipment in the near future.

Where next for data centres?

For those that haven’t already adopted it, HVO is the next natural step to take as trials on 100% hydrogen gensets move forwards. Adding Selective Catalytic Reduction (SCR) can also make a notable difference in minimising emissions. Finning says it is also seeing a growing number of new sites considering the use of a microgrid system which allows a combination of power generation sources (photovoltaic solar modules, energy storage, gas and diesel). Smart microgrids mean efficient power that can be produced where and when it's needed, without transmission line and transformer losses. These high-performance, scalable systems are designed and built using standardised building blocks that are easy and quick to install even in challenging environments. Given the risks if the transition of standby power is not delivered smoothly, it’s also critical to ensure you have the right partner to identify the best solution for your specific requirements. Working with a specialist such as Finning means you can not only access this invaluable expertise and advice but also gain insights into the latest testing and performance information as newer technologies such as hydrogen gensets are brought to market - helping you to get maximum uptime with minimum carbon emissions.
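To put the fuel options above into rough numbers, the sketch below estimates annual standby-generation emissions for different HVO shares. The "up to 90%" lifecycle reduction for HVO is taken from the article; the diesel emissions factor and annual fuel volume are assumptions chosen purely for illustration.

```python
# Back-of-envelope estimate of standby-generation CO2e savings when blending
# or switching to HVO. The 90% lifecycle reduction for neat HVO comes from the
# article; the diesel emissions factor and annual fuel volume are assumptions.

DIESEL_KG_CO2E_PER_LITRE = 3.2   # assumed lifecycle factor, for illustration only
HVO_REDUCTION = 0.90             # "up to 90%" lower lifecycle emissions than diesel

def annual_emissions(litres_per_year: float, hvo_share: float) -> float:
    """Estimated kg CO2e per year for a given HVO share of total fuel (0.0-1.0)."""
    diesel_part = litres_per_year * (1 - hvo_share) * DIESEL_KG_CO2E_PER_LITRE
    hvo_part = litres_per_year * hvo_share * DIESEL_KG_CO2E_PER_LITRE * (1 - HVO_REDUCTION)
    return diesel_part + hvo_part

litres = 20_000  # hypothetical annual consumption for testing and standby runs
for share in (0.0, 0.5, 1.0):
    print(f"HVO share {share:.0%}: ~{annual_emissions(litres, share)/1000:.1f} t CO2e/year")
```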

How data centres can tackle their environmental impact
By Inspired Energy

Environmental impact is something more and more data centres are looking into as part of the Climate Neutral Data Centre Pact, under which data centres commit to becoming climate neutral by 2030. Alongside this, the demand for data centres is increasing, with data becoming the new fuel as we continue to innovate and rely more on data and technology in our day-to-day lives. But how can data centres tackle their environmental impact without affecting efficiency?

Tackling the environmental impact

With many data centres having already developed net zero and heat decarbonisation plans, here are some key areas of focus to tackle environmental impact without impacting efficiency:

Greenhouse gas (GHG) emissions

GHG emissions are a main measure for many businesses on their journey to net zero; however, data centres are often measured by carbon intensity. This metric provides a relative comparison of GHG emissions characteristics after factoring in the scale of a business and the emission rate. Most businesses are already reporting on their Scope 1 and 2 emissions, and only large LLPs are required to report on some of their Scope 3 emissions under the Streamlined Energy and Carbon Reporting (SECR) scheme. However, some businesses are voluntarily reporting their Scope 3 emissions, as it’s likely they will become mandatory in the future. Data centres that report on their Scope 3 are demonstrating their commitment to sustainability as some of the largest energy consumers.

Sourcing renewable energy

As energy-intensive users, data centres are turning towards renewable energy to support their net zero commitments. Some larger data centres are securing renewable energy partnerships to power their sites. Renewable energy procurement can help to reduce or even eliminate Scope 2 emissions, and there are a number of purchasing options: a virtual or physical Corporate Power Purchase Agreement (PPA), where renewable energy is purchased directly from an energy generator; green tariffs secured through an energy provider; on-site generation; and renewable energy certificates.

Calculating your Power Usage Effectiveness (PUE)

Data centres should look to optimise their PUE score to achieve maximum efficiency by reducing the amount of energy used for anything other than running equipment. Data centres looking to reduce their PUE may find it challenging to both reduce costs and improve their environmental impact. However, there are ways to optimise PUE:

Allowing server rooms to operate at a higher temperature
Reducing the density, and therefore energy consumed, per square metre - this helps to dissipate heat but can be counterproductive to other practices
Improving the flow of cool air in computer rooms through containment solutions
Optimising the production of cool air through combined use of outside air and heat exchangers
Locating data centres in the Arctic or under the sea

Cooling

Data centres use water for cooling their equipment in cooling towers. The many servers in a hyperscale data centre require a larger cooling capacity, but water consumption varies with the climate and the type of cooling system used. Large data centres may rely on evaporative cooling, which uses less energy, but the process requires more water - it is often the preferred approach as it’s less expensive. Whilst data centres need to start paying attention to their water consumption and Water Usage Effectiveness (WUE), they should also explore how to use water more efficiently.
Options range from increasing cycles of concentration in cooling towers, to switching to chemical treatments that demineralise evaporating surfaces, to recommissioning systems to reduce cooling load.

Hardware

Hardware can also be part of the solution when it comes to reducing emissions whilst ensuring performance efficiency. New processors that increase computing power without increasing energy consumption aren’t far off, as the ongoing ‘arms race’ between chip-makers has encouraged innovation in the energy-to-computing ratio. Not only this, but updating old, outdated, broken and inefficient equipment is also part of supporting the overall effort to tackle environmental impact.

Software

AI software can also play a role by helping data centres to better manage their infrastructure and maximise the utilisation of their CPUs. This will help to mitigate and alleviate consumption issues and deliver energy savings. Improving CPU performance will help data centres keep up with demand for data processing and help reduce the number of processors they need to achieve the same or more computing processes. Data storage can also be optimised to reduce environmental impact. Working through these key areas of focus, your data centre can tackle its environmental impact to further its progress towards carbon neutrality by 2030.
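Returning to the PUE metric discussed earlier: PUE is simply total facility energy divided by the energy delivered to IT equipment, so a value approaching 1.0 indicates minimal overhead. The sketch below shows the calculation; the monthly kWh figures are hypothetical and included only to make the arithmetic concrete.

```python
# PUE = total facility energy / IT equipment energy (ideal value approaches 1.0).
# The monthly kWh figures below are hypothetical.

it_energy_kwh = 1_200_000        # servers, storage, network equipment
cooling_kwh = 380_000            # chillers, CRAH/CRAC units, pumps
power_losses_kwh = 60_000        # UPS and distribution losses
lighting_and_other_kwh = 20_000

total_facility_kwh = it_energy_kwh + cooling_kwh + power_losses_kwh + lighting_and_other_kwh
pue = total_facility_kwh / it_energy_kwh
print(f"PUE = {total_facility_kwh} / {it_energy_kwh} = {pue:.2f}")
```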

Smart power management in buildings and data centres
By Matthias Gerber, Market Manager LAN Cabling, Reichle & De-Massari, and Carsten Ludwig, Market Manager Data Centre, Reichle & De-Massari AG

Regulations regarding energy savings required to reach climate goals are becoming increasingly stringent. Recent research from Deloitte shows that buildings are currently responsible for 30-40% of all urban emissions. This must be reduced by 80-90% in order to achieve COP21 targets by 2050. Using intelligence in digital buildings is essential to achieving this. However, according to the BSRIA ‘Trends in the global structured cabling markets’ report (April 2022), no more than 1-2% of buildings deploy cutting-edge smart technologies. Buildings everywhere need to be renovated and managed smarter.

For buildings to become more energy efficient, they need to become smarter. All energy needs to be used wisely and not wasted on anything that might be considered unnecessary - an intelligent building connects all devices, automates processes and leverages data to improve performance. Intelligent buildings continuously learn, adapt and respond. This can highlight areas where energy usage is being wasted and help find solutions so that HVAC systems, smart lighting, and other in-building systems (sensors) reduce their energy consumption. This approach makes it possible to better manage resources and utilities.

Sensors provide the foundation for intelligent buildings. Today, almost every device can function as a sensor. In the past, every system would have its own sensors, but these didn’t exchange data or interact. In the smart home environment, we’re seeing something similar today: numerous ‘island’ solutions, with each manufacturer using their own platform and integrated devices. However, it makes more sense to use an existing sensor in an installed device, instead of adding the same type of sensor to multiple devices. Convergence is now allowing information from individual devices to be used to optimise the performance of other devices and the system as a whole.

The converged network brings IT and OT together. Security protocols can run from enterprise servers, removing the need for protection of individual networks. One single interface and dashboard can be used to manage and control lighting, heating, ventilation and security. An ‘all-IP’ network allows all devices to use one common language, supporting integration and optimisation. All building technology and building management devices communicate in the same way, without barriers, over Ethernet/Internet Protocol (Ethernet/IP), with the LAN providing the basis for physical communication. IP-based convergence enables sharing of resources across applications and brings standardisation, availability, reliability and support for new deployments.

The ‘digital ceiling’ concept supports ‘All over IP’ implementations. The data network and PoE are extended through an entire building’s ceiling, making it possible to connect building automation devices within defined zones with pre-installed overhead connecting points. The ‘digital ceiling’ will increasingly provide services that building occupants and managers are going to need in the near future and for years to come, enhancing user experience while reducing energy usage, making maintenance and adding new devices faster and easier, lowering installation and device costs, and increasing layout flexibility.
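Because the digital ceiling relies on PoE to power devices in each zone, planners generally need to confirm that the aggregate device load fits within the access switch's PoE budget. The sketch below performs that check; the device wattages and the switch budget are illustrative assumptions, not figures from the article or from R&M.

```python
# Check aggregate PoE demand per ceiling zone against a switch power budget.
# Device power draws and the switch budget are illustrative assumptions.

SWITCH_POE_BUDGET_W = 740   # assumed total PoE budget of the zone's access switch

zone_devices = {
    "led_luminaire":     (24, 9.0),    # (count, watts per device)
    "presence_sensor":   (16, 3.0),
    "wifi_access_point": (2, 25.5),
    "ptz_camera":        (2, 22.0),
}

total_w = sum(count * watts for count, watts in zone_devices.values())
headroom_w = SWITCH_POE_BUDGET_W - total_w
print(f"Zone demand: {total_w:.1f} W, budget: {SWITCH_POE_BUDGET_W} W, "
      f"headroom: {headroom_w:.1f} W -> {'OK' if headroom_w >= 0 else 'OVER BUDGET'}")
```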
A closer look at energy management in the data centre

Data centres are responsible for some 2% of greenhouse gas emissions, which is almost the same as the entire global airline industry. For some time, designers and operators have been using the Greenhouse Gas Protocol, developed by businesses, NGOs, government bodies and other stakeholders, to evaluate their supply chain and performance. Similar initiatives such as the recently published Climate Neutral Data Centre Pact, initiated by CISPE, point the way ahead for the data centre industry. Besides security, energy and (especially) cooling are key talking points in the data centre industry. Of every kilowatt used in the DC, the biggest portion is turned into heat. You can improve power efficiency by using more efficient equipment and thereby reduce heat production, but there is always some excess heat. The question is how to deal with this heat in a way that results in the least environmental harm.

One approach is using liquid-cooled PCBs so that components on the circuit board don’t heat up and, therefore, don’t pass on heat to the chassis or rack. This uses dedicated horizontal pre-terminated rack boxes which have connections for fibre and copper, cooling fluid and power. However, you will need specialised hardware for this, which will be difficult to swap out when you want to make changes. Cooling precisely at source is very efficient. By cooling individual components, heat is reduced, and their operational lifetime improves. This approach is complicated and requires some preparation based on the needs of individual applications, the nature of the business the hardware is used for, and the business case.

Another way of dealing with heat is distributing the components across a wider area. Concentrating as many racks as possible in a huge data centre may bring economies of scale and some practical benefits, but it also concentrates a vast amount of heat production in one space - and isn’t always technically necessary. When hardware becomes outdated or a new user needs to be connected, edge hardware could be moved from, for example, a mid-sized enterprise customer location to a hyperscale facility. Later, it might even be moved from the mid-sized location to an even smaller private location.

Intelligent architecture that carefully considers hot and cold aisles is another approach. The less room you need to cool, the less energy you need. Bad examples can be found, with huge rooms being cooled even though they house just one small containment in a corner. The airflow is a mess in such a case, and the use of energy is extremely inefficient. Cooled air needs to be as close to the equipment as possible and should only cool targeted areas.

Wherever DCIM software is used, Reichle & De-Massari sees power and cooling monitoring engaged as a minimum. In fact, often these are the ONLY monitored KPIs. Not only does this help avoid energy waste but it also improves stability of the system, avoiding malfunctions or even fires. If connectivity doesn't work, performance is harmed, but there’s no physical danger to the system or to people. However, not knowing the status of power and cooling can lead to real damage.

It’s also interesting to consider the true price of all the data we’re generating. Many people are wondering why we’re spending so much power on storing it. Some don’t realise how much power their phones, tablets and computers use, and how much energy it requires to transport and store all this data.
Maybe it would be good if we thought about the environment every time we posted something online!
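As a small addendum to the power and cooling monitoring point above, the sketch below flags racks whose inlet temperature or power draw drifts outside set limits, which is the kind of minimum KPI check a DCIM tool performs. The limits and readings are invented for illustration and are not defaults of any particular product.

```python
# Minimal sketch of the power-and-cooling monitoring described above: flag racks
# whose inlet temperature or power draw drifts outside limits. The limits and
# readings are illustrative assumptions, not DCIM vendor defaults.

INLET_TEMP_LIMIT_C = 27.0   # assumed upper limit for rack inlet air
RACK_POWER_LIMIT_KW = 8.0   # assumed per-rack power cap

rack_readings = [
    {"rack": "A01", "inlet_c": 24.1, "power_kw": 6.3},
    {"rack": "A02", "inlet_c": 28.4, "power_kw": 7.9},   # warm spot in the aisle
    {"rack": "B07", "inlet_c": 23.5, "power_kw": 8.6},   # drawing more than planned
]

for r in rack_readings:
    issues = []
    if r["inlet_c"] > INLET_TEMP_LIMIT_C:
        issues.append(f"inlet {r['inlet_c']} degC over {INLET_TEMP_LIMIT_C} degC limit")
    if r["power_kw"] > RACK_POWER_LIMIT_KW:
        issues.append(f"power {r['power_kw']} kW over {RACK_POWER_LIMIT_KW} kW cap")
    if issues:
        print(f"Rack {r['rack']}: " + "; ".join(issues))
```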


