
Features


Sunbird’s DCIM software helps Vodafone drive sustainability
Vodafone is a telecommunications company trusted by more than 300 million mobile customers, 28 million fixed network customers, 22 million television customers, and six million business customers around the world. The company’s focus is on connecting people, places, and things through fixed and mobile networks. Vodafone has a large and sprawling infrastructure consisting of many data centres. Consequently, Vodafone’s data centre professionals struggled to gain a complete understanding of their environment and capacity, which they needed in order to reduce energy consumption and get the most out of existing facilities. They needed to know the temperature of their data centres to understand whether they could raise temperatures to save energy. They also needed to know more about the power allocated to each rack, the power the rack hardware was actually consuming, the spare power capacity left in each rack, and the amount of spare physical space. It was also important to be less dependent on fossil fuels and to find more energy efficient cooling methods.

“We wanted to gain insight regarding power usage, cooling, and data and power connections,” says Andrew Marsh, Senior Manager for Infrastructure and Data Centres for Vodafone in the UK. “It was also important to us to be able to get rich business intelligence through an easy-to-read dashboard that would help with the day-to-day operations. We wanted that dashboard to give us all the necessary key performance indicators with charts while allowing us to drill down to get the details behind each chart.”

Related to this need was having access to detailed reports, according to Andrew. “We wanted to know what costs are associated with electricity and cooling. We wanted the ability to project how much spare real estate we have from a space and power perspective, and we wanted a comprehensive asset list so that when we deploy a new asset, we can make a more educated decision about where to deploy it.”

Vodafone began a search for a platform that possessed four capabilities: visualisation, an easy-to-use dashboard, a comprehensive asset inventory, and in-depth reporting. That led Vodafone to Sunbird’s DCIM solution. “We looked at several products in the marketplace and did a couple of proofs of concept,” says Andrew. “The thing that sold me on Sunbird was the fact that it was an out-of-the-box solution, which meant that I didn’t need to do lots and lots of development to get the look and feel that I needed. Also, there were a number of preconfigured dashboards to help us make intelligent decisions.”

Sunbird’s DCIM gives Vodafone all of the capabilities it needed, including asset management, capacity management, change management, environmental monitoring, power monitoring, 3D visualisation, and BI and analytics. Vodafone uses Sunbird’s DCIM to collect, trend, and report on the data from its temperature sensors. Sunbird transforms this data into actionable information that enables Vodafone to visualise and understand where overcooling occurs so it can raise temperatures. Vodafone expanded its deployment from 16 to 800 sensors in a single location. Sunbird also allows Vodafone to evaluate rack space and equipment across its facilities in real time. Users can instantly identify rack capacity, as well as every device’s precise location, technical specifications, and connections to other assets. Now, deployment and management decisions are made faster and with much less effort.
Vodafone leverages Sunbird to measure and track power usage in real time throughout all of its facilities. This allows users to see power utilisation trends and capacity levels across the facilities’ power paths. “Today, we are deploying rooms of 200-300 servers every couple of months,” says Andrew. “Even though I have a small team, everything is going very well thanks to the excellent training that Sunbird conducted. The solution is very intuitive, and support is always there when we need it.” Vodafone is also building solar farms and wind turbines and leveraging free air cooling to reduce its carbon footprint.
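By way of illustration, here is a minimal Python sketch of the kind of set point check a DCIM platform automates when hunting for overcooling. The 18°C lower bound follows the ASHRAE recommended envelope; the sensor names and readings are hypothetical, not Vodafone’s data.

```python
# Minimal sketch: flag sensor locations running colder than necessary,
# i.e. candidates for raising temperature set points to save energy.
# The 18 degC lower bound follows the ASHRAE recommended envelope (18-27 degC);
# sensor names and readings are illustrative, not Vodafone's data.

ASHRAE_LOW_C = 18.0
MARGIN_C = 2.0  # readings within this margin of the lower bound count as overcooled

readings = {"rack-A01": 15.5, "rack-A02": 21.0, "rack-B07": 16.2}

for sensor, temp_c in sorted(readings.items()):
    if temp_c < ASHRAE_LOW_C + MARGIN_C:
        print(f"{sensor}: {temp_c} degC - overcooled, set point could be raised")
```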

Why UK data centres need to focus on optimal energy efficiency
By Jodie Eaton, CEO, Shell Energy UK

The volatility of the energy market has been one of the defining features of doing business in 2022. For some, this has proven an inconvenience. For energy intensive industries such as data centres, however, things are far more challenging. Tightly managing overheads will prove key not only to success, but to survival. This requires the intricate management of energy consumption and harnessing process efficiencies wherever possible to keep servers running and costs down. But while market volatility is proving a challenge, regulation is also focusing the minds of data centre managers. While no longer strictly aligned to EU policy, the UK’s general direction of travel is towards big reductions in energy use, minimising carbon emissions and transitioning towards net zero.

Making a plan

Getting hold of statistics and reliable information on UK data centres is difficult, and work needs to be done here for policy makers to obtain a true picture of the sector and its energy consumption. Traditionally, data centre set points for temperature have been between 18 and 21°C. However, there has been no meaningful research to align these targets with those of other regions. Most notable is the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which recommends a temperature set point of between 18 and 27°C.

Understanding how we measure energy usage is important, as it provides an agreed baseline. In the UK, data centres use Power Usage Effectiveness (PUE) to gauge consumption. This figure is the ratio of total facility energy to the energy used to power ICT systems, expressed as a number. Typically, data centres in the UK operate between 1.5 and 1.8, with the EU average being 1.8. In centres operating at a PUE of two and above, more energy is being used on the supporting infrastructure than is supplied to the ICT equipment. This suggests that more energy is being used to ‘cool’ the infrastructure than is strictly necessary, so PUE is a handy measure of efficiency.

The data centre estate

It is important for policy makers to understand the nature of the UK data centre ‘estate’. These operations come in all shapes and sizes, ranging from someone’s back bedroom to hyperscale or cloud facilities containing hundreds of thousands of servers. Due to huge demand, these types of facility are currently being built at a remarkable pace globally. Latest estimates suggest that around 500 such centres exist globally, with at least another 150 under construction. Most analysts estimate that data centre growth will be in the region of 25% for at least the next decade. Based on this thinking, we could soon see between five and 10 colocation facilities in every major city throughout Europe.

How to realise the energy saving potential

Energy savings of at least 15 to 25% could be achieved by data centres, with some businesses able to achieve up to 70%. However, this would need a very radical approach, with businesses willing to make fundamental changes to their operations. Based on the figures above, typical server room energy bills could be reduced by around £10,000 to £25,000 per annum. And while this may not sound like very much, multiply it by 80,000 (the estimated number of UK server rooms) and the national savings run into the hundreds of millions of pounds.

How do I save, what do I need to do?

Firstly, the best option is for the UK to adopt the EU Code of Conduct for Data Centres (Energy Efficiency).
This details more than 150 best practices covering management, IT equipment, cooling, power systems, design, and monitoring of server consumption. Secondly, the UK needs to obtain data on energy usage: how much of this energy is used by cooling systems and how much by uninterruptible power supplies (UPS). Moving ahead, data centres need to measure the amount of energy, and thus the cost, of their IT estate to gain true visibility of current consumption. Next, the industry should calculate its PUE: the total amount of energy consumed by the entire facility, divided by the IT load. Given this key baseline, we can start to track progress. Exploring options to contract renewable energy sources will also support decarbonisation goals, whether through on-site generation, renewable energy supply contracts, or power purchase agreements.

Quick energy wins

Once you have the PUE covered and other monitoring processes in place, it’s time to start looking at reducing consumption. Many of the quick wins, in terms of greater energy efficiency, are simple. Some operations run well established cooling systems which may be very inefficient in terms of energy use. Knowing your current PUE may provide you with new information that will help you make informed decisions on issues such as cooling systems. Likewise, monitoring energy use by site will enable anomalies to be quickly identified and dealt with. Another option is the installation of occupation sensors that automatically switch off lights in empty rooms and adjust heating levels in accordance with building use. Reducing consumption in this way ultimately frees up cash to invest in more efficient equipment and embrace smart energy solutions - an approach that is gaining traction among manufacturers and data centres alike. There is no single solution when it comes to optimising energy efficiency, and every business will benefit from bringing in expertise to identify the steps it can take to bring its energy use under control. There is still a huge role that mitigation and reduction can play in preserving the competitiveness of the UK data centre sector.
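The PUE arithmetic described above is simple enough to sketch in a few lines of Python; the kWh figures below are illustrative, not real site data.

```python
# Power Usage Effectiveness: total facility energy divided by the energy
# delivered to the ICT load. 1.0 is the ideal; UK sites typically run 1.5-1.8.
def pue(total_facility_kwh: float, ict_load_kwh: float) -> float:
    return total_facility_kwh / ict_load_kwh

# Illustrative figures: a site drawing 1.7 GWh overall for a 1 GWh ICT load.
print(pue(total_facility_kwh=1_700_000, ict_load_kwh=1_000_000))  # -> 1.7
```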

Picking an infrastructure management solution
By Carsten Ludwig, Market Manager, Reichle & De-Massari

As we connect more hardware, and networks become more distributed and more complex, monitoring the operational aspects of servers and switches, cooling and power equipment, and any other linked IT hardware becomes more difficult. Monitoring solutions have become essential to networks of all kinds. Although often confused, DCIM (Data Centre Infrastructure Management), AIM (Automated Infrastructure Management) and IIM (Intelligent Infrastructure Management) fulfil different, albeit overlapping, functions. Reichle & De-Massari pinpoints the key differences and focuses on applications and considerations when choosing a DCIM solution.

Several factors are leading to demand for more advanced monitoring. One is size: the average data centre surface area is currently 9,000m2, and the world’s largest data centre, Range International Information Group, located in Langfang, China, measures almost 600,000m2. However, apart from size, distribution across multiple locations is adding to complexity. Edge and cloud facilities, on-premise equipment, ‘traditional’ data centres and hyperscale data centres are all combined to meet specific and ever-changing user needs.

It’s surprisingly common to find network managers carrying out inventory and management of physical infrastructure using Excel sheets - or even paper, pencil and post-its. On that basis, however, developing realistic expansion plans and carrying out risk analyses is impossible, let alone complying with legislation and best practices governing data security and availability. Making infrastructural changes on the basis of incorrect, out-of-date and unreliable documentation is like walking a tightrope without a safety net. Introducing the right monitoring solution saves on equipment and operational costs, energy, maintenance and repairs, and ensures every port is optimally used.

A closer look at the different options

When it comes to monitoring, the choice needs to be made between AIM, IIM and DCIM.

• IIM connects network devices using real-time network management tools and structured cabling, supporting management and speeding up fault-finding and repairs. It lists and details the entire physical infrastructure, automatically detecting IP devices, and provides alerts.

• DCIM integrates management of physical infrastructure, energy, risk, facilities and systems. It allows the user to visualise, analyse, manage, and optimise all the elements that make up a modern data centre and the relations between them. It can optimise physical infrastructure performance and efficiency, and help keep the data centre aligned with current needs. DCIM does more than provide alerts - it is essential to generating performance data which, in turn, can serve as the basis for improvements and enhancements that can be fed into a data centre asset management tool. Another important responsibility of data centre managers is cabling and connectivity management. DCIM software solutions already cater to this requirement by using a software control layer to map and monitor cabling assets and physical connections in real time.

In many cases, DCIM is exactly what’s needed - but you do have to be sure that you’ll be using enough features to warrant the investment. You need to ask yourself whether you have a real business case for monitoring everything across your infrastructure. Does the potential benefit outweigh the investment and the time and effort required to implement such a solution?
For many large networks the answer will definitely be ‘yes’. For others, however, it will be ‘no’. In many such cases, AIM may be a better fit.

• AIM is a specialised solution for tracing and monitoring all changes to a physical network, including switches, servers and patch panels. AIM systems gather data from RFID-based port sensors and provide real-time information about the cabling infrastructure, including asset management, planned and unplanned changes, and alarms. These systems improve operational efficiency and facilitate ongoing management of the passive infrastructure. AIM systems offer functions for mapping, managing, analysing and planning cabling and network cabinets. The integrated hardware and software system automatically detects when cords are inserted or removed, and documents the cabling infrastructure, including connected equipment. AIM enables the optimisation of business processes from an IT infrastructure perspective. Since the entire infrastructure is represented in a consistent database in an AIM system, inquiries into resources such as free ports in network cabinets, ducting capacity, or cabinet space can be answered quickly and easily. Other immediate advantages include improved capacity utilisation of existing infrastructure, and simple and exact planning of changes and expansions. AIM systems also offer planning capabilities to simulate the future expansion of networks, which helps IT managers better prepare the bill of materials required for implementing a project. AIM solutions can reduce incident resolution time, cutting mean-time-to-repair by 30-50% and thereby offering significant savings in terms of both IT resources and lost business output.

An ideal solution would offer IIM, AIM and DCIM functionalities, and allow the user to pick, mix, and upgrade in line with their changing requirements. However, it’s also important to realise that the effective digitalisation of all management processes in a data centre will only be possible once many of the functions described previously are in place. Software that allows equipment and systems in data centres and adjacent locations to ‘talk’ to each other should be in place by default. The extent and type of solution depends very strongly on the business model, service level agreements, and complexity to be managed. For example, a few edge sites might not merit a fully-fledged software environment, but just a few specific features, whereas other networked locations might need the full set. Ongoing insight into and analysis of requirements are a must.
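As an illustration of the sort of resource inquiry a consistent AIM database answers instantly - ‘which cabinets still have free ports?’ - here is a minimal Python sketch; the cabinet inventory and field names are hypothetical.

```python
# Hypothetical inventory: in a real AIM system this lives in the central database.
cabinets = {
    "CAB-01": {"ports_total": 48, "ports_used": 44},
    "CAB-02": {"ports_total": 48, "ports_used": 48},
    "CAB-03": {"ports_total": 24, "ports_used": 10},
}

# Answer the planner's question: which cabinets still have free ports, and how many?
for name, cab in sorted(cabinets.items()):
    free = cab["ports_total"] - cab["ports_used"]
    if free > 0:
        print(f"{name}: {free} free ports")
```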

Avoiding ‘bottlenecks’: Moore’s Law must be addressed
The continually rising demand for data, combined with the ongoing improvement of computer performance under Moore’s Law, may cause a ‘bottleneck’ in facility construction if supply chain disruption continues. According to Greger Ruud, Data Centre Specialist for Aggreko, the continued doubling of computer speed and capability under Moore’s Law is putting the data centre sector under strain as stakeholders look to upgrade existing infrastructure management facilities. With September’s Statista reports demonstrating data consumption growth, and real estate experts JLL predicting persistent supply chain delays causing delivery challenges into 2024, Greger is warning of a potential ‘bottleneck’ in construction if action is not taken.

“The ongoing boom in worldwide data centre markets is undoubtedly something to be welcomed, but it does give rise to additional challenges construction stakeholders need to plan around,” he says. “Specifically, the supply chain must keep pace with this level of growth and improvement in computer performance, which is putting pressure on existing data centre apparatus.

“It is now not uncommon to see facilities that were brought online as recently as 18 months ago needing to upgrade entire halls to deal with this perfect storm of factors. However, a lack of availability of key equipment, including solutions required for load bank testing and utility provision, is causing a bottleneck at the least opportune time. This supply chain disruption must be worked around if the market is to continue meeting ongoing demand and accounting for the increasingly powerful computers being developed and used by data consumers.”

With refurbishment works a key priority across Europe, and taking immovable project completion deadlines into account, the need for new strategies and thinking around equipment procurement is becoming pressing. According to Greger, facility construction stakeholders should explore alternative approaches to the ongoing scarcity of load bank testing and utility provision solutions if further disruption is to be avoided.

“The issues experienced in sourcing key equipment are already well known to the market, and these challenges are expected to continue into the short-to-medium term,” he concludes. “Consequently, those involved in retrofitting existing facilities cannot afford to stand still and be subjected to delays that may result in additional financial costs and reputational damage.

“Testing, power generation and temperature control provision must therefore be immediately available to facilitate this boom, especially if energy provision requirements are continually changing in line with Moore’s Law. Temporary equipment hire could provide contractors with an effective way of keeping this disruption to a minimum, and I would encourage key stakeholders to get in touch with their suppliers and explore this option.”
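As a back-of-envelope illustration of why an 18-month-old hall can already feel dated, the compounding implied by a Moore’s Law-style doubling is easy to sketch in Python; the two-year doubling period is an assumption, not a fixed constant.

```python
# Capability growth under an assumed Moore's Law-style doubling every two years.
def capability_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

# A facility brought online 18 months ago already faces roughly 1.7x more
# capable hardware than it was designed around.
print(round(capability_factor(1.5), 2))  # -> 1.68
```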

Global DCIM market to reach £2.9bn by 2026
Amid the COVID-19 crisis, the global market for DCIM is estimated at £2 billion in 2022 and is projected to reach a revised size of £2.9 billion by 2026, according to new research from Research and Markets. Solutions, one of the segments analysed in the report, is projected to record an 11.7% CAGR and reach £2.2 billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the services segment is readjusted to a revised 12.4% CAGR for the next seven-year period.

The COVID-19 pandemic has led telecom and cloud service providers to experience unprecedented demand, causing concerns regarding the viability of data centres that host services for these operators. Data centres were facing capacity and power challenges even prior to the COVID-19 outbreak, with cooling issues accounting for nearly a third of unplanned outages. Maximising the performance of data centres has always been complex - even in normal conditions - and is now becoming even more challenging after the COVID-19 crisis. The heightened customer demand has resulted in rising pressure on thermal and energy performance.

Data centre operating teams now have the capability to monitor the thermal performance of their premises without being present on site. Creating an immersive 3D digital replica of a data centre can help team members remotely monitor sites, giving them early alerts and insight into any concerning cooling and thermal metrics. Advanced DCIM platforms have been developed to support secure network access from remote locations, allowing employees to complete their work from virtually any location.

The DCIM market in the US is estimated at £1.1 billion in 2022. China is forecast to reach a projected market size of £191.8 million by the year 2026, trailing a CAGR of 17.4% over the analysis period. Among the other noteworthy geographic markets are Japan and Canada, forecast to grow at 10.5% and 10.6% respectively over the analysis period. Within Europe, Germany is forecast to grow at approximately 11.4% CAGR.
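For readers unpacking such projections, the CAGR arithmetic behind them is straightforward; here is a minimal sketch using the report’s headline figures (the report’s own model will of course be more involved).

```python
# Compound annual growth rate from a start value, end value and period in years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Overall DCIM market: £2bn (2022) to £2.9bn (2026).
print(f"{cagr(2.0, 2.9, 4):.1%}")  # -> 9.7%
```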

Wireless LAN backlogs pile up to 10 times normal levels
According to a recently published report from Dell'Oro Group, the overall market is expected to exceed £10 billion by 2026 and see a healthy CAGR over the next five years. Enterprise Wireless LAN backlogs will balloon to over 100% of revenues in 2022, leaving companies to get inventive in their search for Wi-Fi coverage. A boost in unit shipments is not expected until late 2023, with a return to normal unit growth still two years away.

"Wireless LAN market sales are being dragged down by manufacturers' record-breaking backlogs," says Siân Morgan, Wireless LAN Research Director at Dell'Oro Group. "Our recent interviews have revealed that the lead time for receiving Wireless LAN access points has stretched to between six months and a year - a significant change from the 'weeks-to-months' that enterprises were waiting at the end of 2021. Supply constraints have shifted, including not just the main Wi-Fi chips but also secondary or even tertiary components. With a limited ability to fulfil the orders flooding in, manufacturers will focus their late 2022 and early 2023 shipments on working down outstanding backlogs: mainly orders for Wi-Fi 6. Unit shipments should start to loosen up later in 2023, about the time Wi-Fi 7 appears on the market.

"Enterprises are going to creative lengths to procure Wi-Fi solutions, such as prolonging existing support contracts, using older equipment or even repurposing consumer-grade routers. Systems integrators are recommending ways to enable more applications, squeezing more value from the existing network infrastructure. In sum, now is a time characterised by invention," adds Siân.

Why visibility is important for NetOps, and why it’s in short supply
By Thomas Pore, Director of Product Marketing, LiveAction

Network visibility is critical to the success of NetOps and SecOps teams. They’re the ones below deck inspecting packets, troubleshooting application problems, fighting back congestion, and identifying threats to the network. Running an optimised network requires in-depth visibility across multiple considerations: devices, traffic flows, class of service policies, and anomalous activity detection. However, internal and external challenges can compromise this needed access.

The changing nature of enterprise IT

One of the biggest challenges that NetOps teams face surrounds the remote worker transformation that has spiked in recent years. This digital-first transformation requires that the network now be formed of cloud instances, APIs, IoT deployments, and other components that reach outside the traditional network to accommodate a more distributed workforce. In fact, it has been estimated that APIs account for 84% of network traffic. With increasingly complex network configurations, higher data output, growing application usage, and climbing network device volume, true network visibility requires addressing new situations. The growing number of devices and apps on a network brings further complications, changing traffic patterns and complicating visibility. A report on this topic revealed that 81% of network operations professionals deal with network blind spots.

The changing nature of work

We’ve witnessed a mass migration of entire workforces from stable office infrastructures to dispersed locations: homes, cafes, co-working spaces, and anywhere else there’s Wi-Fi. Seeing into those remote Wi-Fi/LAN connections and the public cloud can be a real challenge for some traditional monitoring tools. For example, SNMP polling, ping, and NetFlow can be used in IaaS clouds but won’t work in PaaS or SaaS deployments.

Cyber threats, noise and false positives

Network visibility is the greatest advantage a SecOps team has in proactive cyber threat identification. But 91.5% of malware reported in Q2 2021 was sent through encrypted traffic. Legacy tools like DPI and IPS can’t see into encrypted traffic, and decryption methods like SSL inspection are often resource-intensive, time-consuming and can pose compliance risks. The solution is a modern threat detection tool that uses deep packet dynamics (DPD) to scan encrypted traffic for risks without the need for decryption. Another obstacle to achieving network visibility is working within systems that do not offer targeted or audience-based alerting. Systems that use SIEM alerts can experience a regressive effect on network visibility, losing sight of critical issues through alert fatigue caused by waves of benign alerts. A 2019 report from FireEye found that 37% of large enterprises receive an average of 10,000 alerts each month, of which over half were redundant alerts or false positives.

Tool sprawl

Similarly, the very tools that are supposed to illuminate the network often obscure it when combined. The average NetOps team uses between four and 10 tools to monitor their network. But according to one estimate, almost 25% of large enterprises rely on anywhere between eight and 25 network performance monitoring tools. These different technologies, programming languages, and user interfaces require a large training commitment from NetOps and SecOps teams. Mixing and matching metrics from different reporting tools can create discrepancies and gaps in reporting knowledge.
Once organisations pass a functional tool threshold, budget is wasted, efficiency declines, and ultimately visibility is hampered.

Bringing concision to network visibility

Security and network professionals need tools that empower them to function at their highest ability. A LiveAction survey found that 42% of network professionals spend excessive hours troubleshooting across the network, and 38% are so backlogged that they don’t identify network performance issues when they arise. When network performance and security suffer, the entire organisation is impacted. To harness the visibility needed for successful network operations, organisations must evaluate monitoring performance in several key scenarios, including a multi-vendor network, a multi-cloud network, a hybrid cloud network, data centre visibility, and distributed remote site visibility. Engineers should prioritise their search for a single monitoring solution and dashboard powerful enough to deliver complete network visibility. This convergence into one view simplifies workflows, makes troubleshooting and network visualisation easier, and improves the efficiency of NetOps and SecOps teams.

Amid the constant evolution of network architectures, ways of working, devices, apps, tools, and threats, NetOps and SecOps teams must adapt and find solutions that allow them to keep delivering optimal network results. The importance of network visibility cannot be overstated given the rise of new cyber threats and elevated end-user expectations for network services. Simplifying the job of your network and security professionals improves performance levels and security resilience. Consider the importance of complete network visibility today, to allow your engineers and network to reach their full potential.
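To make the alert fatigue point concrete, here is a minimal Python sketch that collapses repeated alerts by source and signature before they reach an operator; the alert fields are illustrative rather than any particular SIEM’s schema.

```python
from collections import Counter

# Illustrative alert stream: three copies of the same benign event risk
# burying the single high-severity one.
alerts = [
    {"source": "fw-01", "signature": "port-scan", "severity": "low"},
    {"source": "fw-01", "signature": "port-scan", "severity": "low"},
    {"source": "ids-02", "signature": "c2-beacon", "severity": "high"},
    {"source": "fw-01", "signature": "port-scan", "severity": "low"},
]

# Deduplicate: one line per (source, signature) with an occurrence count.
counts = Counter((a["source"], a["signature"]) for a in alerts)
for (source, signature), n in counts.items():
    print(f"{source}/{signature}: {n} occurrence(s)")
```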

Choosing high quality optical transceivers
By Marcin Bala, CEO of Salumanus

Advancements in technology have led to an even greater need for reliable and stable data transmission, making transceivers an essential part of any network's hardware configuration. Optical transceivers are often regarded as one of the simplest pieces of hardware in a network, but this is not true. Selecting the wrong transceiver, or one of poor quality, could lead to a number of unforeseen issues. There are many important factors to consider when choosing an optical transceiver for your application, such as data rates, wavelengths and transmission distance. To help narrow down the search, here are three categories of transceivers - SFP, QSFP and CFP - and their core benefits.

Compact and flexible

Small form-factor pluggable (SFP) transceivers are the most popular optical transceiver type, mainly due to their compact size. The small size allows these transceivers to be compatible with various applications, which is especially useful for tight networking spaces that still require fast transmission. The SFP module is also hot-pluggable, meaning it can easily be added to existing networks without the need for a cable infrastructure redesign. SFP transceivers are very flexible and are compatible with both copper and fibre networks. In copper networks, these transceivers are perfect for connections between switches that are up to 100m apart. When used in fibre optics, however, they can have a communication range of around 500m to over 100km. These transceivers are mainly used in Ethernet switches, routers and firewalls. There is also a more advanced version of the SFP, called the SFP+, which is faster than its original counterpart and can support speeds of up to 10Gbps. SFP+ is not the only advanced version of the SFP transceiver - there are also the SFP28 and the SFP56. The SFP28 differs in that it can support up to 28.1Gbps, whereas the SFP56 has double the capacity of the SFP28 when combined with PAM4 modulation. SFP transceivers support both single-mode and multi-mode fibre and can transmit data over a duplex or a simplex fibre strand. The flexibility of SFP transceivers makes them compatible with almost all applications that require high speed over long ranges, such as dark fibre, passive optical networks and multiplexing.

High density and compact

Quad small form-factor pluggable (QSFP) transceivers are used for 40 Gigabit Ethernet (40GbE) data transmission applications. Like SFP transceivers, QSFP ones are also hot-pluggable. However, in comparison to SFP+ optic modules, QSFP ones have four transmission channels, each with a data rate of 10Gbps, giving a port density four times higher than that of SFP+ transceivers. The QSFP transceiver, like the SFP, can support both single-mode and multi-mode applications, but is capable of doing so over a 100km distance. QSFP transceivers are ideal for networks that require higher data rates. QSFP28 transceivers can support both high speed and high-density data transmission, thanks to their ability to provide even higher data rates of 28Gbps on all four channels. QSFP transceivers use four wavelengths that can be enhanced using wavelength division multiplexing technologies such as CWDM and LAN-WDM. A popular configuration is the 100G QSFP28 DWDM PAM4 solution, which is capable of connecting multiple data centres over distances of up to 80km.
The advantage of using this configuration is that it enables an embedded dense wavelength division multiplexing (DWDM) network to be built using the transceiver directly in the switch. Like the SFP transceiver, this one also has a more advanced version, the QSFP double density (QSFP-DD). This essentially provides double the channels and double the speed, meaning the transceiver has eight channels capable of 400G (8x50G).

Ultra-high bandwidths and high speeds

The C form-factor pluggable (CFP) transceiver is a common form factor used for high speed digital signal transmission. There are four different types of CFP transceivers - CFP, CFP2, CFP4 and CFP8 - all of which can support ultra-high-bandwidth requirements, including next generation high speed Ethernet. The most recent module, the CFP8, can support a broad range of polarisation mode dispersion at 400G and is already designed to support 800Gbps. The most frequently chosen module, however, is still the CFP2. The 100G CFP coherent module supports a range of applications, such as 80km interfaces or 2,500km DWDM links, and is configurable to optimise power dissipation for a given application. CFP transceivers are mainly used in wide area networks (WANs), wireless base stations, video and other telecommunication network systems. They are widely used in data centres, high performance computing and internet provider systems, as they have a long transmission distance and fast speeds. The wide variety of transceivers on the market can make it difficult to find the most suitable one for a given application, but whether network owners require high bandwidths or strong connections over long distances, there is a transceiver to deliver it. Salumanus has delivered over 500,000 optical modules in the last few years, offering support in choosing the most suitable transceiver for clients’ networks.
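The lane arithmetic behind these form factors is worth making explicit; here is a short sketch using the nominal figures cited above (per-lane rates are the rounded marketing numbers, not exact line rates).

```python
# Aggregate data rate = lanes x per-lane rate, using the article's nominal figures.
form_factors = {
    "SFP+":    (1, 10),  # one lane at 10 Gbps
    "QSFP":    (4, 10),  # four lanes at 10 Gbps -> 40GbE
    "QSFP28":  (4, 28),  # four lanes at 28 Gbps -> 100G class
    "QSFP-DD": (8, 50),  # eight lanes at 50 Gbps (PAM4) -> 400G
}

for name, (lanes, gbps) in form_factors.items():
    print(f"{name}: {lanes} x {gbps} Gbps = {lanes * gbps} Gbps aggregate")
```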

Westermo PoE switch supports networks with high power demands
Westermo has introduced a new compact industrial Power over Ethernet (PoE) switch designed to support the ever-growing networking requirements of devices such as security cameras, wireless access points and monitors. The Lynx 3510 PoE series is capable of supporting networks with greater power demands and is ideal for handling the big data, high bandwidth, mission-critical applications typically found within transportation, manufacturing, energy and smart cities. With power and data provided over the same cable, PoE helps to reduce network complexity and offers greater installation flexibility, reliability, and time and cost savings.

The Lynx 3510 PoE enhances network capability by supporting the needs of more powered devices, with eight copper ports each providing gigabit speeds and up to 30W output. This is ideal for connecting HD IP CCTV cameras in industrial settings and other power-hungry applications. The Lynx 3510 PoE also offers redundant and fast failover connectivity, with Westermo’s FRNT ring protocol ensuring rapid network recovery should a node or connection be lost.

Ensuring the security of industrial data communication networks is of paramount importance, especially with cyber attacks becoming increasingly sophisticated. To reduce risk and increase cyber resilience, the Lynx 3510 PoE has an extensive suite of advanced cyber security features. These can be used to build networks in compliance with the IEC 62443 standard, which defines technical security requirements for data communication network components.

“The Lynx 3510 PoE is the first product based on a very powerful new platform, available both as a switch and a router, with impressive performance capable of handling the bandwidth of future networks,” says Henrik Jerregård, Senior Product Manager at Westermo. “The Lynx 3510 PoE is extremely reliable and designed to maintain uninterrupted data communications in even the most challenging environmental conditions, and by offering a total power output of 240W, it will help expand the capability of PoE networks.”

The DIN rail-mountable Lynx 3510 PoE has been extensively tested to meet a broad range of industry standards relating to electromagnetic compatibility, isolation, vibration and shock. With an IP40-rated fan-less all-metal housing, the ultra-robust switch has a wide operating temperature range. Superior build quality, industrial-grade components, a high level of isolation between interfaces and a redundant power supply help to extend service life and create an extremely reliable solution that contributes to a lower total cost of ownership.

Helping to reduce complexity, the Lynx 3510 PoE is powered by the WeOS operating system, which ensures continuous operation, supports an expanding range of communication protocols and features, and simplifies installation, operation and maintenance. WeOS provides future-proofed network solutions with high levels of resiliency and security.
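A PoE deployment ultimately comes down to a power-budget check: each port can deliver up to 30W, but the sum across ports must stay within the 240W total. Here is a minimal sketch of that check, with hypothetical device draws.

```python
# Lynx 3510 PoE figures per the article; the attached-device draws are hypothetical.
TOTAL_BUDGET_W = 240
PER_PORT_MAX_W = 30

device_draws_w = [25.5, 12.9, 30.0, 6.5]  # e.g. cameras, access points

assert all(d <= PER_PORT_MAX_W for d in device_draws_w), "per-port limit exceeded"
assert sum(device_draws_w) <= TOTAL_BUDGET_W, "total PoE budget exceeded"
print(f"Remaining PoE budget: {TOTAL_BUDGET_W - sum(device_draws_w):.1f} W")
```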

Senet and IotaComm partner to deliver advanced wireless networks
Senet and Iota Communications (IotaComm) have announced a partnership to deliver LoRaWAN connectivity through both 915MHz unlicensed spectrum and IotaComm’s unique 800MHz FCC-licensed spectrum network. The initial use cases will be focused on smart building, smart city, and critical infrastructure applications. With this collaboration, and in addition to its use of the Senet platform for application and device management, IotaComm has also become a Senet Radio Access Network (RAN) operator and Senet LPWAN Virtual Network participant.

Through a combination of sensors, meters, and its Delphi360 wireless connectivity and data analytics platform, IotaComm provides an end-to-end smart building and smart city solution used by building managers, industrial site managers, and city planners to better manage the health, safety, and sustainability goals of their organisations and facilities. In addition, IotaComm uniquely combines its FCC-licensed spectrum with the LoRaWAN standard to enable carrier-grade, low power wide area connectivity for critical infrastructure applications, such as smart metering and predictive maintenance. To support growing customer demand for power efficient, battery operated indoor and outdoor smart building sensors, IotaComm operates more than 140 tower sites nationwide and plans to deploy 150 LoRaWAN gateways by 2023.

For customers preferring added levels of network and application performance, Senet and IotaComm are collaborating to create a new LoRaWAN service using the 800MHz licensed spectrum. IotaComm already owns enough 800MHz spectrum to cover about 90% of the US and plans to deploy multi-access gateways to deliver a premium smart building connectivity offering. IotaComm will use Senet’s cloud-based platform to manage both its public LoRaWAN network and private on-premises networks and application deployments using the 800MHz FCC-licensed spectrum.

“We’re honoured to be working alongside Senet in the quest to provide the wireless connectivity efficiency and flexibility that industries are requiring,” says Terrence DeFranco, CEO and President of IotaComm. “This partnership fully supports our goals of building the largest national, carrier-grade LPWAN dedicated to the IoT. Together with Senet’s network architecture expertise, we’ll deliver real-time data that results in high value and actionable insights while filling an existing connectivity gap.”

By opening its LoRaWAN gateways to data traffic from all solution providers connecting to the Senet Low Power Wide Area Virtual Network (LVN), IotaComm is contributing to the rapid expansion of public carrier-grade LoRaWAN networks across the US and generating new IoT services revenue streams. Unique to the Senet LVN are innovative business models designed to deliver unified LoRaWAN connectivity without the need for roaming contracts, and the opportunity for participants like IotaComm to share in the revenue generated by all end devices connecting to the gateways they’ve deployed, regardless of end customer origin.

“Innovation has always been at Senet’s core, and our partnership with Iota Communications is another example of Senet leading the market through innovative technology and unique business models that allow users to improve operations and address sustainability goals,” says Bruce Chatterley, CEO at Senet.
“Iota Communications brings significant value and differentiation to our portfolio of RAN Provider and LVN partners, and we look forward to collaborating to deliver ground-breaking network solutions to the commercial building energy management and facility operation markets.”


