Saturday, June 14, 2025

Features


Data centre operators must develop decentralised energy to achieve resilience
With grid instability and energy security continuing to prove challenging for industry across Europe, a new report is highlighting solutions to help data centre operators navigate a complex energy market and avoid downtime caused by grid resilience issues.

As the rising costs resulting from Europe’s energy crisis begin to settle, the report, titled 'Race to Resilience', indicates that ongoing energy instability means power supply remains a major concern for the European data centre market. According to Aggreko, major questions remain over the future security of data centre operators’ energy supply, with power outages, connection delays and rising fees only compounding the issue. Moreover, with multiple European countries set to end their energy relief packages for businesses by the end of 2024, and the EU’s gas price cap agreement also ending in February 2024, there is concern that the situation will become more severe.

In an effort to address these concerns, Aggreko’s report explores how facilities can meet both short- and long-term power demands, highlighting a revised approach to decentralised energy as an effective route to improving security of supply, reducing transmission losses and lowering carbon emissions.

Chris Rason, Managing Director, Aggreko Energy Services, comments, “Energy-related challenges have been a burgeoning issue for the European data centre market over the past decade. While much has changed in this time, it is clear that this issue will not be decreasing in severity any time soon, and a re-evaluation of power procurement methods is necessary to guarantee security of supply for the future.
“Aggreko’s 'Race to Resilience' report aims to bridge the gap between today’s challenges of exponential energy demand and supply disruptions, and tomorrow’s objectives of security and sustainability.”

The report gives practical examples of decentralised solutions in action, including microgrids, gas-powered generators, combined heat and power, and energy storage. One such example features a data centre with existing on-site generation seeking ‘grid-interactive’ capability. Here, as a core principle of demand-side response (DSR), businesses temporarily lower demand when asked by the distribution network operator, using battery storage, Stage V generation or multi-megawatt gas generation to cover these requests. This approach is particularly useful in strictly regulated areas such as the Republic of Ireland, where operators are subject to ‘flexi-supply’ requirements by EirGrid.

To give decision-makers a starting point when identifying such models, the report also highlights a series of calculators developed by Aggreko - these include the Data Centre Power Selector, Hire Vs Buy, Grid Compare, and Greener Upgrades calculators.

Chris concludes, “The current state of energy instability, compounded by ailing grid infrastructures and pressure to reduce emissions, has placed the European data centre market at a fork in the road. Operators face the choice of persisting with the grid and its limitations, or setting out on the path to resilience through alternative methods of power procurement.

“However, with upgrades through outright purchases bringing their own challenges, bridging solutions offer a risk-free way to set data centres in the right direction. The solutions put forward in Aggreko’s latest report aim to light the way towards new energy models for the European data centre industry.”
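The grid-interactive DSR model described here can be illustrated with a minimal dispatch sketch. This is a hypothetical illustration, not Aggreko's methodology: all figures, function names and the battery-first ordering are assumptions.

```python
def plan_dsr_response(requested_reduction_kw, battery_kw, battery_minutes,
                      generator_kw, event_minutes):
    """Decide how to cover a DSR curtailment request with on-site assets.

    Battery storage is dispatched first (fast response); any remainder is
    picked up by on-site generation (e.g. Stage V or gas generators).
    Returns a (battery_kw, generator_kw, shortfall_kw) tuple.
    """
    # The battery only counts if it can sustain output for the whole event
    usable_battery_kw = battery_kw if battery_minutes >= event_minutes else 0
    from_battery = min(requested_reduction_kw, usable_battery_kw)
    from_generator = min(requested_reduction_kw - from_battery, generator_kw)
    shortfall = requested_reduction_kw - from_battery - from_generator
    return from_battery, from_generator, shortfall

# Example: a 2MW curtailment request for a 60-minute event, covered by a
# 500kW battery (90 minutes of storage) plus 1.8MW of on-site generation
print(plan_dsr_response(2000, 500, 90, 1800, 60))  # → (500, 1500, 0)
```

Any non-zero shortfall in the result signals that the site cannot honour the network operator's request with its current assets.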

IBM to build its first European quantum data centre
IBM has announced plans to open its first Europe-based quantum data centre, at its facility in Ehningen, Germany, to facilitate access to cutting-edge quantum computing for companies, research institutions and government agencies. The data centre is expected to be operational in 2024, with multiple IBM quantum computing systems, each with utility-scale quantum processors (those of more than 100 qubits).

The data centre will serve as IBM Quantum’s European cloud region, enabling users in Europe to provision services at the data centre for their quantum computing research and exploratory activity. It is being designed to help clients continue to meet their European data regulation requirements, including processing all job data within EU borders. The facility will be IBM’s second quantum data centre and quantum cloud region, after its New York facility.

“Europe has some of the world’s most advanced users of quantum computers, and interest is only accelerating with the era of utility-scale quantum processors,” says Jay Gambetta, IBM Fellow and Vice President of IBM Quantum. “The planned quantum data centre and associated cloud region will give European users a new option as they seek to tap the power of quantum computing in an effort to solve some of the world’s most challenging problems.”

“Our quantum data centre in Europe is an integral piece of our global endeavour,” says Ana Paula Assis, IBM General Manager for EMEA. “It will provide new opportunities for our clients to collaborate side-by-side with our scientists in Europe, as well as their own clients, as they explore how best to apply quantum in their industry.”

Telehouse Europe powers operational and customer experience with series of senior appointments
Global colocation provider, Telehouse International Corporation of Europe, has strengthened its operational and customer experience excellence with a restructuring of its operations department and five new appointments, including two new members of the board of directors.

Previously holding the position of Senior Director of Customer Experience, Mark Pestridge has been promoted to the role of Executive Vice President and General Manager, with responsibility for informing and supporting the work of the board, including leading the organisation’s short- and long-term strategies and overseeing the operations of the business globally. With over 20 years’ experience in the data centre and service provider space, Mark has a solid history of developing strategic partnerships across the industry that achieve strong and consistent business performance.

Joining Mark on the board of directors as Senior Vice President and Leader of Technical Services is Paul Lewis, former Senior Director of Technical Services, who led the Operational, Construction, and Design Departments at Telehouse. Paul also takes on responsibility for informing and supporting the work of the board, including setting the global vision and strategy, delivering the agreed strategy, overseeing the company’s entire operations and optimising the organisation’s operational capabilities.

The restructure of the Telehouse Europe operations department sees the creation of a new Data Centre Services Department, aimed at providing efficient and secure services to its customers. This department is managed by a newly appointed Data Centre Services Senior Director, Scott Longhurst. Scott joined Telehouse in February this year and has over 30 years’ experience in critical infrastructure management and engineering in the data centre and telecommunications industry.
In his new role, Scott will ensure that the global colocation provider can continue to build a culture of continuous improvement that places customers at the heart of all its business operations and customer experience initiatives.

Telehouse’s new Data Centre Services Department comprises three key focus areas: Data Centre Operations, headed by newly appointed Alex Mason; and Security Services and Service Delivery, headed by Rob Rennie and Simon Smith respectively. Alex, who joined Telehouse in April, focuses on the management and strategic vision of Telehouse’s mission-critical facilities environments. He is also responsible for the day-to-day service management and maintenance of all infrastructure, including building and facilities management, on a 24/7 basis.

Rob, who has been promoted to Security Services Director, oversees the management and strategic vision of Telehouse’s physical security, including the security of the colocation provider’s assets and those of its customers. He is responsible for evaluating risks to Telehouse and its customers, ensuring that robust procedures are in place to mitigate these risks, and driving efficiencies and ongoing improvement of security systems and processes.

As the newly promoted Service Delivery Director, Simon takes on responsibility for the Service Desk and for the installation of all white space client solutions, such as cage and rack builds, power connections, interconnection cabling, campus-wide ducts and interconnection of the data centres.

Takayo Takamuro, Managing Director & European Chief Executive of Telehouse Europe, commented, “We’re undergoing a transformation of Telehouse that will help us achieve greater operational and customer experience excellence. The new Data Centre Services Department will help us enhance our ability to respond to changing customer needs proactively and ensure ongoing enhancement of the customer journey through end-to-end services.
“As a global colocation provider known for its unrivalled connectivity, we continuously strive to drive our interconnection strategy forward and into new areas, with all the newly appointed senior members supporting the business to achieve this goal.”

First data centre in Bahrain to be fully powered by clean energy
Beyon’s Chairman, Shaikh Abdulla bin Khalifa Al Khalifa, has announced the completion of Phase 2 of the company’s Solar Park at a ceremony which recently took place in the presence of H.E. Kamal Bin Ahmed Mohamed, President of the Electricity and Water Authority; H.E. Mohamed bin Thamer Al Kaabi, Minister of Transportation and Telecommunications; H.E. Yasser bin Ibrahim Humaidan, Minister of Electricity and Water Affairs; H.E. Mrs Noor Bint Ali Al Khulaif, Minister of Sustainable Development; and Mr Mohamed Almoayyed, Director of YK Almoayyed & Sons.

The event was held at the Royal Golf Club in Riffa, where members of Beyon’s board of directors, executive team and team members involved in the project were present. Beyon’s Chairman welcomed the distinguished guests and extended his appreciation for their attendance at the inauguration of Beyon Solar Park.

Speaking on the occasion, he said, “Beyon’s efforts towards sustainability and clean energy production continue, and we have made great progress since the launch of the first phase of the Solar Park in November 2021. Today we are glad to announce the completion of the second phase of the project.

“We are also very proud of an unprecedented achievement in the telecommunications and technology sector, as Beyon’s Data Centre became the first in Bahrain to rely entirely on clean energy generated from the company’s Solar Park, which is located in the Beyon Data Oasis.

“Our journey in the field of environmental sustainability continues in line with our commitment to Bahrain’s vision launched by His Royal Highness Prince Salman bin Hamad Al Khalifa, Crown Prince and Prime Minister of the Kingdom of Bahrain, and announced as part of his address during the 26th United Nations Climate Change Conference in 2021, held in Glasgow, Scotland, which reiterates the Kingdom’s commitment to achieving carbon neutrality by 2060.
“Thus, we have set clear plans to start implementing the third phase of this project, which will be located in Hamala. Upon completion of this phase, the total clean energy production of Beyon will be approximately 6GWh per year.

“On this occasion, I would like to extend my sincere thanks to the Ministries, concerned authorities and our partners for their invaluable support in helping us implement this project and contributing to its success,” Shaikh Abdulla concluded.

Beyon’s Solar Park Phases 1 and 2 will generate 3.6GWh of clean energy annually, leading to a carbon footprint saving of over 2,000 tonnes and a cost saving of BD105,000 per year.

Sunbird’s DCIM software helps Vodafone drive sustainability
Vodafone is a telecommunications company trusted by more than 300 million mobile customers, 28 million fixed network customers, 22 million television customers, and six million business customers around the world. The company’s focus is on connecting people, places, and things through fixed and mobile networks.

Vodafone has a large and sprawling infrastructure, consisting of many data centres. Consequently, Vodafone’s data centre professionals struggled to gain a complete understanding of their environment and capacity that would enable them to reduce their energy consumption and get the most out of their existing facilities. They needed to know the temperature of their data centres to understand if they could raise temperatures to save energy. They also needed to know more about the power allocated to each rack, the power the rack hardware was actually consuming, the spare power capacity left in each rack, and the amount of spare physical space. It was also important to become less dependent on fossil fuels and to find more energy-efficient cooling methods.

“We wanted to gain insight regarding power usage, cooling, and data and power connections,” says Andrew Marsh, Senior Manager for Infrastructure and Data Centres for Vodafone in the UK. “It was also important to us to be able to get rich business intelligence through an easy-to-read dashboard that would help with the day-to-day operations. We wanted that dashboard to give us all the necessary key performance indicators with charts while allowing us to drill down to get the details behind each chart.”

Related to this need was having access to detailed reports, according to Andrew. “We wanted to know what costs are associated with electricity and cooling.
“We wanted the ability to project how much spare real estate we have from a space and power perspective, and we wanted a comprehensive asset list so that when we deploy a new asset, we can make a more educated guess about where to deploy it.”

Vodafone began a search for a platform that possessed four capabilities: visualisation, an easy-to-use dashboard, a comprehensive asset inventory, and in-depth reporting. That led Vodafone to Sunbird’s DCIM solution.

“We looked at several products in the marketplace and did a couple of proofs of concept,” says Andrew. “The thing that sold me on Sunbird was the fact that it was an out-of-the-box solution, which meant that I didn’t need to do lots and lots of development to get the look and feel that I needed. Also, there were a number of preconfigured dashboards to help us make intelligent decisions.”

Sunbird’s DCIM gives Vodafone all of the capabilities it needed, including asset management, capacity management, change management, environmental monitoring, power monitoring, 3D visualisation, and BI and analytics. Vodafone uses Sunbird’s DCIM to collect, trend, and report on the data from its temperature sensors. Sunbird transforms this data into actionable information that enables the company to visualise and understand where overcooling occurs, so it can raise temperatures. Vodafone has expanded its deployment from 16 to 800 sensors in a single location.

Sunbird also allows Vodafone to evaluate rack space and equipment across its facilities in real time. Users can instantly identify rack capacity, as well as every device’s precise location, technical specifications, and connections to other assets. Now, deployment and management decisions are made faster and with much less effort. Vodafone also leverages Sunbird to measure and track power usage in real time throughout all of its facilities, allowing users to see power utilisation trends and capacity levels across the facilities’ power paths.
“Today, we are deploying rooms of 200-300 servers every couple of months,” says Andrew. “Even though I have a small team, everything is going very well thanks to the excellent training that Sunbird conducted. The solution is very intuitive, and support is always there when we need it.”

Vodafone is also building solar farms and wind turbines, and leveraging free air cooling, to reduce its carbon footprint.
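The overcooling workflow described in this case study - collect inlet temperatures, find racks running colder than necessary, raise set points - can be sketched in a few lines. This is a hypothetical illustration, not Sunbird's analytics: the sensor names, readings and the 18°C threshold are all assumptions for the example.

```python
# Flag racks whose inlet temperature suggests overcooling, i.e. where the
# set point could likely be raised to save cooling energy. Sensor names,
# readings and the 18.0°C threshold are illustrative assumptions.
LOWER_BOUND_C = 18.0

readings = {          # latest inlet temperature per rack sensor, in °C
    "rack-A01": 16.4,
    "rack-A02": 19.1,
    "rack-B07": 15.8,
}

overcooled = {rack: temp for rack, temp in readings.items()
              if temp < LOWER_BOUND_C}

for rack, temp in sorted(overcooled.items()):
    headroom = LOWER_BOUND_C - temp
    print(f"{rack}: {temp}°C inlet, {headroom:.1f}°C below the lower bound")
```

In a real deployment the readings dictionary would be fed continuously from the DCIM platform's sensor feed rather than hard-coded.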

Why UK data centres need to focus on optimal energy efficiency
By Jodie Eaton, CEO, Shell Energy UK

The volatility of the energy market has been one of the defining features of doing business in 2022. For some, this has proven an inconvenience. For energy-intensive industries such as data centres, however, things are far more challenging. Tightly managing overheads will prove key not only to success, but to survival. This requires the intricate management of energy consumption and harnessing process efficiencies wherever possible to keep servers running and costs down.

But while market volatility is proving a challenge, regulation is also focusing the minds of data centre managers. While no longer strictly aligned to EU policy, the UK’s general direction of travel is towards big reductions in energy use, minimising carbon emissions and transitioning towards net zero.

Making a plan

Getting hold of statistics and reliable information on UK data centres is difficult - and work needs to be done here for policy makers to obtain a true picture of the sector and its energy consumption. Traditionally, data centre set points for temperature have been between 18 and 21°C. However, there has been no meaningful research to align these targets with other regions’ targets. Most notable is the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which recommends a temperature set point of between 18 and 27°C.

Understanding how we measure energy usage is important, as it provides an agreed baseline. In the UK, data centres use Power Usage Effectiveness (PUE) to gauge consumption. This figure is the ratio of total facility energy to the energy used to power ICT systems, expressed as a number. Typically, data centres in the UK operate between 1.5 and 1.8, with the EU average being 1.8. In centres operating at a PUE of two and above, more energy is being used to provide the supporting infrastructure than is supplied to the ICT equipment.
This suggests that more energy is being used to ‘cool’ the infrastructure than is strictly necessary - so PUE is a handy measure of efficiency.

The data centre estate

It is important for policy makers to understand the nature of the UK data centre ‘estate’. These operations come in all shapes and sizes, ranging from someone’s back bedroom to hyperscale or cloud facilities containing hundreds of thousands of servers. Due to huge demand, these types of facilities are currently being built at a remarkable pace globally. Latest estimates suggest that around 500 such centres exist globally, with at least another 150 under construction. Most analysts estimate that data centre growth will be in the region of 25% for at least the next decade. Based on this thinking, we could soon see between five and 10 colocation facilities in every major city throughout Europe.

How to realise the energy saving potential

Energy savings of at least 15 to 25% could be achieved by data centres, with some businesses able to achieve up to 70%. However, this would need a very radical approach, with businesses willing to make fundamental changes to their operations. Based on the figures above, typical server room energy bills could be reduced by around £10,000 to £25,000 per annum. And while this may not sound like very much, multiply that by 80,000 (the estimated number of UK server rooms) and you achieve national savings running into the hundreds of millions of pounds.

How do I save, and what do I need to do?

Firstly, the best option is for the UK to adopt the EU Code of Conduct for Data Centres (Energy Efficiency). This details more than 150 best practices that cover management, IT equipment, cooling, power systems, design, and monitoring of server consumption. Secondly, the UK needs to obtain data on energy usage, including how much of this energy is used by cooling systems and how much by UPSs.
Moving ahead, data centres need to measure the amount of energy used by their IT estate, and thus its cost, to gain true visibility of current consumption. Next, the industry should calculate its PUE - the total amount of energy consumed by the entire facility, divided by the IT load. Given this key baseline, we can start to track progress. Exploring options to contract renewable energy sources will also support decarbonisation goals, whether through on-site generation, renewable energy supply contracts, or power purchase agreements.

Quick energy wins

Once you have the PUE covered and other monitoring processes in place, it’s time to start looking at reducing consumption. Many of the quick wins, in terms of greater energy efficiency, are simple. Some operations run well-established cooling systems, which may be very inefficient in terms of energy use. Knowing your current PUE may provide you with new information that will help you to make informed decisions on issues such as cooling systems. Likewise, monitoring energy use by site will enable anomalies to be quickly identified and dealt with. Another option is the installation of occupation sensors that will automatically switch off lights in empty rooms and adjust heating levels in accordance with building use.

Such measures can ultimately free up cash to invest in more efficient equipment and embrace smart energy solutions - an approach that is gaining traction among manufacturers and data centres alike. There is no single solution when it comes to optimising energy efficiency, and every business will benefit from bringing in expertise to identify the steps it can take to bring its energy use under control. There is still a huge role that mitigation and reduction can play in preserving the competitiveness of the UK data centre sector.
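The PUE baseline described above is straightforward to compute: total facility energy divided by IT load. A minimal sketch, with illustrative energy figures:

```python
def pue(total_facility_kwh, it_load_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (all energy goes to IT); UK facilities
    typically fall between 1.5 and 1.8.
    """
    if it_load_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_load_kwh

# Illustrative figures: 1,700MWh total facility energy, 1,000MWh of IT load
print(round(pue(1_700_000, 1_000_000), 2))  # → 1.7
```

A result of two or above indicates that the supporting infrastructure (chiefly cooling) is consuming more energy than the ICT equipment itself.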

Picking an infrastructure management solution
By Carsten Ludwig, Market Manager, Reichle & De-Massari

As we connect more hardware, and networks become more distributed and complex, monitoring the operational aspects of servers and switches, cooling and power equipment, and any other linked IT hardware becomes more difficult. Monitoring solutions have become essential to networks of all kinds. Although often confused, DCIM (Data Centre Infrastructure Management), AIM (Automated Infrastructure Management) and IIM (Intelligent Infrastructure Management) fulfil different, albeit overlapping, functions. Here, Reichle & De-Massari pinpoints key differences and focuses on applications and considerations when choosing a DCIM solution.

Several factors are driving demand for more advanced monitoring. One is size: the average data centre surface area is currently 9,000m2, and the world’s largest data centre, operated by Range International Information Group in Langfang, China, measures almost 600,000m2. However, apart from size, distribution across multiple locations is adding to complexity. Edge and cloud facilities, on-premise equipment, ‘traditional’ data centres and hyperscale data centres are all combined to meet specific and ever-changing user needs.

It’s surprisingly common to find network managers carrying out inventory and management of physical infrastructure using Excel sheets - or even paper, pencil and post-its. With such methods, however, developing realistic expansion plans and carrying out risk analyses is impossible, let alone complying with legislation and best practices governing data security and availability. Making infrastructural changes on the basis of incorrect, out-of-date and unreliable documentation is like walking a tightrope without a safety net. Introducing the right monitoring solution saves on equipment and operational costs, energy, maintenance and repairs, and ensures every port is optimally used.
A closer look at the different options

When it comes to monitoring, the choice needs to be made between AIM, IIM and DCIM.

• IIM connects network devices using real-time network management tools and structured cabling, supporting management and speeding up fault-finding and repairs. It lists and details the entire physical infrastructure, automatically detects IP devices and provides alerts.

• DCIM integrates management of physical infrastructure, energy, risk, facilities and systems. It allows the user to visualise, analyse, manage, and optimise all the elements that make up a modern data centre and the relations between them. It can optimise physical infrastructure performance and efficiency, and help keep the data centre aligned with current needs. DCIM does more than provide alerts - it is essential to generating performance data which, in turn, can serve as the basis for improvements and enhancements which can be fed into a data centre asset management tool. Another important responsibility of data centre managers is cabling and connectivity management; DCIM software solutions already cater to this requirement by using a software control layer to map and monitor cabling assets and physical connections in real time.

In many cases, DCIM is exactly what’s needed - but you do have to be sure that you’ll be using enough features to warrant the investment. You need to ask yourself whether you have a real business case for monitoring everything across your infrastructure. Does the potential benefit outweigh the investment and the time and effort required to implement such a solution? For many large networks the answer will definitely be ‘yes’. For others, however, it will be ‘no’. In many of these cases, AIM may be a better fit.

• AIM is a specialised solution for tracing and monitoring all changes to a physical network, including switches, servers and patch panels.
AIM systems gather data from RFID-based port sensors and provide real-time information about the cabling infrastructure, including asset management, planned and unplanned changes, and alarms. These systems improve operational efficiency and facilitate ongoing management of the passive infrastructure. AIM systems offer functions for mapping, managing, analysing and planning cabling and network cabinets. The integrated hardware and software system automatically detects when cords are inserted or removed, and documents the cabling infrastructure, including connected equipment.

AIM enables the optimisation of business processes from an IT infrastructure perspective. Since the entire infrastructure is represented in a consistent database in an AIM system, inquiries into resources such as free ports in network cabinets, ducting capacity, or cabinet space can be answered quickly and easily. Other immediate advantages include improved capacity utilisation of existing infrastructure, and simple and exact planning of changes and expansions. AIM solutions also offer planning tool capabilities to simulate the future expansion of networks, which helps IT managers better prepare the bill of materials required for implementing a project. They can also cut incident resolution time, reducing mean-time-to-repair by 30-50% and thereby offering significant savings potential in terms of both IT resources and reduced lost business output.

An ideal solution would offer IIM, AIM and DCIM functionalities, and allow the user to pick, mix, and upgrade in line with their changing requirements. However, it’s also important to realise that the effective digitalisation of all management processes in a data centre will only be possible once many of the functions described previously are in place. Software that allows equipment and systems in data centres and adjacent locations to ‘talk’ with each other should be in place by default.
The extent and type of solution depends very strongly on the business model, service level agreements, and the complexity to be managed. For example, a few edge sites might not merit a fully fledged software environment, just a few specific features, whereas other networked locations might need the full set. Ongoing insights into, and analysis of, requirements are a must.
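The kind of resource query an AIM database answers, such as free ports per cabinet, can be sketched as follows. The record layout, cabinet names and figures are hypothetical, standing in for the consistent database an AIM system maintains.

```python
# Minimal sketch of an AIM-style inventory query: which cabinets still have
# free ports? All cabinet names and port counts are hypothetical.
cabinets = [
    {"name": "CAB-01", "ports_total": 48, "ports_patched": 45},
    {"name": "CAB-02", "ports_total": 48, "ports_patched": 48},
    {"name": "CAB-03", "ports_total": 24, "ports_patched": 10},
]

def free_ports(cabinets, minimum=1):
    """Return cabinets with at least `minimum` unpatched ports."""
    return {
        c["name"]: c["ports_total"] - c["ports_patched"]
        for c in cabinets
        if c["ports_total"] - c["ports_patched"] >= minimum
    }

print(free_ports(cabinets))      # → {'CAB-01': 3, 'CAB-03': 14}
print(free_ports(cabinets, 10))  # → {'CAB-03': 14}
```

The same pattern extends to ducting capacity or cabinet space: because the inventory lives in one consistent database, each question is a simple filter rather than a site survey.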

Avoiding ‘bottlenecks’: Moore’s Law must be addressed
The continually rising demand for data, combined with the ongoing improvement of computer performance under Moore’s Law, may cause a ‘bottleneck’ in facility construction if supply chain disruption continues. According to Greger Ruud, Data Centre Specialist for Aggreko, the continued doubling of computer speed and capability under Moore’s Law is putting the data centre sector under strain as stakeholders look to upgrade existing facilities. With September's Statista reports demonstrating data consumption growth, and real estate experts JLL predicting persistent supply chain delays causing delivery challenges into 2024, Greger is warning of a potential ‘bottleneck’ in construction if action is not taken.

“The ongoing boom in worldwide data centre markets is undoubtedly something to be welcomed, but it does give rise to additional challenges construction stakeholders need to plan around,” he says. “Specifically, the supply chain must keep pace with this level of growth and improvement in computer performance, which is putting pressure on existing data centre apparatus.

“It is now not uncommon to see facilities that were brought online as recently as 18 months ago needing to upgrade entire halls to deal with this perfect storm of factors. However, a lack of availability of key equipment, including solutions required for load bank testing and utility provision, is causing a bottleneck at the least opportune time. This supply chain disruption must be worked around if the market is to continue meeting ongoing demand and accounting for the increasingly powerful computers being developed and used by data consumers.”

With refurbishment works a key priority across Europe, and taking immovable project completion deadlines into account, the need for new strategies and thinking around equipment procurement is becoming pressing.
According to Greger, facility construction stakeholders should explore alternative approaches to the ongoing scarcity of load bank testing and utility provision solutions if further disruption is to be avoided.

“The issues experienced in sourcing key equipment are already well known to the market, and these challenges are expected to continue into the short-to-medium term,” he concludes. “Consequently, those involved in retrofitting existing facilities cannot afford to stand still and be subjected to delays that may result in additional financial costs and reputational damage.

“Testing, power generation and temperature control provision must therefore be immediately available to facilitate this boom, especially if energy provision requirements are continually changing in line with Moore’s Law. Temporary equipment hire could provide contractors with an effective way of keeping this disruption to a minimum, and I would encourage key stakeholders to get in touch with their suppliers and explore this option.”

Global DCIM market to reach £2.9bn by 2026
Amid the COVID-19 crisis, the global market for DCIM is estimated at £2 billion in 2022 and is projected to reach a revised size of £2.9 billion by 2026, according to new research from Research and Markets. Solutions, one of the segments analysed in the report, is projected to record an 11.7% CAGR and reach £2.2 billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the services segment is readjusted to a revised 12.4% CAGR for the next seven-year period.

The COVID-19 pandemic has led telecom and cloud service providers to experience unprecedented demand, causing concerns regarding the viability of data centres that host services for these operators. Data centres were facing capacity and power challenges even prior to the COVID-19 outbreak, with cooling issues accounting for nearly a third of unplanned outages. Maximising the performance of data centres has always been a complex task - even in normal conditions - and it is becoming even more challenging after the COVID-19 crisis. Heightened customer demand has resulted in rising pressure on thermal and energy performance.

Data centre operating teams now have the capability to monitor the thermal performance of their premises without being present on site. Creating an immersive 3D digital replica of a data centre can help team members remotely monitor sites, giving them early alerts and insight into any concerning cooling and thermal metrics. Advanced DCIM platforms have been developed to support secure network access from remote locations, allowing employees to complete their work from virtually any location.

The DCIM market in the US is estimated at £1.1 billion in 2022. China is forecast to reach a projected market size of £191.8 million by the year 2026, trailing a CAGR of 17.4% over the analysis period.
Among the other noteworthy geographic markets are Japan and Canada, forecast to grow at 10.5% and 10.6% respectively over the analysis period. Within Europe, Germany is forecast to grow at approximately 11.4% CAGR.
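As a rough sanity check on these projections, the compound annual growth rate implied by the headline figures can be computed directly (a minimal sketch; the `cagr` helper is my own illustration, and a four-year 2022-2026 analysis period is assumed):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Overall DCIM market: £2.0bn (2022) -> £2.9bn (2026), assumed 4-year period
overall = cagr(2.0, 2.9, 4)
print(f"Overall market CAGR: {overall:.1%}")  # → Overall market CAGR: 9.7%
```

The overall figure comes out lower than the 11.7% and 12.4% quoted for the solutions and services segments, which is consistent with those being the faster-growing parts of the market.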

Choosing high quality optical transceivers
By Marcin Bala, CEO of Salumanus

Advancements in technology have led to an even greater need for reliable and stable data transmission, making transceivers an essential part of any network's hardware configuration. Optical transceivers are often regarded as one of the simplest pieces of hardware in a network, but this is not the case: selecting the wrong transceiver, or one of poor quality, can lead to a number of unforeseen issues. There are many important factors to consider when choosing an optical transceiver for your application, such as data rates, wavelengths and transmission distance. To help narrow down the search, here are three categories of transceivers - SFP, QSFP and CFP - and their core benefits.

Compact and flexible

Small form-factor pluggable (SFP) transceivers are the most popular optical transceiver type, mainly due to their compact size, which makes them compatible with various applications - especially useful for tight networking spaces that still require fast transmission. The SFP module is also hot-pluggable, meaning it can be added to existing networks without the need for a cable infrastructure redesign. SFP transceivers are very flexible and are compatible with both copper and fibre networks. In copper networks, these transceivers are well suited to connecting switches that are up to 100m apart, while in fibre optic networks they can have a communication range of around 500m to over 100km. These transceivers are mainly used in Ethernet switches, routers and firewalls. There is also a more advanced version of the SFP, called the SFP+, which is faster than its original counterpart and can support speeds up to 10Gbps. The SFP+ is not the only advanced version of the SFP transceiver - there are also the SFP28 and the SFP56. The SFP28 can support up to 28.1Gbps, whereas the SFP56 offers double the capacity of the SFP28 when combined with PAM4 modulation.
SFP transceivers support both single-mode and multi-mode fibre and can transmit data over a duplex or a simplex fibre strand. This flexibility makes them compatible with almost all applications that require high speed over long ranges, such as dark fibre, passive optical networks and multiplexing.

High density and compact

Quad small form-factor pluggable (QSFP) transceivers are used for 40 Gigabit Ethernet (40GbE) data transmission applications. Like SFP transceivers, QSFP modules are hot-pluggable. However, in comparison to SFP+ optic modules, QSFP ones have four transmission channels, each with a data rate of 10Gbps, giving them four times the port density of SFP+ transceivers. The QSFP transceiver, like the SFP, can support both single-mode and multi-mode applications, but is capable of doing so over distances of up to 100km. QSFP transceivers are ideal for networks that require higher data rates: QSFP28 transceivers can support both high-speed and high-density data transmission, thanks to their ability to provide even higher data rates of 28Gbps on all four channels. QSFP transceivers use four wavelengths that can be enhanced using wavelength division multiplexing technologies such as CWDM and LAN-WDM. A popular configuration of the transceiver is the 100G QSFP28 DWDM PAM4 solution, capable of connecting multiple data centres over distances of up to 80km. The advantage of this configuration is that it enables an embedded dense wavelength division multiplexing (DWDM) network to be built using the transceiver directly in the switch. Like the SFP transceiver, the QSFP also has a more advanced version, the QSFP double density (QSFP-DD). This essentially provides double the channels and double the speed, giving the transceiver eight channels capable of 400G (8x50G).
Ultra-high bandwidths and high speeds

The C form-factor pluggable (CFP) transceiver is a common form factor used for high-speed digital signal transmission. There are four different types of CFP transceivers - CFP, CFP2, CFP4 and CFP8 - all of which can support ultra-high-bandwidth requirements, including next-generation high-speed Ethernet. The most recent module, the CFP8, can support a broad range of polarisation mode dispersions at 400G and is already built to support 800Gbps. That said, the most frequently chosen module is still the CFP2. The 100G CFP coherent module supports a range of applications, from 80km interfaces to 2,500km DWDM links, and is also configurable to optimise power dissipation for a given application. CFP transceivers are mainly used in wide area networks (WANs), wireless base stations, video and other telecommunication network systems. They are widely used in data centres, high performance computing and internet provider systems, as they offer long transmission distances and fast speeds. The wide variety of transceivers on the market can make it difficult to find the most suitable one for your application, but whether network owners require high bandwidths or strong connections over long distances, there is a transceiver to deliver it. Salumanus has delivered over 500,000 optical modules in the last few years, offering support in choosing the most suitable transceiver for clients’ networks.
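The lane counts and per-lane speeds quoted above determine each module family's nominal aggregate rate, which can be tabulated with simple arithmetic (an illustrative sketch using the figures from the text; real line rates vary with encoding overhead and the relevant standard):

```python
# Nominal aggregate rates implied by lane count x per-lane speed, as
# described above (illustrative only; ignores encoding overhead).
modules = {
    "SFP+":    (1, 10),  # one lane at 10Gbps
    "QSFP":    (4, 10),  # four 10Gbps channels -> 40GbE
    "QSFP28":  (4, 28),  # four ~28Gbps channels -> ~100G
    "QSFP-DD": (8, 50),  # eight 50G channels -> 400G (8x50G)
}
for name, (lanes, rate) in modules.items():
    print(f"{name:8} {lanes} x {rate}G = {lanes * rate}G aggregate")
```

This lane-multiplication pattern is why each new form factor in the table roughly quadruples or doubles its predecessor's throughput rather than requiring faster optics across the board.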


