
Features


Why visibility is important for NetOps, and why it’s in short supply
By Thomas Pore, Director of Product Marketing, LiveAction

Network visibility is critical to the success of NetOps and SecOps teams. They're the ones below deck inspecting packets, troubleshooting application problems, fighting back congestion, and identifying threats to the network. Running an optimised network requires in-depth visibility into multiple areas: devices, traffic flows, class of service policies, and anomalous activity detection. However, internal and external challenges can compromise this needed access.

The changing nature of enterprise IT

One of the biggest challenges NetOps teams face is the remote-worker transformation that has accelerated in recent years. This digital-first transformation requires that the network now comprise cloud instances, APIs, IoT deployments, and other components that reach outside the traditional network to accommodate a more distributed workforce. In fact, it has been estimated that APIs account for 84% of network traffic. With increasingly complex network configurations, higher data output, growing application usage, and climbing network device volume, true network visibility requires addressing new situations. The growing number of devices and apps on a network brings further complications, changing traffic patterns and obscuring visibility. A report on this topic revealed that 81% of network operations professionals deal with network blind spots.

The changing nature of work

We've witnessed a mass migration of entire workforces from stable office infrastructures to dispersed locations: homes, cafes, co-working spaces, and anywhere else there's Wi-Fi. Seeing into those remote Wi-Fi/LAN connections and the public cloud can be a real challenge for some traditional monitoring tools. For example, SNMP polling, ping, and NetFlow can be used in IaaS clouds but won't work in PaaS or SaaS deployments.

Cyber threats, noise and false positives

Network visibility is the greatest advantage a SecOps team can use in proactive cyber threat identification. But 91.5% of malware reported in Q2 2021 was sent through encrypted traffic. Legacy tools like DPI and IPS can't see into encrypted traffic, and decryption methods such as SSL inspection are often resource-intensive, time-consuming and can pose compliance risks. The solution is a modern threat detection tool that uses deep packet dynamics (DPD) to scan encrypted traffic for risks without the need for decryption.

Another obstacle to achieving network visibility is working within systems that do not offer targeted or audience-based alerting. Systems that use SIEM alerts can experience a regressive effect on network visibility, losing sight of critical issues through alert fatigue caused by waves of benign alerts. A 2019 report from FireEye found that 37% of large enterprises receive an average of 10,000 alerts each month, of which over half were redundant alerts or false positives.
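Alert fatigue of this kind is often tackled with window-based deduplication. The sketch below is a minimal, hypothetical illustration of the idea in Python; the alert fields and the 10-minute window are assumptions for the example, not any particular SIEM's API.

```python
# A minimal sketch of window-based alert deduplication, one common way
# to cut the redundant alerts described above. Field names and the
# 10-minute window are illustrative, not any specific SIEM's schema.
from collections import defaultdict

WINDOW_SECONDS = 600

def deduplicate(alerts):
    """Suppress repeats of the same (rule, source) within the window."""
    last_seen = defaultdict(lambda: float("-inf"))
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["rule"], alert["src"])
        if alert["ts"] - last_seen[key] >= WINDOW_SECONDS:
            last_seen[key] = alert["ts"]
            yield alert   # first occurrence in this window: keep it

alerts = [
    {"ts": 0,   "rule": "port-scan", "src": "10.0.0.5"},
    {"ts": 30,  "rule": "port-scan", "src": "10.0.0.5"},   # suppressed
    {"ts": 700, "rule": "port-scan", "src": "10.0.0.5"},   # new window
]
print(len(list(deduplicate(alerts))))  # 2
```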
Tool sprawl

Similarly, the very tools that are supposed to illuminate the network often obscure it when combined. The average NetOps team uses between four and 10 tools to monitor its network, and according to one estimate, almost 25% of large enterprises rely on anywhere between eight and 25 network performance monitoring tools. These different technologies, programming languages, and user interfaces demand a large training commitment from NetOps and SecOps teams, and mixing and matching metrics from different reporting tools can create discrepancies and gaps in reporting knowledge. Once organisations pass a functional tool threshold, budget is wasted, efficiency declines, and ultimately visibility is hampered.

Bringing concision to network visibility

Security and network professionals need tools that empower them to function at their highest ability. A LiveAction survey found that 42% of network professionals spend excessive hours troubleshooting across the network and 38% are so backlogged that they don't identify network performance issues when they arise. When network performance and security suffer, the entire organisation is impacted. To achieve the visibility needed for successful network operations, organisations must evaluate monitoring performance in several key scenarios: a multi-vendor network, a multi-cloud network, a hybrid cloud network, data centre visibility, and distributed remote site visibility. Engineers should prioritise the search for a single monitoring solution and dashboard powerful enough to deliver complete network visibility. This convergence into one view simplifies workflows, makes troubleshooting and network visualisation easier, and improves the efficiency of NetOps and SecOps teams.

Amid the constant evolution of network architectures, ways of working, devices, apps, tools, and threats, NetOps and SecOps teams must adapt and find solutions that allow them to deliver the same optimal network results. The importance of network visibility cannot be overstated given the rise of new cyber threats and elevated end-user expectations for network services. Simplifying the job of your network and security professionals improves performance levels and security resilience. Consider the importance of complete network visibility today to allow your engineers and network to reach their full potential.

Choosing high quality optical transceivers
By Marcin Bala, CEO of Salumanus

Advancements in technology have led to an ever greater need for reliable and stable data transmission, making transceivers an essential part of any network's hardware configuration. Optical transceivers are often regarded as one of the simplest pieces of hardware in a network, but this is not true: selecting the wrong transceiver, or one of poor quality, can lead to a number of unforeseen issues. There are many important factors to consider when choosing an optical transceiver for your application, such as data rate, wavelength and transmission distance. To help narrow down the search, here are three categories of transceiver - SFP, QSFP and CFP - and their core benefits.

Compact and flexible

Small form-factor pluggable (SFP) transceivers are the most popular optical transceiver type, mainly due to their compact size, which makes them compatible with a wide variety of applications and especially useful in tight networking spaces that still require fast transmission. The SFP module is also hot-pluggable, meaning it can be added to existing networks without the need to redesign the cable infrastructure. SFP transceivers are very flexible and are compatible with both copper and fibre networks. In copper networks, these transceivers are perfect for connecting switches that are up to 100m apart; over fibre, they can have a communication range of around 500m to over 100km. These transceivers are mainly used in Ethernet switches, routers and firewalls. There is also a more advanced version of the SFP, called SFP+, which is faster than its original counterpart and can support speeds of up to 10Gbps. Nor is SFP+ the only advanced version of the SFP transceiver - there are also the SFP28 and the SFP56. The SFP28 can support up to 28.1Gbps, while the SFP56 offers double the capacity of the SFP28 when combined with PAM4 modulation. SFP transceivers support both single-mode and multi-mode fibre and can transmit data over a duplex or a simplex fibre strand. This flexibility makes them suitable for almost all applications that require high speed over long ranges, such as dark fibre, passive optical networks and multiplexing.
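To make the rate figures above concrete, here is a toy Python helper for matching an SFP-family module to a required data rate. The rate ceilings are the headline figures quoted in this article; real module choice also depends on wavelength, fibre type and reach, so treat this purely as an illustration.

```python
# An illustrative helper for matching an SFP-family module to a link,
# using the headline rates quoted above. Real selection also depends on
# optics, wavelength and fibre type, so these entries are placeholders.
SFP_FAMILY = [
    # (module, approximate max data rate in Gbps)
    ("SFP",   1.25),
    ("SFP+",  10.0),
    ("SFP28", 28.1),
    ("SFP56", 56.0),   # roughly double SFP28 when paired with PAM4
]

def pick_sfp(rate_gbps: float) -> str:
    """Return the smallest SFP variant that satisfies the required rate."""
    for module, max_rate in SFP_FAMILY:
        if rate_gbps <= max_rate:
            return module
    raise ValueError("Required rate exceeds the SFP family; consider QSFP")

print(pick_sfp(10))   # SFP+
print(pick_sfp(25))   # SFP28
```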
High density and compact

Quad small form-factor pluggable (QSFP) transceivers are used for 40 Gigabit Ethernet (40GbE) data transmission applications. Like SFP transceivers, QSFP modules are hot-pluggable. However, in comparison with SFP+ optic modules, QSFP modules have four transmission channels, each with a data rate of 10Gbps, giving four times the port density of SFP+ transceivers. The QSFP transceiver, like the SFP, can support both single-mode and multi-mode applications, but over distances of up to 100km. QSFP transceivers are ideal for networks that require higher data rates. QSFP28 transceivers can support both high-speed and high-density data transmission, thanks to their ability to provide even higher data rates of 28Gbps on all four channels. QSFP transceivers use four wavelengths that can be combined using wavelength division multiplexing technologies (such as CWDM and LWDM). A popular configuration is the 100G QSFP28 DWDM PAM4 solution, capable of connecting multiple data centres over distances of up to 80km. The advantage of this configuration is that it enables an embedded dense wavelength division multiplexing (DWDM) network to be built using the transceiver directly in the switch. Like the SFP transceiver, the QSFP also has a more advanced version, the QSFP double density (QSFP-DD). This effectively provides double the channels and double the speed, meaning the transceiver has eight channels capable of 400G (8x50G).

Ultra-high bandwidths and high speeds

The C form-factor pluggable (CFP) transceiver is a common form factor used for high-speed digital signal transmission. There are four different types of CFP transceivers - CFP, CFP2, CFP4 and CFP8 - all of which can support ultra-high-bandwidth requirements, including next generation high-speed Ethernet. The most recent module, CFP8, can support a broad range of polarisation mode dispersions at 400G and is already designed to support 800Gb/s. The most frequently chosen module, however, is still the CFP2. The 100G CFP coherent module supports a range of applications, from 80km interfaces to 2,500km DWDM links, and can be configured to optimise power dissipation for a given application. CFP transceivers are mainly used in wide area networks (WANs), wireless base stations, video and other telecommunication network systems. They are widely used in data centres, high performance computing and internet provider systems, as they combine long transmission distances with fast speeds. The high variety of transceivers on the market can make it difficult to find the most suitable one for a given application, but whether network owners require high bandwidths or strong connections over long distances, there is a transceiver to deliver it. Salumanus has delivered over 500,000 optical modules in the last few years, offering support in choosing the most suitable transceiver for clients' networks.

Westermo PoE switch supports networks with high power demands
Westermo has introduced a new compact industrial Power over Ethernet (PoE) switch designed to support the ever-growing networking requirements of devices such as security cameras, wireless access points and monitors. The Lynx 3510 PoE series is capable of supporting networks with greater power demands and is ideal for handling the big data, high-bandwidth, mission-critical applications typically found within transportation, manufacturing, energy and smart cities. With power and data provided over the same cable, PoE helps to reduce network complexity and offers greater installation flexibility, reliability, and time and cost savings.

The Lynx 3510 PoE enhances network capability by supporting the needs of more powered devices, with eight copper ports each providing gigabit speeds and up to 30W of output. This is ideal for connecting HD IP CCTV cameras in industrial settings and other power-hungry applications. The Lynx 3510 PoE also offers redundant and fast failover connectivity, with Westermo's FRNT ring protocol ensuring rapid network recovery should a node or connection be lost.

Ensuring the security of industrial data communication networks is of paramount importance, especially with cyber attacks becoming increasingly sophisticated. To reduce risk and increase cyber resilience, the Lynx 3510 PoE has an extensive suite of advanced cyber security features. These can be used to build networks in compliance with the IEC 62443 standard, which defines technical security requirements for data communication network components.

"The Lynx 3510 PoE is the first product based on a new, very powerful platform, available as both a switch and a router, with impressive performance capable of handling the bandwidth of future networks," says Henrik Jerregård, Senior Product Manager at Westermo. "The Lynx 3510 PoE is extremely reliable and designed to maintain uninterrupted data communications in even the most challenging environmental conditions, and by offering a total power output of 240W, it will help expand the capability of PoE networks."

The DIN rail-mountable Lynx 3510 PoE has been extensively tested to meet a broad range of industry standards relating to electromagnetic compatibility, isolation, vibration and shock. With an IP40-rated fanless all-metal housing, the ultra-robust switch has a wide operating temperature range. Superior build quality, industrial-grade components, a high level of isolation between interfaces and a redundant power supply help to extend service life and create an extremely reliable solution that contributes to a lower total cost of ownership. Helping to reduce complexity, the Lynx 3510 PoE is powered by the WeOS operating system, which ensures continuous operation, supports an expanding range of communication protocols and features, and simplifies installation, operation and maintenance. WeOS provides future-proofed network solutions with high levels of resiliency and security.
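As a rough illustration of the power-budget arithmetic described above (eight ports at up to 30W against a 240W total output), the following Python sketch checks a set of assumed device draws against those limits; the device names and wattages are invented for the example.

```python
# A quick power-budget check along the lines of the figures quoted above
# (eight ports at up to 30 W against a 240 W total budget). Device draws
# are illustrative, not measured values.
TOTAL_BUDGET_W = 240.0
PER_PORT_MAX_W = 30.0

devices = {
    "cctv-cam-1": 25.5,   # HD CCTV camera (hypothetical draw)
    "cctv-cam-2": 25.5,
    "wifi-ap-1": 19.0,    # wireless access point
    "monitor-1": 28.0,
}

def check_budget(loads: dict[str, float]) -> None:
    for name, watts in loads.items():
        if watts > PER_PORT_MAX_W:
            raise ValueError(f"{name} exceeds the {PER_PORT_MAX_W} W port limit")
    total = sum(loads.values())
    headroom = TOTAL_BUDGET_W - total
    print(f"total draw {total:.1f} W, headroom {headroom:.1f} W")

check_budget(devices)  # total draw 98.0 W, headroom 142.0 W
```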

Senet and IotaComm partner to deliver advanced wireless networks
Senet and Iota Communications have announced a partnership to deliver LoRaWAN connectivity through both 915MHz unlicensed spectrum and IotaComm's unique 800MHz FCC-licensed spectrum. The initial use cases will focus on smart building, smart city, and critical infrastructure applications. With this collaboration, and in addition to its use of the Senet platform for application and device management, IotaComm has also become a Senet Radio Access Network (RAN) operator and a participant in the Senet LPWAN Virtual Network.

Through a combination of sensors, meters, and its Delphi360 wireless connectivity and data analytics platform, IotaComm provides an end-to-end smart building and smart city solution used by building managers, industrial site managers, and city planners to better manage the health, safety, and sustainability goals of their organisations and facilities. In addition, IotaComm uniquely combines its FCC-licensed spectrum with the LoRaWAN standard to enable carrier-grade, low power wide area connectivity for critical infrastructure applications, such as smart metering and predictive maintenance. To support growing customer demand for power-efficient, battery-operated indoor and outdoor smart building sensors, IotaComm operates more than 140 tower sites nationwide and plans to deploy 150 LoRaWAN gateways by 2023.

For customers preferring added levels of network and application performance, Senet and IotaComm are collaborating to create a new LoRaWAN service using the 800MHz licensed spectrum. IotaComm already owns enough 800MHz spectrum to cover about 90% of the US and plans to deploy multi-access gateways to deliver a premium smart building connectivity offering. IotaComm will use Senet's cloud-based platform to manage both its public LoRaWAN network and private on-premises networks and application deployments using the 800MHz FCC-licensed spectrum.

"We're honoured to be working alongside Senet in the quest to provide the wireless connectivity efficiency and flexibility that industries are requiring," says Terrence DeFranco, CEO and President of IotaComm. "This partnership fully supports our goals of building the largest national, carrier-grade LPWAN dedicated to the IoT. Together with Senet's network architecture expertise, we'll deliver real-time data that results in high-value and actionable insights while filling an existing connectivity gap."

By opening its LoRaWAN gateways to data traffic from all solution providers connecting to the Senet Low Power Wide Area Virtual Network (LVN), IotaComm is contributing to the rapid expansion of public carrier-grade LoRaWAN networks across the US and generating new IoT services revenue streams. Unique to the Senet LVN are innovative business models designed to deliver unified LoRaWAN connectivity without the need for roaming contracts, along with the opportunity for participants like IotaComm to share in the revenue generated by all end devices connecting to the gateways they have deployed, regardless of end customer origin.

"Innovation has always been at Senet's core, and our partnership with Iota Communications is another example of Senet leading the market through innovative technology and unique business models that allow users to improve operations and address sustainability goals," says Bruce Chatterley, CEO at Senet.
“Iota Communications brings significant value and differentiation to our portfolio of RAN Provider and LVN partners, and we look forward to collaborating to deliver ground-breaking network solutions to the commercial building energy management and facility operation markets.”

COMSovereign expands 5G IP portfolio with additional enabling technologies
COMSovereign has outlined its ongoing efforts to expand the value of its intellectual property (IP) portfolio as part of its business transition. As an innovator in the advanced wireless transmission technologies underlying both 4G and 5G networks, it continues to pursue opportunities to monetise the value of its IP. To date, the company holds approximately 130 patents, with approximately 25 patent applications pending. Together, these cover an array of critical wireless networking technologies supporting the latest 5G mobile broadband standard, including meeting future wireless network requirements for increased bandwidth through support for simultaneous radio transmission and reception, using approaches such as the company's Lextrum in-band full duplex (zero division duplex) technology.

"As an early player in the 4G and 5G space, COMSovereign's business was built on a solid IP foundation, one that powers the market-leading performance of our DragonWave and Fastback products. As part of our ongoing review of the business, we believe our IP portfolio represents an untapped opportunity to create value for our stakeholders. That is why our Board of Directors and our leadership team are actively exploring ways to monetise our IP through multiple paths," says David Knight, interim CEO of COMSovereign.

Ensuring resilience during the energy crisis
By Billy Durie, Global Sector Head for Data Centres at Aggreko

It is no secret that Europe's appetite for data is increasing year-on-year. According to Domo's Data Never Sleeps 6.0, the average person is estimated to create 1.7MB of data per second. This rapid rate of digitalisation has in turn placed the onus on service providers to ensure that this growth is supported and that demand continues to be met. It has not been without consequence, however: the cities housing Europe's leading data centre markets - Frankfurt, London, Amsterdam, Paris and Dublin (FLAP-D) - are now wrestling with significant grid strain. While it would be unfair to say that this is solely the fault of the data centre sector, it has undoubtedly been a major contributor. In the Republic of Ireland, for instance, data centre electricity consumption has spiked by 144% in five years, with supplier EirGrid introducing stringent rules around applications for new grid connections as a result. Factoring in the effects of the energy crisis, stability of supply has now reached an all-time low for the data centre sector, reinforcing the need to ensure that facilities can manage these challenges effectively.

Assessing industry challenges

In an effort to assess how these issues are currently affecting data centre operators, Aggreko surveyed 253 industry professionals across the UK and Republic of Ireland as part of its latest report, The Power Struggle - Data Centres. Those surveyed occupied roles from junior manager up to C-suite executive, with the research taking place in April 2022. The headline findings illustrate the effect that the combination of grid strain and the energy crisis has had on stability of supply. Over 70% of UK businesses cited power security as either 'a concern' or 'a major concern', while only 6% said it was not. The former figure rises to 80% in the Republic of Ireland, illustrating the severity of grid strain in this market. Moreover, Aggreko's survey indicates that 65% of UK businesses and 60% of Irish businesses have experienced power outages in the past 18 months, so these concerns are clearly not unfounded. Given that industry standards dictate that downtime should be kept to under 28.8 hours per year, these challenges are creating an unsustainable position for the data centre sector.

Comprehensive stress testing

With the consequences of a possible outage in mind, there has never been a more crucial time to ensure that facilities can function effectively, even under grid strain. Rising rack densities are driving electricity consumption in data centres ever higher, making this consideration more important than ever. Resilience is verified by loadbank testing equipment before facilities are brought into operation, which takes place over five key stages:

- Factory acceptance testing: determining whether equipment has been built and operates in accordance with design specifications.
- Site acceptance testing: ensuring equipment meets specification criteria and is inspected for damage before it enters the facility.
- Pre-functional testing: verifying the functionality of the equipment, including determining whether each device is properly installed, wired, torqued and Megger tested prior to initial energisation.
- Individual system testing: detecting hotspots or weak components in the equipment, allowing them to be replaced before the facility is put to work.
- Integrated system testing: ensuring that all equipment responds appropriately to varying loads, staged machinery failures and any potential utility problems.
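The final, integrated stage can be pictured as a stepped load run. The following Python sketch, assuming a 400V/50Hz generator and invented tolerance limits, illustrates the principle; the read_generator function is a placeholder for real instrumentation, not part of any vendor's toolkit.

```python
# A simplified sketch of a stepped loadbank run: apply increasing load
# and check the generator holds voltage and frequency within tolerance.
# The measurement function and limits here are hypothetical placeholders.
NOMINAL_V, NOMINAL_HZ = 400.0, 50.0
V_TOL, HZ_TOL = 0.05, 0.01   # +/-5% voltage, +/-1% frequency

def read_generator(load_pct: float) -> tuple[float, float]:
    """Stand-in for real instrumentation; returns (volts, hertz)."""
    return 400.0 - 0.08 * load_pct, 50.0 - 0.002 * load_pct

def step_load_test(steps=(25, 50, 75, 100)) -> bool:
    for pct in steps:
        volts, hertz = read_generator(pct)
        v_ok = abs(volts - NOMINAL_V) / NOMINAL_V <= V_TOL
        hz_ok = abs(hertz - NOMINAL_HZ) / NOMINAL_HZ <= HZ_TOL
        print(f"{pct:3d}% load: {volts:.1f} V, {hertz:.2f} Hz, "
              f"{'PASS' if v_ok and hz_ok else 'FAIL'}")
        if not (v_ok and hz_ok):
            return False
    return True

step_load_test()
```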
Through undertaking comprehensive loadbank testing in line with these steps during the commissioning phase, and then annually afterwards, operators can ensure that their facilities stay on top of rising demands while minimising power outage risks.

Exploring alternative approaches

Loadbank testing is the most effective method of assessing the capabilities of a data centre before the facility goes live. However, it is also important to address the long-term causes of instability of supply in the first place. With this in mind, it may be time for data centre operators to look beyond their traditional grid connection for power procurement. A possible alternative is Energy as a Service (EaaS) or a Power Purchase Agreement (PPA), wherein users generate their own energy on site using decentralised energy solutions and pay their supplier per kWh. This allows facilities to reduce reliance on an increasingly unstable grid connection without the need to invest in new equipment. The EaaS concept has grown in popularity among data centre operators in recent years, with 51% of respondents to Aggreko's survey in the UK and 49% in Ireland considering generating their own energy. Yet drawbacks exist in the form of fixed-term pricing, with some operators being subject to one or two-year contracts. Given the volatile nature of the current energy market, it is easy to see how this could prove counterproductive, especially as some suppliers even issue penalties for excessively high usage.

www.aggreko.com

Designing, planning and testing edge data centres
By Carsten Ludwig, Market Manager, Reichle & De-Massari

Edge data centres provide computing power on the periphery of cloud and wide area networks, relieving these and improving performance. They are located as closely as possible to the points where data is aggregated, analysed, or processed in real time. Popular content and applications, for example, can be cached closer to less densely networked markets, improving performance and experience. Let's examine some considerations when designing, planning and testing edge data centres.

Location

Edge providers may operate dozens or hundreds of edge data centres concurrently across urban and suburban locations, which can be hard to reach and work in, so edge data centres need to be exceptionally robust and secure. Proximity or direct connection to fibre optic links and network node points is imperative. Edge data centres need redundant, synchronous fibre hyperconnectivity in all directions: to the cloud, cellular networks, neighbouring data centres and users. These factors pose a significant challenge for planners and design engineers. For planning purposes, edge providers require a tool that reflects all of the above demands and preconditions; the quality of planning improves if drawings and data for material sourcing come from a single tool. Testing at this stage is recommended for the fibre links, to ensure they are working correctly and delivering the promised performance. The tested quality of the components used determines the performance and functional reliability of the optical links.

Technical requirements

Edge data centres often have to cope with a lack of space and harsh environmental conditions. They need to be positioned in protected, discrete, dry places, and the following must be provided:

- Interruption-free power supply
- Fire protection
- Air-conditioning and cooling
- Sound, dust and vibration protection
- Locking and access control

A professional approach to securing high performance from the outset is to use preconfigured, assembled modular systems. These could consist of pre-terminated panels, sub-racks or complete racks, if logistics and site design allow. Preconfigured equipment can be delivered by the OEM with the relevant test certificates, ensuring a high level of quality and vastly simplifying installation, as no testing is required on site. This approach requires professional installation capability from the service team. Properly configured and tested modules increase quality, significantly reduce the risk of failure and reduce the workload on site.

High density and port capacity

Afcom's '2022 State of the Data Centre' study noted a significant density increase at the edge. In 2021, the typical respondent implementing or planning edge locations reported an estimated mean power density of 7kW; in 2022, this was 8.3kW. Edge data centre fibre hyperconnectivity requires space for high-count fibre cables under floors and in cable ducts. For edge networks moving content such as HDTV programmes closer to the end user, a density of more than 100 ports per rack unit is essential; traditional 72-port-per-unit UHD solutions won't suffice. Current high-density fibre solutions for data centres generally offer up to 72 LC duplex ports per rack unit, but this can introduce management difficulties. Pretermination by the OEM would be ideal. Testing on site is possible, with adapters required on test equipment to serve new connectivity solutions such as the VSFF connector family.

Connectivity can also be secured using intelligent AIM systems for monitoring layer one performance. Besides the connectivity check 'outside of the data stream', edge providers gain an overview of what is happening within connectivity 'inside of the data stream'. There are several ways of realising this, from a low-budget approach using TAP modules to high-performing 24/7 signal analysers. Each edge location has a unique design and service to deliver, so the approach has to be selected accordingly.

Testing

To ensure quality and performance levels, testing is essential. In Reichle & De-Massari's experience, new data centre builds rarely go according to schedule. If part of the process is pulled forward or delayed, it introduces challenges related to component quality and performance. The installation of sensitive equipment such as fibre connectivity that needs to be 100% clean might, for example, have to take place in an environment insufficiently free of dust and moisture. It is important to determine which tests can be done up front to avoid hassle on site. Optical connectors and adapters can be checked for insertion loss and other standard KPIs before delivery by the OEM. Even if equipment has been preconfigured, testing on site in the event of schedule changes isn't just smart - it should be mandatory. That avoids issues, and therefore also delays and finger-pointing between the parties involved.
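For the insertion-loss checks mentioned above, the underlying arithmetic is a simple loss budget: fibre attenuation plus connector and splice losses, compared against the power budget of the optics. The Python sketch below illustrates this with typical, illustrative loss values rather than any specific component's specification.

```python
# Back-of-envelope insertion-loss budget for a fibre link, the kind of
# figure checked before and after installation. Loss values are typical
# illustrative numbers, not a specific vendor's specification.
FIBRE_LOSS_DB_PER_KM = 0.35   # single-mode at 1310 nm
CONNECTOR_LOSS_DB = 0.3
SPLICE_LOSS_DB = 0.1

def link_loss(length_km: float, connectors: int, splices: int) -> float:
    return (length_km * FIBRE_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

budget_db = 6.0  # hypothetical power budget for the optics in use
loss = link_loss(length_km=2.0, connectors=4, splices=2)
print(f"expected loss {loss:.2f} dB, margin {budget_db - loss:.2f} dB")
# expected loss 2.10 dB, margin 3.90 dB
```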
Management

Cable management is key. Double-check measurements, make sure terminations are top quality, test wherever necessary, label and colour-code, watch out for cramped conduits and make absolutely sure no cables or bundles rest upon others. Bad cable management can result in signal interference and crosstalk, damage and failure, leading to data transmission errors, performance issues and downtime. Introducing operation management systems provides seamless 24/7 performance status for each location. As these locations are distributed in line with the nature of the new network architecture, performance management should not only focus on standard applications such as power, cooling and access reports: every aspect of data connectivity needs to be covered. Solutions that monitor data flow (such as TAP modules) are mandatory. Because an edge provider's service team doesn't work on site, remote control of all relevant aspects at each location is mandatory and a precondition for customer-relevant performance, and it needs to cover all edge locations in one system. On the one hand, this helps monitor the status of all relevant dimensions, such as power supply, temperature conditions, data access, data flow, and security. On the other, current and upcoming installations at the edge site are monitored by a single system, giving insight for asset and capacity management and serving as a basis for further extensions and new or changing customers at each site.

www.rdm.com

Whitestar Solutions doubles productivity with TREND Networks
Whitestar Solutions has doubled productivity since replacing its existing cable certifier fleet with LanTEK IV testers from TREND Networks. In 2020, the company's fleet of testers was due for renewal. Upon finding a superior and more cost-effective solution in LanTEK IV cable certifiers, Whitestar Solutions opted to update its fleet by partnering with TREND Networks rather than its previous supplier.

"The first job we used the LanTEK for was for the London Business School," says Gavin Atkins, Project Supervisor for Whitestar Solutions. "There were over 4,000 data points to be installed, but with LanTEK on our side and helping us through, the job ran smoothly and was a complete success."

LanTEK IV is easy to use, with a responsive touchscreen and simple user interface. It also saves significant amounts of time, enabling the user to test and save a Cat6A link in just seven seconds. "We now have the LanTEK certifier, capable of testing within seven seconds, which is twice as quick as anything we have ever had before. That means our productivity on site has literally doubled overnight," says John English, Managing Director for Whitestar Solutions.

Gavin continues: "When you're testing thousands of data points, every second counts, and that combined with the VisiLINQ reduces the time spent on any job." VisiLINQ Permanent Link Adapters enable technicians to work smarter, not harder. They make it possible to initiate testing and view the results without needing to carry or touch the tester. Users simply press the VisiLINQ test button and wait for the coloured light to indicate the result. "When you're testing in a noisy environment you can see the green light flash and, without having to waste time looking down at the screen, you know that it has passed and you can go on to the next project," says Gavin.

Another productivity-boosting feature which has benefitted Whitestar Solutions is the TREND AnyWARE Cloud, which TREND Networks describes as the fastest cloud test management system in the world. Project managers can pre-configure all project information in the TREND AnyWARE Cloud for field technicians to download, helping improve accuracy. "Before the cloud, our engineers would be responsible for inputting their own test IDs, and on occasion a mistake could happen," explains John. "One digit out could mean that a 1,000-test project would need to be manually changed in the office, which would add time to the project."

John continues: "With our previous supplier, when it came to issuing test results, we either had to send the testers back to the office, or the engineers saved test results to a USB stick, both of which were very time-consuming from an admin point of view. But now with LanTEK, the results are seamlessly synced to the cloud, and our project managers can issue test reports to the clients on the day the project is completed." Even if Whitestar Solutions' technicians do not have access to Wi-Fi, they can connect the LanTEK cable certifier to their phones, meaning that wherever they are working in the country, they can transfer test results to the office to be signed off with minimal fuss.

"Working in the education sector, time and speed are of the essence. With our previous supplier, when we had a tester go in for calibration, we could sometimes be a tester down for one to two weeks," says John. "With TREND, the tester goes in one day, and we've got a loan unit the next.
Downtime is kept to an absolute minimum." TREND Networks also provides a lifetime support promise, offering calibration and repair for the LanTEK IV cable certifier for as long as it is in use. Technical support is available globally, with a two-hour response time promise, to further help maximise productivity.

"Our previous supplier wanted the large investment all in one go, whereas with TREND Networks they were buying into our beliefs, which was building a partnership and providing that financial flexibility," concludes John. "We are planning to grow over the next three to five years and we're confident that we can depend on TREND Networks to come on that journey with us."

www.whitestarsolutions.com www.trend-networks.com

A plan to find your security vulnerability before hackers do
By Keith Bromley, Senior Network Visibility and Security Solutions Manager, Keysight Technologies

One of the top questions on the minds of network security personnel is "how do I reduce my security risk?" Even for smaller organisations this matters, because every network has a weakness. But do you know where you are most vulnerable? Wouldn't you like to fix the problem now, before a hacker exploits it? Here is a three-point plan that works to expose intrusions and decrease network security risk:

- Prevention: stop as many attacks as possible from entering the network
- Detection: find and quickly remediate intrusions that are discovered within the network
- Vigilance: periodically test your defences to make sure they are actually detecting and blocking threats

Network security - it all starts with prevention

Inline security solutions are a high-impact technique that businesses can deploy to address security threats. These solutions can eliminate 90% or more of incoming security threats before they even enter your network. While an inline security architecture will not create a foolproof defence against all incoming threats, it provides the crucial data access that security operations (SecOps) teams need to make the real-world security threat load manageable. It is important to note that an inline security solution is more than just adding a security appliance, like an intrusion prevention system (IPS) or a web application firewall (WAF). The solution requires external bypass switches and network packet brokers (NPBs) to access and deliver complete data visibility, allowing all data to be examined for suspect network traffic.

Hunt down intrusions

While inline security solutions are absolutely necessary to lower your risk of a security intrusion, the truth is that something bad will eventually make it into your network. This is why you need a second level of defence that helps you actively search for threats. To accomplish this task, you need complete visibility into all segments of your network. At the same time, not all visibility equipment is created equal. For instance, are your security tools seeing everything they need to? You could be missing more than 60% of your security threats and not even know it. This is because some of the vendors that make visibility equipment (like NPBs) drop packets (without alerting you) before the data reaches critical security tools, like an intrusion detection system (IDS). This missing data contributes significantly to the success of security threats. A combination of taps, bypass switches, and NPBs provides the visibility and confidence you need that you are seeing everything in your network - every bit, byte, and packet. Once you have this level of visibility, threat hunting tools and security information and event management (SIEM) systems can proactively look for indicators of compromise (IOCs).
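As a minimal illustration of what looking for indicators of compromise means in practice, the following Python sketch matches connection records against a small indicator list; the record format and indicators are invented for the example (the IP addresses come from documentation ranges).

```python
# A minimal illustration of IOC matching over connection records, the
# kind of check a threat-hunting tool or SIEM runs continuously. The
# indicator list and log format are hypothetical.
KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}   # documentation-range examples
KNOWN_BAD_DOMAINS = {"update-checker.example.net"}

connections = [
    {"src": "10.0.4.21", "dst": "203.0.113.45", "domain": None},
    {"src": "10.0.4.33", "dst": "93.184.216.34", "domain": "update-checker.example.net"},
    {"src": "10.0.4.50", "dst": "8.8.8.8", "domain": "dns.google"},
]

def hunt(records):
    """Yield records matching any indicator of compromise."""
    for rec in records:
        if rec["dst"] in KNOWN_BAD_IPS or rec.get("domain") in KNOWN_BAD_DOMAINS:
            yield rec

for hit in hunt(connections):
    print("IOC match:", hit)   # flags the first two records
```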
Stay vigilant and constantly validate your security architecture

The third level of defence is to periodically validate that your security architecture is working as designed. This means using a breach and attack simulation (BAS) solution to safely check your defences against real-world threats. Routine patch maintenance and annual penetration testing are security best practices, but they don't replace weekly or monthly BAS-type checks. For instance, maybe a patch wasn't applied, or was applied incorrectly. How do you know? Penetration tests are only good for a specific point in time; once a few weeks or months have passed, new weaknesses will probably exist. And crucially, if a vulnerability was found, were the right fixes applied? For these reasons and more, you need a BAS solution to determine the current strength of your defences. While updating your security tools is great, constant vigilance goes a long way towards securing your organisation. This three-point plan can help you ensure that your security tools protect your organisation now and in the future.

www.keysight.com

Is loadbank testing necessary for containerised data centres?
By Paul Brickman, Commercial Director for Crestchic Loadbanks

Typically housed within shipping containers, containerised data centres can be deployed easily, powered up quickly and scaled without delay in line with changing requirements. Within the data centre sector itself, containerised solutions are widely used to accommodate surplus demand for data centres that need to grow but cannot yet do so, and often deliver continuity of performance when a primary data centre needs critical maintenance or refurbishment. In other sectors they are equally important. They provide 'pop-up' IT and communication services for music festivals and sporting events, support office relocations and major construction sites, and are a staple resource for the military, as well as for complex industries such as offshore oil and gas, where data and communications demands are often remote and temporary.

Temporary by name, essential by nature

The temporary nature of these data centres often results in them being commoditised and overlooked when it comes to the maintenance procedures and performance best practice that would be considered essential for a bricks-and-mortar data centre. But when in operation, these mobile data centres are just as much a necessity as their permanent counterparts, safely housing the same valuable data and preventing the same financially catastrophic losses that engineers so dread when maintaining their primary data centres. It is important to remember that a data centre is a data centre, whether it is a purpose-built hyperscale campus, a colocation facility or a temporary solution in a shipping container. That means the same risks apply, and the same preventative measures are required. No matter the type of data centre, the primary cause of unplanned downtime is power failure, something the Uptime Institute calls "common, costly and preventable". In its most recent Risk and Resilience Report, the Uptime Institute calculated that power failure accounts for around 36% of all outages. It is essential, therefore, that backup generators for containerised data centres are regularly tested, just as for permanent data centre facilities.

Critical applications require guaranteed resilience

Music festivals and sporting events aside, the vast majority of containerised data centre applications are critical. Military communications, major construction sites, data centre refurbishments and temporary expansions of primary data centre capacity may all have a clear expiration date, but the situation is already fragile - risking a power outage in an already difficult environment could be catastrophic. Although mobile data centres are designed to provide facilities with the right mix of temporary generators, networking essentials, cooling equipment, servers and UPS, the fact remains that a single point of failure can immobilise the entire data centre. This is an important consideration when deciding which maintenance procedures to uphold and which, if any, can be overlooked. With power outages proven to be the biggest point of failure, correct loadbank testing should be maintained at all times to provide reassurance that, if required, the backup power system is capable of accepting the required load and maintaining uptime in the event of a power failure.

Understand the possibilities, prevent downtime

If it is not the temporary nature of a containerised data centre that prevents the required maintenance, then it is often the location, and the assumption that access will be impossible.
After all, these small, highly portable data centres are often located in areas that have not been specifically constructed for such essential kit. That said, leading loadbank manufacturers have created backup generator testing equipment that can meet the testing demands of containerised data centres. One example is Crestchic's trailer-mounted loadbank solution, which combines the powerful testing capability of its traditional resistive-only loadbanks with the flexibility of a heavy-duty trailer, for applications that require exceptional levels of manoeuvrability. With loadbank testing achievable even for mobile data centres, the risk of downtime is, as the Uptime Institute puts it, preventable.

www.loadbanks.com


