Latest News


Riverbed launches new network observability tools
Riverbed, a US-based provider of network performance monitoring, application management, and WAN optimisation software and hardware, has launched a new range of AI-powered network observability tools, aimed at helping enterprise IT teams manage increasingly complex environments. The latest updates include new hardware, enhanced software, and a more flexible licensing model, designed to support high-performance, AI-ready infrastructure.

The new release features the Riverbed xx90 appliance family, reportedly delivering up to three times the performance of previous systems for AppResponse, NetProfiler, and Flow Gateway. The appliances are built for high-throughput packet and flow capture across distributed networks, supporting monitoring at over 50Gbps and offering scalable storage beyond 2.4PB. AppResponse 11.21 now enables real-time analysis of encrypted IPSec ESP tunnel traffic and cipher hygiene, while the new version of NetProfiler (10.29), according to the company, brings faster reporting, dynamic flow balancing, and support for Versa SD-WAN.

The release also introduces the Riverbed Intelligent Network Observability Essentials bundle, combining key tools for hybrid environments:

• Riverbed IQ – AI-powered diagnostics for issue detection and resolution
• Workspaces – Role-based dashboards integrating packet, flow, and endpoint data
• Grafana plug-in – Allows Riverbed data to be displayed in existing Grafana dashboards
• Topology Viewer – Map-based visualisation of networks, applications, and user experience

These features are available through the company's Riverbed Flex Subscription. The model, the company claims, is intended to improve licensing flexibility, reduce total cost of ownership, and simplify long-term planning.

“With today’s launch, we’re introducing next-generation observability systems that align with what our customers need: streamlined toolsets, automation, and cost-efficient performance,” says Dave Donatelli, CEO of Riverbed. “We’re delivering these capabilities with faster appliances, smarter software, and greater flexibility.”

Riverbed reports 92% year-on-year growth in observability bookings for the first half of 2025. “As organisations prepare their infrastructure for AI and data-heavy applications, they need monitoring systems that are intelligent and built to scale,” adds Shamus McGillicuddy, Research Vice President at EMA.
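As a rough, back-of-envelope illustration of what those headline figures imply (using only the numbers quoted above; this is not Riverbed's own sizing methodology), the short Python sketch below estimates how long 2.4 PB of capture storage would last at a sustained 50 Gbps ingest rate.

```python
# Back-of-envelope estimate: how long does 2.4 PB of capture storage last
# at a sustained 50 Gbps ingest rate? The figures come from the article;
# the calculation itself is illustrative, not Riverbed's sizing method.

INGEST_GBPS = 50        # sustained capture rate, gigabits per second
STORAGE_PB = 2.4        # usable capture storage, petabytes (decimal units assumed)

ingest_bytes_per_sec = INGEST_GBPS * 1e9 / 8   # gigabits/s -> bytes/s
storage_bytes = STORAGE_PB * 1e15              # petabytes -> bytes

retention_seconds = storage_bytes / ingest_bytes_per_sec
retention_days = retention_seconds / 86_400

print(f"Approximate retention at line rate: {retention_days:.1f} days")
# -> roughly 4.4 days of full-rate capture before the store wraps
```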

Scolmore introduces IEC Lock C21 Locking Connector
Scolmore, a UK-based manufacturer of electrical wiring accessories, circuit protection products, and lighting equipment, has expanded its IEC Lock range with the addition of a new C21 locking connector, compatible with both C20 and C22 inlets.

Featuring a side-button release, the IEC Lock C21 is designed to offer extra protection against accidental disconnection, making it an appropriate choice for applications where reliability is essential.

The company says the C21 is a durable, lockable connector built to withstand heat and to protect vibration-sensitive appliances against power loss. The product is particularly suited to data centres, servers, and other industrial equipment where maintaining the proper device temperature is critical to operational success.

AssetHUB, ITS to speed up fibre rollouts in UK cities
Organisations looking to deliver high-speed connectivity in major cities across the UK have been given a boost following a new partnership between asset reuse specialist AssetHUB and full fibre networks provider ITS.

The collaboration will allow Altnets and other interested parties - such as enterprise businesses, government bodies, local councils, and carriers - to purchase ITS’s dark fibre assets through AssetHUB’s secure marketplace for buying and selling existing infrastructure. The companies say the addition reflects a "shared commitment to enabling faster, more efficient fibre deployments through smarter infrastructure planning and reuse."

“Fibre rollouts in big cities often mean weeks of roadworks, noise, and many other disruptions, which frustrate residents and obstruct already busy streets,” says AssetHUB CEO Rob Leenderts. “Through collaboration and sharing of assets in big cities, Altnets and other fibre builders can avoid unnecessary dig costs and overbuild, as well as speed up deployments to businesses and other amenities that require urgent connectivity upgrades.

"Having a centralised platform that clearly maps infrastructure or product availability in dense locations and streamlines asset enquiries can also help network builders decrease the time to market of their services.”

Kevin McNulty, Strategy Director at ITS, adds, “Our strategic partnership with AssetHUB is an important step in our ambition to scale through collaboration.

"By making our infrastructure discoverable to other parties at the point of planning, we’re supporting faster rollouts, reduced disruption, and greater visibility of critical fibre routes in some of the UK’s most in-demand urban areas.”

The partnership also enables ITS to utilise AssetHUB’s marketplace as a buyer, sourcing infrastructure assets to support its own build and expansion plans.

Infoblox unveils 2025 DNS Threat Landscape Report
Infoblox, a provider of cloud networking and security services, today released its 2025 DNS Threat Landscape Report, revealing a dramatic surge in DNS-based cyberthreats and the growing sophistication of adversaries leveraging AI-enabled deepfakes, malicious adtech, and evasive domain tactics. Based on pre-attack telemetry and real-time analysis of DNS queries from thousands of customer environments - over 70 billion DNS queries per day - the report offers a view into how threat actors exploit DNS to deceive users, evade detection, and hijack trust.

"This year's findings highlight the many ways in which threat actors are taking advantage of DNS to operate their campaigns, both in terms of registering large volumes of domain names and also leveraging DNS misconfigurations to hijack existing domains and impersonate major brands," says Renée Burton, Head of Infoblox Threat Intel. "The report exposes the widespread use of traffic distribution systems (TDS) to help disguise these crimes, among other trends security teams must look out for to stay ahead of attackers."

Research background

Since its inception, Infoblox Threat Intel has identified over 660 unique threat actors and more than 204,000 suspicious domain clusters - a cluster being a group of domains believed to be registered by the same actor. Over the past 12 months, Infoblox researchers have published research covering 10 new actors and have uncovered the breadth and depth of malicious adtech, which disguises threats from users through TDS. The report brings together findings from the past 12 months to illuminate attack trends, shedding particular light on adtech's role in these attacks.

Top findings

• 100.8 million newly observed domains in the past year, with 25.1% classified as malicious or suspicious
• 95% of threat-related domains observed in only one customer environment
• 82% of customer environments queried domains associated with malicious adtech, which rotates a massive number of domains to evade security tools and serve malicious content
• Nearly 500,000 traffic distribution system (TDS) domains seen in the last 12 months within Infoblox networks
• Daily detection of DNS tunnelling, exfiltration, and command and control - including Cobalt Strike, Sliver, and custom tools - which require ML algorithms to detect

Uptick in newly observed domains

Over the year, threat actors continuously registered, activated, and deployed new domains, often in very large sets through automated registration processes. By increasing their number of domains, threat actors can bypass traditional forensic-based defences, which are built on a "patient zero" approach to security. This reactive approach relies on detecting and analysing threats after they have already been used somewhere else in the world. As attackers leverage increasing levels of new infrastructure, this approach becomes ineffective, leaving organisations vulnerable.

Actors are using these domains for an array of malicious purposes, from creating phishing pages and deploying malware through drive-by downloads to engaging in fraudulent activities and scams, such as fake cryptocurrency investment sites.

The need for preemptive security

These findings underscore a pressing need for organisations to be proactive in the face of AI-equipped attackers. Investing in preemptive security can be the deciding factor in successfully thwarting threat actors. Proactive protection, paired with constant vigilance over emerging threats, tips the scales in favour of security teams, allowing them to pull ahead of attackers and interrupt their seemingly unlimited supply of domains.
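The report's emphasis on newly observed domains and DNS tunnelling hints at why static allow/deny lists struggle. The sketch below is a generic, illustrative heuristic only (it is not Infoblox's detection logic, and the thresholds and names are arbitrary assumptions): it flags DNS query names whose registrable domain is both newly seen and high-entropy, two signals commonly associated with automated registration and tunnelling.

```python
import math
import time

# Illustrative toy heuristic for flagging suspicious DNS query names.
# NOT Infoblox's detection logic; real systems combine many more signals
# (registration data, TDS behaviour, ML models, etc.).

NEWLY_OBSERVED_WINDOW = 30 * 86_400   # assumed: "new" = first seen < 30 days ago
ENTROPY_THRESHOLD = 3.5               # assumed: bits/char above which a label looks machine-generated

first_seen: dict[str, float] = {}     # domain -> timestamp of first observation


def shannon_entropy(label: str) -> float:
    """Per-character Shannon entropy of a DNS label."""
    if not label:
        return 0.0
    n = len(label)
    counts = {c: label.count(c) for c in set(label)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())


def is_suspicious(qname: str, now: float | None = None) -> bool:
    """Flag queries whose registrable domain is newly observed AND high-entropy."""
    now = now or time.time()
    # Crude registrable-domain cut (last two labels); real code would use a
    # public-suffix list to handle multi-part TLDs such as co.uk correctly.
    domain = ".".join(qname.rstrip(".").split(".")[-2:])
    seen = first_seen.setdefault(domain, now)
    newly_observed = (now - seen) < NEWLY_OBSERVED_WINDOW
    high_entropy = shannon_entropy(domain.split(".")[0]) > ENTROPY_THRESHOLD
    return newly_observed and high_entropy


# Example (hypothetical domains):
print(is_suspicious("data.x7kq9v2mzp4r8t1w.com"))   # True: new + high-entropy label
print(is_suspicious("news.example.com"))            # False: low-entropy label
```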

BSDI announces 5,000-acre campus in Montana
Big Sky Digital Infrastructure (BSDI), a Quantica Infrastructure (Quantica) company, has announced a major project: a 5,000-acre energy and digital infrastructure campus outside Billings, Montana, USA. The initial projected capacity is 500 MW of renewable power and battery energy storage, expandable to 1 GW. The company plans to begin construction of the Big Sky Campus in 2026.

“Montana has always been a state that builds its future on the strength of its people and natural resources,” says Damon Obie, a Montana native and co-founder of Big Sky Digital Infrastructure. “The Big Sky Campus represents a unique opportunity to build on the industries that powered our history with the digital economy that will define our future.

"This project is about creating opportunities for Montanans, so our communities can thrive in the digital age while staying true to our values and heritage.”

John Chesser, co-founder of Big Sky Digital Infrastructure, adds, “A well-planned digital economy can support communities through employment opportunities and infrastructure investments.

“This project uses the rising demand for hyperscale, AI, and cloud computing to deliver land, renewable energy, and high-speed fibre in one integrated solution.”

“Having worked in the Montana power industry for over twenty years,” comments Charlie Baker, BSDI’s Chief Financial Officer, “I look forward to bringing BSDI’s approach of combining traditional grid power with planned renewable and battery energy storage to help customers meet sustainability and reliability goals.

"Improvements to in-state telecommunications that come with this will benefit the whole community, including schools, healthcare, and community services.”

The site is expected to be connected to hundreds of miles of new fibre-ready underground conduit, enabling diverse routes to major metropolitan areas and aiming to ensure fast, resilient connectivity. The site will also include large-scale renewable energy and battery energy storage to support the campus. Through this project, the BSDI team expects to create construction jobs and permanent positions, boosting local economic development and workforce training.

Siemens earns Platinum in EcoVadis Sustainability Rating
German multinational technology company Siemens has been awarded the Platinum medal in the 2025 EcoVadis Sustainability Rating. This achievement places Siemens among the top 1% of around 130,000 companies assessed worldwide by EcoVadis, a provider of business sustainability ratings.

The Platinum medal, according to the company, "underscores Siemens' commitment to sustainability and reflects achievements across all of EcoVadis’ assessment areas: Environment, Ethics, Labour & Human Rights, and Sustainable Procurement." EcoVadis assessed Siemens with a score of 85 points. In addition, Siemens Mobility was assessed separately, achieving a score of 84 points. According to Siemens, more than 90% of its business enables customers to achieve a positive sustainability impact across three key areas: decarbonisation and energy efficiency, resource efficiency and circularity, and people centricity and society.

“Achieving the highest-ever score and being among the top 1% of all rated companies reinforces our position as a sustainability leader and recognises the dedication of our people,” says Eva Riesenhuber, Global Head of Sustainability at Siemens. “Sustainability is at the core of our business, and we are continuing to scale our impact in the areas of industry, infrastructure, and mobility, while empowering our customers to become more competitive, more resilient, and more sustainable.”

Andreas Mehlhorn, Head of Sustainability at Siemens Mobility, adds, “Being awarded the EcoVadis Platinum medal once again is a strong testament to our leading position in the rail industry.

"It reflects our commitment to integrating sustainable solutions for our customers by maintaining rigorous sustainability standards across our operations and supply chain.”

The EcoVadis business sustainability rating is based on international sustainability standards, including the Ten Principles of the UN Global Compact, the International Labour Organization (ILO) conventions, the Global Reporting Initiative (GRI) standards, and ISO 26000.

For more from Siemens, click here.

The hidden cost of overuse and misuse of data storage
Most organisations are storing far more data than they use and, while keeping it “just in case” might feel like the safe option, it’s a habit that can quietly chip away at budgets, performance, and even sustainability goals. At first glance, storing everything might not seem like a huge problem. But when you factor in rising energy prices and ballooning data volumes, the cracks in that strategy start to show. Over time, outdated storage practices, from legacy systems to underused cloud buckets, can become a surprisingly expensive problem.

Mike Hoy, Chief Technology Officer at UK edge infrastructure provider Pulsant, explores this growing challenge for UK businesses:

More data, more problems

Cloud computing originally promised a simple solution: elastic storage, pay-as-you-go, and endless scalability. But in practice, this flexibility has led many organisations to amass sprawling, unmanaged environments. Files are duplicated, forgotten, or simply left idle – all while costs accumulate.

Many businesses also remain tied to on-premises legacy systems, either from necessity or inertia. These older infrastructures typically consume more energy, require regular maintenance, and provide limited visibility into data usage. Put unmanaged cloud plus outdated on-prem systems together and you’ve got a recipe for inefficiency.

The financial sting of bad habits

Most leaders in IT understand storing and securing data costs money. But what often gets overlooked are the hidden costs: the backup of low-value data, the power consumption of idle systems, or the surprise charges that come from cloud services which are not being monitored properly.

Then there’s the operational cost. Disorganised or poorly labelled data makes access slower and compliance tougher. It also increases security risks, especially if sensitive information is spread across uncontrolled environments. The longer these issues go unchecked, the more danger there is of a snowball effect.

Smarter storage starts with visibility

The first step towards resolving these issues isn’t deleting data indiscriminately, it’s understanding what’s there. Carrying out an infrastructure or storage audit can shed light on what’s being stored, who’s using it, and whether it still serves a purpose. Once that visibility is at your fingertips, you can start making smarter decisions about what stays, what goes, and what gets moved somewhere more cost-effective.

This is where a hybrid approach of combining cloud, on-premises, and edge infrastructure comes into play. It lets businesses tailor their storage to the job at hand, reducing waste while improving performance.

Why edge computing is part of the solution

Edge computing isn’t just a tech buzzword; it’s an increasingly practical way to harness data where it’s generated. By processing information at the edge, organisations can act on insights faster, reduce the volume of data stored centrally, and ease the load on core networks and systems.

Edge computing technologies make this approach practical. By using regional edge data centres or local processing units, businesses can filter and process data closer to its source, sending only essential information to the cloud or core infrastructure. This reduces storage and transmission costs and helps prevent the build-up of redundant or low-value data that can silently increase expenses over time.
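To make the "filter at the edge, forward only what matters" pattern concrete, here is a minimal Python sketch under stated assumptions: the device names, threshold, and forwarding stub are hypothetical and are not Pulsant's platform; the point is simply that a batch of raw readings shrinks to a compact summary before it ever reaches central storage.

```python
from dataclasses import dataclass
from statistics import mean
import json

# Minimal illustration of edge-side filtering: raw readings are processed
# locally and only a small summary is forwarded to central/cloud storage.
# Names, threshold, and the forwarding stub are hypothetical.

ALERT_THRESHOLD = 75.0   # assumed: readings above this value are worth keeping in full


@dataclass
class Summary:
    device_id: str
    count: int
    avg: float
    peak: float
    alerts: list           # only the readings that exceeded the threshold


def summarise_at_edge(device_id: str, readings: list[float]) -> Summary:
    """Reduce a batch of raw readings to a compact summary plus any alert values."""
    return Summary(
        device_id=device_id,
        count=len(readings),
        avg=round(mean(readings), 2),
        peak=max(readings),
        alerts=[r for r in readings if r > ALERT_THRESHOLD],
    )


def forward_to_core(summary: Summary) -> None:
    """Stand-in for an upload to central storage; prints the payload that would be sent."""
    payload = json.dumps(summary.__dict__)
    print(f"forwarding {len(payload)} bytes instead of the full raw batch")


# Example: 1,000 raw readings shrink to a summary a few hundred bytes long.
raw = [20.0 + (i % 60) for i in range(1000)]
forward_to_core(summarise_at_edge("sensor-042", raw))
```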
This approach is particularly valuable in data-heavy industries such as healthcare, logistics, and manufacturing, where large volumes of real-time information are produced daily. Processing data locally enables businesses to store less, move less, and act faster.

The wider payoff

Cutting storage costs is an obvious benefit but it’s far from the only one. A smarter, edge-driven strategy helps businesses build a more efficient, resilient, and sustainable digital infrastructure:

• Lower energy usage — By processing and filtering data locally, organisations reduce the energy demands of transmitting and storing large volumes centrally, supporting both carbon reduction targets and lower utility costs. As sustainability reporting becomes more critical, this can also help meet Scope 2 emissions goals.
• Faster access to critical data — When the most important data is processed closer to its source, teams can respond in real time, meaning improved decision-making, customer experience, and operational agility.
• Greater resilience and reliability — Local processing means organisations are less dependent on central networks. If there’s an outage or disruption, edge infrastructure can provide continuity, keeping key services running when they’re needed most.
• Improved compliance and governance — By keeping sensitive data within regional boundaries and only transmitting what’s necessary, businesses can simplify compliance with regulations such as GDPR, while reducing the risk of data sprawl and shadow IT.

Ultimately, it’s about creating a storage and data environment that’s fit for modern demands. It needs to be fast, flexible, efficient, and aligned with wider business priorities.

Don’t let storage be an afterthought

Data is valuable - but only when it's well managed. When storage becomes a case of “out of sight, out of mind,” businesses end up paying more for less. And what do they have to show for it? Ageing infrastructure and bloated cloud bills.

A little housekeeping goes a long way. By adopting modern infrastructure strategies, including edge computing and hybrid storage models, businesses can transform data storage from a hidden cost centre into a source of operational efficiency and competitive advantage.

For more from Pulsant, click here.

365 Data Centers, Megaport grow partnership
365 Data Centers (365), a provider of network-centric colocation, network, cloud, and other managed services, has announced a further expansion of its partnership with Megaport, a global Network-as-a-Service (NaaS) provider.

Megaport has broadened its 365 footprint by adding Points of Presence (PoPs) at several of 365 Data Centers’ colocation facilities - namely Alpharetta, GA; Aurora, CO; Boca Raton, FL; Bridgewater, NJ; Carlstadt, NJ; and Spring Garden, PA - enhancing the public cloud and other connectivity options available to 365’s customers. Those customers will now be able to access DIA, Transport, and direct-to-cloud connectivity options to all the major public cloud hyperscalers - such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud, and IBM Cloud - directly from 365 Data Centers.

“Integrating Megaport’s advanced connectivity solutions into our data centers is a natural progression of our partnership and network-centric strategy," comments Derek Gillespie, CRO at 365 Data Centers. "When we’ve added to Megaport’s presence in other facilities, the deployments [have] fortified our joint Infrastructure-as-a-Service (IaaS) and NaaS offerings and complemented our partnership in major markets.

"Megaport’s growing presence with 365 significantly enhances the public cloud connectivity options available to our customers.”

Michael Reid, CEO at Megaport, adds, “Our expanded partnership with 365 Data Centers is all about pushing boundaries and delivering more for our customers.

"Together, we’re making cutting-edge network solutions easier to access, no matter the size or location of the business, so customers can connect, scale, and innovate on their terms.”

For more from 365 Data Centers, click here.

Fujitsu developing 10,000+ qubit quantum computer
Japanese multinational ICT company Fujitsu today announced it has started research and development towards a superconducting quantum computer with a capacity exceeding 10,000 qubits. Construction is slated for completion in fiscal 2030.

The new superconducting quantum computer will operate with 250 logical qubits and will utilise Fujitsu's 'STAR architecture,' an early-stage fault-tolerant quantum computing (early-FTQC) architecture also developed by the company. Fujitsu aims to make practical quantum computing possible - particularly in areas like materials science, where complex simulations could unlock groundbreaking discoveries - and, to this end, will focus on advancing key scaling technologies across various technical domains.

As part of this effort, Fujitsu has been selected as an implementing party for the 'Research and Development Project of the Enhanced Infrastructures for Post-5G Information and Communication Systems,' publicly solicited by the New Energy and Industrial Technology Development Organisation (NEDO). The company will be contributing to the thematic area of advancing the development of quantum computers towards industrialisation. The project will be promoted through joint research with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) and RIKEN, and will run until fiscal year 2027.

After this 10,000-qubit machine is built, the company says it will pursue further advanced research targeting the integration of superconducting and diamond spin-based qubits from fiscal 2030, aiming to realise a 1,000-logical-qubit machine in fiscal 2035, while considering the possibility of multiple interconnected qubit chips.

Vivek Mahajan, Corporate Executive Officer, Corporate Vice President, and CTO in charge of System Platform at Fujitsu, comments, "Fujitsu is already recognised as a world leader in quantum computing across a broad spectrum, from software to hardware.

"This project, led by NEDO, will contribute significantly to Fujitsu’s goal of further developing a 'Made in Japan' fault-tolerant superconducting quantum computer.

"We would also be aiming to combine superconducting quantum computing with diamond spin technology as part of our roadmap.

"By realising 250 logical qubits in fiscal 2030 and 1,000 logical qubits in fiscal 2035, Fujitsu is committed to leading the path forward globally in the field of quantum computing.

"Additionally, Fujitsu will be developing the next generation of its HPC platform, using its FUJITSU-MONAKA processor line, which will also power FugakuNEXT. Fujitsu will further integrate its platforms for high-performance and quantum computing to offer a comprehensive computing platform to our customers."

Focus areas for technological development

Fujitsu says its research efforts will focus on developing the following scaling technologies:

• High-throughput, high-precision qubit manufacturing technology — Improving the manufacturing precision of Josephson junctions, critical components of superconducting qubits, in order to minimise frequency variations.
• Chip-to-chip interconnect technology — Developing wiring and packaging technologies to enable the interconnection of multiple qubit chips, facilitating the creation of larger quantum processors.
• High-density packaging and low-cost qubit control — Addressing the challenges associated with cryogenic cooling and control systems, including techniques to reduce component count and heat dissipation.
• Decoding technology for quantum error correction — Developing algorithms and system designs for decoding measurement data and correcting errors in quantum computations (see the simplified, illustrative decoder sketch following this article).

Background

The world faces increasingly complex challenges that demand computational power beyond the reach of traditional computers. Quantum computers offer the promise of tackling these previously intractable problems, driving advancements across numerous fields. While a fully fault-tolerant quantum computer with 1 million qubits of processing power is considered the ultimate goal, Fujitsu states it is focused on delivering practical solutions in the near term.

In August 2024, in collaboration with the University of Osaka, Fujitsu unveiled its 'STAR architecture,' an efficient quantum computing architecture based on phase rotation gates. This architecture could pave the way for early-FTQC systems capable of outperforming conventional computers with only 60,000 qubits.

On the hardware front, the RIKEN RQC-Fujitsu Collaboration Center, established in 2021 with RIKEN, delivered a 64-qubit superconducting quantum computer in October 2023, followed by a 256-qubit system in April 2025. Scaling to even larger systems requires overcoming challenges such as maintaining high fidelity across multiple interconnected qubit chips and achieving greater integration of components and wiring within dilution refrigerators.

In addition to its superconducting approach, Fujitsu is reportedly also exploring the potential of diamond spin-based qubits, which use light for qubit connectivity. The company is conducting research in this area in collaboration with Delft University of Technology and QuTech, a quantum technology research institute, which has resulted in the successful creation of accurate and controllable qubits.

For more from Fujitsu, click here.
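One of the listed focus areas, decoding measurement data for quantum error correction, can be illustrated with the simplest classical analogue: a three-bit repetition code decoded by majority vote. The Python sketch below is a didactic toy, unrelated to Fujitsu's STAR architecture or its actual decoders; it only conveys what a decoder does, namely infer the most likely error from noisy measurements and undo it.

```python
import random

# Toy illustration of error-correction decoding: a 3-bit repetition code.
# A logical bit is encoded as three physical bits; the decoder inspects
# parity "syndromes" and applies the most likely correction (majority vote).
# Classical teaching example only; not Fujitsu's quantum decoder.


def encode(logical_bit: int) -> list[int]:
    """Encode one logical bit into three physical bits."""
    return [logical_bit] * 3


def apply_noise(bits: list[int], flip_prob: float = 0.1) -> list[int]:
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]


def syndromes(bits: list[int]) -> tuple[int, int]:
    """Parity checks between neighbouring bits; nonzero parity signals an error."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])


def decode(bits: list[int]) -> int:
    """Majority-vote decoder: recovers the logical bit if at most one bit flipped."""
    return int(sum(bits) >= 2)


random.seed(1)
logical = 1
noisy = apply_noise(encode(logical))
print("received:", noisy, "syndromes:", syndromes(noisy), "decoded:", decode(noisy))
# With zero or one flips, the decoder always recovers the original logical bit.
```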

Sabey announces Austin Building B
Sabey Data Centers, a data centre developer, owner, and operator, has announced that construction is under way for Building B on its growing Austin campus, located in the burgeoning tech corridor of Round Rock, Texas. The three-storey facility is designed to deliver a total of 54 megawatts of power capacity, with the first 18 megawatts expected to be ready for service in Q3 2027.

Sabey says Austin B continues its commitment to building "scalable, energy-efficient digital infrastructure tailored for enterprise and hyperscale needs." The facility is liquid-cooling-ready by design, building on Austin Building A, where 86% of current deployments are liquid-cooled. This next phase of development is intended to ensure that Sabey is well positioned to support rising demand for high-density compute environments such as AI, HPC, and advanced research workloads.

“As we continue to expand our national footprint, launching construction on Austin B represents an important milestone in serving one of the country’s fastest-growing technology markets,” comments Tim Mirick, President of Sabey Data Centers. “The Round Rock facility is purpose-built for flexibility and efficiency, and it offers an ideal home for forward-thinking customers with evolving density needs.”

Preleasing is now open, with the building engineered to accommodate a range of cooling strategies and power densities, including hybrid and liquid-cooled deployments exceeding 200 kilowatts per rack.

Sabey Data Centers is a joint venture between Sabey Corporation and National Real Estate Advisors, acting as the investment manager on behalf of its institutional clients.

For more from Sabey, click here.


