Data Centre Security: Protecting Infrastructure from Physical and Cyber Threats


72% of organisations faced IT disruption in the past year
New research released today uncovers a worrying truth: despite years of digital transformation, IT resilience remains a critical vulnerability for UK enterprises. According to the latest research commissioned by Asanti, a UK colocation data centre provider, 72% of senior IT decision makers reported experiencing significant disruption or downtime in the past year due to IT resilience issues, with only 31% expressing extreme confidence in their current disaster recovery and business continuity plans.

The research, conducted by Vanson Bourne, surveyed 100 senior IT leaders across public and private sector organisations. The findings reveal a concerning disconnect between perceived and actual resilience performance.

Key highlights from the study

• Gaps in recovery preparedness — Only 56% of organisations surveyed have fully defined and regularly tested Recovery Time Objectives (RTOs), and just 36% say the same for Recovery Point Objectives (RPOs), which are the core thresholds set for acceptable downtime and data loss.
• Significant operational fallout — 60% of businesses struggled to return to normal operations after a major resilience disruption, while 58% admitted to suffering substantial financial losses.
• Confidence crisis in risk recovery — Over half of respondents report only low or medium confidence in handling major risks like cybersecurity breaches (54%), data centre outages (61%), or unauthorised physical access (62%).

The research also identifies a “resilience competency gap,” where critical planning, testing, and investment decisions are lagging behind the complexity and frequency of modern IT threats. Despite widespread cloud adoption, 51% still view cloud service outages as one of the top risks to operations - surpassing even traditional IT system failures (49%).

“Too many organisations assume they’re more resilient than they actually are,” comments Stewart Laing, CEO of Asanti. “This research makes clear that real resilience isn’t about where your systems live, it’s about how well you’ve prepared to keep them running.

"Without clearly defined recovery objectives, rigorous testing, and a culture of proactive risk management, even the most advanced infrastructure can fail. Business leaders must move beyond surface-level confidence and embed resilience into every layer of operations.”

The human factor remains a weak link

The study found that 89% of respondents believe human oversight is a critical vulnerability in their resilience strategy, while 91% agreed that operational failures due to human error could compromise backup power capabilities. While 59% of those respondents said that they test their business continuity and disaster recovery plans at least every six months, these exercises often lack the depth required to reveal systemic resilience weaknesses. Only 31% of respondents felt extremely confident in their current business continuity and disaster recovery plans - a sobering indicator of widespread fragility.

Businesses are not measuring the true impact of downtime

Although most firms track downtime (77%) and financial impact (73%) of resilience incidents, softer yet critical indicators like reputational impact (54%) and impact on digital transformation goals (57%) are often overlooked.

“Measurement is the foundation of resilience,” continues Stewart. “If you’re only tracking outages and costs, you’re missing the true business impact.

"Resilience must be strategic, tested, and integrated across infrastructure, operations, and leadership thinking.”

For more from Asanti, click here.
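The RTO and RPO thresholds highlighted in the study are straightforward to operationalise once they are actually defined. The sketch below is a minimal illustration, with hypothetical names and minute-based units, of how an incident can be checked against both objectives:

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    """Hypothetical container for the two core recovery thresholds."""
    rto_minutes: float  # Recovery Time Objective: max acceptable downtime
    rpo_minutes: float  # Recovery Point Objective: max acceptable data-loss window

def assess_incident(objectives: RecoveryObjectives,
                    downtime_minutes: float,
                    data_loss_minutes: float) -> list:
    """Return the list of recovery objectives an incident breached."""
    breaches = []
    if downtime_minutes > objectives.rto_minutes:
        breaches.append("RTO")
    if data_loss_minutes > objectives.rpo_minutes:
        breaches.append("RPO")
    return breaches
```

For example, an organisation with a 60-minute RTO and a 15-minute RPO would flag a 90-minute outage as an RTO breach. Regular testing, as the research stresses, is what confirms those thresholds are actually achievable rather than aspirational.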

Infoblox unveils 2025 DNS Threat Landscape Report
Infoblox, a provider of cloud networking and security services, today released its 2025 DNS Threat Landscape Report, revealing a dramatic surge in DNS-based cyberthreats and the growing sophistication of adversaries leveraging AI-enabled deepfakes, malicious adtech, and evasive domain tactics. Based on pre-attack telemetry and real-time analysis of DNS queries from thousands of customer environments - with over 70 billion DNS queries per day - the report offers a view into how threat actors exploit DNS to deceive users, evade detection, and hijack trust.

"This year's findings highlight the many ways in which threat actors are taking advantage of DNS to operate their campaigns, both in terms of registering large volumes of domain names and also leveraging DNS misconfigurations to hijack existing domains and impersonate major brands," says Renée Burton, Head of Infoblox Threat Intel. "The report exposes the widespread use of traffic distribution systems (TDS) to help disguise these crimes, among other trends security teams must look out for to stay ahead of attackers."

Research background

Since its inception, Infoblox Threat Intel has identified a total of over 660 unique threat actors and more than 204,000 suspicious domain clusters - a cluster being a group of domains believed to be registered by the same actor. Over the past 12 months, Infoblox researchers have published research covering 10 new actors, and have uncovered the breadth and depth of malicious adtech, which disguises threats from users through TDS. The report brings together findings from the past 12 months to illuminate attack trends, shedding particular light on adtech's role in these attacks.
Top findings

• 100.8 million newly observed domains in the past year, with 25.1% classified as malicious or suspicious
• 95% of threat-related domains observed in only one customer environment
• 82% of customer environments queried domains associated with malicious adtech, which rotates a massive number of domains to evade security tools and serve malicious content
• Nearly 500k traffic distribution system (TDS) domains were seen in the last 12 months within Infoblox networks
• Daily detection of DNS tunneling, exfiltration, and command and control, including Cobalt Strike, Sliver, and custom tools, which require ML algorithms to detect

Uptick in newly observed domains

Over the year, threat actors continuously registered, activated, and deployed new domains, often in very large sets through automated registration processes. By increasing their number of domains, threat actors can bypass traditional forensic-based defences, which are built on a "patient zero" approach to security. This reactive approach relies on detecting and analysing threats after they have already been used somewhere else in the world. As attackers leverage increasing levels of new infrastructure, this approach becomes ineffective, leaving organisations vulnerable. Actors are using these domains for an array of malicious purposes, from creating phishing pages and deploying malware through drive-by downloads to engaging in fraudulent activities and scams, such as fake cryptocurrency investment sites.

The need for preemptive security

These findings underscore a pressing need for organisations to be proactive in the face of AI-equipped attackers. Investing in preemptive security can be the deciding factor in successfully thwarting threat actors. Proactive protection, paired with a consistent watch on emerging threats, tips the scales in favour of security teams — allowing them to pull ahead of attackers and interrupt their unlimited supply of domains.
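One simple heuristic behind spotting the machine-generated domains described above is character entropy: automatically registered and tunnelling domains tend to have long, random-looking labels, while human-chosen names repeat characters and stay short. The sketch below is purely illustrative, it is not Infoblox's detection method, and the length and entropy thresholds are arbitrary assumptions:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_suspicious(domain: str, min_len: int = 12, min_entropy: float = 3.0) -> bool:
    """Flag long, high-entropy leftmost labels, a common trait of
    algorithmically generated domains used for evasion or tunnelling."""
    label = domain.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) >= min_entropy
```

Real systems combine many more signals (registration age, registrar, query patterns, clustering across environments), which is why the report notes that ML models are needed for reliable detection.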

Summer habits could increase cyber risk to enterprise data
As flexible work arrangements expand over the summer months, cybersecurity experts are warning businesses about the risks associated with remote and ‘workation’ models, particularly when employees access corporate systems from unsecured environments. According to Andrius Buinovskis, Cybersecurity Expert at NordLayer - a provider of network security services for businesses - working from abroad or outside traditional office settings can increase the likelihood of data breaches if not properly managed.

The main risks include use of unsecured public Wi-Fi, reduced vigilance against phishing scams, use of personal or unsecured devices, and exposure to foreign jurisdictions with weaker data protection regulations. Devices used outside the workplace are also more susceptible to loss or theft, further raising the threat of data exposure.

Andrius recommends the following key measures to mitigate risk:

• Strong network encryption — It secures data in transit, transforming it into an unreadable format and safeguarding it from potential attackers.
• Multi-factor authentication — Access controls, like multi-factor authentication, make it more difficult for cybercriminals to access accounts with stolen credentials, adding a layer of protection.
• Robust password policies — Hackers can easily target and compromise accounts protected by weak, reused, or easy-to-access passwords. Enforcing strict password management policies requiring unique, long, and complex passwords, and educating employees on how to store them securely, minimises the possibility of falling victim to cybercriminals.
• Zero trust architecture — The constant verification of all devices and users trying to access the network significantly reduces the possibility of a hacker successfully infiltrating the business.
• Network segmentation — If a bad actor does manage to infiltrate the network, ensuring it's segmented helps to minimise the potential damage. Not granting all employees access to the whole network and limiting it to the parts essential for their work helps reduce the scope of the data an infiltrator can access.

He also highlights the importance of centralised security and regular staff training on cyber hygiene, especially when using personal devices or accessing systems while travelling. “High observability into employee activity and centralised security are crucial for defending against remote work-related cyber threats,” he argues.
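The least-privilege segmentation Andrius describes can be enforced with a deny-by-default policy map: each role is granted only the segments essential for its work, and anything unlisted is refused. A minimal sketch, with hypothetical role and segment names:

```python
# Hypothetical role-to-segment policy (the names are illustrative).
# Each role may reach only the network segments essential for its work.
SEGMENT_POLICY = {
    "engineering": {"dev", "staging"},
    "finance": {"erp"},
    "it-admin": {"dev", "staging", "erp", "management"},
}

def can_access(role: str, segment: str) -> bool:
    """Deny by default: unknown roles or unlisted segments get no access."""
    return segment in SEGMENT_POLICY.get(role, set())
```

In practice this logic lives in firewalls, VLAN/SDN rules, or a zero-trust access broker rather than application code, but the deny-by-default principle is the same: a compromised finance account never reaches the development segment.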

'Have we learned anything from the CrowdStrike outage?'
On 19 July 2024, services and industries around the world ground to a halt. The cause? A defective rapid response content update. While the risk posed by such updates was widely known among security experts, their sheer impact was made painfully clear to the average person, affecting countless businesses and organisations in every sector. With sectors from airlines and healthcare to financial services and government affected, the impact on people was felt far and wide - with banking apps out of action and hospitals having to cancel non-urgent surgeries.

Yet, a year on from the global IT outage, have businesses really learned anything? Recent outages for banks and major service providers would suggest otherwise. Although not every outage can be avoided, there are a few key things businesses should remember. Eileen Haggerty, Area Vice President, Product & Solutions at Netscout, gives her biggest takeaways from the outage and how organisations can avoid the same happening again:

“If nothing else, businesses should ensure they have the visibility they need to pre-empt issues stemming from software updates. Realistically, they need complete round-the-clock monitoring of their networks and entire IT environment.

"With this visibility - and by carrying out maintenance checks and regular updates - organisations can mitigate the risk of unexpected downtime and, in turn, prevent financial and reputational losses.

“Securing a network and assuring consistent performance isn't just about deploying defences, it's about anticipating every move. That's why a best practice for IT teams includes conducting proactive synthetic tests which simulate real traffic, long before a single customer encounters a frustrating lag or a critical function fails.

"Conducting these tests provides organisations with the vital foresight they need to anticipate issues before they even have a chance to materialise. This step, combined with proactive real-time traffic monitoring, provides the vital details necessary when facing a major industry outage, security incident, or local corporate issue, enabling the appropriate response with evidence as fast as possible.

“While outages like last year’s are a harsh lesson for businesses, they also present an invaluable learning opportunity. Truly resilient organisations will turn the disruption they experienced into a powerful data source and a blueprint for performance assurance and operational resilience.

"This means leveraging advanced visibility tools to conduct deeply informative post-mortems. By building a rich, detailed repository of information from every previous incident, organisations aren’t just documenting history, they're establishing best practice policies and actively future-proofing their operations, ensuring they can anticipate and navigate any potential challenges before they become an issue for customers.”

For more from Netscout, click here.
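A proactive synthetic test of the kind Haggerty describes boils down to running a scripted user journey on a schedule and comparing its outcome and latency against a budget. The sketch below is a minimal, generic illustration (the probe callable and latency budget are assumptions, not Netscout's implementation):

```python
import time

def synthetic_check(probe, latency_budget_s: float):
    """Run one synthetic transaction and compare its latency to a budget.

    `probe` is any zero-argument callable that exercises a user journey
    (e.g. an HTTP login flow). Returns (passed, elapsed_seconds): the check
    fails if the probe raises or exceeds its latency budget.
    """
    start = time.perf_counter()
    try:
        probe()
        ok = True
    except Exception:
        ok = False
    elapsed = time.perf_counter() - start
    return ok and elapsed <= latency_budget_s, elapsed
```

Scheduled from several vantage points, checks like this surface a failing dependency or a slow update rollout before real customers hit it, which is exactly the foresight the quote argues for.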

Cybersecurity teams pushing back against AI hype
Despite industry hype and pressure from business leaders to accelerate adoption, cybersecurity teams are reportedly taking a cautious approach to artificial intelligence (AI). This is according to a new survey from ISC2, a non-profit organisation that provides cybersecurity training and certifications. While AI is widely promoted as a game-changer for security operations, only a small proportion of practitioners have integrated these tools into their daily workflows, with many remaining hesitant due to concerns over privacy, oversight, and unintended risks.

The survey of over 1,000 cybersecurity professionals found that just 30% of cybersecurity teams are currently using AI tools in their daily operations, while 42% are still evaluating their options. Only 10% said they have no plans to adopt AI at all. Adoption is most advanced in industrial sectors (38%), IT services (36%), and professional services (34%). Larger organisations with more than 10,000 employees are further ahead on the adoption curve, with 37% actively using AI tools. In contrast, smaller businesses - particularly those with fewer than 99 staff or between 500 and 2,499 employees - show the lowest uptake, with only 20% using AI. Among the smallest organisations, 23% say they have no plans to evaluate AI security tools at all.

Andy Ward, SVP International at Absolute Security, comments, “The ISC2 research echoes what we’re hearing from CISOs globally. There’s real enthusiasm for the potential of AI in cybersecurity, but also a growing recognition that the risks are escalating just as fast.

"Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control.

"AI offers huge promise to improve detection, speed up response times, and strengthen defences, but without robust strategies for cyber resilience and real-time visibility, organisations risk sleepwalking into deeper vulnerabilities.

"As attackers leverage AI to reduce the gap between vulnerability and exploitation, our defences must evolve with equal urgency. Now is the time for security leaders to ensure their people, processes, and technologies are aligned, or risk being left dangerously exposed.”

Arkadiy Ukolov, Co-Founder and CEO at Ulla Technology, adds, “It’s no surprise to see security professionals taking a measured, cautious approach to AI. While these tools bring undeniable efficiencies, privacy and control over sensitive data must come first.

"Too many AI solutions today operate in ways that risk exposing confidential information through third-party platforms or unsecured systems.

"For AI to be truly fit for purpose in cybersecurity, it must be built on privacy-first foundations, where data remains under the user’s control and is processed securely within an enclosed environment. Protecting sensitive information demands more than advanced tech alone, it requires ongoing staff awareness, training on AI use, and a robust infrastructure that doesn’t compromise security."

Despite this caution, where AI has been implemented, the benefits are clear: 70% of those already using AI tools report positive impacts on their cybersecurity team’s overall effectiveness. Key areas of improvement include network monitoring and intrusion detection (60%), endpoint protection and response (56%), vulnerability management (50%), threat modelling (45%), and security testing (43%).

Looking ahead, AI adoption is expected to have a mixed impact on hiring. Over half of cybersecurity professionals believe AI will reduce the need for entry-level roles by automating repetitive tasks.
However, 31% anticipate that AI will create new opportunities for junior talent or demand new skill sets, helping to rebalance some of the projected reductions in headcount. Encouragingly, 44% said their hiring plans have not yet been affected, though the same proportion report that their organisations are actively reconsidering the skills and roles required to manage AI technologies.

Datadog partners with AWS to launch in Australia and NZ
Datadog, a monitoring and security platform for cloud applications, has just launched its full range of products and services in the Amazon Web Services (AWS) Asia-Pacific (Sydney) Region. The launch adds to existing locations in North America, Asia, and Europe.

The new local availability zone enables Datadog, its customers, and its partners to store and process data locally, providing in-region capacity to meet applicable Australian privacy, security, and data storage requirements. This, according to the company, is crucial for an increasing number of organisations - particularly those operating in regulated environments such as government, banking, healthcare, and higher education.

“This milestone reinforces Datadog’s commitment to supporting the region’s advanced digital capabilities - especially the Australian government’s ambition to make the country a leading digital economy,” says Yanbing Li, Chief Product Officer at Datadog. “With strong momentum across public and private sectors, our investment enhances trust in Datadog’s unified and cloud-agnostic observability and security platform, and positions us to meet the evolving needs of agencies and enterprises alike.”

Rob Thorne, Vice President for Asia-Pacific and Japan (APJ) at Datadog, adds, "Australian organisations are on track to spend nearly A$26.6 billion [£12.84 billion] on public cloud services alone in 2025.

"For organisations in highly regulated industries, it isn’t just the cloud provider that needs to have local data storage capacity, it should be all layers of the tech stack.

"This milestone reflects Datadog’s priority to support these investments. It’s the latest step in our expansion down under, and follows the continued addition of headcount to support our more than 1,100 A/NZ customers, as well as the recent appointments of Field CTO for APJ, Yadi Narayana, and Vice President of Commercial Sales for APJ, Adrian Towsey, to our leadership team.”

For more from Datadog, click here.

Netscout expands cybersecurity systems
Netscout Systems, a provider of observability, AIOps, cybersecurity, and DDoS attack protection systems, has just announced Adaptive Threat Analytics, a new enhancement to its Omnis Cyber Intelligence Network Detection and Response (NDR) solution, designed to improve incident response and reduce risk. The aim with the offering is to "enable security teams to investigate, hunt, and respond to cyber threats more rapidly."

Cybersecurity professionals face a race against time to detect and respond appropriately to cyber threats before it's too late. Alert fatigue, increasing alert volume, fragmented visibility from siloed tools, and cunning AI-enabled adversaries create a compelling need for a faster and more effective response plan. McKinsey & Company noted last year that, despite a decline in response time to cyber-related risks in recent years, organisations still take an average of 73 days to contain an incident.

In the threat detection and incident response process, comprehensive north-south and east-west network visibility plays a critical role in all phases, but none more so than the ‘Analyse’ phase between ‘Detection’ and ‘Response.’ Adaptive Threat Analytics utilises continuous network packet capture and local storage of metadata and packets independent of detections, built-in packet decodes, and an ad hoc querying language, seeking to enable more rapid threat investigation and proactive hunting.

“Network environments continue to become more disparate and complex," says John Grady, Principal Analyst, Cybersecurity, Enterprise Strategy Group. "Bad actors exploit this broadened attack surface, making it difficult for security teams to respond quickly and accurately.

"Due to this, continuous, unified, packet-based visibility into north-south and east-west traffic has become essential for effective and efficient threat detection and incident response.”

“Security teams often lack the specific knowledge to understand exactly what happened to be able to choose the best response,” claims Jerry Mancini, Senior Director, Office of the CTO, Netscout. “Omnis Cyber Intelligence with Adaptive Threat Analytics provides ‘big picture’ data before, during, and after an event that helps teams and organisations move from triage uncertainty and tuning to specific knowledge essential for reducing the mean time to resolution.”

For more from Netscout, click here.

DigiCert opens registration for World Quantum Readiness Day
DigiCert, a US-based digital security company, today announced open registration for its annual World Quantum Readiness Day virtual event, which takes place on Wednesday, 10 September 2025. The company is also accepting submissions for its Quantum Readiness Awards. Both initiatives intend to spotlight the critical need for current security infrastructures to adapt to the imminent reality of quantum computing.

World Quantum Readiness Day is, according to DigiCert, a "catalyst for action, urging enterprises and governments worldwide to evaluate their preparedness for the emerging quantum era." It seeks to highlight the growing urgency to adopt post-quantum cryptography (PQC) standards and provide a "playbook" to help organisations defend against future quantum-enabled threats.

“Quantum computing has the potential to unlock transformative advancements across industries, but it also requires a fundamental rethink of our cybersecurity foundations,” argues Deepika Chauhan, Chief Product Officer at DigiCert. “World Quantum Readiness Day isn’t just a date on the calendar, it’s a starting point for a global conversation about the urgent need for collective action to secure our quantum future.”

The Quantum Readiness Awards were created to celebrate organisations that are leading the charge in quantum preparedness. Judges for the Quantum Readiness Awards include:

• Bill Newhouse, Cybersecurity Engineer & Project Lead, National Cybersecurity Center of Excellence, NIST
• Dr Ali El Kaafarani, CEO, PQShield
• Alan Shimel, CEO, TechStrong Group
• Blair Canavan, Director, Alliances PQC Portfolio, Thales
• Tim Hollebeek, Industry Technology Strategist, DigiCert

For more from DigiCert, click here.

Global data centres face rising climate risks, XDI report warns
Data centres are facing sharply rising risks from climate-change-driven extreme weather, according to a major new report released today by XDI (Cross Dependency Initiative), a physical climate risk analytics company. The company argues that without urgent investment in emissions reduction and physical adaptation, operators could face soaring insurance premiums, growing disruption to operations, and billions in damages.

XDI’s 2025 Global Data Centre Physical Climate Risk and Adaptation Report offers a global picture of how extreme weather threatens the backbone of the digital economy. The report ranks leading data centre hubs by their exposure to eight climate hazards — flooding, tropical cyclones, forest fires, coastal inundation, and others — now and into the future, under different climate scenarios. It is based on analysis of nearly 9,000 operational and planned data centres worldwide. The report quantifies how targeted structural adaptations (changes to the physical design and construction of data centres) can dramatically improve resilience, reduce risk, and help curb escalating insurance costs.

“Data centres are the silent engine of the global economy. But as extreme weather events become more frequent and severe, the physical structures underpinning our digital world are increasingly vulnerable,” says Karl Mallon, Founder of XDI. "When so much depends on this critical infrastructure and with the sector growing exponentially, operators, investors, and governments can’t afford to be flying blind. Our analysis helps them see the global picture, identify where resilience investments are most needed, and chart pathways to reduce risk."
Key insights from the report include:

• Data centre hubs in New Jersey, Hamburg, Shanghai, Tokyo, Hong Kong, Moskva, Bangkok, and Hovestaden are all in the top 20 for climate risk by 2050, with 20-64% of data centres in these hubs projected to be at high risk of physical damage from climate change hazards by 2050.
• APAC is the world's fastest-growing data centre region, yet it also carries some of the greatest risk, with more than one in ten data centres already at high risk in 2025, rising to more than one in eight by 2050.
• Insurance costs for data centres globally could triple or quadruple by 2050 without decisive mitigation and adaptation.
• Targeted investments in resilience could save billions of dollars in damages annually.

The report highlights that climate risk varies dramatically by location, even between data centres in the same country or region. This kind of like-for-like, jurisdiction-spanning analysis, XDI argues, is critical for guiding smarter investment decisions in new and existing data centres - helping asset owners, operators, and investors allocate capital where it will have the greatest impact on protecting long-term value.

The report also reinforces that decarbonisation and adaptation must go hand in hand to safeguard the digital economy for the long term. Adaptation is essential, but the most resilient data centre is only as secure as the infrastructure it depends on — such as roads, water supply, and communications links — which are themselves vulnerable to climate hazards. Without ambitious and sustained investment in emissions reduction to limit the severity of climate change, no amount of structural hardening will fully protect these critical assets.

Invicti launches new Application Security Platform
Cybersecurity company Invicti today announced the launch of what it calls its "next-gen" Application Security Platform, featuring AI-powered scanning capabilities, enhanced dynamic application security testing (DAST) performance, and full-spectrum visibility into application risk. The platform seeks to enable organisations to detect and fix vulnerabilities faster and with greater accuracy.

“Your applications are dynamic, shouldn’t your AppSec tools be too?” argues Neil Roseman, CEO of Invicti. “Attackers live in your runtime, but most security tools are stuck in static analysis. With Invicti, we’re cutting through the static with a DAST-first platform that continuously uncovers real risk in real time so security teams can take action with confidence.”

DAST improvements with AI

The latest release introduces enhancements to Invicti’s DAST engine, which, according to data provided by the company, include:

• Being 8x faster than leading competitors.
• Finding 40% more high and critical vulnerabilities.
• Delivering 99.98% accuracy with proof-based scanning.

Securing more of what matters

The company says the Invicti platform now combines AI-driven features and integrated discovery to "expose more of the real attack surface and deliver broader, more accurate security coverage." The main features include:

• LLM scanning — securing AI-generated code by identifying risks produced by large language models.
• AI-powered DAST — revealing vulnerabilities that traditionally required manual penetration testing.
• Integrated ASPM — bringing greater visibility into application posture, enabling teams to prioritise and manage risk across the SDLC.
• Enhanced API detection — identifying and testing previously hidden or unmanaged APIs, now with native support for F5, NGINX, and Cloudflare.

“A stronger DAST engine gives our customers more than better scan results, it gives them clarity,” claims Kevin Gallagher, President of Invicti. “They can see what truly matters, cut through the noise, and move faster to reduce risk. This launch continues our push to make security actionable, efficient, and focused on what’s real.”

For more from Invicti, click here.


