Data Centre Security: Protecting Infrastructure from Physical and Cyber Threats


Summer habits could increase cyber risk to enterprise data
As flexible work arrangements expand over the summer months, cybersecurity experts are warning businesses about the risks associated with remote and ‘workation’ models, particularly when employees access corporate systems from unsecured environments.

According to Andrius Buinovskis, Cybersecurity Expert at NordLayer - a provider of network security services for businesses - working from abroad or outside traditional office settings can increase the likelihood of data breaches if not properly managed. The main risks include use of unsecured public Wi-Fi, reduced vigilance against phishing scams, use of personal or unsecured devices, and exposure to foreign jurisdictions with weaker data protection regulations. Devices used outside the workplace are also more susceptible to loss or theft, further raising the threat of data exposure.

Andrius recommends the following key measures to mitigate risk:

• Strong network encryption — Encryption secures data in transit, transforming it into an unreadable format and safeguarding it from potential attackers.
• Multi-factor authentication — Access controls like multi-factor authentication make it more difficult for cybercriminals to access accounts with stolen credentials, adding a layer of protection.
• Robust password policies — Hackers can easily target and compromise accounts protected by weak, reused, or easy-to-guess passwords. Enforcing strict password management policies requiring unique, long, and complex passwords, and educating employees on how to store them securely, minimises the possibility of falling victim to cybercriminals (see the sketch below).
• Zero trust architecture — Constant verification of all devices and users trying to access the network significantly reduces the possibility of a hacker successfully infiltrating the business.
• Network segmentation — If a bad actor does manage to infiltrate the network, segmentation helps to minimise the potential damage. Not granting all employees access to the whole network, and limiting access to the parts essential for their work, reduces the scope of the data an infiltrator can reach.

He also highlights the importance of centralised security and regular staff training on cyber hygiene, especially when using personal devices or accessing systems while travelling. “High observability into employee activity and centralised security are crucial for defending against remote work-related cyber threats,” he argues.
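To make the password-policy point concrete, below is a minimal Python sketch of the kind of check an onboarding or password-change flow might run. The length threshold, character rules, and reuse check are illustrative assumptions, not NordLayer guidance.

```python
import re

# Hypothetical policy thresholds for illustration only; real requirements
# should follow the organisation's own password standard.
MIN_LENGTH = 14

def check_password_policy(password: str, previous_passwords: set) -> list:
    """Return a list of policy violations for a candidate password."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[a-z]", password) or not re.search(r"[A-Z]", password):
        violations.append("missing mixed-case letters")
    if not re.search(r"\d", password):
        violations.append("missing a digit")
    if not re.search(r"[^\w\s]", password):
        violations.append("missing a symbol")
    if password in previous_passwords:
        violations.append("reuses a previous password")
    return violations

# Example: a short, seasonal password fails several checks.
print(check_password_policy("Summer2025", {"Summer2024"}))
```

In practice, checks like this sit alongside a password manager rollout and screening against known-breached passwords rather than replacing them.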

'Have we learned anything from the CrowdStrike outage?'
On 19 July 2024, services and industries around the world ground to a halt. The cause? A defective rapid response content update. While security experts had long understood this kind of risk, its sheer impact was made painfully clear to the average person, affecting countless businesses and organisations in every sector. From airlines to healthcare and financial services to government, the impacts on people were felt far and wide – with banking apps out of action and hospitals having to cancel non-urgent surgeries.

Yet, a year on from the global IT outage, have businesses really learned anything? Recent outages for banks and major service providers would suggest otherwise. Although not every outage can be avoided, there are a few key things businesses should remember.

Eileen Haggerty, Area Vice President, Product & Solutions at Netscout, gives her biggest takeaways from the outage and how organisations can avoid the same happening again:

“If nothing else, businesses should ensure they have the visibility they need to pre-empt issues stemming from software updates. Realistically, they need complete round-the-clock monitoring of their networks and entire IT environment.

"With this visibility - and by carrying out maintenance checks and regular updates - organisations can mitigate the risk of unexpected downtime and, in turn, prevent financial and reputational losses.

“Securing a network and assuring consistent performance isn't just about deploying defences; it's about anticipating every move. That's why a best practice for IT teams includes conducting proactive synthetic tests which simulate real traffic, long before a single customer encounters a frustrating lag or a critical function fails.

"Conducting these tests provides organisations with the vital foresight they need to anticipate issues before they even have a chance to materialise. This step, combined with proactive real-time traffic monitoring, provides the vital details necessary when facing a major industry outage, security incident, or a local corporate issue, enabling the appropriate response, with evidence, as fast as possible.

“While outages like last year’s are a harsh lesson for businesses, they also present an invaluable learning opportunity. Truly resilient organisations will turn the disruption they experienced into a powerful data source and a blueprint for performance assurance and operational resilience.

"This means leveraging advanced visibility tools to conduct deeply informative post-mortems. By building a rich, detailed repository of information from every previous incident, organisations aren’t just documenting history; they're establishing best practice policies and actively future-proofing their operations, ensuring they can anticipate and navigate any potential challenges before they become an issue for customers.”

For more from Netscout, click here.
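As an illustration of the kind of proactive synthetic test Haggerty describes, the following is a minimal Python sketch that times a single scripted request against a health endpoint and flags it when it breaches a latency budget. The endpoint URL and threshold are hypothetical; commercial synthetic monitoring runs scripted, multi-step user journeys from many locations.

```python
import time
import urllib.request

# Hypothetical endpoint and latency budget for illustration only.
ENDPOINT = "https://status.example.com/health"
LATENCY_BUDGET_S = 1.5

def run_synthetic_check(url: str) -> dict:
    """Issue one synthetic request and record its latency and HTTP status."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except Exception as exc:  # network errors, timeouts, TLS failures
        return {"ok": False, "error": str(exc)}
    latency = time.monotonic() - start
    return {"ok": status == 200 and latency <= LATENCY_BUDGET_S,
            "status": status, "latency_s": round(latency, 3)}

if __name__ == "__main__":
    result = run_synthetic_check(ENDPOINT)
    print(result)  # raise an alert or ticket when result["ok"] is False
```

Run on a schedule from several network vantage points, even a simple check like this surfaces slowdowns before customers notice them.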

Cybersecurity teams pushing back against AI hype
Despite industry hype and pressure from business leaders to accelerate adoption, cybersecurity teams are reportedly taking a cautious approach to artificial intelligence (AI). This is according to a new survey from ISC2, a non-profit organisation that provides cybersecurity training and certifications. While AI is widely promoted as a game-changer for security operations, only a small proportion of practitioners have integrated these tools into their daily workflows, with many remaining hesitant due to concerns over privacy, oversight, and unintended risks.

The survey of over 1,000 cybersecurity professionals found that just 30% of cybersecurity teams are currently using AI tools in their daily operations, while 42% are still evaluating their options. Only 10% said they have no plans to adopt AI at all.

Adoption is most advanced in industrial sectors (38%), IT services (36%), and professional services (34%). Larger organisations with more than 10,000 employees are further ahead on the adoption curve, with 37% actively using AI tools. In contrast, smaller businesses - particularly those with fewer than 99 staff or between 500 and 2,499 employees - show the lowest uptake, with only 20% using AI. Among the smallest organisations, 23% say they have no plans to evaluate AI security tools at all.

Andy Ward, SVP International at Absolute Security, comments, “The ISC2 research echoes what we’re hearing from CISOs globally. There’s real enthusiasm for the potential of AI in cybersecurity, but also a growing recognition that the risks are escalating just as fast.

"Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control.

"AI offers huge promise to improve detection, speed up response times, and strengthen defences, but without robust strategies for cyber resilience and real-time visibility, organisations risk sleepwalking into deeper vulnerabilities.

"As attackers leverage AI to reduce the gap between vulnerability and exploitation, our defences must evolve with equal urgency. Now is the time for security leaders to ensure their people, processes, and technologies are aligned, or risk being left dangerously exposed.”

Arkadiy Ukolov, Co-Founder and CEO at Ulla Technology, adds, “It’s no surprise to see security professionals taking a measured, cautious approach to AI. While these tools bring undeniable efficiencies, privacy and control over sensitive data must come first.

"Too many AI solutions today operate in ways that risk exposing confidential information through third-party platforms or unsecured systems.

"For AI to be truly fit for purpose in cybersecurity, it must be built on privacy-first foundations, where data remains under the user’s control and is processed securely within an enclosed environment. Protecting sensitive information demands more than advanced tech alone; it requires ongoing staff awareness, training on AI use, and a robust infrastructure that doesn’t compromise security."

Despite this caution, where AI has been implemented, the benefits are clear: 70% of those already using AI tools report positive impacts on their cybersecurity team’s overall effectiveness. Key areas of improvement include network monitoring and intrusion detection (60%), endpoint protection and response (56%), vulnerability management (50%), threat modelling (45%), and security testing (43%).

Looking ahead, AI adoption is expected to have a mixed impact on hiring. Over half of cybersecurity professionals believe AI will reduce the need for entry-level roles by automating repetitive tasks. However, 31% anticipate that AI will create new opportunities for junior talent or demand new skill sets, helping to offset some of the projected reductions in headcount. Encouragingly, 44% said their hiring plans have not yet been affected, though the same proportion report that their organisations are actively reconsidering the skills and roles required to manage AI technologies.

Datadog partners with AWS to launch in Australia and NZ
Datadog, a monitoring and security platform for cloud applications, has launched its full range of products and services in the Amazon Web Services (AWS) Asia-Pacific (Sydney) Region. The launch adds to existing locations in North America, Asia, and Europe.

The new local availability zone allows Datadog, its customers, and its partners to store and process data locally, providing in-region capacity to meet applicable Australian privacy, security, and data storage requirements. This, according to the company, is crucial for a growing number of organisations - particularly those operating in regulated environments such as government, banking, healthcare, and higher education.

“This milestone reinforces Datadog’s commitment to supporting the region’s advanced digital capabilities - especially the Australian government’s ambition to make the country a leading digital economy,” says Yanbing Li, Chief Product Officer at Datadog. “With strong momentum across public and private sectors, our investment enhances trust in Datadog’s unified and cloud-agnostic observability and security platform, and positions us to meet the evolving needs of agencies and enterprises alike.”

Rob Thorne, Vice President for Asia-Pacific and Japan (APJ) at Datadog, adds, "Australian organisations are on track to spend nearly A$26.6 billion [£12.84 billion] on public cloud services alone in 2025.

"For organisations in highly regulated industries, it isn’t just the cloud provider that needs to have local data storage capacity; it should be all layers of the tech stack.

"This milestone reflects Datadog’s priority to support these investments. It’s the latest step in our expansion down under, and follows the continued addition of headcount to support our more than 1,100 A/NZ customers, as well as the recent appointments of Field CTO for APJ, Yadi Narayana, and Vice President of Commercial Sales for APJ, Adrian Towsey, to our leadership team.”

For more from Datadog, click here.

Netscout expands cybersecurity systems
Netscout Systems, a provider of observability, AIOps, cybersecurity, and DDoS attack protection systems, has announced Adaptive Threat Analytics, a new enhancement to its Omnis Cyber Intelligence Network Detection and Response (NDR) solution, designed to improve incident response and reduce risk. The offering aims to "enable security teams to investigate, hunt, and respond to cyber threats more rapidly."

Cybersecurity professionals face a race against time to detect and respond appropriately to cyber threats before it's too late. Alert fatigue, increasing alert volume, fragmented visibility from siloed tools, and cunning AI-enabled adversaries create a compelling need for a faster and more effective response plan. McKinsey & Company noted last year that, despite a decline in response time to cyber-related risks in recent years, organisations still take an average of 73 days to contain an incident.

In the threat detection and incident response process, comprehensive north-south and east-west network visibility plays a critical role in all phases, but none more so than the ‘Analyse’ phase between ‘Detection’ and ‘Response’. Adaptive Threat Analytics utilises continuous network packet capture, local storage of metadata and packets independent of detections, built-in packet decodes, and an ad hoc querying language, seeking to enable more rapid threat investigation and proactive hunting.

“Network environments continue to become more disparate and complex," says John Grady, Principal Analyst, Cybersecurity, Enterprise Strategy Group. "Bad actors exploit this broadened attack surface, making it difficult for security teams to respond quickly and accurately. Due to this, continuous, unified, packet-based visibility into north-south and east-west traffic has become essential for effective and efficient threat detection and incident response.”

“Security teams often lack the specific knowledge to understand exactly what happened to be able to choose the best response,” claims Jerry Mancini, Senior Director, Office of the CTO, Netscout. “Omnis Cyber Intelligence with Adaptive Threat Analytics provides ‘big picture’ data before, during, and after an event that helps teams and organisations move from triage uncertainty and tuning to specific knowledge essential for reducing the mean time to resolution.”

For more from Netscout, click here.
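To illustrate the investigation workflow this is meant to speed up (in spirit only; this is not Netscout's querying language or implementation), here is a tiny Python sketch that queries locally stored flow metadata for large outbound transfers after a given time - the sort of ad hoc pivot an analyst runs during the ‘Analyse’ phase. The records, field names, and threshold are invented for the example.

```python
from datetime import datetime

# Hypothetical, simplified flow-metadata records; an NDR platform would store
# these continuously alongside the captured packets themselves.
flows = [
    {"ts": datetime(2025, 7, 1, 9, 15), "src": "10.0.4.12", "dst": "203.0.113.9",
     "dst_port": 443, "bytes_out": 48_201_337},
    {"ts": datetime(2025, 7, 1, 9, 17), "src": "10.0.4.12", "dst": "10.0.8.5",
     "dst_port": 445, "bytes_out": 12_884},
    {"ts": datetime(2025, 7, 1, 9, 40), "src": "10.0.7.3", "dst": "203.0.113.9",
     "dst_port": 443, "bytes_out": 2_112},
]

def hunt(records, since, egress_threshold_bytes):
    """Return flows after `since` that moved more data than the threshold."""
    return [f for f in records
            if f["ts"] >= since and f["bytes_out"] >= egress_threshold_bytes]

# Pivot: which hosts moved more than 10 MB outbound after 09:00?
for f in hunt(flows, datetime(2025, 7, 1, 9, 0), 10_000_000):
    print(f["src"], "->", f["dst"], f["bytes_out"], "bytes")
```

The point is the shape of the workflow: detections trigger a question, and stored metadata and packets allow the analyst to answer it without waiting for a new capture.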

DigiCert opens registration for World Quantum Readiness Day
DigiCert, a US-based digital security company, today announced open registration for its annual World Quantum Readiness Day virtual event, which takes place on Wednesday, 10 September 2025. The company is also accepting submissions for its Quantum Readiness Awards. Both initiatives intend to spotlight the critical need for current security infrastructures to adapt to the imminent reality of quantum computing.

World Quantum Readiness Day is, according to DigiCert, a "catalyst for action, urging enterprises and governments worldwide to evaluate their preparedness for the emerging quantum era." It seeks to highlight the growing urgency to adopt post-quantum cryptography (PQC) standards and provide a "playbook" to help organisations defend against future quantum-enabled threats.

“Quantum computing has the potential to unlock transformative advancements across industries, but it also requires a fundamental rethink of our cybersecurity foundations,” argues Deepika Chauhan, Chief Product Officer at DigiCert. “World Quantum Readiness Day isn’t just a date on the calendar, it’s a starting point for a global conversation about the urgent need for collective action to secure our quantum future.”

The Quantum Readiness Awards were created to celebrate organisations that are leading the charge in quantum preparedness. Judges for the Quantum Readiness Awards include:

• Bill Newhouse, Cybersecurity Engineer & Project Lead, National Cybersecurity Center of Excellence, NIST
• Dr Ali El Kaafarani, CEO, PQShield
• Alan Shimel, CEO, TechStrong Group
• Blair Canavan, Director, Alliances PQC Portfolio, Thales
• Tim Hollebeek, Industry Technology Strategist, DigiCert

For more from DigiCert, click here.
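A common first step towards quantum readiness is a cryptographic inventory: knowing where classical, quantum-vulnerable algorithms such as RSA and ECDSA are still in use. The Python sketch below, which assumes the third-party cryptography package and uses placeholder hostnames, records which public-key algorithm a server certificate relies on. It is an illustrative starting point, not DigiCert tooling.

```python
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

HOSTS = ["example.com", "internal-portal.example.net"]  # hypothetical inventory

def classify_certificate(host: str, port: int = 443) -> str:
    """Fetch a server certificate and report its public-key algorithm."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{host}: RSA-{key.key_size} (quantum-vulnerable, plan migration)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{host}: ECDSA {key.curve.name} (quantum-vulnerable, plan migration)"
    return f"{host}: {type(key).__name__} (review manually)"

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(classify_certificate(host))
        except OSError as exc:
            print(f"{host}: unreachable ({exc})")
```

An inventory like this, extended to code-signing keys, VPNs, and internal PKI, is what makes a later migration to PQC standards plannable rather than reactive.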

Global data centres face rising climate risks, XDI report warns
Data centres are facing sharply rising risks from climate-change-driven extreme weather, according to a major new report released today by XDI (Cross Dependency Initiative), a company focused on physical climate risk analysis. The company argues that without urgent investment in emissions reduction and physical adaptation, operators could face soaring insurance premiums, growing disruption to operations, and billions in damages.

XDI’s 2025 Global Data Centre Physical Climate Risk and Adaptation Report offers a global picture of how extreme weather threatens the backbone of the digital economy. The report ranks leading data centre hubs by their exposure to eight climate hazards — flooding, tropical cyclones, forest fires, coastal inundation, and others — now and into the future, under different climate scenarios. It is based on analysis of nearly 9,000 operational and planned data centres worldwide. The report quantifies how targeted structural adaptations (changes to the physical design and construction of data centres) can dramatically improve resilience, reduce risk, and help curb escalating insurance costs.

“Data centres are the silent engine of the global economy. But as extreme weather events become more frequent and severe, the physical structures underpinning our digital world are increasingly vulnerable,” says Karl Mallon, Founder of XDI. "When so much depends on this critical infrastructure and with the sector growing exponentially, operators, investors, and governments can’t afford to be flying blind. Our analysis helps them see the global picture, identify where resilience investments are most needed, and chart pathways to reduce risk."

Key insights from the report include:

• Data centre hubs in New Jersey, Hamburg, Shanghai, Tokyo, Hong Kong, Moskva, Bangkok, and Hovestaden are all in the top 20 for climate risk by 2050, with 20-64% of data centres in these hubs projected to be at high risk of physical damage from climate change hazards by 2050.
• APAC is the fastest-growing region for data centres in the world, yet it also carries some of the greatest risk, with more than one in ten data centres already at high risk in 2025, rising to more than one in eight by 2050.
• Insurance costs for data centres globally could triple or quadruple by 2050 without decisive mitigation and adaptation.
• Targeted investments in resilience could save billions of dollars in damages annually.

The report highlights that climate risk varies dramatically by location, even between data centres in the same country or region. This kind of like-for-like, jurisdiction-spanning analysis, XDI argues, is critical for guiding smarter investment decisions in new and existing data centres - helping asset owners, operators, and investors allocate capital where it will have the greatest impact on protecting long-term value.

The report also reinforces that decarbonisation and adaptation must go hand in hand to safeguard the digital economy for the long term. Adaptation is essential, but the most resilient data centre is only as secure as the infrastructure it depends on — such as roads, water supply, and communications links — which are themselves vulnerable to climate hazards. Without ambitious and sustained investment in emissions reduction to limit the severity of climate change, no amount of structural hardening will fully protect these critical assets.
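For readers unfamiliar with multi-hazard ranking, the toy Python example below shows one simple way to combine per-hazard exposure values into a single comparative score for ordering sites. The hazard probabilities and weights are invented, and the approach is deliberately far simpler than XDI's actual modelling.

```python
# Illustrative multi-hazard scoring: weighted sum of assumed annual exposure
# probabilities per hazard. Not XDI's methodology.
HAZARDS = ["flooding", "tropical_cyclone", "forest_fire", "coastal_inundation"]

sites = {
    "Site A (riverine)": {"flooding": 0.30, "tropical_cyclone": 0.05,
                          "forest_fire": 0.02, "coastal_inundation": 0.01},
    "Site B (coastal)":  {"flooding": 0.10, "tropical_cyclone": 0.20,
                          "forest_fire": 0.01, "coastal_inundation": 0.25},
}

# Relative weights reflecting assumed damage severity per hazard.
weights = {"flooding": 0.9, "tropical_cyclone": 1.0,
           "forest_fire": 0.6, "coastal_inundation": 1.0}

def risk_score(exposure: dict) -> float:
    """Weighted sum of exposure values across all hazards."""
    return sum(weights[h] * exposure.get(h, 0.0) for h in HAZARDS)

# Rank sites from highest to lowest combined score.
for name, exposure in sorted(sites.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{name}: {risk_score(exposure):.2f}")
```

Even a crude score like this makes the report's central point visible: two sites in the same country can carry very different risk profiles, so adaptation spending should follow the exposure, not the map.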

Invicti launches new Application Security Platform
Cybersecurity company Invicti today announced the launch of what it calls its "next-gen" Application Security Platform, featuring AI-powered scanning capabilities, enhanced dynamic application security testing (DAST) performance, and full-spectrum visibility into application risk. The platform seeks to enable organisations to detect and fix vulnerabilities faster and with greater accuracy.

“Your applications are dynamic – shouldn’t your AppSec tools be too?” argues Neil Roseman, CEO of Invicti. “Attackers live in your runtime, but most security tools are stuck in static analysis. With Invicti, we’re cutting through the static with a DAST-first platform that continuously uncovers real risk in real time so security teams can take action with confidence.”

DAST improvements with AI

The latest release introduces enhancements to Invicti’s DAST engine, which, according to data provided by the company, include:

• Being 8x faster than leading competitors.
• Finding 40% more high and critical vulnerabilities.
• Delivering 99.98% accuracy with proof-based scanning.

Securing more of what matters

The company says the Invicti platform now combines AI-driven features and integrated discovery to "expose more of the real attack surface and deliver broader, more accurate security coverage." The main features include:

• LLM scanning — securing AI-generated code by identifying risks produced by large language models.
• AI-powered DAST — revealing vulnerabilities that traditionally required manual penetration testing.
• Integrated ASPM — bringing greater visibility into application posture, enabling teams to prioritise and manage risk across the SDLC.
• Enhanced API detection — identifying and testing previously hidden or unmanaged APIs, now with native support for F5, NGINX, and Cloudflare.

“A stronger DAST engine gives our customers more than better scan results; it gives them clarity,” claims Kevin Gallagher, President of Invicti. “They can see what truly matters, cut through the noise, and move faster to reduce risk. This launch continues our push to make security actionable, efficient, and focused on what’s real.”

For more from Invicti, click here.
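As a loose illustration of what "dynamic" testing means in practice (probing running software rather than analysing source code), here is a deliberately tiny Python sketch that checks whether a query parameter is reflected unencoded in a page - a very crude cousin of one class of DAST check. The target URL is a placeholder for a staging system you own and control; this is not how Invicti's engine works.

```python
import urllib.parse
import urllib.request

# Hypothetical test target; only probe systems you are authorised to test.
TARGET = "https://staging.example.com/search"
PROBE = "dastprobe123<>"

def reflected_unencoded(url: str, param: str = "q") -> bool:
    """Return True if the probe value comes back verbatim (unencoded)."""
    query = urllib.parse.urlencode({param: PROBE})
    with urllib.request.urlopen(f"{url}?{query}", timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    return PROBE in body  # verbatim reflection suggests missing output encoding

if __name__ == "__main__":
    print("Potential reflection issue:", reflected_unencoded(TARGET))
```

Real DAST engines crawl the application, authenticate, mutate thousands of payloads, and verify findings; the sketch only conveys the runtime, request-and-observe nature of the approach.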

'7% of organisations tackle vulnerabilities only when necessary'
A recent joint survey conducted by VDC Research, a technology market intelligence and consulting firm, and Kaspersky, a Russian multinational cybersecurity company, has highlighted an alarming trend: 7% of industrial organisations tackle vulnerabilities only when necessary. This leaves them exposed to unplanned downtime, production losses, and the reputational and financial damages that can result from possible cyber breaches.

The study, entitled Securing OT with Purpose-built Solutions, illuminates the shifting landscape of cybersecurity within the industrial sector. Focusing on key industries such as energy, utilities, manufacturing, and transportation, the research surveyed over 250 decision-makers to uncover trends and challenges faced in fortifying industrial environments against cyber threats.

A strong cybersecurity strategy begins with complete visibility into an organisation’s assets, allowing leaders to understand what assets need protection and to assess the highest risk areas. In environments where IT and OT systems converge, this demands more than just a comprehensive asset inventory. Organisations must implement a risk assessment methodology that is aligned with their operational realities. By establishing a clear asset baseline, organisations can engage in meaningful risk assessments that address both corporate risk criteria and the potential physical and cyber consequences of vulnerabilities.

Recent survey findings reveal a concerning trend: a significant number of organisations are not engaging in regular penetration testing or vulnerability assessments. Only 27.1% of respondents perform these critical evaluations on a monthly basis, while 48.4% conduct assessments every few months. Alarmingly, 16.7% do so only once or twice a year, and 7.4% address vulnerabilities solely as needed. This inconsistent approach could leave organisations vulnerable as they navigate an increasingly complex threat landscape.

Every software platform is inherently vulnerable to bugs, insecure code, and other weaknesses that malicious actors can exploit to compromise IT environments. For industrial companies, effective patch management is therefore crucial to mitigate these risks. That being said, studies reveal that many organisations encounter significant challenges in this area, often struggling to allocate the necessary time to pause operations for critical updates. Unnervingly, many organisations patch their OT systems only every few months or even longer, significantly heightening their risk exposure. Specifically, 31.4% apply patches monthly, while 46.9% do so every few months and 12.4% update only once or twice a year.

These challenges in maintaining effective patch management are exacerbated in OT environments, where limited device visibility, inconsistent vendor patch availability, specialised expertise requirements, and regulatory compliance add layers of complexity to the cybersecurity landscape. As IT and OT systems increasingly converge, there is a pressing need to harmonise these traditionally disparate systems, which have often relied on proprietary technologies rather than open standards.

The challenge is further intensified by the rapid proliferation of Internet of Things (IoT) devices — ranging from cameras and smart sensors for asset tracking and health monitoring to advanced climate control systems. This explosion of connected devices broadens the attack surface for industrial organisations, underscoring the urgent need for robust cybersecurity measures.
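As a small illustration of how an asset baseline can drive patch-cadence decisions, the Python sketch below flags assets whose time since their last patch exceeds an assumed policy threshold for their criticality. The asset list, criticality tiers, and thresholds are invented for the example and are not drawn from the VDC Research/Kaspersky study.

```python
from datetime import date

# Hypothetical asset baseline; a real OT inventory would come from passive
# discovery tooling and CMDB records rather than a hard-coded list.
assets = [
    {"name": "PLC-line-1", "criticality": "high", "last_patched": date(2024, 11, 2)},
    {"name": "HMI-packaging", "criticality": "medium", "last_patched": date(2025, 5, 20)},
    {"name": "Historian-srv", "criticality": "high", "last_patched": date(2025, 1, 14)},
]

# Assumed policy: maximum patch age in days per criticality tier.
MAX_PATCH_AGE_DAYS = {"high": 90, "medium": 180, "low": 365}

def overdue(records, today=date(2025, 7, 1)):
    """Return (asset name, patch age in days) for assets beyond their policy."""
    flagged = []
    for a in records:
        age = (today - a["last_patched"]).days
        if age > MAX_PATCH_AGE_DAYS[a["criticality"]]:
            flagged.append((a["name"], age))
    return flagged

print(overdue(assets))  # feed into risk assessment and maintenance-window planning
```

Tying the output to planned production pauses is what turns an inventory exercise into the regular patch cadence the survey finds most organisations are missing.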

'More than a third of UK businesses unprepared for AI risks'
Despite recognising artificial intelligence (AI) as a major threat, with nearly a third (30%) of UK organisations surveyed naming it among their top three risks, many remain significantly unprepared to manage AI risk. Recent research from CyXcel, a global cybersecurity consultancy, highlights a concerning gap: nearly a third (29%) of UK businesses surveyed have only just implemented their first AI risk strategy, and 31% don’t have any AI governance policy in place.

This critical gap exposes organisations to substantial risks, including data breaches, regulatory fines, reputational harm, and critical operational disruptions, especially as AI threats continue to grow and rapidly evolve. CyXcel’s research also shows that nearly a fifth (18%) of UK and US companies surveyed are still not prepared for AI data poisoning, a type of cyberattack that targets the training datasets of AI and machine learning (ML) models, or for a deepfake or cloning security incident (16%).

Responding to these mounting threats and geopolitical challenges, CyXcel has launched its Digital Risk Management (DRM) platform, which aims to provide businesses with insight into evolving AI risks across major sectors, regardless of business size or jurisdiction. The DRM seeks to help organisations identify risks and implement the right policies and governance to mitigate them.

Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, comments, “Organisations want to use AI but are worried about risks – especially as many do not have a policy and governance process in place. The CyXcel DRM provides clients across all sectors, especially those that have limited technological resources in house, with a robust tool to proactively manage digital risk and harness AI confidently and safely.”

Edward Lewis, CEO of CyXcel, adds, “The cybersecurity regulatory landscape is rapidly evolving and becoming more complex, especially for multinational organisations. Governments worldwide are enhancing protections for critical infrastructure and sensitive data through legislation like the EU’s Cyber Resilience Act, which mandates security measures such as automatic updates and incident reporting. Similarly, new laws are likely to arrive in the UK next year which introduce mandatory ransomware reporting and stronger regulatory powers. With new standards and controls continually emerging, staying current is essential.”


