News in Cloud Computing & Data Storage


365 Data Centers collaborates with Liberty Center One
365 Data Centers (365), a provider of network-centric colocation, network, cloud, and other managed services, has announced a collaboration with Liberty Center One, an IT delivery solutions company focused on cloud services, data protection, and high-availability environments. The collaboration aims to expand the companies' combined cloud capabilities.

Liberty Center One provides open-source-based public and private cloud services, disaster recovery resources, and colocation at the two data centres it operates. 365 currently operates 20 colocation data centres, and the relationship is set to enhance the company's colocation, public cloud, multi-tenant private cloud, and hybrid cloud offerings for enterprise clients, as well as its managed and dedicated services.

"This collaboration will have big implications for 365 as we continue to expand our offerings to the market," believes Derek Gillespie, CRO of 365 Data Centers. "When it comes to solutions for enterprise, working with Liberty Center One will enable us to enhance our current suite of cloud capabilities and hosted services to give our customers what they need today to meet the demands of their business."

Tim Mullahy, Managing Director of Liberty Center One, adds, "We're looking forward to working with 365 Data Centers to be able to truly bring the best out of one another's services through this agreement.

"Customer service has always been our number one priority, and this association will be instrumental in helping 365 reach its business goals."

For more from 365 Data Centers, click here.

Huawei named a leader for container management
Chinese multinational technology company Huawei has been positioned in the 'Leaders' quadrant of American IT research and advisory company Gartner's Magic Quadrant for Container Management 2025, recognising its capabilities in cloud-native infrastructure and container management.

The company's Huawei Cloud portfolio includes products such as CCE Turbo, CCE Autopilot, Cloud Container Instance (CCI), and the distributed cloud-native service UCS. These are designed to support large-scale containerised workloads across public, distributed, hybrid, and edge cloud environments. Huawei Cloud's offerings cover a range of use cases, including new cloud-native applications, containerisation of existing applications, AI container deployments, edge computing, and hybrid cloud scenarios. Gartner's assessment also highlighted Huawei Cloud's position in the AI container domain.

Huawei is an active contributor to the Cloud Native Computing Foundation (CNCF), having participated in 82 CNCF projects and holding more than 20 maintainer roles. It is currently the only Chinese cloud provider with a vice-chair position on the CNCF Technical Oversight Committee. The company says it has donated multiple projects to the CNCF, including KubeEdge, Karmada, Volcano, and Kuasar, and contributed other projects such as Kmesh, openGemini, and Sermant in 2024.

Use cases and deployments

Huawei Cloud container services are deployed globally in sectors such as finance, manufacturing, energy, transport, and e-commerce. Examples include:

• Starzplay, an OTT platform in the Middle East and Central Asia, used Huawei Cloud CCI to transition to a serverless architecture, handling millions of access requests during the 2024 Cricket World Cup whilst reducing resource costs by 20%.
• Ninja Van, a Singapore-based logistics provider, containerised its services using Huawei Cloud CCE, enabling uninterrupted operations during peak periods and improving order processing efficiency by 40%.
• Chilquinta Energía, a Chilean energy provider, migrated its big data platform to Huawei Cloud CCE Turbo, achieving a 90% performance improvement.
• Konga, a Nigerian e-commerce platform, adopted CCE Turbo to support millions of monthly active users.
• Meitu, a Chinese visual creation platform, uses CCE and Ascend cloud services to manage AI computing resources for model training and deployment.

Cloud Native 2.0 and AI integration

Huawei Cloud has incorporated AI into its cloud-native strategy through three main areas:

1. Cloud for AI – CCE AI clusters form the infrastructure for CloudMatrix384 supernodes, offering topology-aware scheduling, workload-aware scaling, and faster container startup for AI workloads.
2. AI for Cloud – The CCE Doer feature integrates AI into container lifecycle management, offering diagnostics, recommendations, and Q&A capabilities. Huawei reports over 200 diagnosable exception scenarios with a root cause accuracy rate above 80%.
3. Serverless containers – Products include CCE Autopilot and CCI, designed to reduce operational overhead and improve scalability. New serverless container options aim to improve computing cost-effectiveness by up to 40%.

Huawei Cloud states it will continue working with global operators to develop cloud-native technologies and broaden adoption across industries.

For more from Huawei, click here.
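As context for the workload scaling mentioned above: CCE is a managed Kubernetes offering, so day-to-day operations of this kind typically go through the standard Kubernetes API. The short Python sketch below uses the official kubernetes client against a hypothetical deployment and namespace; it is a generic illustration under that assumption, not Huawei-specific code.

```python
# Illustrative only: scaling a workload via the standard Kubernetes API,
# as exposed by managed Kubernetes services. The deployment name and
# namespace below are hypothetical.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a deployment's replica count through the Kubernetes API."""
    config.load_kube_config()  # reads the local kubeconfig for the target cluster
    apps = client.AppsV1Api()
    body = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_deployment_scale(name=name, namespace=namespace, body=body)

if __name__ == "__main__":
    scale_deployment(name="demo-app", namespace="default", replicas=5)
```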

Microchip launches Adaptec SmartRAID 4300 accelerators
Semiconductor manufacturer Microchip Technology has introduced the Adaptec SmartRAID 4300 series, a new family of NVMe RAID storage accelerators designed for use in server OEM platforms, storage systems, data centres, and enterprise environments. The series aims to support scalable, software-defined storage (SDS) solutions, particularly for high-performance workloads in AI-focused data centres.

The SmartRAID 4300 series uses a disaggregated architecture, separating software and hardware elements to improve efficiency. The accelerators integrate with Microchip's PCIe-based storage controllers to offload key RAID processes from the host CPU, while the main storage software stack runs directly on the host system. This approach allows data to flow at native PCIe speeds, while offloading parity-based functions such as XOR to dedicated accelerator hardware. According to internal testing by Microchip, the new architecture has delivered input/output (I/O) performance gains of up to seven times compared with the company's previous-generation products.

Architecture and capabilities

The SmartRAID 4300 accelerators are designed to work with Gen 4 and Gen 5 PCIe host CPUs and can support up to 32 CPU-attached x4 NVMe devices and 64 logical drives or RAID arrays. This is intended to help address data bottlenecks common in conventional in-line storage solutions by taking advantage of expanded host PCIe infrastructure. By removing the reliance on a single PCIe slot for all data traffic, Microchip aims to deliver greater performance and system scalability. Storage operations such as writes now occur directly between the host CPU and the NVMe endpoints, while the accelerator handles redundancy tasks.

Brian McCarson, Corporate Vice President of Microchip's Data Centre Solutions Business Unit, says, "Our innovative solution with separate software and hardware addresses the limitations of traditional architectures that rely on a PCIe host interface slot for all data flows.

"The SmartRAID 4300 series allows us to enhance performance, efficiency, and adaptability to better support modern enterprise infrastructure systems."

Power efficiency and security

Power optimisation features include automatic idling of processor cores and autonomous power reduction mechanisms. To help maintain data integrity and system security, the SmartRAID 4300 series incorporates features such as secure boot and update, hardware root of trust, attestation, and Self-Encrypting Drive (SED) support.

Management tools and compatibility

The series is supported by Microchip's Adaptec maxView management software, which includes an HTML5-based web interface, the ARCCONF command-line tool, and plug-ins for both local and remote management. The tools are accessible through standard desktop and mobile browsers and are designed to remain compatible with existing Adaptec SmartRAID utilities. For out-of-band management via Baseboard Management Controllers (BMCs), the series supports Distributed Management Task Force (DMTF) standards, including Platform-Level Data Model (PLDM) and Redfish Device Enablement (RDE), using the MCTP protocol.

For more from Microchip, click here.
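To illustrate the parity offload mentioned above: RAID 5-style redundancy rests on XOR parity, where the parity block is the byte-wise XOR of the data blocks in a stripe, and any single lost block can be rebuilt from the remaining blocks plus the parity. The Python sketch below is a simplified, software-only illustration of that arithmetic; it is the kind of operation such accelerators move into hardware, not Microchip's implementation.

```python
# Simplified illustration of RAID 5-style XOR parity (software only).
# Accelerators like the SmartRAID 4300 offload this arithmetic to hardware.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data blocks in one stripe
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"

# Parity is the XOR of all data blocks
parity = xor_blocks(d0, d1, d2)

# If one block (say d1) is lost, it is rebuilt from the others plus parity
rebuilt_d1 = xor_blocks(d0, d2, parity)
assert rebuilt_d1 == d1
```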

Kioxia showcases flash storage at FMS 2025
Memory manufacturer Kioxia is showcasing its latest flash storage technologies at this year's Flash Memory Summit (FMS 2025), highlighting how its memory and SSD developments are supporting the infrastructure demands of artificial intelligence (AI) applications in enterprise and data centre settings.

Among the products on display is the Kioxia LC9 Series, introduced as the industry's first 245.76 terabyte (TB) NVMe SSD. Other featured releases include the CM9 and CD9P Series SSDs, built using Kioxia's eighth-generation BiCS FLASH 3D flash memory. These devices aim to deliver a balance of performance, power efficiency, and versatility.

The company is also presenting its ninth-generation BiCS FLASH memory, which is based on 1 terabit (Tb) 3-bit/cell technology. It uses the CBA (CMOS directly Bonded to Array) architecture initially developed for the previous generation and offers gains in data read speed and energy consumption. Additional benefits include improvements in PI-LLT and SCA characteristics.

"Artificial intelligence is reforming data infrastructure, and Kioxia is advancing storage technology alongside it," says Axel Störmann, Vice President and Chief Technology Officer for Memory and SSD products at Kioxia Europe. "Our BiCS FLASH technology features a 32-die stack QLC architecture and innovative CBA technology. Delivering an industry-first 8TB per chip package, this breakthrough redefines the performance, scalability, and efficiency needed to power next-generation AI workloads."

Conference participation

Kioxia is also contributing to a range of talks and sessions throughout FMS 2025:

• Keynote presentation: "Optimise AI Infrastructure Investments with Flash Memory Technology and Storage Solutions" – Tuesday, 5 August at 11am PDT. Presented by Katsuki Matsudera, General Manager, Memory Technical Marketing Department, Kioxia Corporation; and Neville Ichhaporia, Senior Vice President and General Manager, SSD Business Unit, Kioxia America.
• Executive AI panel discussion: "Memory and Storage Scaling for AI Inferencing" – Thursday, 7 August at 11am PDT. Rory Bolt, Senior Fellow and Principal Architect, SSD Business Unit, Kioxia America, will join a panel featuring experts from NVIDIA and other companies in the memory and storage sector. The discussion will explore how to avoid configuration challenges and optimise infrastructure for AI workloads.

Kioxia will also participate in additional panel discussions and technical sessions during the event.

For more from Kioxia, click here.

The hidden cost of overuse and misuse of data storage
Most organisations are storing far more data than they use and, while keeping it "just in case" might feel like the safe option, it's a habit that can quietly chip away at budgets, performance, and even sustainability goals. At first glance, storing everything might not seem like a huge problem. But when you factor in rising energy prices and ballooning data volumes, the cracks in that strategy start to show. Over time, outdated storage practices, from legacy systems to underused cloud buckets, can become a surprisingly expensive problem.

Mike Hoy, Chief Technology Officer at UK edge infrastructure provider Pulsant, explores this growing challenge for UK businesses:

More data, more problems

Cloud computing originally promised a simple solution: elastic storage, pay-as-you-go, and endless scalability. But in practice, this flexibility has led many organisations to amass sprawling, unmanaged environments. Files are duplicated, forgotten, or simply left idle – all while costs accumulate. Many businesses also remain tied to on-premises legacy systems, either from necessity or inertia. These older infrastructures typically consume more energy, require regular maintenance, and provide limited visibility into data usage. Put unmanaged cloud and outdated on-premises systems together and you have a recipe for inefficiency.

The financial sting of bad habits

Most IT leaders understand that storing and securing data costs money. But what often gets overlooked are the hidden costs: the backup of low-value data, the power consumption of idle systems, or the surprise charges from cloud services that are not being monitored properly. Then there's the operational cost. Disorganised or poorly labelled data makes access slower and compliance tougher. It also increases security risks, especially if sensitive information is spread across uncontrolled environments. The longer these issues go unchecked, the greater the risk of a snowball effect.

Smarter storage starts with visibility

The first step towards resolving these issues isn't deleting data indiscriminately; it's understanding what's there. Carrying out an infrastructure or storage audit can shed light on what's being stored, who's using it, and whether it still serves a purpose. Once that visibility is at your fingertips, you can start making smarter decisions about what stays, what goes, and what gets moved somewhere more cost-effective. This is where a hybrid approach combining cloud, on-premises, and edge infrastructure comes into play. It lets businesses tailor their storage to the job at hand, reducing waste while improving performance.

Why edge computing is part of the solution

Edge computing isn't just a tech buzzword; it's an increasingly practical way to harness data where it's generated. By processing information at the edge, organisations can act on insights faster, reduce the volume of data stored centrally, and ease the load on core networks and systems. Edge computing technologies make this approach practical: by using regional edge data centres or local processing units, businesses can filter and process data closer to its source, sending only essential information to the cloud or core infrastructure. This reduces storage and transmission costs and helps prevent the build-up of redundant or low-value data that can silently increase expenses over time.
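As a simple illustration of the filtering pattern described above, the sketch below shows a hypothetical edge node that aggregates raw sensor readings locally and forwards only a compact summary to central storage. The field names and threshold are invented for the example, not taken from Pulsant.

```python
# Hypothetical sketch of edge-side filtering: aggregate raw readings locally
# and forward only a compact summary, rather than every raw record, to
# central storage. Thresholds and field names are illustrative.
import json
import statistics

def summarise(readings: list[float], anomaly_threshold: float = 90.0) -> dict:
    """Reduce a batch of raw readings to a small summary record."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        # keep full detail only for outliers that need central attention
        "anomalies": [r for r in readings if r > anomaly_threshold],
    }

if __name__ == "__main__":
    raw = [71.2, 69.8, 93.5, 70.1, 68.4]  # e.g. one minute of sensor data
    summary = summarise(raw)
    # In a real deployment this summary would be sent to the cloud or core
    # platform; the raw batch stays (or is discarded) at the edge.
    print(json.dumps(summary))
```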
This approach is particularly valuable in data-heavy industries such as healthcare, logistics, and manufacturing, where large volumes of real-time information are produced daily. Processing data locally enables businesses to store less, move less, and act faster.

The wider payoff

Cutting storage costs is an obvious benefit, but it's far from the only one. A smarter, edge-driven strategy helps businesses build a more efficient, resilient, and sustainable digital infrastructure:

• Lower energy usage — By processing and filtering data locally, organisations reduce the energy demands of transmitting and storing large volumes centrally, supporting both carbon reduction targets and lower utility costs. As sustainability reporting becomes more critical, this can also help meet Scope 2 emissions goals.
• Faster access to critical data — When the most important data is processed closer to its source, teams can respond in real time, meaning improved decision-making, customer experience, and operational agility.
• Greater resilience and reliability — Local processing means organisations are less dependent on central networks. If there's an outage or disruption, edge infrastructure can provide continuity, keeping key services running when they're needed most.
• Improved compliance and governance — By keeping sensitive data within regional boundaries and only transmitting what's necessary, businesses can simplify compliance with regulations such as GDPR, while reducing the risk of data sprawl and shadow IT.

Ultimately, it's about creating a storage and data environment that's fit for modern demands: fast, flexible, efficient, and aligned with wider business priorities.

Don't let storage be an afterthought

Data is valuable, but only when it's well managed. When storage becomes a case of "out of sight, out of mind", businesses end up paying more for less, and all they have to show for it is ageing infrastructure and bloated cloud bills. A little housekeeping goes a long way. By adopting modern infrastructure strategies, including edge computing and hybrid storage models, businesses can transform data storage from a hidden cost centre into a source of operational efficiency and competitive advantage.

For more from Pulsant, click here.

LINX, Megaport partner to expand cloud connectivity for London
The London Internet Exchange (LINX), an Internet Exchange Point (IXP) operator with digital infrastructure across the UK, Africa, and the United States, has today announced a partnership with global Network as a Service (NaaS) provider Megaport to enhance cloud connectivity options for its members. The collaboration expands the LINX Cloud Connect service, enabling access to a broader suite of cloud platforms including Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Oracle Cloud, and others.

Through this partnership, LINX members in London can now use Megaport's global infrastructure to connect to cloud service providers directly from their existing multi-service port. This means one invoice, one port, and one point of contact for engineering support, aiming to streamline operations and reduce complexity for network operators.

"This partnership with Megaport is a significant step forward in our mission to ensure we are best serving our UK members," says Tyrone Turner, Product Manager at LINX. "Our community now have even more choice and control when it comes to low-latency peering and cloud connectivity, all through a single interface."

Megaport is a LINX member network and ConneXions Reseller Partner.

"This partnership gives LINX members a faster, simpler path to cloud," claims Emmanuel Sevray, VP of Sales, EMEA at Megaport. "By combining Megaport's global infrastructure and broad cloud ecosystem with LINX's interconnection services, UK networks can connect to leading cloud providers with less complexity, accessing the services they need, when and where they need them."

LINX Cloud Connect is designed to simplify cloud adoption. With Megaport's integration, the company hopes LINX members will gain greater flexibility and reach, empowering them to build hybrid and multi-cloud environments.

For more from LINX, click here.

Datadog partners with AWS to launch in Australia and NZ
Datadog, a monitoring and security platform for cloud applications, has launched its full range of products and services in the Amazon Web Services (AWS) Asia-Pacific (Sydney) Region. The launch adds to existing locations in North America, Asia, and Europe.

The new local availability enables Datadog, its customers, and its partners to store and process data locally, with in-region capacity to meet applicable Australian privacy, security, and data storage requirements. This, according to the company, is crucial for an increasing number of organisations, particularly those operating in regulated environments such as government, banking, healthcare, and higher education.

"This milestone reinforces Datadog's commitment to supporting the region's advanced digital capabilities, especially the Australian government's ambition to make the country a leading digital economy," says Yanbing Li, Chief Product Officer at Datadog. "With strong momentum across public and private sectors, our investment enhances trust in Datadog's unified and cloud-agnostic observability and security platform, and positions us to meet the evolving needs of agencies and enterprises alike."

Rob Thorne, Vice President for Asia-Pacific and Japan (APJ) at Datadog, adds, "Australian organisations are on track to spend nearly A$26.6 billion [£12.84 billion] on public cloud services alone in 2025.

"For organisations in highly regulated industries, it isn't just the cloud provider that needs to have local data storage capacity; it should be all layers of the tech stack.

"This milestone reflects Datadog's priority to support these investments. It's the latest step in our expansion down under, and follows the continued addition of headcount to support our more than 1,100 A/NZ customers, as well as the recent appointments of Field CTO for APJ, Yadi Narayana, and Vice President of Commercial Sales for APJ, Adrian Towsey, to our leadership team."

For more from Datadog, click here.

Vawlt 3.2 'supercloud' storage platform launches
Portuguese cloud storage company Vawlt Technologies has unveiled Vawlt 3.2, the newest release of its 'supercloud' data storage platform. The update introduces live, "zero-downtime" cloud switching, expands native coverage to three additional European clouds, brings full MinIO-powered private-cloud integration, and delivers engine optimisations that reportedly cut resource consumption while boosting throughput by up to 40x on high-demand workloads.

New features in Vawlt 3.2 include:

● Switching clouds with "no downtime" – The ability to replace underlying clouds on an active Vawlt volume, with data migrating in the background while applications keep running.
● Three new EU clouds – Native support for IONOS Cloud, Scaleway, and Impossible Cloud lets organisations build fully EU-resident or mixed-region supercloud volumes.
● MinIO private-cloud integration – On-prem or partner-hosted MinIO clusters now appear in the Vawlt console alongside public clouds for unified policy and data-plane control.
● Performance and efficiency boost – The re-engineered storage engine, according to the company, "slashes CPU/RAM needs and delivers up to 40x faster bulk-data transfers on selected workloads."

This release marks a step in Vawlt's mission to keep data ownership with the organisations that create it: cloud switching seeks to dissolve vendor lock-in, the expanded roster of EU providers anchors data inside chosen jurisdictions, and private-cloud onboarding extends sovereignty to infrastructure businesses already own. As the EU Data Act's portability requirements come into force on 12 September 2025, Vawlt 3.2, the company claims, "equips enterprises to meet the letter of the law while granting operational independence to navigate supply-chain risk [and] shifting regulations."

Ricardo Mendes, CEO and Co-Founder of Vawlt, comments, "True digital freedom is the ability to be independent of cloud providers — including the right to pick the right cloud, or clouds, at any point in time, without fear of downtime, lock-in, or bill shock. Vawlt 3.2 turns that vision into a push-button reality. Whether you're preparing for the EU Data Act's portability rules or safeguarding your business against supply-chain risk, you're now fully in control of where your data lives and how fast it moves."
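For readers unfamiliar with MinIO, it exposes an S3-compatible object storage API, which is what makes on-prem clusters usable alongside public clouds in platforms like the one described above. The sketch below uses the official MinIO Python client against a hypothetical endpoint with placeholder credentials; it illustrates the kind of access involved and is not Vawlt's integration code.

```python
# Illustrative only: talking to an S3-compatible MinIO cluster with the
# official "minio" Python client. The endpoint, credentials, and bucket
# names are placeholders; this is not Vawlt's integration code.
from minio import Minio

client = Minio(
    "minio.example.internal:9000",   # hypothetical on-prem cluster
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=True,
)

bucket = "backups"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object; downstream platforms treat this the
# same way they treat an object in a public-cloud store.
client.fput_object(bucket, "2025-08-01/archive.tar.gz", "archive.tar.gz")

# List what is stored in the bucket
for obj in client.list_objects(bucket, recursive=True):
    print(obj.object_name, obj.size)
```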

Macquarie and CareSuper join forces
Macquarie Cloud Services, an Australian cloud services provider for business and government and part of Macquarie Technology Group, has been appointed by CareSuper to lead a major cloud transformation programme, marking a high-profile shift away from VMware Cloud on AWS towards a more modern Azure environment. The agreement sees Macquarie Cloud migrate and recalibrate CareSuper's VMware Cloud on AWS (VMC) environment – made up of hundreds of applications and petabytes of data – into a Managed Edge Azure Local offering.

"Our goal is to optimise every part of our operation so we can deliver long-term value to our members," states Simon Reiter, Chief Technology Officer at CareSuper. "Cloud decisions must serve that mission – not just today, but five years from now. Macquarie Cloud Services stood out as a partner who could deliver both the technical transformation and the ongoing managed service maturity required."

Macquarie's Azure-led approach consolidates CareSuper's technology estate into a unified platform. The engagement includes migrating workloads from VMware Cloud on AWS into a new Azure landing zone, modernising databases, and implementing platform-as-a-service (PaaS) offerings with the aim of streamlining performance and efficiency for the fund.

"We're seeing a wave of repatriation from AWS," comments Naran McClung, Head of Azure at Macquarie Cloud Services. "For many organisations, rising costs and architectural limitations have made them re-evaluate. But it's not just about moving away, it's about moving forward. That's where our team adds value."

Macquarie has assumed the risk of the migration project, delivering the transformation with zero upfront cost to CareSuper and full accountability for outcomes.

"What we've found in partnering with Macquarie Cloud Services is a team of experts who can transform, refactor, migrate, and ensure we get the best operational value from our cloud environment. That the company backs itself by taking on the cost risk of the migration phase is telling of its capabilities and commitment to putting customers first," continues Simon Reiter.

Four years as an Azure Expert MSP

Macquarie Cloud Services is one of only a handful of partners across Asia-Pacific to retain its Microsoft Azure Expert Managed Services Provider (MSP) status for four consecutive years.

"We've seen our Azure team and business expand by about 20% every year since we set it up in 2020," claims Naran McClung. "Becoming an Azure Expert MSP is not a lifetime achievement; it takes incredible dedication, assessments requiring dozens of the team to come together, and – most importantly – an ability to deliver value to customers time and time again."

For more from Macquarie, click here.

Nasuni achieves AWS Energy & Utilities Competency status
Nasuni, a unified file data platform company, has announced that it has achieved Amazon Web Services (AWS) Energy & Utilities Competency status. This designation recognises that Nasuni has demonstrated expertise in helping customers leverage AWS cloud technology to transform complex systems and accelerate the transition to a sustainable energy and utilities future.

To receive the designation, AWS Partners undergo a rigorous technical validation process, including a customer reference audit. The AWS Energy & Utilities Competency helps energy and utilities customers more easily identify skilled partners to accelerate their digital transformations.

"Our strategic collaboration with AWS is redefining how energy companies harness seismic data," comments Michael Sotnick, SVP of Business & Corporate Development at Nasuni. "Together, we're removing traditional infrastructure barriers and unlocking faster, smarter subsurface decisions. By integrating Nasuni's global unified file data platform with the power of AWS solutions including Amazon Simple Storage Service (S3), Amazon Bedrock, and Amazon Q, we're helping upstream operators accelerate time to first oil, boost capital efficiency, and prepare for the next era of data-driven exploration."

AWS says it enables scalable, flexible, and cost-effective solutions for customers ranging from startups to global enterprises. To support the integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify AWS Partners with industry experience and expertise.

By bringing together Nasuni's cloud-native file data platform with Amazon S3 and other AWS services, the company claims energy customers can eliminate data silos, reduce interpretation cycle times, and unlock the value of seismic data for AI-driven exploration.

For more from Nasuni, click here.


