Features


Kioxia showcases flash storage at FMS 2025
Memory manufacturer Kioxia is showcasing its latest flash storage technologies at this year’s Flash Memory Summit (FMS 2025), highlighting how its memory and SSD developments are supporting the infrastructure demands of artificial intelligence (AI) applications in enterprise and data centre settings.

Among the products on display is the Kioxia LC9 Series, introduced as the industry’s first 245.76 terabyte (TB) NVMe SSD. Other featured releases include the CM9 and CD9P Series SSDs, built using Kioxia’s eighth-generation BiCS FLASH 3D flash memory. These devices aim to deliver a balance of performance, power efficiency, and versatility.

The company is also presenting its ninth-generation BiCS FLASH memory, which is based on 1 terabit (Tb) 3-bit/cell technology. It uses the CBA (CMOS directly Bonded to Array) architecture initially developed for the previous generation and offers gains in data read speed and energy consumption. Additional benefits include improvements in PI-LLT and SCA characteristics.

“Artificial intelligence is reforming data infrastructure, and Kioxia is advancing storage technology alongside it,” says Axel Störmann, Vice President and Chief Technology Officer for Memory and SSD products at Kioxia Europe. “Our BiCS FLASH technology features a 32-die stack QLC architecture and innovative CBA technology. Delivering an industry-first 8TB per chip package, this breakthrough redefines the performance, scalability, and efficiency needed to power next-generation AI workloads.”

Conference participation

Kioxia is also contributing to a range of talks and sessions throughout FMS 2025:

Keynote presentation: “Optimise AI Infrastructure Investments with Flash Memory Technology and Storage Solutions”
Tuesday, 5 August at 11am PDT
Presented by Katsuki Matsudera, General Manager, Memory Technical Marketing Department, Kioxia Corporation; and Neville Ichhaporia, Senior Vice President and General Manager, SSD Business Unit, Kioxia America.

Executive AI panel discussion: “Memory and Storage Scaling for AI Inferencing”
Thursday, 7 August at 11am PDT
Rory Bolt, Senior Fellow and Principal Architect, SSD Business Unit, Kioxia America, will join a panel featuring experts from NVIDIA and other companies in the memory and storage sector. The discussion will explore how to avoid configuration challenges and optimise infrastructure for AI workloads.

Kioxia will also participate in additional panel discussions and technical sessions during the event.

For more from Kioxia, click here.

The hidden cost of overuse and misuse of data storage
Most organisations are storing far more data than they use and, while keeping it “just in case” might feel like the safe option, it’s a habit that can quietly chip away at budgets, performance, and even sustainability goals. At first glance, storing everything might not seem like a huge problem. But when you factor in rising energy prices and ballooning data volumes, the cracks in that strategy start to show. Over time, outdated storage practices, from legacy systems to underused cloud buckets, can become a surprisingly expensive problem.

Mike Hoy, Chief Technology Officer at UK edge infrastructure provider Pulsant, explores this growing challenge for UK businesses:

More data, more problems

Cloud computing originally promised a simple solution: elastic storage, pay-as-you-go, and endless scalability. But in practice, this flexibility has led many organisations to amass sprawling, unmanaged environments. Files are duplicated, forgotten, or simply left idle – all while costs accumulate.

Many businesses also remain tied to on-premises legacy systems, either from necessity or inertia. These older infrastructures typically consume more energy, require regular maintenance, and provide limited visibility into data usage. Put unmanaged cloud plus outdated on-prem systems together and you’ve got a recipe for inefficiency.

The financial sting of bad habits

Most IT leaders understand that storing and securing data costs money. But what often gets overlooked are the hidden costs: the backup of low-value data, the power consumption of idle systems, or the surprise charges that come from cloud services which are not being monitored properly.

Then there’s the operational cost. Disorganised or poorly labelled data makes access slower and compliance tougher. It also increases security risks, especially if sensitive information is spread across uncontrolled environments. The longer these issues go unchecked, the greater the danger of a snowball effect.
Smarter storage starts with visibility

The first step towards resolving these issues isn’t deleting data indiscriminately; it’s understanding what’s there. Carrying out an infrastructure or storage audit can shed light on what’s being stored, who’s using it, and whether it still serves a purpose. Once you have that visibility, you can start making smarter decisions about what stays, what goes, and what gets moved somewhere more cost-effective.

This is where a hybrid approach, combining cloud, on-premises, and edge infrastructure, comes into play. It lets businesses tailor their storage to the job at hand, reducing waste while improving performance.

Why edge computing is part of the solution

Edge computing isn’t just a tech buzzword; it’s an increasingly practical way to harness data where it’s generated. By processing information at the edge, organisations can act on insights faster, reduce the volume of data stored centrally, and ease the load on core networks and systems.

By using regional edge data centres or local processing units, businesses can filter and process data closer to its source, sending only essential information to the cloud or core infrastructure. This reduces storage and transmission costs and helps prevent the build-up of redundant or low-value data that can silently increase expenses over time.

This approach is particularly valuable in data-heavy industries such as healthcare, logistics, and manufacturing, where large volumes of real-time information are produced daily. Processing data locally enables businesses to store less, move less, and act faster.

The wider payoff

Cutting storage costs is an obvious benefit, but it’s far from the only one.
A smarter, edge-driven strategy helps businesses build a more efficient, resilient, and sustainable digital infrastructure:

• Lower energy usage — By processing and filtering data locally, organisations reduce the energy demands of transmitting and storing large volumes centrally, supporting both carbon reduction targets and lower utility costs. As sustainability reporting becomes more critical, this can also help meet Scope 2 emissions goals.

• Faster access to critical data — When the most important data is processed closer to its source, teams can respond in real time, meaning improved decision-making, customer experience, and operational agility.

• Greater resilience and reliability — Local processing means organisations are less dependent on central networks. If there’s an outage or disruption, edge infrastructure can provide continuity, keeping key services running when they’re needed most.

• Improved compliance and governance — By keeping sensitive data within regional boundaries and only transmitting what’s necessary, businesses can simplify compliance with regulations such as GDPR, while reducing the risk of data sprawl and shadow IT.

Ultimately, it’s about creating a storage and data environment that’s fit for modern demands. It needs to be fast, flexible, efficient, and aligned with wider business priorities.

Don’t let storage be an afterthought

Data is valuable, but only when it’s well managed. When storage becomes a case of “out of sight, out of mind,” businesses end up paying more for less. And what do they have to show for it? Ageing infrastructure and bloated cloud bills.

A little housekeeping goes a long way. By adopting modern infrastructure strategies, including edge computing and hybrid storage models, businesses can transform data storage from a hidden cost centre into a source of operational efficiency and competitive advantage.

For more from Pulsant, click here.
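The filter-at-the-edge pattern described in this piece, processing a batch locally and forwarding only a compact summary plus the exceptional records, can be sketched in a few lines of Python. This is an illustrative sketch only, not Pulsant's implementation; the function name, payload shape, and anomaly threshold are all assumptions.

```python
from statistics import mean

def filter_at_edge(readings, threshold):
    """Summarise a batch of telemetry locally, forwarding only an
    aggregate record plus any out-of-range samples, instead of the
    full raw batch."""
    anomalies = [r for r in readings if abs(r["value"]) > threshold]
    summary = {
        "count": len(readings),
        "mean": round(mean(r["value"] for r in readings), 2),
    }
    return {"summary": summary, "anomalies": anomalies}

# Raw telemetry stays at the edge; only the summary and the two
# anomalous samples travel to the cloud or core infrastructure.
raw = [{"sensor": "s1", "value": v} for v in (0.4, 0.6, 9.1, 0.5, -8.7, 0.3)]
payload = filter_at_edge(raw, threshold=5.0)
print(len(raw), "raw readings ->", 1 + len(payload["anomalies"]), "records forwarded")
```

The saving scales with batch size: the payload stays roughly constant while the raw batch grows, which is where the reduction in central storage and transmission cost comes from.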

LINX, Megaport partner to expand cloud connectivity for London
The London Internet Exchange (LINX), an Internet Exchange Point (IXP) operator of digital infrastructure across the UK, Africa, and the United States, has today announced a partnership with global Network as a Service (NaaS) provider Megaport to enhance cloud connectivity options for its members.

This collaboration brings an expansion to the LINX Cloud Connect service, enabling access to a broader suite of cloud platforms including Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Oracle Cloud, and others. Through this partnership, LINX members in London can now use Megaport’s global infrastructure to connect to cloud service providers directly from their existing multi-service port. This means one invoice, one port, and one point of contact for engineering support, aiming to streamline operations and reduce complexity for network operators.

“This partnership with Megaport is a significant step forward in our mission to ensure we are best serving our UK members,” says Tyrone Turner, Product Manager at LINX. “Our community now have even more choice and control when it comes to low-latency peering and cloud connectivity, all through a single interface.”

Megaport is a LINX member network and ConneXions Reseller Partner.

“This partnership gives LINX members a faster, simpler path to cloud,” claims Emmanuel Sevray, VP of Sales, EMEA at Megaport. “By combining Megaport’s global infrastructure and broad cloud ecosystem with LINX’s interconnection services, UK networks can connect to leading cloud providers with less complexity, accessing the services they need, when and where they need them.”

LINX Cloud Connect is designed to simplify cloud adoption. With Megaport’s integration, the company hopes LINX members will gain greater flexibility and reach, empowering them to build hybrid and multi-cloud environments.

For more from LINX, click here.

Datadog partners with AWS to launch in Australia and NZ
Datadog, a monitoring and security platform for cloud applications, has just launched its full range of products and services in the Amazon Web Services (AWS) Asia-Pacific (Sydney) Region. The launch adds to existing locations in North America, Asia, and Europe.

The new local availability zone enables Datadog, its customers, and its partners to store and process data locally, providing in-region capacity to meet applicable Australian privacy, security, and data storage requirements. This, according to the company, is crucial for an increasing number of organisations - particularly those operating in regulated environments such as government, banking, healthcare, and higher education.

“This milestone reinforces Datadog’s commitment to supporting the region’s advanced digital capabilities - especially the Australian government’s ambition to make the country a leading digital economy,” says Yanbing Li, Chief Product Officer at Datadog. “With strong momentum across public and private sectors, our investment enhances trust in Datadog’s unified and cloud-agnostic observability and security platform, and positions us to meet the evolving needs of agencies and enterprises alike.”

Rob Thorne, Vice President for Asia-Pacific and Japan (APJ) at Datadog, adds, "Australian organisations are on track to spend nearly A$26.6 billion [£12.84 billion] on public cloud services alone in 2025.

"For organisations in highly regulated industries, it isn’t just the cloud provider that needs to have local data storage capacity; it should be all layers of the tech stack.

"This milestone reflects Datadog’s priority to support these investments. It’s the latest step in our expansion down under, and follows the continued addition of headcount to support our more than 1,100 A/NZ customers, as well as the recent appointments of Field CTO for APJ, Yadi Narayana, and Vice President of Commercial Sales for APJ, Adrian Towsey, to our leadership team.”

For more from Datadog, click here.

Vawlt 3.2 'supercloud' storage platform launches
Portuguese cloud storage platform Vawlt Technologies has just unveiled Vawlt 3.2, the newest release of its 'supercloud' data storage platform. The update introduces live, "zero-downtime" cloud switching, expands native coverage to three additional European clouds, brings full MinIO-powered private-cloud integration, and delivers engine optimisations that reportedly cut resource consumption while boosting throughput by up to 40x on high-demand workloads.

New features in Vawlt 3.2 include:

● Switching clouds with "no downtime" – The ability to replace underlying clouds on an active Vawlt volume, with data migrating in the background while applications keep running.
● Three new EU clouds – Native support for IONOS Cloud, Scaleway, and Impossible Cloud lets organisations build fully EU-resident or mixed-region supercloud volumes.
● MinIO private-cloud integration – On-prem or partner-hosted MinIO clusters now appear in the Vawlt console alongside public clouds for unified policy and data-plane control.
● Performance and efficiency boost – The re-engineered storage engine, according to the company, "slashes CPU/RAM needs and delivers up to 40x faster bulk-data transfers on selected workloads."

This release marks a step in Vawlt’s mission to keep data ownership with the organisations that create it. Cloud switching seeks to dissolve vendor lock-in, the expanded roster of EU providers anchors data inside chosen jurisdictions, and private-cloud onboarding extends sovereignty to infrastructure businesses already own.

As the EU Data Act’s portability requirements come into force on 12 September 2025, Vawlt 3.2 - the company claims - "equips enterprises to meet the letter of the law while granting operational independence to navigate supply-chain risk [and] shifting regulations."
Ricardo Mendes, CEO & Co-Founder, Vawlt, comments, “True digital freedom is the ability to be independent of cloud providers — including the right to pick the right cloud, or clouds, at any point in time, without fear of downtime, lock-in, or bill shock. Vawlt 3.2 turns that vision into a push-button reality. Whether you’re preparing for the EU Data Act’s portability rules or safeguarding your business against supply-chain risk, you’re now fully in control of where your data lives and how fast it moves.”
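Live cloud switching of the kind described above is typically built on a dual-write-and-background-copy pattern: while the migration runs, new writes land on both the old and new stores, existing objects are copied across, and the volume cuts reads over only once the copy completes. The toy Python below sketches that general pattern with in-memory dicts standing in for clouds; it is not Vawlt's actual mechanism, and the `Volume` class and its methods are hypothetical names.

```python
class Volume:
    """Toy storage volume that can swap its backing store without
    pausing I/O, using dual-write plus background copy."""

    def __init__(self, store):
        self.active = store    # store currently serving reads
        self.pending = None    # store being migrated to, if any

    def write(self, key, value):
        self.active[key] = value
        if self.pending is not None:   # dual-write during a migration
            self.pending[key] = value

    def read(self, key):
        return self.active[key]

    def switch_to(self, new_store):
        self.pending = new_store
        # Background copy of existing objects (synchronous in this toy).
        for key, value in list(self.active.items()):
            new_store[key] = value
        # Cutover: reads and writes now target the new store only.
        self.active, self.pending = new_store, None

old_cloud, new_cloud = {}, {}
vol = Volume(old_cloud)
vol.write("invoice.pdf", b"v1")
vol.switch_to(new_cloud)        # applications keep reading and writing
vol.write("report.csv", b"v2")
print(sorted(new_cloud))
```

In a real system the copy runs asynchronously while the dual-write path keeps the two stores consistent, which is what lets applications stay online throughout the switch.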

Macquarie and CareSuper join forces
Macquarie Cloud Services, an Australian cloud services provider for business and government and part of Macquarie Technology Group, has been appointed by CareSuper to lead a major cloud transformation program, marking a high-profile shift away from VMware Cloud on AWS and towards a more modern Azure environment.

The agreement sees Macquarie Cloud Services migrate and recalibrate CareSuper’s VMware Cloud on AWS (VMC) environment – made up of hundreds of applications and petabytes of data – into a Managed Edge Azure Local offering.

“Our goal is to optimise every part of our operation so we can deliver long-term value to our members,” states Simon Reiter, Chief Technology Officer at CareSuper. “Cloud decisions must serve that mission – not just today, but five years from now. Macquarie Cloud Services stood out as a partner who could deliver both the technical transformation and the ongoing managed service maturity required.”

Macquarie’s Azure-led approach consolidates CareSuper’s technology estate into a unified platform. The engagement includes migrating workloads from VMware Cloud on AWS into a new Azure landing zone, modernising databases, and implementing platform-as-a-service (PaaS) offerings with the aim of streamlining performance and efficiency for the fund.

“We’re seeing a wave of repatriation from AWS,” comments Naran McClung, Head of Azure at Macquarie Cloud Services. “For many organisations, rising costs and architectural limitations have made them re-evaluate. But it’s not just about moving away, it’s about moving forward. That’s where our team adds value.”

Macquarie has assumed the risk of the migration project, delivering the transformation with zero upfront cost to CareSuper and full accountability for outcomes.

“What we’ve found in partnering with Macquarie Cloud Services is a team of experts who can transform, refactor, migrate, and ensure we get the best operational value from our cloud environment. That the company backs itself by taking on the cost risk of the migration phase is telling of its capabilities and commitment to putting customers first,” continues Simon Reiter.

Four years as an Azure Expert MSP

Macquarie Cloud Services is one of only a handful of partners across Asia Pacific to retain its Microsoft Azure Expert Managed Services Provider (MSP) status for four consecutive years.

“We’ve seen our Azure team and business expand by about 20% every year since we set it up in 2020,” claims Naran McClung. “Becoming an Azure Expert MSP is not a lifetime achievement; it takes incredible dedication, assessments requiring dozens of the team to come together, and – most importantly – an ability to deliver value to customers time and time again.”

For more from Macquarie, click here.

Nasuni achieves AWS Energy & Utilities Competency status
Nasuni, a unified file data platform company, has announced that it has achieved Amazon Web Services (AWS) Energy & Utilities Competency status. This designation recognises that Nasuni has demonstrated expertise in helping customers leverage AWS cloud technology to transform complex systems and accelerate the transition to a sustainable energy and utilities future.

To receive the designation, AWS Partners undergo a rigorous technical validation process, including a customer reference audit. The AWS Energy & Utilities Competency provides energy and utilities customers the ability to more easily select skilled partners to help accelerate their digital transformations.

"Our strategic collaboration with AWS is redefining how energy companies harness seismic data,” comments Michael Sotnick, SVP of Business & Corporate Development at Nasuni. “Together, we’re removing traditional infrastructure barriers and unlocking faster, smarter subsurface decisions. By integrating Nasuni’s global unified file data platform with the power of AWS solutions including Amazon Simple Storage Service (S3), Amazon Bedrock, and Amazon Q, we’re helping upstream operators accelerate time to first oil, boost capital efficiency, and prepare for the next era of data-driven exploration."

AWS says it is enabling scalable, flexible, and cost-effective solutions from startups to global enterprises. To support the integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify AWS Partners with industry experience and expertise.

By bringing together Nasuni’s cloud-native file data platform with Amazon S3 and other AWS services, the company claims energy customers could eliminate data silos, reduce interpretation cycle times, and unlock the value of seismic data for AI-driven exploration.

For more from Nasuni, click here.

UAE-IX now powered by DE-CIX
DE-CIX, an Internet Exchange (IX) operator, and partner Datamena, Du’s carrier-neutral data centre and connectivity platform based in the UAE and serving the Middle East and Africa (MEA) region, today announced the upgrade of the UAE-IX to offer 400 GE access. Connected customer capacity on the exchange has soared over the last year, growing by two terabits, or 30%, in twelve months.

The UAE-IX is the largest IX in the Middle East, based on both connected networks and peak traffic, and is now the only IX in the region to offer 400 GE access. Established in 2012 and operated by DE-CIX on behalf of partner Datamena, the IX today has over six terabits of connected capacity and connects close to 110 internet service providers (ISPs), carriers, cloud, content, and application providers, and global enterprises. It also provides enterprise-grade interconnection services, such as a Cloud Exchange, cloud routing, and application connectivity like the Microsoft Azure Peering Service (MAPS).

“The UAE-IX today stands as a global internet hub, bringing together the network operators, content, applications, and cloud services to serve the entire GCC region with resilient and low latency connectivity,” claims Ivo Ivanov, CEO of DE-CIX. “This upgrade further reinforces the importance of the UAE-IX, now ready to serve the rising demand for everything digital. The excellent collaboration with our partner Datamena has enabled the UAE-IX powered by DE-CIX to shine as the most important aggregation point for network interconnection in the Middle East. I look forward to a bright future working together for the next decade of digital development.”

Karim Benkirane, Chief Commercial Officer, Du, comments, "We are proud to partner with DE-CIX in leading digital growth in the Middle East with the upgrade of the UAE-IX powered by DE-CIX to 400 GE access. It is our vision to foster a seamlessly interconnected landscape where businesses and consumers alike can benefit from unparalleled internet exchange capabilities, heightened performance, and robust security. This milestone aligns with our commitment to maintaining the UAE-IX as a pioneer in interconnection and marks a transformative leap for regional digital ecosystems."

DE-CIX has been active in the Middle East for over a decade and now operates IXs in multiple countries in the region: Iraq, Jordan, Qatar, the UAE, and Turkey. The UAE-IX in Dubai is operated under the DE-CIX as a Service (DaaS) model. The DaaS program includes a set of services – such as installation, maintenance, provisioning, and marketing and sales support – designed for carriers, data centre operators, or other third parties to create their own IX and interconnection platform operated by DE-CIX.

For more from DE-CIX, click here.

AMD processors now power Nokia cloud infrastructure
AMD, an American multinational semiconductor company specialising in computer processors and graphics cards, has announced that Nokia has selected 5th Gen AMD EPYC processors to power the Nokia Cloud Platform.

“Telecom operators are looking for infrastructure solutions that combine performance, scalability, and power efficiency to manage the growing complexity and scale of 5G networks,” says Dan McNamara, Senior Vice President and General Manager, Server Business, AMD. “Working together with Nokia, we’re using the leadership performance and energy efficiency of the 5th Gen AMD EPYC processors to help our customers build and operate high-performance and efficient networks.”

“This expanded collaboration between Nokia and AMD brings a multitude of benefits and underscores Nokia's commitment to innovation through diverse chip partnerships in 5G network infrastructure. The new 5th Gen AMD EPYC processors offer high performance and impressive energy efficiency, enabling Nokia to meet the demanding needs of its 5G customers while contributing to the industry's sustainability goals,” adds Kal De, Senior Vice President, Product and Engineering, Cloud and Network Services, Nokia.

The processors will be deployed within the Nokia Cloud Platform, which supports the containerised workloads foundational to 5G core, edge, and enterprise applications. By integrating the AMD EPYC 9005 Series processors into the Nokia Cloud Platform, Nokia hopes to deliver strong performance per watt and meet growing data demands whilst minimising environmental impact.

For more from AMD, click here.

Hitachi Vantara launches Virtual Storage Platform 360
Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, today announced the launch of Virtual Storage Platform 360 (VSP 360), a unified management software solution designed to help customers simplify data infrastructure management operations, improve decision-making, and streamline the delivery of data services.

With support for block, file, object, and software-defined storage, VSP 360 consolidates multiple management tools and aims to enable IT teams, including those with limited storage expertise, to more efficiently control hybrid cloud deployments, gain AIOps predictive insights, and simplify data lifecycle governance.

Organisations today are struggling to manage sprawling data environments spread across disparate storage systems, fragmented data silos, and complex application workloads, all while grappling with overextended IT teams and rising demands for compliance and AI readiness. A recent survey showed AI has led to a dramatic increase in the amount of data storage that businesses require, with the amount of data expected to increase by 122% by 2026. The survey also revealed that many IT leaders are being forced to implement AI before their data infrastructure is ready to handle it, with many embarking on a journey of experimentation, hoping to find additional ways to recover some of the cost of their investments.

VSP 360 seeks to address these obstacles by integrating data management tools across enterprise storage to monitor key performance indicators, including storage capacity utilisation and overall system health, helping to deliver optimal performance and efficient resource management. It intends to improve end-to-end visibility, leveraging AIOps observability to break down data silos, as well as streamlining the deployment of VSP One data services.

“VSP 360 represents a bold step forward in unifying the way enterprises manage their data,” says Octavian Tanase, Chief Product Officer, Hitachi Vantara. “It’s not just a new management tool - it’s a strategic approach to modern data infrastructure that gives IT teams complete command over their data, wherever it resides. With built-in AI and automation and by making it available via SaaS, Private, or via your mobile phone, we're empowering our customers to make faster, smarter decisions and eliminate the traditional silos that slow innovation.”

“VSP 360 gives our customers the unified visibility and control they’ve been asking for,” claims Dan Pittet, Senior Solutions Architect, Stoneworks Technologies. “The ability to manage block, file, object, and software-defined storage from a single AI-driven platform helps streamline operations and reduce complexity across hybrid environments. It’s especially valuable for IT teams with limited resources who need to respond quickly to evolving data demands without compromising performance or governance.”

"VSP 360 hits the mark for what modern enterprises need," states Ashish Nadkarni, Group Vice President and General Manager, Worldwide Infrastructure Research, IDC. "It goes beyond monitoring to deliver true intelligence across the storage lifecycle. The solution's robust data resiliency helps businesses maintain continuous operations and protect their critical assets, even in the face of unexpected disruptions. By integrating advanced analytics, automation, and policy enforcement, Hitachi Vantara is giving customers the agility and resilience needed to thrive in a data-driven economy.”

For more from Hitachi, click here.
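One of the key performance indicators mentioned in this piece, storage capacity utilisation, reduces to a simple per-pool calculation with an alerting threshold. The Python below is a minimal illustrative sketch of that idea, with made-up pool figures and an assumed 80% warning threshold; it is not Hitachi Vantara's API or VSP 360's actual logic.

```python
def utilisation_report(pools, warn_at=0.80):
    """Compute capacity utilisation per storage pool and flag any pool
    at or above the warning threshold.

    `pools` maps a pool name to a (used_tb, total_tb) tuple.
    """
    report = {}
    for name, (used_tb, total_tb) in pools.items():
        pct = used_tb / total_tb
        report[name] = {"utilisation": round(pct, 2), "warn": pct >= warn_at}
    return report

# Hypothetical figures for two pools: one nearly full, one mostly empty.
pools = {"block-01": (42.0, 50.0), "object-01": (12.5, 100.0)}
for name, stats in utilisation_report(pools).items():
    print(name, stats)
```

A real platform would feed figures like these from live telemetry and layer trend prediction on top, but the underlying KPI is this ratio.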


