Tuesday, April 29, 2025

Data


NetApp receives AAA rating for its AI ransomware detection
NetApp, the intelligent data infrastructure company, today announced that NetApp ONTAP Autonomous Ransomware Protection with Artificial Intelligence (ARP/AI) has received a AAA rating from SE Labs, an independently owned and run testing company that assesses security products and services. SE Labs validated the protection effectiveness of NetApp ARP/AI with 99% recall (a metric that measures malware detection rates) for ransomware attacks, while noting the absence of false positives.

When responding to ransomware attacks, seconds can make the difference between ensuring continuity and a massive business disruption. Organisations need fast, automated, and accurate detection and response capabilities built into their primary storage systems to minimise the damage done by lost production data and downtime. NetApp ARP/AI, with its AI-powered ransomware detection capability, addresses this gap by providing real-time detection and response to minimise the impact of cybersecurity threats.

SE Labs rigorously tested NetApp ARP/AI against hundreds of known ransomware variants, with impressive results. NetApp ARP/AI demonstrated 99% detection of advanced ransomware attacks. It also correctly identified 100% of legitimate files, flagging no false positives, indicating a strong ability to operate in a business context without contributing to alert fatigue.

Ensuring data is secure against internal and external threats is a critical part of making data infrastructure intelligent, which then empowers customers to turn disruption into opportunity. This validation of NetApp's AI-powered ransomware detection capabilities underscores how NetApp is staying at the forefront of AI innovation by both enabling AI adoption and applying AI to data services. NetApp's newly released, more powerful all-flash storage systems help enterprises leverage their data to drive AI workloads, built on NetApp's secure storage infrastructure.

"NetApp has passed a significant milestone in the fight against ransomware as the first and only storage vendor to offer AI-driven on-box ransomware detection with externally validated top-notch protection effectiveness," says Dr. Arun Gururajan, Vice President, Research & Data Science at NetApp. "Ransomware detection methodologies that rely only on backup data are too slow to effectively mitigate the risks businesses face from cybersecurity threats. NetApp ARP/AI hardens enterprise storage by providing robust, built-in detection capabilities that can respond to ransomware threats in real time. The AAA rating we achieved from SE Labs is the result of our commitment to innovation in intelligent data infrastructure and our drive to find new ways to make NetApp the most secure storage on the planet."

By embedding ransomware detection in storage, NetApp ARP/AI helps customers improve their cyber resilience while reducing the operational burden and skills required to maintain their intelligent data infrastructure. NetApp ARP/AI's detection technology continuously adapts and evolves as new ransomware variants are discovered. This continuous retraining on the latest ransomware strains keeps NetApp ARP/AI at the forefront of protection effectiveness, offering organisations a future-proof defence against the dynamic ransomware landscape.

To see the full results of the tests, read the SE Labs report by clicking here. NetApp ARP/AI is currently in tech preview. Customers can request to participate in the tech preview by reaching out to their NetApp sales representative.
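For context on the headline figures: recall is the share of actual ransomware samples a detector catches, while the false-positive rate is the share of legitimate files it wrongly flags. A minimal sketch of both calculations, using hypothetical confusion-matrix counts (the article reports only the headline percentages, not SE Labs' sample sizes):

```python
# Hypothetical counts chosen to mirror the reported results; the actual
# SE Labs sample sizes are not given in the article.
true_positives = 99    # ransomware samples correctly flagged
false_negatives = 1    # ransomware samples missed
false_positives = 0    # legitimate files wrongly flagged
true_negatives = 200   # legitimate files correctly left alone

recall = true_positives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"recall = {recall:.0%}")                            # 99%
print(f"false-positive rate = {false_positive_rate:.0%}")  # 0%
```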
For more from NetApp, click here.

Pure Storage introduces unified data storage platform
Pure Storage, an IT pioneer that delivers advanced data storage technologies and services, has announced new capabilities in the Pure Storage platform that evolve how IT and business leaders deploy AI, strengthen cyber resilience, and modernise their applications. The Pure Storage platform delivers agility and risk reduction to organisations with a simple, consistent storage platform and an 'as-a-service' experience for the broadest set of use cases across on-premises, public cloud, and hosted environments. At the heart of the platform, the Evergreen architecture brings continuous, non-disruptive upgrades that help enterprises adapt to dynamic business environments. With an industry-record number of concurrent SLAs, customers get the reliability, performance, and sustainability their business requires.

Charles Giancarlo, Chairman and CEO, Pure Storage, says, "Pure is redefining enterprise storage with a single, unified data storage platform that can address virtually all enterprise storage needs, including the most pressing challenges and opportunities IT leaders face today, like AI and cyber resilience. The Pure Storage platform delivers unparalleled consistency, resilience, and SLA-guaranteed data storage services, reducing costs and uncertainty in an increasingly complex business landscape."

Pure Storage announced new innovations in the platform, including:

• Storage automation: Pure Fusion unifies arrays and optimises storage pools on the fly across structured and unstructured data, on-premises and in the cloud. Now fully embedded into the Purity operating environment, which is designed to continually improve over time via non-disruptive upgrades, the next-generation Pure Fusion will be available across the entire Pure Storage platform to all global customers.

• Generative AI co-pilot for storage: Extending Pure Storage's leadership position as an innovator in simplicity, the first AI co-pilot for storage represents a radically new way to manage and protect data using natural language. It leverages data insights from tens of thousands of Pure Storage customers to guide storage teams through every step of investigating complex performance and management issues and staying ahead of security incidents.

In the new Innovation Race survey of 1,500 global CIOs and decision makers commissioned by Pure Storage, nearly all respondents (98%) state that their organisation's data infrastructure must improve to support initiatives like AI, which is evolving so rapidly that IT is struggling to keep up, much less predict what's next. Companies large and small are realising that they are locked into inflexible storage architectures lacking enterprise-grade reliability, unable to resize or upgrade performance without complex and risky infrastructure planning. Pure Storage is introducing new innovations in the platform that help businesses accelerate successful AI deployments today and future-proof for tomorrow. The Pure Storage platform empowers organisations to unlock the value of their data with AI, while delivering the agility to instantly scale capacity and performance up and down independently, without disruption.

• New Evergreen//One for AI, the first purpose-built AI storage as-a-service: Provides guaranteed storage performance for GPUs to support training, inference, and HPC workloads. It extends Pure Storage's leadership position in capacity subscriptions and introduces the ability to purchase based on dynamic performance and throughput needs. The new SLA delivers the performance required while eliminating planning and overbuying: customers pay for throughput performance instead.

• Secure Application Workspaces with fine-grained access controls: Combines Kubernetes container management, secure multi-tenancy, and policy governance tools to enable advanced data integrations between enterprise mission-critical data and AI clusters. This makes storage infrastructure transparent to application owners, who gain fully automated access to AI innovation without sacrificing security, independence, or control.

For more from Pure Storage, click here.

Avaneidi secures funding to advance data security
Avaneidi, an innovative Italian start-up specialising in secure enterprise storage systems, has announced an €8 million (£6.7m) Series A funding round from United Ventures. The investment underscores a shared commitment to advancing solid-state storage technologies, enhancing data security, and promoting a sustainable digital transition.

Avaneidi develops comprehensive enterprise storage systems based on a rigorous 360-degree, multi-level 'security by design' approach, enabling an unprecedented degree of cyber security, protection and data reliability for enterprise-grade applications. Avaneidi's storage technology advancements boost performance and security while reducing energy consumption. This allows electronic devices and data centres to increase their operating efficiency and limit their carbon footprint, addressing key sustainable development goals such as clean energy and sustainable industry innovation.

Avaneidi's Enterprise Solid State Drives (ESSDs) utilise tailor-made chips and advanced algorithms, providing a bespoke solution optimised for performance and cyber security applications. Designed for on-premise data centres, its storage appliances offer a cost-effective, highly efficient alternative to traditional storage solutions, featuring extended drive lifetime, improved security and significant energy savings.

"Our mission at Avaneidi is to pave the way for more secure, efficient, and sustainable data storage solutions," says Dr. Rino Micheloni, CEO of Avaneidi. "This funding will keep us at the forefront of the market, enabling us to accelerate the development of our enterprise ESSDs and all-inclusive storage appliances. Unlike off-the-shelf products, our solutions address cyber security and data governance issues by leveraging a tight hardware-software co-design while offering extensive customisation options."

Avaneidi targets organisations and industries that are highly sensitive to data governance and security, such as finance, defence, automotive and healthcare, particularly within the rapidly evolving field of AI applications, where these issues are of paramount importance. By prioritising data integrity and protection, Avaneidi empowers entire industries to better leverage AI technology safely and effectively when it comes to storage solutions.

The potential of Avaneidi's technology has attracted the attention of major industry players, the company states. Negotiations and preliminary agreements are in place to validate and expand the market reach of its innovative products.

"United Ventures invests in technologies that have a tangible positive impact," states Massimiliano Magrini, Managing Partner at United Ventures. "Avaneidi's vision and mission to enable organisations to make better and more sustainable storage decisions, focusing on governance and data security, align with our investment philosophy. By channeling resources into AI infrastructure like Avaneidi's, we aim to facilitate the development of technologies that will redefine industries and transform tomorrow's society."

As the AI sector rapidly expands, robust infrastructure for advanced AI applications is paramount. According to recent estimates, the AI infrastructure market is projected to grow from $25.8 billion (£20.3bn) in 2022 to $195 billion (£153.9bn) by 2027, reflecting a compound annual growth rate (CAGR) of 50%. This surge is driven by significant advancements in AI computing, which is expected to escalate from $15.8 billion (£12.4bn) in 2022 to $165 billion (£130.2bn) in 2027, achieving a 60% CAGR.
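As a quick sanity check on those projections, the implied compound annual growth rate over the five-year span can be recomputed from the cited start and end values (a minimal sketch; figures as quoted above):

```python
# CAGR = (end / start) ** (1 / years) - 1, over 2022-2027 (five years).
markets = [
    ("AI infrastructure", 25.8, 195.0),  # $bn, 2022 -> 2027
    ("AI computing", 15.8, 165.0),
]

for name, start_bn, end_bn in markets:
    cagr = (end_bn / start_bn) ** (1 / 5) - 1
    print(f"{name}: {cagr:.0%}")  # ~50% and ~60%, matching the cited rates
```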

Vertiv launches new AI hub
While artificial intelligence (AI) use cases are growing at an unprecedented rate, expert information is scarce for pioneering data centres. Vertiv, a global provider of critical digital infrastructure and continuity solutions, recognises this knowledge gap and the urgency of accessing this information, leading to the launch of its AI Hub. Partners, customers, and other website visitors will have access to expert information, reference designs and resources to successfully plan their AI-ready infrastructure.

The Vertiv AI Hub features white papers, industry research, tools, and power and cooling portfolios for retrofit and greenfield applications. The new reference design library demonstrates scalable liquid cooling and power infrastructure to support current and future chip sets from 10 to 140 kW per rack. Reflecting the rapid and continuous changes of the AI tech stack and the supporting infrastructure, the Vertiv AI Hub is a dynamic site that will be frequently updated with new content, including an AI infrastructure certification programme for Vertiv partners.

"Vertiv has a history of sharing new-to-world technology and insights for the data centre industry," says Vertiv CEO Giordano (Gio) Albertazzi. "We are committed to providing deep knowledge, the broadest portfolio, and expert guidance to enable our customers to be among the first to deploy energy-efficient AI power and cooling infrastructure for current and future deployments. Our close partnerships with leading chipmakers and innovative data centre operators make us uniquely qualified to help our customers and partners on their AI journey."

"AI is here to stay, and Vertiv is ready to help our customers navigate the challenges of realising their AI objectives. The AI Hub is an excellent source for all our partners and customers in the Asia region to gain a deeper understanding of the opportunities and challenges of AI, and how Vertiv can assist to scale and embrace the AI journey," says Alvin Cheang, High Density Compute Business and AI Director at Vertiv in Asia.

Sean Graham, Research Director, Data Centres at IDC, notes, "Virtually every industry is exploring opportunities to drive business value through AI, but there are more questions than answers around how to deploy the infrastructure. A recognised infrastructure provider like Vertiv is valuable to businesses building an AI strategy and looking for a single source for information."

For more from Vertiv, click here.

R&M introduces latest version of its DCIM software
R&M, a globally active developer and provider of high-end infrastructure solutions for data and communications networks, is now offering Release 5 of its DCIM software, inteliPhy net. Release 5 turns inteliPhy net into a digital architect for data centres. Computer rooms can be flexibly designed according to the demand, application, size, and category of the data centre. Planners can position infrastructure modules intuitively on an arbitrary floor plan using drag-and-drop, and inteliPhy net provides detailed 2D and 3D visualisations that are also suitable for project presentations.

With inteliPhy net, it is possible to insert, structure and move racks, rack rows and enclosures with just a few clicks, R&M tells us. Patch panels, PDUs, cable ducts and pre-terminated trunk cables can be added, adapted and connected virtually just as quickly. The software finds optimal routes for the trunk cables and calculates the cable lengths, as sketched below.

inteliPhy net contains an extensive library of templates for the entire infrastructure, such as racks, patch panels, cables and power supplies. Models for active devices with data on weight, size, ports, slots, feed connections, performance and consumption are also included. Users can configure metamodels and save them for future planning. During planning, inteliPhy net generates an inventory list that can be used directly for cost calculations and orders. The planning process results in a database with a digital twin of the computer room. This serves as the basis for the entire Data Centre Infrastructure Management (DCIM), which is the main function of inteliPhy net.

R&M also now offers ready-made KPI reports with zero-touch configuration for inteliPhy net. Users can link the reports with environmental, monitoring, infrastructure, and operating data to monitor the efficiency of the data centre. Customisable dashboards and automated KPI analyses help them to regulate power consumption and temperatures more precisely, and to utilise resources efficiently.

Another new feature is the interaction of inteliPhy net with R&M's packaging-saving service. Customers can, for example, configure Netscale 48 patch panels individually with inteliPhy net. R&M assembles the patch panels completely ready for installation and delivers them in single packaging. The concept saves a considerable amount of individual packaging for small parts, reducing raw material consumption, waste and the time required for installation.
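The article does not describe how inteliPhy net computes its routes; purely as an illustration of the general idea, trunk routing over a floor plan can be modelled as a shortest-path problem on a graph of cable-duct junctions. A hypothetical sketch (invented topology and slack factor, not R&M's algorithm):

```python
import networkx as nx

# Hypothetical duct topology: nodes are racks and duct junctions,
# edge weights are duct segment lengths in metres.
plan = nx.Graph()
plan.add_weighted_edges_from([
    ("rack_A1", "junction_1", 2.0),
    ("junction_1", "junction_2", 5.5),
    ("junction_2", "rack_B3", 3.0),
    ("junction_1", "junction_3", 4.0),
    ("junction_3", "rack_B3", 8.0),
])

route = nx.shortest_path(plan, "rack_A1", "rack_B3", weight="weight")
length = nx.shortest_path_length(plan, "rack_A1", "rack_B3", weight="weight")
slack = 1.10  # assumed 10% allowance for service loops and terminations

print(" -> ".join(route))
print(f"ordered trunk length: {length * slack:.1f} m")
```

For more from R&M, click here.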

The key elements to delivering a successful data mesh strategy
First introduced in 2019, data mesh is a decentralised, platform-agnostic approach to managing analytical data that involves organising information by business domain. The concept has quickly grown in popularity, with 60% of companies with over 1,000 employees having adopted data mesh in some form within three years of its inception. According to Krzysztof Sazon, Senior Data Engineer at STX Next, businesses seeking to implement a data mesh strategy must adhere to four vital principles:

1. Take ownership of domains

Teams managing data must ensure it is structured and aligns with business needs. Krzysztof says, "Ultimately, an organisation is responsible for its data. Therefore, specialist teams should be able to quantify the reliability of the information they store, understand what is missing and explain how information was generated.

"Centralised teams, on the other hand, are strangers to the organisation's overarching data infrastructure and lack the understanding of where data is stored and how it is accessed. Giving specialist teams the chance to implement ideas and populate data warehouses while stored information is recent allows businesses to unlock the potential of the data at their disposal."

2. Treat data as a product

Businesses must ensure their data upholds a specific set of standards if it is to become an asset they can leverage to drive growth. Krzysztof notes, "The data mesh approach advocates treating data like a product, where domains take ownership of the data they generate and grant access to internal users. This shifts the perception of data as a by-product of business activity, elevating its status to a primary output and asset that companies actively curate and share.

"Data must adhere to specific standards if it is to become an asset: it should be easy to navigate, trustworthy, align with internal processes and comply with external regulations. Central data teams must build the infrastructure to support these principles, making data accessible and discoverable in the process."

3. Implement self-serve architecture

Users should be able to navigate stores of information on their own, without consulting a middleman. Krzysztof continues, "It's vital employees have the ability to autonomously navigate internal data product stores relevant to their business sector. To facilitate this, a catalogue of all data products, with a search engine that provides detailed and up-to-date information about the datasets, is a key requirement.

"There should be no need to ask external teams to set up data sharing and updating. Ideally, this is automated as much as possible and provides useful features, such as data lineage, instantly."

4. Federated computational governance

Governance of data mesh architectures should embrace decentralisation and domain self-sovereignty. Krzysztof concludes, "Governance is more of an ongoing process than a fixed set of policies: rules, categorisations and other properties evolve as needs change. Typically, there is a central data standards committee and local data stewards responsible for compliance in their domains, allowing for consistency, oversight and context-aware governance.

"Federated computational governance enables decentralised data ownership and flexibility in local data management, while maintaining standards throughout the organisation."
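To make the self-serve principle concrete: a data product catalogue can be as simple as a registry that supports search and lineage queries. The sketch below is a minimal, hypothetical illustration (all names and fields are invented, not a reference to any specific data mesh tooling):

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str                     # e.g. "sales.daily_revenue"
    domain: str                   # owning business domain
    description: str
    upstream: list[str] = field(default_factory=list)  # lineage inputs

class Catalogue:
    """Minimal self-serve catalogue: register, search, trace lineage."""

    def __init__(self) -> None:
        self._products: dict[str, DataProduct] = {}

    def register(self, product: DataProduct) -> None:
        self._products[product.name] = product

    def search(self, term: str) -> list[DataProduct]:
        term = term.lower()
        return [
            p for p in self._products.values()
            if term in p.name.lower() or term in p.description.lower()
        ]

    def lineage(self, name: str) -> list[str]:
        """All upstream products, walked recursively."""
        result: list[str] = []
        for up in self._products[name].upstream:
            result.append(up)
            result.extend(self.lineage(up))
        return result

catalogue = Catalogue()
catalogue.register(DataProduct("sales.raw_orders", "sales", "Raw order events"))
catalogue.register(DataProduct(
    "sales.daily_revenue", "sales",
    "Revenue aggregated per day", upstream=["sales.raw_orders"],
))

print([p.name for p in catalogue.search("revenue")])  # ['sales.daily_revenue']
print(catalogue.lineage("sales.daily_revenue"))       # ['sales.raw_orders']
```

In a real mesh, each domain would publish entries like these automatically from its pipelines, so the catalogue stays current without a central team acting as middleman.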

Acronis expands security portfolio with new XDR offering
Acronis, a global leader in cybersecurity and data protection, has introduced Acronis XDR, the newest addition to the company's security solution portfolio. Designed to be easy to deploy, manage, and maintain, Acronis XDR expands on the current endpoint detection and response (EDR) offering and delivers complete, natively integrated, highly efficient cybersecurity with data protection, endpoint management, and automated recovery specifically built for managed service providers (MSPs).

Cyberattacks have become increasingly sophisticated as cybercriminals deploy AI and attack surfaces expand, leaving businesses more vulnerable to data breaches and malware. To protect their customers, MSPs that offer security services commonly have a choice only of complex tools that are expensive and time-consuming to deploy and maintain, yet still provide insufficient, incomplete protection. As a direct response to these challenges, Acronis XDR seeks to provide complete protection without high costs and added complexity.

"Acronis makes a compelling entrance into XDR," notes Chris Kissel, Research Vice-President at IDC. "Acronis has provided an endpoint protection platform for the better part of a year. The company has extended its XDR stack, mapping alerts to MITRE ATT&CK, and offers cloud correlation detections. Importantly, its platform supports multitenancy, and the dashboard provides intuitive visualisations."

Key features and benefits of Acronis XDR include:

• Native integration across cybersecurity, data protection, and endpoint management. The product is designed to protect vulnerable attack surfaces, enabling business continuity.
• High efficiency, with the ability to easily launch, manage, scale, and deliver security services. It also includes AI-based incident analysis and single-click response for swift investigation and response.
• Built for MSPs, including a single agent and console for all services, and a customisable platform to integrate additional tools into a unified technology stack.

"It is imperative that MSPs provide reliable cybersecurity to customers with diverse IT environments and constrained budgets," says Gaidar Magdanurov, President at Acronis. "Acronis XDR enables MSPs to offer top-notch security without the complexity and significant overhead of traditional non-integrated tools. This is achieved in several ways, including AI-assisted capabilities within the Acronis solution that help MSPs provide the utmost cybersecurity, even if an MSP only has limited cybersecurity expertise."

Earlier this year, the company released Acronis MDR powered by Novacoast, a simple, effective, and advanced endpoint security service built for MSPs with native integration of data protection to deliver business resilience. Acronis MDR is a service offering used with the Acronis EDR solution, focused on the endpoint protection platform (EPP) to provide passive endpoint protection. The addition of Acronis MDR amplifies MSPs' security capabilities without the need for large security resources or added investments.

The introduction of Acronis MDR and XDR follows a string of security-related offerings and solutions from Acronis, building on the company's EDR offering released in May 2023. Acronis security solutions leverage AI-based innovations and native integrations, which lower complexity and provide complete security in the easiest and most efficient way. With a comprehensive security portfolio from Acronis, MSPs can now offer complete cybersecurity to their customers and scale operations to grow their business.

For more from Acronis, click here.

SELECT warns about demand for electricity from power-hungry AI
SELECT's new President has warned that the demands placed on the electrical network to power AI may become unsustainable as the technology becomes an ever-larger part of society.

Mike Stark, who took over the association reins last week, said the UK's National Grid could struggle to satisfy the voracious energy needs of AI and the systems it supports. The 62-year-old, who is Director of Data Cabling and Networks at Member firm OCS M&E Services, joins a growing number of experts who have warned about the new technology's huge appetite for electricity, which often exceeds what many small countries use in a year. And he questioned whether the UK's current electrical infrastructure is fit for purpose in the face of the massive increase in predicted demand, not only from the power-hungry data centres supporting AI, but also from the continued rise in electric vehicle (EV) charging units.

Mike says, "AI is becoming more embedded in our everyday lives, from digital assistants and chatbots helping us on websites to navigation apps and autocorrect on our mobile phones. And it is going to become even more prevalent in the near future.

"Data centres, which have many servers as their main components, need electrical power to survive. It is therefore only natural that any talk about building a data centre should begin with figuring out the electrical needs and how to satisfy those power requirements.

"At present, the UK's National Grid appears to be holding its own, with current increases being met with renewable energy systems. But as technology advances and systems such as AI are introduced, there will be a time when the grid will struggle to support the demand."

Mike said it is estimated that there could be 1.5 million AI servers by 2027. Running at full capacity, these would consume between 85 and 134 terawatt hours per year, roughly equivalent to the current energy demands of countries like the Netherlands and Sweden.

He adds, "I remember attending an EV training session about 25 years ago and the standing joke was, 'Where's all this electricity going to come from?' We all felt the network needed upgrading then, and now there is extra pressure from the new AI data centres springing up."

Mike has spent 44 years in the electrical industry, with 40 of those providing continued service at the same company; starting at Arthur McKay as a qualified electrician in June 1984, through to his current role at what is now OCS. He was confirmed as the new SELECT President at the association's AGM at the Doubletree Edinburgh North Queensferry on Thursday 6 June, taking over from Alistair Grant.
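As a back-of-envelope check on those figures, the cited range implies a continuous draw of roughly 6.5 to 10 kW per server, which is plausible for GPU-dense AI machines (a quick sketch, assuming round-the-clock operation at full capacity):

```python
# 1.5 million AI servers drawing 85-134 TWh per year implies a
# per-server power of roughly 6.5-10 kW when running continuously.
servers = 1.5e6
hours_per_year = 365 * 24  # 8,760

for twh_per_year in (85, 134):
    watts = twh_per_year * 1e12 / (servers * hours_per_year)
    print(f"{twh_per_year} TWh/year -> {watts / 1e3:.1f} kW per server")
```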

Scality RING solution deployed at SeqOIA medical lab
Scality, a global provider of cyber-resilient storage for the AI era, today announced a large-scale deployment of its RING distributed file and object storage solution to optimise and accelerate the data lifecycle for the high-throughput genomics sequencing laboratory SeqOIA Médecine Génomique. This is the most recent in a series of deployments where RING serves as a foundational analytics and AI data lake repository for organisations in healthcare, financial services and travel services across the globe.

Selected as part of the France Médecine Génomique 2025 (French Genomic Medicine Plan), SeqOIA is one of two national laboratories integrating whole genome sequencing into the French healthcare system to benefit patients with rare diseases and cancer. SeqOIA adopted Scality RING to aggregate petabyte-scale genetics data used to better characterise pathologies, as well as to guide genetic counselling and patient treatment. RING grants SeqOIA biologists efficient access from thousands of compute nodes to nearly 10 petabytes of data throughout its lifecycle, spanning from lab data to processed data, at accelerated speeds and at a cost three to five times lower than that of all-flash file storage.

"RING is the repository for 90% of our genomics data pipeline, and we see a need for continued growth on it for years to come," says Alban Lermine, IS and Bioinformatics Director of SeqOIA. "In collaboration with Scality, we have solved our analytics processing needs through a two-tier storage solution, with all-flash access of temporary hot data sets and long-term persistent storage in RING. We trust RING to protect the petabytes of mission-critical data that enable us to carry out our mission of improving care for patients suffering from cancer and other diseases."

Scality RING powers AI data lakes for other data-intensive industries. One of the largest publicly held personal-lines insurance providers in the US chose RING as its preferred AI data lake repository for insurance analytics claim processing. The provider chose RING to replace its HDFS (Hadoop Distributed File System) deployment, and has since realised a threefold improvement in space efficiency and cost savings, with higher availability through a multi-site RING deployment to support site failover.

Meanwhile, a multinational IT services company whose technology fuels the global travel and tourism industry is using Scality RING to power its core data lake. RING supports one petabyte of new log data ingested each day to maintain a 14-day rotating data lake. This requires RING to purge (delete) the oldest petabyte each day, while simultaneously supporting tens of gigabytes per second (GB/s) of read access for analysis from a cluster of Splunk indexers.

For data lake deployments, these organisations require trusted and proven solutions with a long-term track record of delivering performance and data protection at petabyte scale. For AI workload processing, they pair RING repositories in an intelligently tiered manner with all-flash file systems, as well as leading AI tools and analytics applications, including Weka.io, HPE Pachyderm, Cribl, Cloudera, Splunk, Elastic, Dremio, Starburst and more. With strategic partners like HPE and HPE GreenLake, Scality can deliver managed AI data lakes.
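The article does not say how the 14-day rotation is implemented; one common way to achieve this kind of rolling purge on S3-compatible object storage (an interface RING exposes) is a bucket lifecycle rule that expires objects after 14 days. A hypothetical sketch using boto3, with invented endpoint, credentials and bucket name:

```python
import boto3

# Hypothetical S3-compatible endpoint and credentials; illustrative only,
# not a description of the customer's actual configuration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://ring.example.internal",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Expire every object 14 days after ingest, so the oldest day's
# petabyte ages out as each new day's petabyte arrives.
s3.put_bucket_lifecycle_configuration(
    Bucket="log-datalake",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-logs-after-14-days",
            "Filter": {"Prefix": ""},   # apply to the whole bucket
            "Status": "Enabled",
            "Expiration": {"Days": 14},
        }]
    },
)
```

For more from Scality, click here.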

Cirata to offer native integration with Databricks Unity Catalog
Cirata, the company that automates Hadoop data transfer and integration to modern cloud analytics and AI platforms, has announced the release of Cirata Data Migrator 2.5, which now includes native integration with the Databricks Unity Catalog. Expanding the Cirata and Databricks partnership, the new integration centralises data governance and access control capabilities to enable faster data operations and accelerated time-to-business-value for enterprises.

Databricks Unity Catalog delivers a unified governance layer for data and AI within the Databricks Data Intelligence Platform. Using Unity Catalog, organisations can seamlessly govern their structured and unstructured data, machine learning models, notebooks, dashboards and files on any cloud or platform.

By integrating with Databricks Unity Catalog, Cirata Data Migrator makes it possible to execute analytics jobs sooner and to modernise data in the cloud. With support for Databricks Unity Catalog's functionality for stronger data operations, access control, accessibility and search, Cirata Data Migrator automates the large-scale transfer of data and metadata from existing data lakes to cloud storage and database targets, even while changes are being made by the application at the source. Using Cirata Data Migrator 2.5, users can now select the Databricks agent and define the use of Unity Catalog with Databricks SQL Warehouse. This helps data science and engineering teams maximise the value of their entire data estate while benefiting from their choice of metadata technology in Databricks.

"As a long-standing partner, Cirata has helped many customers in their legacy Hadoop to Databricks migrations," says Siva Abbaraju, Go-to-Market Leader, Migrations, Databricks. "Now, the seamless integration of Cirata Data Migrator with Unity Catalog enables enterprises to capitalize on our Data and AI capabilities to drive productivity and accelerate their business value."

"Cirata is excited by the customer benefits that come from native integration with the Databricks Unity Catalog," says Paul Scott-Murphy, Chief Technology Officer, Cirata. "By unlocking a critical benefit for our customers, we are furthering the adoption of data analytics, AI and ML and empowering data teams to drive more meaningful data insights and outcomes."

This expanded Cirata-Databricks partnership builds on previous product integrations between the two companies. In 2021, the companies partnered to automate metadata and data migration capabilities to Databricks and Delta Lake on Databricks, respectively. With data available for immediate use, the integration eliminated the need to construct and maintain data pipelines to transform, filter and adjust data, along with the significant up-front planning and staging that entails.

Cirata Data Migrator is a fully automated solution that moves on-premises HDFS data, Hive metadata, local filesystem data, or cloud data sources to any cloud or on-premises environment, even while those datasets are under active change. It requires zero changes to applications or business operations, and moves data of any scale without production system downtime or business disruption, and with zero risk of data loss. Cirata Data Migrator 2.5 is available now with native integration with the Databricks Unity Catalog.
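For readers unfamiliar with Unity Catalog: governed objects are addressed through a three-level namespace (catalog.schema.table), and access is granted declaratively in Databricks SQL, which is what makes centralised governance of migrated Hive tables possible. A minimal illustration, assuming a Databricks session where `spark` is predefined (the catalog, schema, table and group names below are hypothetical):

```python
# Query a migrated table through Unity Catalog's three-level namespace.
df = spark.sql(
    "SELECT * FROM migrated_lake.sales.orders LIMIT 10"
)
df.show()

# Grant read access to an account-level group; Unity Catalog enforces
# the policy across all workspaces attached to the metastore.
spark.sql(
    "GRANT SELECT ON TABLE migrated_lake.sales.orders TO `data-analysts`"
)
```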


