Monday, March 10, 2025

Data


Vertiv launches new AI hub
While artificial intelligence (AI) use cases are growing at an unprecedented rate, expert information is scarce for pioneering data centres. Vertiv, a global provider of critical digital infrastructure and continuity solutions, recognises this knowledge gap and the urgency of closing it, leading to the launch of its AI Hub. Partners, customers, and other website visitors will have access to expert information, reference designs and resources to successfully plan their AI-ready infrastructure.

The Vertiv AI Hub features white papers, industry research, tools, and power and cooling portfolios for retrofit and greenfield applications. The new reference design library demonstrates scalable liquid cooling and power infrastructure to support current and future chipsets at 10-140kW per rack. Reflecting the rapid and continuous changes of the AI tech stack and the supporting infrastructure, the Vertiv AI Hub is a dynamic site that will be frequently updated with new content, including an AI Infrastructure certification program for Vertiv partners.

“Vertiv has a history of sharing new-to-world technology and insights for the data centre industry,” says Vertiv CEO Giordano (Gio) Albertazzi. “We are committed to providing deep knowledge, the broadest portfolio, and expert guidance to enable our customers to be among the first to deploy energy-efficient AI power and cooling infrastructure for current and future deployments. Our close partnerships with leading chipmakers and innovative data centre operators make us uniquely qualified to help our customers and partners on their AI journey.”

“AI is here to stay, and Vertiv is ready to help our customers navigate the challenges of realising their AI objectives. The AI Hub is an excellent source for all our partners and customers in the Asia region to gain a deeper understanding of the opportunities and challenges of AI, and how Vertiv can assist to scale and embrace the AI journey,” says Alvin Cheang, High Density Compute Business and AI Director at Vertiv in Asia.

Sean Graham, Research Director, Data Centres at IDC, notes, “Virtually every industry is exploring opportunities to drive business value through AI, but there are more questions than answers around how to deploy the infrastructure. A recognised infrastructure provider like Vertiv is valuable to businesses building an AI strategy and looking for a single source for information.”

For more from Vertiv, click here.

R&M introduces latest version of its DCIM software
R&M, a globally active developer and provider of high-end infrastructure solutions for data and communications networks, is now offering Release 5 of its DCIM software, inteliPhy net. With Release 5, inteliPhy net becomes a digital architect for data centres: computer rooms can be flexibly designed according to the demand, application, size, and category of the data centre. Planners can position infrastructure modules intuitively on an arbitrary floor plan using drag-and-drop, and inteliPhy net enables detailed 2D and 3D visualisations that are also suitable for project presentations.

With inteliPhy net, it is possible to insert, structure and move racks, rack rows and enclosures with just a few clicks, R&M tells us. Patch panels, PDUs, cable ducts and pre-terminated trunk cables can be added, adapted and connected virtually just as quickly. The software finds optimal routes for the trunk cables and calculates the cable lengths.

inteliPhy net contains an extensive library of templates for the entire infrastructure, such as racks, patch panels, cables and power supplies. Models for active devices with data on weight, size, ports, slots, feed connections, performance and consumption are also included. Users can configure metamodels and save them for future planning. During planning, inteliPhy net generates an inventory list that can be used directly for cost calculations and orders. The planning process results in a database containing a digital twin of the computer room, which serves as the basis for the entire Data Centre Infrastructure Management (DCIM), the main function of inteliPhy net.

R&M also now offers ready-made KPI reports with zero-touch configuration for inteliPhy net. Users can link the reports with environmental, monitoring, infrastructure, and operating data to monitor the efficiency of the data centre. Customisable dashboards and automated KPI analyses help them to regulate power consumption and temperatures more precisely, and to make better use of resources.

Another new feature is the integration of inteliPhy net with R&M’s packaging-saving service. Customers can, for example, configure Netscale 48 patch panels individually with inteliPhy net; R&M assembles the patch panels completely ready for installation and delivers them in a single package. The concept saves a considerable amount of individual packaging for small parts, reducing raw material consumption, waste and the time required for installation.

For more from R&M, click here.
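The route-finding and cable-length calculation described above is, at its core, straightforward geometry over a floor-plan model. As a rough illustration of the kind of data such a planning tool works with (a minimal Python sketch only; the class and field names are hypothetical and are not inteliPhy net’s actual data model or API):

```python
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    x_m: float   # position on the floor plan, in metres
    y_m: float

def trunk_length_m(a: Rack, b: Rack, slack_m: float = 2.0) -> float:
    """Estimate trunk cable length between two racks.

    Cables usually run along a rectilinear cable-duct grid rather than
    point to point, so a Manhattan distance plus service slack is a
    reasonable first approximation.
    """
    return abs(a.x_m - b.x_m) + abs(a.y_m - b.y_m) + slack_m

racks = [Rack("R01", 0.0, 0.0), Rack("R02", 3.0, 0.0), Rack("R07", 3.0, 9.6)]

# A simple inventory line item per planned trunk, of the kind a planner
# could hand straight to purchasing.
inventory = [
    {"from": a.name, "to": b.name, "length_m": round(trunk_length_m(a, b), 1)}
    for a, b in [(racks[0], racks[1]), (racks[1], racks[2])]
]
print(inventory)
```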

The key elements to delivering a successful data mesh strategy
First introduced in 2019, data mesh is a decentralised, platform-agnostic approach to managing analytical data that involves organising information by business domain. The concept has quickly grown in popularity, with 60% of companies with over 1,000 employees having adopted data mesh in some form within three years of its inception. According to Krzysztof Sazon, Senior Data Engineer at STX Next, businesses seeking to implement a data mesh strategy must adhere to four vital principles:

Take ownership of domains
Teams managing data must ensure it is structured and aligns with business needs. Krzysztof says, “Ultimately, an organisation is responsible for its data. Therefore, specialist teams should be able to quantify the reliability of the information they store, understand what is missing and explain how information was generated.

“Centralised teams, on the other hand, are strangers to the organisation’s overarching data infrastructure and lack the understanding of where data is stored and how it is accessed. Giving specialist teams the chance to implement ideas and populate data warehouses while stored information is recent allows businesses to unlock the potential of the data at their disposal.”

Treat data as a product
Businesses must ensure their data upholds a specific set of standards if it is to become an asset they can leverage to drive growth. Krzysztof notes, “The data mesh approach advocates treating data like a product, where domains take ownership of the data they generate and grant access to internal users. This shifts the perception of data as a by-product of business activity, elevating its status to a primary output and asset that companies actively curate and share.

“Data must adhere to specific standards if it is to become an asset: it should be easy to navigate, trustworthy, align with internal processes and comply with external regulations. Central data teams must build the infrastructure to support these principles, making data accessible and discoverable in the process.”

Implement self-serve architecture
Users should be able to navigate stores of information on their own, without consulting a middleman. Krzysztof continues, “It’s vital employees have the ability to autonomously navigate internal data product stores relevant to their business sector. To facilitate this, a catalogue of all data products, with a search engine that provides detailed and up-to-date information about the datasets, is a key requirement.

“There should be no need to ask external teams to set up data sharing and updating. Ideally, this is automated as much as possible and provides useful features, such as data lineage, instantly.”

Apply federated computational governance
Governance of data mesh architectures should embrace decentralisation and domain self-sovereignty. Krzysztof concludes, “Governance is more of an ongoing process than a fixed set of policies – rules, categorisations and other properties evolve as needs change. Typically, there is a central data standards committee and local data stewards responsible for compliance in their domains, allowing for consistency, oversight and context-aware governance.

“Federated computational governance enables decentralised data ownership and flexibility in local data management, while maintaining standards throughout the organisation.”
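To make “treat data as a product” and the self-serve catalogue concrete, here is a minimal, hypothetical Python sketch of a data product descriptor a domain team might publish and a central catalogue users could search. The field names are illustrative only and are not tied to any particular data mesh tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned data product, published with the metadata
    consumers need to find and trust it without asking a middleman."""
    name: str
    domain: str
    owner: str                      # accountable team or data steward
    description: str
    schema: dict                    # column name -> type
    freshness_sla_hours: int        # how stale the data may get
    quality_checks: list = field(default_factory=list)
    tags: list = field(default_factory=list)

class Catalogue:
    """A toy self-serve catalogue: register products, search by keyword."""
    def __init__(self):
        self._products: dict[str, DataProduct] = {}

    def register(self, product: DataProduct) -> None:
        self._products[product.name] = product

    def search(self, keyword: str) -> list:
        kw = keyword.lower()
        return [
            p for p in self._products.values()
            if kw in p.name.lower() or kw in p.description.lower() or kw in p.tags
        ]

catalogue = Catalogue()
catalogue.register(DataProduct(
    name="orders_daily",
    domain="sales",
    owner="sales-data-team@example.com",
    description="Daily order totals per region, curated by the sales domain",
    schema={"order_date": "date", "region": "string", "total_eur": "decimal"},
    freshness_sla_hours=24,
    quality_checks=["no_null_region", "totals_non_negative"],
    tags=["sales", "finance"],
))
print([p.name for p in catalogue.search("orders")])
```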

Acronis expands security portfolio with new XDR offering
Acronis, a global leader in cybersecurity and data protection, has introduced Acronis XDR, the newest addition to the company’s security solution portfolio. Designed to be easy to deploy, manage, and maintain, Acronis XDR expands on the company’s current endpoint detection and response (EDR) offering and delivers complete, natively integrated, highly efficient cybersecurity with data protection, endpoint management, and automated recovery specifically built for managed service providers (MSPs).

Cyberattacks have become increasingly sophisticated as cybercriminals deploy AI and attack surfaces expand, leaving businesses more vulnerable to data breaches and malware. To protect their customers, MSPs offering security services have commonly had little choice but complex tools with insufficient, incomplete protection that are expensive and time-consuming to deploy and maintain. As a direct response to these challenges, Acronis XDR seeks to provide complete protection without high costs and added complexity.

“Acronis makes a compelling entrance into XDR,” notes Chris Kissel, Research Vice-President at IDC. “Acronis has provided an endpoint protection platform for the better part of a year. The company has extended its XDR stack, mapping alerts to MITRE ATT&CK and offering cloud correlation detections. Importantly, its platform supports multitenancy, and the dashboard provides intuitive visualisations.”

Key features and benefits of Acronis XDR include:

• Native integration across cybersecurity, data protection, and endpoint management. The product is designed to protect vulnerable attack surfaces, enabling business continuity.
• High efficiency, with the ability to easily launch, manage, scale, and deliver security services. It also includes AI-based incident analysis and single-click response for swift investigation and response.
• Built for MSPs, including a single agent and console for all services, and a customisable platform to integrate additional tools into a unified technology stack.

“It is imperative that MSPs provide reliable cybersecurity to customers with diverse IT environments and constrained budgets,” says Gaidar Magdanurov, President at Acronis. “Acronis XDR enables MSPs to offer top-notch security without the complexity and significant overhead of traditional non-integrated tools. This is achieved in several ways, including AI-assisted capabilities within the Acronis solution that help MSPs provide the utmost cybersecurity - even if an MSP only has limited cybersecurity expertise.”

Earlier this year, the company released Acronis MDR powered by Novacoast, a simple, effective, and advanced endpoint security service built for MSPs with native integration of data protection to deliver business resilience. Acronis MDR is a service offering used with the Acronis EDR solution, focused on the endpoint protection platform (EPP) to provide passive endpoint protection. The addition of Acronis MDR amplifies MSPs’ security capabilities without the need for large security resources or added investments.

The introduction of Acronis MDR and XDR follows a string of security-related offerings and solutions from Acronis, building on the company’s EDR offering released in May 2023. Acronis security solutions leverage AI-based innovations and native integrations, which lower complexity and provide complete security in the easiest and most efficient way.
With a comprehensive security portfolio from Acronis, MSPs can now offer complete cybersecurity to their customers and scale operations to grow their business. For more from Acronis, click here.

SELECT warns about demand for electricity from power-hungry AI
SELECT’s new President has warned that the demands placed on the electrical network to power AI may become unsustainable as the technology becomes an ever-larger part of society. Mike Stark, who took over the association reins last week, said the UK’s National Grid could struggle to satisfy the voracious energy needs of AI and the systems it supports.

The 62-year-old, who is Director of Data Cabling and Networks at Member firm OCS M&E Services, joins a growing number of experts who have warned about the new technology’s huge appetite for electricity, which often exceeds what many small countries use in a year. He also questioned whether the UK’s current electrical infrastructure is fit for purpose in the face of the massive increase in predicted demand, not only from the power-hungry data centres supporting AI, but also from the continued rise in electric vehicle (EV) charging units.

Mike says, “AI is becoming more embedded in our everyday lives, from digital assistants and chatbots helping us on websites to navigation apps and autocorrect on our mobile phones. And it is going to become even more prevalent in the near future.

“Data centres, which have many servers as their main components, need electrical power to survive. It is therefore only natural that any talk about building a data centre should begin with figuring out the electrical needs and how to satisfy those power requirements.

“At present, the UK’s National Grid appears to be holding its own, with current increases being met with renewable energy systems. But as technology advances and systems such as AI are introduced, there will be a time when the grid will struggle to support the demand.”

Mike notes that it is estimated there could be 1.5 million AI servers by 2027. Running at full capacity, these would consume between 85 and 134 terawatt hours per year – roughly equivalent to the current energy demands of countries like the Netherlands and Sweden.

He adds, “I remember attending an EV training session about 25 years ago and the standing joke was, ‘Where’s all this electricity going to come from?’ We all felt the network needed upgrading then, and now there is extra pressure from the new AI data centres springing up.”

Mike has spent 44 years in the electrical industry, with 40 of those in continuous service at the same company, starting at Arthur McKay as a qualified electrician in June 1984 through to his current role at what is now OCS. He was confirmed as the new SELECT President at the association’s AGM at the Doubletree Edinburgh North Queensferry on Thursday 6 June, taking over from Alistair Grant.
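Those headline figures can be sanity-checked with simple arithmetic. The back-of-envelope calculation below uses only the numbers quoted above and is purely illustrative, not SELECT’s own modelling:

```python
# Estimated AI server count and annual consumption range quoted above.
servers = 1_500_000
annual_twh_low, annual_twh_high = 85, 134
hours_per_year = 8_760

for twh in (annual_twh_low, annual_twh_high):
    kwh_per_server = twh * 1e9 / servers           # 1 TWh = 1e9 kWh
    avg_kw_per_server = kwh_per_server / hours_per_year
    print(f"{twh} TWh/year -> ~{avg_kw_per_server:.1f} kW continuous draw per server")

# Output: roughly 6.5 kW to 10.2 kW per server, i.e. the sustained draw of a
# densely populated accelerator node - which is why grid planners are paying attention.
```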

Scality RING solution deployed at SeqOIA medical lab
Scality, a global provider of cyber-resilient storage for the AI era, today announced a large-scale deployment of its RING distributed file and object storage solution to optimise and accelerate the data lifecycle for the high-throughput genomics sequencing laboratory, SeqOIA Médecine Génomique. This is the most recent in a series of deployments where RING is leveraged as a foundational analytics and AI data lake repository for organisations in healthcare, financial services and travel services across the globe.

Selected as part of the France Médecine Génomique 2025 (French Genomic Medicine Plan), SeqOIA is one of two national laboratories integrating whole genome sequencing into the French healthcare system to benefit patients with rare diseases and cancer. SeqOIA adopted Scality RING to aggregate petabyte-scale genetics data used to better characterise pathologies, as well as guide genetic counselling and patient treatment. RING grants SeqOIA biologists efficient access from thousands of compute nodes to nearly 10 petabytes of data throughout its lifecycle, spanning from lab data to processed data, at accelerated speeds and at a cost three to five times lower than that of all-flash file storage.

“RING is the repository for 90% of our genomics data pipeline, and we see a need for continued growth on it for years to come,” says Alban Lermine, IS and Bioinformatics Director of SeqOIA. “In collaboration with Scality, we have solved our analytics processing needs through a two-tier storage solution, with all-flash access of temporary hot data sets and long-term persistent storage in RING. We trust RING to protect the petabytes of mission-critical data that enable us to carry out our mission of improving care for patients suffering from cancer and other diseases.”

Scality RING powers AI data lakes for other data-intensive industries. One of the largest publicly held personal lines insurance providers in the US chose RING as its preferred AI data lake repository for insurance analytics claim processing. The provider chose RING to replace its HDFS (Hadoop Distributed File System) estate and has since realised a threefold improvement in space efficiency and cost savings, with higher availability through a multi-site RING deployment to support site failover.

Meanwhile, a multinational IT services company whose technology fuels the global travel and tourism industry is using Scality RING to power its core data lake. RING supports one petabyte of new log data ingested each day to maintain a 14-day rotating data lake. This requires RING to purge (delete) the oldest petabyte each day, while simultaneously supporting tens of gigabytes per second (GB/s) of read access for analysis from a cluster of Splunk indexers.

For data lake deployments, these organisations require trusted and proven solutions with a long-term track record of delivering performance and data protection at petabyte scale. For AI workload processing, they pair RING repositories in an intelligently tiered manner with all-flash file systems, as well as leading AI tools and analytics applications, including Weka.io, HPE Pachyderm, Cribl, Cloudera, Splunk, Elastic, Dremio, Starburst and more. With strategic partners like HPE and HPE GreenLake, Scality is able to deliver managed AI data lakes.

For more from Scality, click here.
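The 14-day rotating data lake described above is essentially an expiry policy applied at petabyte scale. RING is typically accessed through an S3-compatible interface, so a policy of that shape could be expressed as a standard S3 lifecycle rule. The sketch below is a generic illustration using boto3 with hypothetical endpoint, credentials and bucket names, not the customer’s actual configuration:

```python
import boto3

# Point a standard S3 client at the object store's S3-compatible endpoint
# (endpoint URL, credentials and bucket name here are placeholders).
s3 = boto3.client(
    "s3",
    endpoint_url="https://ring.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Expire every object 14 days after it is written, so the oldest day's
# logs age out as each new day's petabyte is ingested.
s3.put_bucket_lifecycle_configuration(
    Bucket="log-data-lake",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-14-days",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 14},
            }
        ]
    },
)
```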

Cirata to offer native integration with Databricks Unity Catalog
Cirata, the company that automates Hadoop data transfer and integration to modern cloud analytics and AI platforms, has announced the release of Cirata Data Migrator 2.5, which now includes native integration with Databricks Unity Catalog. Expanding the Cirata and Databricks partnership, the new integration centralises data governance and access control capabilities to enable faster data operations and accelerated time-to-business-value for enterprises.

Databricks Unity Catalog delivers a unified governance layer for data and AI within the Databricks Data Intelligence Platform. Using Unity Catalog, organisations can seamlessly govern their structured and unstructured data, machine learning models, notebooks, dashboards and files on any cloud or platform. By integrating with Databricks Unity Catalog, Cirata Data Migrator unlocks the ability to execute analytics jobs as soon as possible and to modernise data in the cloud.

With support for Databricks Unity Catalog’s functionality for stronger data operations, access control, accessibility and search, Cirata Data Migrator automates the large-scale transfer of data and metadata from existing data lakes to cloud storage and database targets, even while changes are being made by the application at the source. Using Cirata Data Migrator 2.5, users can now select the Databricks agent and define the use of Unity Catalog with Databricks SQL Warehouse. This helps data science and engineering teams maximise the value of their entire data estate while benefiting from their choice of metadata technology in Databricks.

“As a long-standing partner, Cirata has helped many customers in their legacy Hadoop to Databricks migrations,” says Siva Abbaraju, Go-to-Market Leader, Migrations, Databricks. “Now, the seamless integration of Cirata Data Migrator with Unity Catalog enables enterprises to capitalise on our Data and AI capabilities to drive productivity and accelerate their business value.”

“Cirata is excited by the customer benefits that come from native integration with the Databricks Unity Catalog,” says Paul Scott-Murphy, Chief Technology Officer, Cirata. “By unlocking a critical benefit for our customers, we are furthering the adoption of data analytics, AI and ML and empowering data teams to drive more meaningful data insights and outcomes.”

This expanded Cirata-Databricks partnership builds on previous product integrations between the two companies. In 2021, the companies partnered to automate metadata and data migration capabilities to Databricks and Delta Lake on Databricks, respectively. With data available for immediate use, the integration eliminated the need to construct and maintain data pipelines to transform, filter and adjust data, along with the significant up-front planning and staging this entails.

Cirata Data Migrator is a fully automated solution that moves on-premises HDFS data, Hive metadata, local filesystem or cloud data sources to any cloud or on-premises environment, even while those datasets are under active change. It requires zero changes to applications or business operations and moves data of any scale without production system downtime or business disruption, and with zero risk of data loss. Cirata Data Migrator 2.5 is available now with native integration with Databricks Unity Catalog.
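Once migrated tables are registered in Unity Catalog, consumers address them through its three-level namespace (catalog.schema.table) via a Databricks SQL Warehouse. The sketch below, using the open-source databricks-sql-connector package, shows what that access pattern can look like; the hostname, HTTP path, token and table name are placeholders, and this is a generic illustration rather than Cirata’s own tooling:

```python
from databricks import sql  # pip install databricks-sql-connector

# Connection details for a Databricks SQL Warehouse (placeholders).
with sql.connect(
    server_hostname="dbc-example.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapi-...",
) as connection:
    with connection.cursor() as cursor:
        # Unity Catalog's three-level namespace: catalog.schema.table.
        # 'lakehouse.migrated_hive.web_logs' stands in for a table whose
        # data and metadata were moved from an on-premises Hive estate.
        cursor.execute(
            "SELECT event_date, COUNT(*) AS events "
            "FROM lakehouse.migrated_hive.web_logs "
            "GROUP BY event_date ORDER BY event_date DESC LIMIT 7"
        )
        for row in cursor.fetchall():
            print(row)
```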

New AI data cycle storage framework introduced
Fuelling the next wave of AI innovation, Western Digital has introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. The framework is designed to help customers plan and develop advanced storage infrastructures that maximise their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows.

AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new, unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and has now introduced a new high-performance PCIe Gen5 SSD to support AI training and inference, a high-capacity 64TB SSD for fast AI data lakes, and the world’s highest-capacity ePMR, UltraSMR 32TB HDD for cost-effective storage at scale.

“There’s no doubt that Generative AI is the next transformational technology, and storage is a critical enabler,” says Ed Burns, Research Director at IDC. “The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI models, especially as larger and higher-quality data sets become more prevalent.

“As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages.”

Rob Soderbery, Executive Vice President and General Manager of Western Digital’s Flash Business Unit, adds, “Data is the fuel of AI. As AI technologies become embedded across virtually every industry sector, storage has become an increasingly important and dynamic component of the AI technology stack. The new AI Data Cycle framework will equip our customers to build a storage infrastructure that impacts the performance, scalability, and deployment of AI applications, underscoring our commitment to deliver unparalleled value to our customers.”

The new Ultrastar DC SN861 SSD is Western Digital’s first enterprise-class PCIe Gen 5.0 solution, offering high random read performance and projected best-in-class power efficiency for AI workloads. With capacities up to 16TB, it delivers up to a threefold increase in random read performance over the previous generation, with ultra-low latency and responsiveness for large language model (LLM) training, inferencing and AI service deployment. In addition, its low power profile delivers higher IOPS per watt, reducing overall TCO. The increased PCIe Gen5 bandwidth addresses the AI market’s growing need for high-speed accelerated computing combined with the low latency demanded by compute-intensive AI environments. Built for mission-critical workloads, the Ultrastar DC SN861 provides a rich feature set, including NVMe 2.0 and OCP 2.0 support, 1 and 3 DWPD options, and a five-year limited warranty. The Ultrastar DC SN861 E1.S is now sampling; the U.2 version will begin sampling this month, with volume shipments starting in CQ3’24. More details on E1.S and E3.S form factors will follow later this year.
Complementing the Ultrastar DC SN861 is the expanded Ultrastar DC SN655 enterprise-class SSD range, designed for storage-intensive applications. New options for the U.3 SSD will reach up to 64TB, driving higher performance and capacity for AI data preparation and for faster, larger data lakes. These new DC SN655 variants are now sampling, and more details about the drives will be released later this year when volume shipments begin.

Western Digital is also now sampling the industry’s highest-capacity, 32TB ePMR enterprise-class HDD to select customers. Designed for massive data storage in hyperscale cloud and enterprise data centres, the new Ultrastar DC HC690 high-capacity UltraSMR HDD will play a vital role in AI workflows where large-scale data storage and low TCO are paramount, the company says. Leveraging proven designs from generations of successful products, the new 32TB drive delivers high capacity with seamless qualification and integration for rapid deployment, while maintaining superior dependability and reliability. More details about the drive will be available later this summer.

“Each stage of the AI Data Cycle is unique, with different infrastructure and compute requirements. By understanding the dynamic interplay between AI and data storage, Western Digital is delivering solutions that not only offer higher capacities but are also tailored to support the extreme performance and endurance of next-generation AI workloads,” adds Soderbery. “With our growing portfolio, long-term roadmap and constant innovation, our goal is to help customers unlock the transformative capabilities of AI.”

For more from Western Digital, click here.
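To put those capacity points in context, here is a quick illustrative calculation (our arithmetic, not Western Digital’s) of how many drives a hypothetical 10PB data lake tier would need at the announced capacities:

```python
import math

data_lake_tb = 10_000          # a hypothetical 10 PB tier (decimal terabytes)

drive_options_tb = {
    "Ultrastar DC SN655 SSD (64TB)": 64,
    "Ultrastar DC HC690 HDD (32TB)": 32,
}

for name, capacity_tb in drive_options_tb.items():
    drives = math.ceil(data_lake_tb / capacity_tb)
    print(f"{name}: ~{drives} drives before any RAID/erasure-coding overhead")

# Output: ~157 SSDs vs ~313 HDDs - the flash tier halves the device count,
# while the SMR HDDs target the lowest cost per terabyte at scale.
```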

Syniti chosen to modernise and integrate IT and data landscape
Syniti, a global leader in enterprise data management, has announced that Caldic, a global distribution solutions provider for the life and material science markets, will use the Syniti Knowledge Platform (SKP) to help improve its data quality and build a global master data management (MDM) platform for active data governance. This will allow Caldic to work with clean data, now and in the future.

Caldic recently expanded significantly following strategic mergers and acquisitions, positioning the organisation as a key industry player with a strong foothold in four regions around the world. With global coverage, the opportunity arose to build a unique, integrated platform in the distribution industry that facilitates the company’s business partners’ digital journey at a global scale. As a result of these strategic investments, Caldic has multiple separate ERPs and IT infrastructures.

Syniti will work with Caldic to help optimise data quality and to apply data governance across the globe on the different ERP systems; the company is investing in data quality, harmonisation, and governance in its systems to enhance the efficiency of commercial operations and make it easier for customers to unlock value when doing business with Caldic. Furthermore, these efforts will solidify a strong data foundation to enable faster integration of future acquisitions and data migrations, help improve time to value and ensure that additional strategic efforts can be more secure, lower-risk, and cost-efficient.

With Syniti, Caldic gains:

· An experienced partner solely focused on data, with a proven track record of delivering successful outcomes in data optimisation, data quality management and master data management for the world’s largest organisations.
· A unified end-to-end platform that improves data quality and simplifies enterprise data migrations, data management, governance and analytics capabilities.
· A team of 100% data-focused specialists who bring technical, functional and business expertise to deliver high-quality, business-optimal data in the context of Caldic’s business.

The Syniti Knowledge Platform is an enterprise data management platform that helps organisations overcome complex enterprise data transformation challenges by combining data expertise, intelligent software and packaged solution accelerators. Its software applies automation and guidance informed by AI and machine learning technologies to data migration, data quality, analytics, master data management and more.

Lenno Maris, Chief Data Officer, Caldic, says, “As we advance building our global platform and onboarding new companies into our fold, our business requires fast and smooth incorporation of data and systems into Caldic’s infrastructure. The end-to-end solution from Syniti provides a data-first approach that guarantees high-quality data are available to help us drive our operations more effectively and efficiently in the future.”

Kevin Campbell, CEO, Syniti, says, “When getting it right the first time matters, you need expert help. We offer both the technology and expertise Caldic needs to support their data journey. Caldic is regional- and supply chain-driven – everything that has to do with supply chain, inventory, transportation and material handling – so finding optimisation is key in terms of in-time delivery, product, storage and cost. With the right data in the right places, Caldic is well-positioned to continue on its growth journey.”

Arista unveils Etherlink AI networking platforms
Arista Networks, a provider of cloud and AI networking solutions, has announced the Arista Etherlink AI platforms, designed to deliver optimal network performance for the most demanding AI workloads, including training and inferencing. Powered by new AI-optimised Arista EOS features, the Arista Etherlink AI portfolio supports AI cluster sizes ranging from thousands to hundreds of thousands of XPUs, using highly efficient one- and two-tier network topologies that deliver superior application performance compared to more complex multi-tier networks, while offering advanced monitoring capabilities including flow-level visibility.

“The network is core to successful job completion outcomes in AI clusters,” says Alan Weckel, Founder and Technology Analyst for 650 Group. “The Arista Etherlink AI platforms offer customers the ability to have a single 800G end-to-end technology platform across front-end, training, inference and storage networks. Customers benefit from leveraging the same well-proven Ethernet tooling, security, and expertise they have relied on for decades while easily scaling up for any AI application.”

Arista’s Etherlink AI platforms

The 7060X6 AI Leaf switch family employs Broadcom Tomahawk 5 silicon, with a capacity of 51.2Tbps and support for 64 800G or 128 400G Ethernet ports.

The 7800R4 AI Spine is the fourth generation of Arista’s flagship 7800 modular systems. It implements the latest Broadcom Jericho3-AI processors with an AI-optimised packet pipeline and offers non-blocking throughput with the proven virtual output queuing architecture. The 7800R4-AI supports up to 460Tbps in a single chassis, which corresponds to 576 800G or 1,152 400G Ethernet ports.

The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering customers massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture. The 7700 represents the first in a new series of ultra-scalable, intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters.

A single-tier network topology with Etherlink platforms can support over 10,000 XPUs. With a two-tier network, Etherlink can support more than 100,000 XPUs. Minimising the number of network tiers is essential for optimising AI application performance, reducing the number of optical transceivers, lowering cost and improving reliability. All Etherlink switches support the emerging Ultra Ethernet Consortium (UEC) standards, which are expected to provide additional performance benefits when UEC NICs become available in the near future.

“Broadcom is a firm believer in the versatility, performance, and robustness of Ethernet, which makes it the technology of choice for AI workloads,” says Ram Velaga, Senior Vice President and General Manager, Core Switching Group, Broadcom. “By leveraging industry-leading Ethernet chips such as Tomahawk 5 and Jericho3-AI, Arista provides the ideal accelerator-agnostic solution for AI clusters of any shape or size, outperforming proprietary technologies and providing flexible options for fixed, modular, and distributed switching platforms.”

Arista EOS Smart AI Suite

The rich features of Arista EOS and CloudVision complement these new networking-for-AI platforms. The innovative software suite for AI-for-networking, security, segmentation, visibility, and telemetry brings AI-grade robustness and protection to high-value AI clusters and workloads.
For example, Arista EOS’s Smart AI suite of innovative enhancements now integrates with SmartNIC providers to deliver advanced RDMA-aware load balancing and QoS. Arista AI Analyzer, powered by Arista AVA, automates configuration and improves visibility and intelligent performance analysis of AI workloads.

“Arista’s competitive advantage consistently comes down to our rich operating system and broad product portfolio to address AI networks of all sizes,” says Hugh Holbrook, Chief Development Officer, Arista Networks. “Innovative AI-optimised EOS features enable faster deployment, reduce configuration issues, deliver flow-level performance analysis, and improve AI job completion times for any size of AI cluster.”

The 7060X6 is available now. The 7800R4-AI and 7700R4 DES are in customer testing and will be available in 2H 2024. For more from Arista Networks, click here.
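The port counts quoted above follow directly from the switch capacities, and simple tier arithmetic shows why Arista emphasises one- and two-tier designs. Below is a rough, generic cross-check and leaf-spine sizing sketch (illustrative Clos arithmetic only, not Arista’s design guidance):

```python
def fabric_tbps(ports: int, port_speed_gbps: int) -> float:
    """Aggregate switching capacity implied by a port configuration."""
    return ports * port_speed_gbps / 1000

# Cross-check the stated port counts against the stated capacities.
print(fabric_tbps(64, 800))     # 51.2  -> 7060X6 leaf (or 128 x 400G)
print(fabric_tbps(576, 800))    # 460.8 -> 7800R4-AI spine ("up to 460Tbps")

# Generic two-tier (leaf-spine) sizing: each 64-port leaf splits its ports
# evenly between XPU-facing links and spine uplinks for full bisection
# bandwidth; a 576-port spine layer can then serve up to 576 leaves.
leaf_ports, spine_ports = 64, 576
xpus_per_leaf = leaf_ports // 2
max_leaves = spine_ports
print(xpus_per_leaf * max_leaves)   # 18,432 XPUs at 800G, full bisection

# Larger published figures (10,000+ XPUs in one tier, 100,000+ in two)
# depend on port speed, oversubscription ratio and the distributed
# 7700R4 DES architecture, so treat this as a lower-bound illustration.
```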


