Friday, June 13, 2025

Artificial Intelligence


SELECT warns about demand for electricity from power-hungry AI
SELECT's new President has warned that the demands placed on the electrical network to power AI may become unsustainable as the technology becomes a growing part of society. Mike Stark, who took over the association's reins last week, said the UK’s National Grid could struggle to satisfy the voracious energy needs of AI and the systems it supports.

The 62-year-old, who is Director of Data Cabling and Networks at Member firm OCS M&E Services, joins a growing number of experts who have warned about the new technology’s huge appetite for electricity, which often exceeds what many small countries use in a year. He also questioned whether the UK’s current electrical infrastructure is fit for purpose in the face of the massive predicted increase in demand, not only from the power-hungry data centres supporting AI, but also from the continued rise in electric vehicle (EV) charging units.

Mike says, “AI is becoming more embedded in our everyday lives, from digital assistants and chatbots helping us on websites to navigation apps and autocorrect on our mobile phones. And it is going to become even more prevalent in the near future.

“Data centres, which have many servers as their main components, need electrical power to survive. It is therefore only natural that any talk about building a data centre should begin with figuring out the electrical needs and how to satisfy those power requirements.

“At present, the UK’s National Grid appears to be holding its own, with current increases being met by renewable energy systems. But as technology advances and systems such as AI are introduced, there will come a time when the grid will struggle to support the demand.”

Mike said it is estimated that there could be 1.5 million AI servers by 2027. Running at full capacity, these would consume between 85 and 134 terawatt hours per year – roughly equivalent to the current annual energy demands of countries such as the Netherlands or Sweden.
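Those figures imply a substantial continuous draw per machine. The back-of-the-envelope arithmetic below (an illustration added here, not a SELECT calculation) converts the projected annual consumption into an average power figure per server:

```python
# Convert projected annual AI server energy use into average power draw.
SERVERS = 1_500_000        # estimated AI servers by 2027
HOURS_PER_YEAR = 8_760     # 24 hours x 365 days

for twh in (85, 134):
    fleet_watts = twh * 1e12 / HOURS_PER_YEAR      # average fleet draw, watts
    per_server_kw = fleet_watts / SERVERS / 1e3    # implied continuous kW per server
    print(f"{twh} TWh/yr -> {fleet_watts / 1e9:.1f} GW fleet average, "
          f"~{per_server_kw:.1f} kW per server")
```

On these assumptions, each AI server would draw roughly 6 to 10 kW continuously, several times the draw of a conventional server.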
He adds, “I remember attending an EV training session about 25 years ago and the standing joke was, ‘Where’s all this electricity going to come from?’ We all felt the network needed upgrading then, and now there is extra pressure from the new AI data centres springing up.”

Mike has spent 44 years in the electrical industry, 40 of them in continuous service at the same company: he started at Arthur McKay as a qualified electrician in June 1984 and remains there today in his current role at what is now OCS. He was confirmed as the new SELECT President at the association’s AGM at the Doubletree Edinburgh North Queensferry on Thursday 6 June, taking over from Alistair Grant.

Scality RING solution deployed at SeqOIA medical lab
Scality, a global provider of cyber-resilient storage for the AI era, today announced a large-scale deployment of its RING distributed file and object storage solution to optimise and accelerate the data lifecycle for the high-throughput genomics sequencing laboratory SeqOIA Médecine Génomique. This is the most recent in a series of deployments in which RING serves as a foundational analytics and AI data lake repository for organisations in healthcare, financial services and travel services across the globe.

Selected as part of the France Médecine Génomique 2025 (French Genomic Medicine Plan), SeqOIA is one of two national laboratories integrating whole genome sequencing into the French healthcare system to benefit patients with rare diseases and cancer. SeqOIA adopted Scality RING to aggregate petabyte-scale genetics data used to better characterise pathologies, as well as to guide genetic counselling and patient treatment. RING grants SeqOIA biologists efficient access from thousands of compute nodes to nearly 10 petabytes of data throughout its lifecycle, spanning from lab data to processed data, at accelerated speeds and at a cost three to five times lower than that of all-flash file storage.

“RING is the repository for 90% of our genomics data pipeline, and we see a need for continued growth on it for years to come,” says Alban Lermine, IS and Bioinformatics Director of SeqOIA. “In collaboration with Scality, we have solved our analytics processing needs through a two-tier storage solution, with all-flash access for temporary hot data sets and long-term persistent storage in RING. We trust RING to protect the petabytes of mission-critical data that enable us to carry out our mission of improving care for patients suffering from cancer and other diseases.”

Scality RING powers AI data lakes for other data-intensive industries.
One of the largest publicly held personal lines insurance providers in the US chose RING as its preferred AI data lake repository for insurance claims analytics. The provider chose RING to replace its HDFS (Hadoop Distributed File System) deployment, and has since realised a threefold improvement in space efficiency alongside cost savings, with higher availability through a multi-site RING deployment that supports site failover.

Meanwhile, a multinational IT services company whose technology fuels the global travel and tourism industry is using Scality RING to power its core data lake. RING supports one petabyte of new log data ingested each day to maintain a 14-day rotating data lake. This requires RING to purge (delete) the oldest petabyte each day, while simultaneously supporting tens of gigabytes per second (GB/s) of read access for analysis from a cluster of Splunk indexers.

For data lake deployments, these organisations require trusted and proven solutions with a long-term track record of delivering performance and data protection at petabyte scale. For AI workload processing, they pair RING repositories in an intelligent tiered manner with all-flash file systems, as well as leading AI tools and analytics applications, including Weka.io, HPE Pachyderm, Cribl, Cloudera, Splunk, Elastic, Dremio, Starburst and more. With strategic partners like HPE and HPE GreenLake, Scality is able to deliver managed AI data lakes. For more from Scality, click here.
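The 14-day rotating data lake described above amounts to a rolling retention window: one new daily partition is ingested, and anything older than the window is purged. The sketch below is an illustrative model of that retention logic only, not the actual RING purge mechanism:

```python
from datetime import date, timedelta

# Illustrative rolling-retention logic for a 14-day data lake:
# each day a new ~1 PB daily partition arrives and the oldest falls out.
RETENTION_DAYS = 14

def partitions_to_purge(partitions, today):
    """Return the daily partitions that fall outside the retention window."""
    return [d for d in partitions if (today - d).days >= RETENTION_DAYS]

# Example: 16 daily partitions exist, so two are outside the 14-day window.
today = date(2024, 6, 20)
parts = [today - timedelta(days=n) for n in range(16)]
print(partitions_to_purge(parts, today))
```

In practice the purge runs daily, so exactly one partition ages out per day once the system reaches steady state.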

Cirata to offer native integration with Databricks Unity Catalog
Cirata, the company that automates Hadoop data transfer and integration to modern cloud analytics and AI platforms, has announced the release of Cirata Data Migrator 2.5, which now includes native integration with the Databricks Unity Catalog. Expanding the Cirata and Databricks partnership, the new integration centralises data governance and access control capabilities to enable faster data operations and accelerated time-to-business-value for enterprises.

Databricks Unity Catalog delivers a unified governance layer for data and AI within the Databricks Data Intelligence Platform. Using Unity Catalog, organisations can seamlessly govern their structured and unstructured data, machine learning models, notebooks, dashboards and files on any cloud or platform. By integrating with Databricks Unity Catalog, Cirata Data Migrator unlocks the ability to execute analytics jobs as soon as possible and to modernise data in the cloud. With support for Databricks Unity Catalog’s functionality for stronger data operations, access control, accessibility and search, Cirata Data Migrator automates the large-scale transfer of data and metadata from existing data lakes to cloud storage and database targets, even while changes are being made by the application at the source.

Using Cirata Data Migrator 2.5, users can now select the Databricks agent and define the use of Unity Catalog with Databricks SQL Warehouse. This helps data science and engineering teams maximise the value of their entire data estate while benefiting from their choice of metadata technology in Databricks.

“As a long-standing partner, Cirata has helped many customers in their legacy Hadoop to Databricks migrations,” said Siva Abbaraju, Go-to-Market Leader, Migrations, Databricks.
“Now, the seamless integration of Cirata Data Migrator with Unity Catalog enables enterprises to capitalise on our Data and AI capabilities to drive productivity and accelerate their business value.”

“Cirata is excited by the customer benefits that come from native integration with the Databricks Unity Catalog,” says Paul Scott-Murphy, Chief Technology Officer, Cirata. “By unlocking a critical benefit for our customers, we are furthering the adoption of data analytics, AI and ML, and empowering data teams to drive more meaningful data insights and outcomes.”

This expanded Cirata-Databricks partnership builds on previous product integrations between the two companies. In 2021, the companies partnered to automate metadata and data migration capabilities to Databricks and Delta Lake on Databricks, respectively. With data available for immediate use, the integration eliminated the need to construct and maintain data pipelines to transform, filter and adjust data, along with the significant up-front planning and staging this would otherwise involve.

Cirata Data Migrator is a fully automated solution that moves on-premises HDFS data, Hive metadata, local filesystem data, or cloud data sources to any cloud or on-premises environment, even while those datasets are under active change. It requires zero changes to applications or business operations and moves data of any scale without production system downtime or business disruption, and with zero risk of data loss. Cirata Data Migrator 2.5 is available now with native integration with the Databricks Unity Catalog.

Arista unveils Etherlink AI networking platforms
Arista Networks, a provider of cloud and AI networking solutions, has announced the Arista Etherlink AI platforms, designed to deliver optimal network performance for the most demanding AI workloads, including training and inferencing. Powered by new AI-optimised Arista EOS features, the Arista Etherlink AI portfolio supports AI cluster sizes ranging from thousands to hundreds of thousands of XPUs, with highly efficient one- and two-tier network topologies that deliver superior application performance compared to more complex multi-tier networks, while offering advanced monitoring capabilities including flow-level visibility.

“The network is core to successful job completion outcomes in AI clusters,” says Alan Weckel, Founder and Technology Analyst for 650 Group. “The Arista Etherlink AI platforms offer customers the ability to have a single 800G end-to-end technology platform across front-end, training, inference and storage networks. Customers benefit from leveraging the same well-proven Ethernet tooling, security, and expertise they have relied on for decades while easily scaling up for any AI application.”

Arista’s Etherlink AI platforms

The 7060X6 AI Leaf switch family employs Broadcom Tomahawk 5 silicon, with a capacity of 51.2Tbps and support for 64 800G or 128 400G Ethernet ports. The 7800R4 AI Spine is the fourth generation of Arista’s flagship 7800 modular systems. It implements the latest Broadcom Jericho3-AI processors with an AI-optimised packet pipeline and offers non-blocking throughput with the proven virtual output queuing architecture. The 7800R4-AI supports up to 460Tbps in a single chassis, which corresponds to 576 800G or 1152 400G Ethernet ports. The 7700R4 AI Distributed Etherlink Switch (DES) supports the largest AI clusters, offering customers massively parallel distributed scheduling and congestion-free traffic spraying based on the Jericho3-AI architecture.
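The quoted capacities and port counts are mutually consistent; a quick arithmetic check (added here for illustration, not from Arista):

```python
# Check that port count x port speed matches the quoted switch capacity.
def capacity_tbps(ports, gbps_per_port):
    return ports * gbps_per_port / 1000  # Gbps -> Tbps

print(capacity_tbps(64, 800))    # 7060X6, 64 x 800G ports
print(capacity_tbps(128, 400))   # 7060X6, 128 x 400G ports
print(capacity_tbps(576, 800))   # 7800R4-AI, 576 x 800G ports
```

The 7060X6 figures land exactly on 51.2Tbps in both port configurations, and 576 x 800G works out to 460.8Tbps, matching the quoted "up to 460Tbps" per chassis.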
The 7700 represents the first in a new series of ultra-scalable, intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters. A single-tier network topology with Etherlink platforms can support over 10,000 XPUs. With a two-tier network, Etherlink can support more than 100,000 XPUs. Minimising the number of network tiers is essential for optimising AI application performance, reducing the number of optical transceivers, lowering cost and improving reliability. All Etherlink switches support the emerging Ultra Ethernet Consortium (UEC) standards, which are expected to provide additional performance benefits when UEC NICs become available in the near future.

“Broadcom is a firm believer in the versatility, performance, and robustness of Ethernet, which makes it the technology of choice for AI workloads,” says Ram Velaga, Senior Vice President and General Manager, Core Switching Group, Broadcom. “By leveraging industry-leading Ethernet chips such as Tomahawk 5 and Jericho3-AI, Arista provides the ideal accelerator-agnostic solution for AI clusters of any shape or size, outperforming proprietary technologies and providing flexible options for fixed, modular, and distributed switching platforms.”

Arista EOS Smart AI Suite

The rich features of Arista EOS and CloudVision complement these new networking-for-AI platforms. This software suite, covering AI for networking, security, segmentation, visibility and telemetry, brings AI-grade robustness and protection to high-value AI clusters and workloads. For example, Arista EOS’s Smart AI suite of enhancements now integrates with SmartNIC providers to deliver advanced RDMA-aware load balancing and QoS. Arista AI Analyzer, powered by Arista AVA, automates configuration and improves visibility and intelligent performance analysis of AI workloads.
“Arista’s competitive advantage consistently comes down to our rich operating system and broad product portfolio to address AI networks of all sizes,” says Hugh Holbrook, Chief Development Officer, Arista Networks. “Innovative AI-optimised EOS features enable faster deployment, reduce configuration issues, deliver flow-level performance analysis, and improve AI job completion times for any size of AI cluster.”

The 7060X6 is available now. The 7800R4-AI and 7700R4 DES are in customer testing and will be available in 2H 2024. For more from Arista Networks, click here.

Arista announces technology demonstration of AI data centres
Arista Networks has announced a technology demonstration of AI data centres that aligns compute and network domains as a single managed AI entity, in collaboration with NVIDIA. To build optimal generative AI networks with lower job completion times, customers can configure, manage, and monitor AI clusters uniformly across key building blocks, including networks, NICs, and servers. This demonstrates the first step towards a multi-vendor, interoperable ecosystem that enables control and coordination between AI networking and AI compute infrastructure.

As the size of AI clusters and large language models (LLMs) grows, the complexity and sheer volume of disparate parts of the puzzle grow apace. GPUs, NICs, switches, optics, and cables must all work together to form a holistic network. Customers need uniform controls between their AI servers hosting NICs and GPUs, and the AI network switches at different tiers. All these elements are reliant upon each other for proper AI job completion, yet operate independently. This can lead to misconfiguration or misalignment between aspects of the overall ecosystem, such as between NICs and the switch network, which can dramatically impact job completion time, since network issues can be very difficult to diagnose. Large AI clusters also require coordinated congestion management to avoid packet drops and under-utilisation of GPUs, as well as coordinated management and monitoring to optimise compute and network resources in tandem.

At the heart of this solution is an Arista EOS-based agent that enables the network and the host to communicate with each other and coordinate configurations to optimise AI clusters. Using a remote AI agent, EOS running on Arista switches can be extended to directly attached NICs and servers, allowing a single point of control and visibility across an AI data centre as a holistic solution.
This remote AI agent, hosted directly on an NVIDIA BlueField-3 SuperNIC or running on the server and collecting telemetry from the SuperNIC, allows EOS, on the network switch, to configure, monitor, and debug network problems on the server, ensuring end-to-end network configuration and QoS consistency. AI clusters can now be managed and optimised as a single homogeneous solution.

John McCool, Chief Platform Officer for Arista Networks, comments, “Arista aims to improve efficiency of communication between the discovered network and GPU topology to improve job completion times through coordinated orchestration, configuration, validation, and monitoring of NVIDIA accelerated compute, NVIDIA SuperNICs, and Arista network infrastructure.”

This new technology demonstration highlights how an Arista EOS-based remote AI agent allows the combined, interdependent AI cluster to be managed as a single solution. EOS running in the network can now be extended to servers or SuperNICs via remote AI agents, enabling instantaneous tracking and reporting of performance degradation or failures between hosts and networks, so issues can be rapidly isolated and their impact minimised. Since EOS-based network switches are constantly aware of the accurate network topology, extending EOS down to SuperNICs and servers with the remote AI agent further enables coordinated optimisation of end-to-end QoS between all elements in the AI data centre, reducing job completion time.

Zeus Kerravala, Principal Analyst at ZK Research, adds, “Best-of-breed Arista networking platforms with NVIDIA’s compute platforms and SuperNICs enable coordinated AI data centres.
The new ability to extend Arista’s EOS operating system with remote AI agents on hosts promises to solve a critical customer problem of AI clusters at scale, by delivering a single point of control and visibility to manage AI availability and performance as a holistic solution.”

Arista will demonstrate the AI agent technology at the Arista IPO tenth anniversary celebration at the NYSE on 5 June, with customer trials expected in the second half of 2024. For more from Arista, click here.

DataStax launches hyper-converged data platform
DataStax, a generative AI data company, today announced the upcoming launch of the DataStax HCDP (Hyper-Converged Data Platform), alongside the upcoming release of DataStax Enterprise (DSE) 6.9. Both products enable customers to add generative AI and vector search capabilities to their self-managed, enterprise data workloads.

The company describes DataStax HCDP as "the future of self-managed data". The platform is designed for modern data centres and Hyper-Converged Infrastructure (HCI) to support the full breadth of data workloads and AI systems. It supports powerful on-premises enterprise data systems built to AI-enable data, and is ideal for enterprise operators and architects who want to take advantage of data centre modernisation investments to reduce operating costs and operational overhead.

HCDP brings OpenSearch and Apache Pulsar for generative AI RAG knowledge retrieval

The combination of OpenSearch’s widely adopted enterprise search capabilities with the high-performance vector search capabilities of the DataStax cloud-native, NoSQL Hyper-Converged Database enables users to retrieve data in highly flexible ways, catalysing the ability to get RAG and knowledge applications into production.

“With more than 2,000 contributors, including upwards of 200 maintainers spanning 19 organisations, OpenSearch has built a vibrant, growing community and is used by companies around the world,” says Mukul Karnik, General Manager, Search Services at AWS. “We’ve seen a growing number of companies building AI-powered capabilities using OpenSearch, and RAG workloads in particular can benefit from OpenSearch’s extensive full-text search capabilities.
We are excited to have DataStax join the community and offer OpenSearch as part of HCDP, and we look forward to seeing how their customers use it to build a range of new generative AI-powered services and applications.”

Hyper-converged streaming (HCS), built with Apache Pulsar, is designed to provide data communications for modern infrastructure. With native support for inline data processing and embedding, HCS brings vector data to the edge, allowing for faster response times and enabling event data for better contextual generative AI experiences. With the ability to consolidate workloads from legacy applications, HCS modernises application communication and enables users to operate across multiple cloud environments, or between on-premises and cloud environments, providing a high-performance, scalable data communications layer that simplifies data integration across all environments.

HCDP provides rapid provisioning and powerful, intuitive data APIs built around the DataStax one-stop GenAI stack for enterprise retrieval-augmented generation (RAG), and it’s all built on the proven, open-source Apache Cassandra platform.

DataStax 6.9 brings generative AI capabilities to existing on-premises data

DataStax Enterprise 6.9 offers enterprise operators and architects with massive on-premises Cassandra and DSE workloads the fastest upgrade path to lower total cost of ownership and operational overhead, while adding powerful GenAI capabilities to their existing on-premises data. Its vector add-on brings vector search capabilities to both existing DSE and lift-and-shift Cassandra workloads, and the addition of DataStax Mission Control makes the operation of on-premises data workloads dramatically easier, with cloud-native operations and observability on Kubernetes.

“We're looking forward to DSE 6.9 for its advancements in real-time digital banking services. Currently, we use DSE for critical tasks like data offloading and real-time data services.
Its full-text search enhances customer engagement without adding to core system loads,” notes Marcin Dobosz, Director of Technology, Neontri. “With features like the Vector Add-On and DataStax Mission Control, DSE 6.9 aligns with our goal of modernising technology stacks for financial institutions, enhancing productivity, search relevance, and operational efficiency to better meet clients' needs.”

Ed Anuff, Chief Product Officer, DataStax, concludes, “We’re delivering the future of self-managed data for modern infrastructure and AI systems, and enabling enterprises to get the most out of their data centre modernisation and total cost of ownership. In addition to supporting all of the new HCI investments on the most powerful systems in new AI data and edge clouds, we’re also enabling users to benefit from powerful integrations with leading technologies like OpenSearch and Apache Pulsar.”
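At the core of the RAG retrieval that vector search supports is nearest-neighbour ranking of embeddings. The sketch below is a conceptual illustration of that step only; it is not the DataStax HCDP or DSE 6.9 API, the document names and vectors are invented, and production systems use approximate indexes rather than a linear scan:

```python
import math

# Conceptual sketch of the vector-search step in a RAG pipeline:
# rank stored embeddings by cosine similarity to a query embedding.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy 2-dimensional "embeddings" for three hypothetical documents.
docs = {
    "doc1": [0.9, 0.1],
    "doc2": [0.1, 0.9],
    "doc3": [0.7, 0.7],
}

def top_k(query, k=2):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.0]))
```

The retrieved documents are then passed to the language model as context, which is what lets a RAG application ground its answers in enterprise data.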

"We’ve got to talk about AI"
Centiel, a global Uninterruptible Power Supply (UPS) manufacturer, looks at the role of AI in relation to data centres while exploring some of the challenges and opportunities.

There is much talk about how data centres will deal with AI (machine learning). One major challenge is that AI workloads can require around three times the power of conventional servers. Facilities will also need extra space to house the additional equipment required to manage and power the data.

We are already seeing AI in use in our daily lives. On LinkedIn we are invited to use AI to help us write a social media post, and ChatGPT has caught the imagination of copywriters across the world. However, this is just the start: AI’s capability will grow exponentially over the next few years. This phenomenon is set against a society already struggling to provide enough energy and infrastructure for existing services. Data centres are already placing a huge strain on the national grid, and we also have EV chargers and additional homes to add to the grid. We must look at alternatives.

At Centiel, members of our team have been at the forefront of technology for many years, bringing to market the first three-phase transformerless UPS and the first, second, third and fourth generations of true modular, hot-swappable three-phase UPS. Most recently, we turned our attention to helping data centres achieve net-zero targets. Following four years’ development, we recently launched StratusPower, a UPS designed to provide complete peace of mind in relation to power availability while helping data centres become more sustainable. StratusPower shares all the benefits of our award-winning three-phase, true modular UPS CumulusPower - including '9 nines' (99.9999999%) availability to effectively eliminate system downtime, class-leading 97.6% on-line efficiency to minimise running costs, and true 'hot swap' modules to eliminate human error in operation - but now also includes long-life components to improve sustainability.
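For context on what '9 nines' means in practice, the conversion below (illustrative arithmetic added here, not a Centiel specification) turns an availability percentage into an annual downtime budget:

```python
# Convert an availability figure into the implied downtime per year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def downtime_seconds(availability):
    return (1 - availability) * SECONDS_PER_YEAR

print(downtime_seconds(0.999999999))  # nine nines
print(downtime_seconds(0.99999))      # five nines, for comparison
```

Nine nines corresponds to roughly 32 milliseconds of downtime per year, against around five minutes per year for five nines.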
Large data centres can need 100MW or more of power to support their load, and in the future they will need to generate their own power through renewables such as photovoltaic arrays, perhaps located on the roofs of their buildings, or wind farms located in adjacent fields. Currently, mains AC power is rectified to create a DC bus that is used to charge batteries and provide an input to an inverter. But what about a future where the DC bus can be supplied from mains power and/or renewable sources? There is little doubt that future grid instability and unreliability will need to be corrected by the use of renewables, and StratusPower is ready to meet this future. The product is built with the future of renewables in mind, ensuring stability and reliability as we transition to cleaner energy sources. With a power density greater than 1MW per m², StratusPower also offers a compact solution.

AI is set to put more strain on infrastructure and power supplies that are already struggling to cope. However, the technology is now available to harness alternative energy sources, and our team of trusted advisors at Centiel can discuss the options to help future-proof facilities in the face of the AI revolution. We’ve got to talk about AI, so why not give Centiel’s team a call? To learn more, visit www.centiel.co.uk. For more from Centiel, click here.

Digital Realty partners with Oracle to accelerate AI growth
Digital Realty, a global provider of cloud and carrier-neutral data centre, colocation, and interconnection technologies, has announced a collaboration with Oracle to accelerate the growth and adoption of artificial intelligence (AI) among enterprises. The strategic collaboration aims to develop hybrid integrated innovations that address data gravity challenges, expedite time to market for enterprises deploying next-generation AI services, and unlock data- and AI-based business outcomes.

As part of this collaboration, Oracle will deploy critical GPU-based infrastructure in a dedicated Digital Realty data centre in Northern Virginia. This deployment, which leverages PlatformDIGITAL - Digital Realty’s open, purpose-built global data centre solution - will cater to a wide range of enterprises and AI customers, helping them to address critical infrastructure challenges, including those experienced with NVIDIA and AMD deployments. By leveraging the expertise and resources of both Oracle and Digital Realty, customers with mixed requirements can benefit from a tailored and efficient offering that meets their specific needs for GPU-based infrastructure.

This announcement further strengthens Digital Realty's existing partnership with Oracle, which currently encompasses multiple deployments across the globe, 11 Oracle Cloud Infrastructure (OCI) FastConnect points of presence, a global ServiceFabric presence, and a recent Oracle Solution Centre deployment in one of Digital Realty’s Frankfurt data centres. In the fourth quarter of last year, Digital Realty successfully implemented an OCI Dedicated Region deployment for a major financial services customer, showcasing the potential of the collaboration between Oracle and Digital Realty in meeting enterprise customers' hybrid cloud requirements.
Patrick Cyril, Global Vice President, Technical Sales & Customer Excellence – Revenue Operations, Oracle, says, “We’re excited to be working with Digital Realty to bring innovative solutions to market that empower our enterprise customers’ workloads and their ecosystems to harness the boundless possibilities of AI. Together, we're not just pioneering technology; we're unlocking a future where every challenge is met with unparalleled innovation and every opportunity is maximised.”

Chris Sharp, Chief Technology Officer at Digital Realty, comments, “We’re delighted to build upon our relationship with Oracle and enable the next generation of hybrid and private AI adoption among enterprises. Together, we're bringing the extensive capabilities of the cloud to enterprises’ private data sets through secure interconnection, unlocking new data-driven business outcomes.”

Digital Realty states that its collaboration with Oracle marks a significant step forward in the advancement of AI technologies. The company hopes that by combining their expertise and resources, the two partners can revolutionise the AI landscape and empower enterprises to unlock the full potential of their data. For more from Digital Realty, click here.

STACKIT and Dremio partner to pioneer data sovereignty
Dremio, the unified lakehouse platform for self-service analytics and AI, and STACKIT, a data-sovereign cloud provider in Europe and part of the IT organisation within Schwarz Group, today announced a strategic partnership that provides European organisations with the first fully managed, cloud-based modern lakehouse offering capable of meeting today’s stringent data residency requirements. Unveiled at Dremio’s annual user conference, Subsurface Live, this collaboration reportedly marks a significant milestone in STACKIT's mission to expand its expertise and product range in data and AI across Europe.

Data residency requirements are the legal and regulatory stipulations that dictate where a company's data must physically reside. This means that the physical servers hosting a business's data need to be located within a specified country or region to comply with local laws. With this partnership, STACKIT not only embraces open standards like Apache Iceberg, but also reaffirms its commitment to empowering customers with data sovereignty and freedom from vendor lock-in. By leveraging Dremio's unified lakehouse platform, STACKIT aims to overhaul data management by enabling organisations to transition seamlessly from traditional data lakes to high-performance, agile data lakehouses.

Walter Wolf, Board Member of Schwarz IT, comments, “Our approach to data sovereignty hinges on leveraging open standards to facilitate seamless integration with applications spanning various business domains.
Dremio serves as a solid foundation for this endeavour due to its emphasis on open formats such as Apache Iceberg.”

Key benefits of the STACKIT and Dremio partnership include:

● Lower total cost of ownership: Customers can expect up to an 80% reduction in costs associated with analytics and AI projects, thanks to Dremio's efficient data processing capabilities.
● Faster time to market for analytics and AI: With Dremio, analytics and AI projects will see a significant boost in productivity, enabling organisations to complete projects five to ten times faster than conventional methods.
● Improved discoverability and governance via the Iceberg Data Catalogue: Leveraging Git-inspired versioning, Dremio provides a robust data catalogue that supports data integrity, governance and traceability throughout the entire data lifecycle.
● Flexibility and scalability: With the ability to run on any infrastructure, Dremio offers unparalleled flexibility and scalability, allowing customers to adapt to changing business needs seamlessly.

With its open lakehouse architecture, Dremio allows users to access and process data independently on the STACKIT platform through a data lakehouse service that ensures data protection and promotes data sovereignty. Together, the two companies enable businesses to derive valuable insights from sensitive data while adhering to the highest standards of data privacy and security. Thanks to the partnership, STACKIT customers can expect a significant cost reduction, flexible and consumption-based billing options, and overall affordability in their data residency and sovereignty efforts.

Andreas Vogels, Central Europe Lead at Dremio, comments, "The collaboration between Dremio and STACKIT not only empowers organisations with the freedom to scale their data operations seamlessly but also ensures they can derive actionable insights from their data without constraints, no matter where the data resides.
By leveraging Dremio's cloud-native architecture and STACKIT's commitment to digital sovereignty, customers can unlock the full potential of their data while maintaining control and flexibility in their cloud strategy.” Customers will be able to access the new data lakehouse service via a private preview, with support from STACKIT's data and AI consulting team to guide them through the initial steps. For more from Dremio, click here.
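The "Git-inspired versioning" behind the Iceberg data catalogue described above can be illustrated with a toy model. The sketch below is not Dremio's implementation — it is a minimal, hypothetical in-memory catalogue showing the core idea: branches are pointers to immutable commits, so writes on one branch never disturb readers on another, and publishing changes is an atomic branch update.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Commit:
    """An immutable snapshot mapping table names to metadata locations."""
    tables: dict
    parent: Optional["Commit"] = None


class VersionedCatalog:
    """Toy in-memory data catalogue with Git-style branches.

    Each branch is just a pointer to an immutable commit, so updating a
    table on one branch never disturbs readers on another, and a merge
    publishes changes atomically by moving the branch pointer.
    """

    def __init__(self):
        self.branches = {"main": Commit(tables={})}

    def create_branch(self, name, from_branch="main"):
        # A new branch initially shares the source branch's head commit.
        self.branches[name] = self.branches[from_branch]

    def commit_table(self, branch, table, metadata_location):
        # Writes never mutate old commits; they create a new head.
        head = self.branches[branch]
        self.branches[branch] = Commit(
            tables={**head.tables, table: metadata_location}, parent=head
        )

    def read(self, branch, table):
        return self.branches[branch].tables.get(table)

    def merge(self, source, target):
        # Simplified "fast-forward" merge: atomically adopt the source head.
        self.branches[target] = self.branches[source]


# Example: isolate an ETL rewrite on a branch, then publish it.
cat = VersionedCatalog()
cat.commit_table("main", "sales", "s3://lake/sales/v1.metadata.json")
cat.create_branch("etl")
cat.commit_table("etl", "sales", "s3://lake/sales/v2.metadata.json")
assert cat.read("main", "sales").endswith("v1.metadata.json")  # readers unaffected
cat.merge("etl", "main")  # publish v2 atomically
```

The table and path names are invented for the example; the point is the mechanism — immutable commits plus movable branch pointers — which is what gives such a catalogue its traceability and safe, isolated experimentation.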

New micro modular data centre system with AI features
Vertiv, a global provider of critical digital infrastructure and continuity solutions, today introduced the Vertiv SmartAisle 3, a micro modular data centre system that uses Artificial Intelligence (AI) to enable more intelligent, efficient operations within the data centre environment. Now available in Southeast Asia, Australia and New Zealand, the SmartAisle 3 can be configured for up to 120 kW of total IT load and is ideal for a wide range of industry applications, including banking, healthcare, government and transportation.
Building on the previous Vertiv SmartAisle technology, the SmartAisle 3 is a fully integrated data centre ecosystem consisting of racks, uninterruptible power supply (UPS), thermal management and monitoring, and physical security. The latest iteration of the SmartAisle comes with AI functionality and self-learning features that help significantly optimise the micro data centre's operational and energy efficiency.
Each carriage or rack cabinet has a Smart Power Output Device, or POD, which seamlessly manages power distribution to rack PDUs and serves as a monitoring gateway overseeing carriage conditions including temperature, humidity and door status. With built-in cabling integration and front and rear carriage sensors, the SmartAisle 3 also eliminates the hassle of complex on-site cabling installation and saves on data centre white space.
Moreover, the SmartAisle 3 further reduces the complexity of on-site setup with its one-click networking feature, which effortlessly configures the data centre system. It also has an AI self-learning function that intelligently monitors and adjusts temperature depending on the operating environment, helping to achieve energy savings of as much as 20% compared to systems without AI features, while maintaining optimum operating conditions.
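The "AI self-learning" temperature behaviour described above can be sketched in outline. The following is not Vertiv's algorithm — it is a generic, illustrative controller that nudges the cooling setpoint upward while rack inlet temperatures have headroom (a higher setpoint generally means less cooling energy) and backs off quickly when the inlet limit is approached; all thresholds and step sizes are assumed values.

```python
class AdaptiveCoolingController:
    """Illustrative self-learning cooling setpoint controller.

    NOT Vertiv's algorithm: a generic sketch in which the supply-air
    setpoint creeps upward while rack inlet temperatures have headroom
    (a higher setpoint generally reduces cooling energy) and drops
    quickly when the inlet limit is approached. All thresholds are
    assumed values chosen for illustration only.
    """

    def __init__(self, setpoint=20.0, inlet_limit=27.0,
                 min_setpoint=18.0, max_setpoint=26.0):
        self.setpoint = setpoint            # supply-air setpoint, degrees C
        self.inlet_limit = inlet_limit      # rack inlet ceiling, degrees C
        self.min_setpoint = min_setpoint
        self.max_setpoint = max_setpoint

    def update(self, inlet_temp):
        """Adjust the setpoint from one rack inlet temperature reading."""
        margin = self.inlet_limit - inlet_temp
        if margin < 1.0:
            self.setpoint -= 1.0            # near the limit: back off fast
        elif margin > 3.0:
            self.setpoint += 0.2            # ample headroom: creep up slowly
        # Clamp to the equipment's safe operating range.
        self.setpoint = max(self.min_setpoint,
                            min(self.max_setpoint, self.setpoint))
        return self.setpoint


# Example: steady headroom lets the setpoint (and energy use) drift up,
# while a single hot reading pulls it back down immediately.
ctrl = AdaptiveCoolingController()
for _ in range(5):
    ctrl.update(22.0)       # 5 degrees of headroom each reading
ctrl.update(26.5)           # hot reading: setpoint drops by a full degree
```

The asymmetry — slow increases, fast decreases — is the design point: energy savings accumulate gradually, but thermal safety margins are restored in a single control step.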
Cheehoe Ling, Vice President, Product Management at Vertiv Asia, explains, “As demand for data-intensive applications continues to grow rapidly, many businesses require their data centre infrastructure to be deployed quickly and efficiently, and to be as scalable as possible. Vertiv enriched the Vertiv SmartAisle 3 with AI features to help our customers simplify their data centre operations, so they can have greater flexibility in their business operations and achieve their energy efficiency goals.”
“The speed at which AI is evolving not only demands a workforce that is technically proficient, but also a digital economy built on infrastructure designed to scale,” says Faraz Ali, Product Manager IMS at Vertiv Australia and New Zealand. “By integrating AI directly into data centre operations, we’re giving businesses across Australia and New Zealand the opportunity to keep up with their AI ambitions. The new system’s automated intelligence makes light work of balancing critical infrastructure at optimum energy efficiency and power resilience, giving time back to critical talent while supporting scalability as workload requirements expand.”
With an intuitive 15-inch touchscreen control panel and an option to upgrade to a 95-inch local door display, the Vertiv SmartAisle 3 provides enhanced system visibility and assists in troubleshooting to enable the system to operate at peak efficiency. The SmartAisle 3 includes the Vertiv Liebert APM modular uninterruptible power supply (UPS) system, Vertiv Liebert CRV 4 in-row cooling, and the Vertiv Liebert RDU 501 intelligent monitoring system. It also comes pre-installed with a flexible busbar system that streamlines the overall system design by reducing power distribution installation complexity. The SmartAisle 3 is part of Vertiv’s growing portfolio of flexible, fully integrated modular solutions.
With its combination of Environmental Management System (EMS), AI functionality and 'carriage' type rack architecture, the SmartAisle 3 can help customers boost operational efficiency through comprehensive intelligent monitoring and management, ease of installation, quick deployment time and highly efficient operating conditions, thus allowing customers to better adapt to the needs of today’s diverse compute environments. For more from Vertiv, click here.


