SambaNova, Intel unveil hybrid AI platform

Author: Joe Peck

AI hardware and software company SambaNova and chipmaker Intel have announced a new hybrid-chip platform designed to ease data centre capacity constraints linked to AI workloads.

The architecture combines GPUs for prefill (the initial processing of an input prompt), Intel Xeon 6 processors for system control and workload execution, and SambaNova’s reconfigurable dataflow units (RDUs) for decode-stage inference, where output tokens are generated.

The platform is expected to be available in the second half of 2026 for enterprise, cloud, and sovereign AI deployments.

The design targets agent-based AI workloads, which require coordinated processing across multiple stages, including data input, model inference, and execution of external tools and applications.

Hybrid approach to AI infrastructure

The platform reflects a shift towards heterogeneous computing in data centres, where different processor types are used for specific tasks rather than relying solely on GPUs.

In this model, GPUs handle the initial processing of prompts, while RDUs manage high-throughput inference tasks. Xeon 6 processors act as both the host system and execution layer, coordinating workloads, running code, and managing interactions with external systems.
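The division of labour described above can be sketched in simplified pseudocode. This is purely illustrative: the class and function names below are hypothetical, not a real SambaNova or Intel API, and the "prefill" and "decode" bodies are stand-ins for the actual model computation.

```python
# Illustrative sketch of a disaggregated inference pipeline:
# GPU handles prefill, RDU handles decode, CPU hosts and orchestrates.
# All names here are hypothetical, not a vendor API.

from dataclasses import dataclass, field


@dataclass
class Request:
    prompt: str
    kv_cache: list = field(default_factory=list)  # built during prefill
    output: str = ""


def prefill_on_gpu(req: Request) -> Request:
    # GPU stage: process the whole prompt in one pass,
    # producing the cached state the decode stage will read.
    req.kv_cache = req.prompt.split()
    return req


def decode_on_rdu(req: Request, max_tokens: int = 3) -> Request:
    # RDU stage: generate output tokens one at a time,
    # extending the cached state on each step.
    for _ in range(max_tokens):
        req.output += f"<tok:{len(req.kv_cache)}>"
        req.kv_cache.append(req.output)
    return req


def orchestrate_on_cpu(prompt: str) -> str:
    # CPU (host) stage: coordinate the workload, routing the request
    # through prefill and decode, then hand results to external tools.
    req = Request(prompt=prompt)
    req = prefill_on_gpu(req)
    req = decode_on_rdu(req)
    return req.output
```

The point of the split is that prefill is compute-bound and batch-friendly, while decode is sequential and throughput-sensitive, so each maps to different hardware.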

Rodrigo Liang, CEO and co-founder of SambaNova Systems, explains, “Agentic AI is moving into production, and the winning pattern we’re seeing is GPUs to start the job, Intel Xeon 6 to run it, and SambaNova RDUs to finish it fast.

“Together with Intel, we’re giving customers a blueprint they can deploy in existing air-cooled data centres, with broad x86 coverage for the coding agents and tools they already use today.”

Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group at Intel, adds, “The data centre software ecosystem is built on x86 and it runs on Xeon, providing a mature, proven foundation that developers, enterprises, and cloud providers rely on at scale.

“Workloads of the future will require a heterogeneous mix of computing, and this collaboration with SambaNova delivers a cost-efficient, high-performance inference architecture designed to meet customer needs at scale, powered by Xeon 6.”

The companies state that the approach is intended to support increasing demand for AI inference, particularly as agent-based systems move from testing into production environments.

Additional industry participants highlighted the growing need for scalable infrastructure to support coding agents and similar workloads, which rely on CPUs for execution alongside accelerators for inference.

The announcement marks an expansion of the existing collaboration between SambaNova and Intel, with a focus on enabling large-scale AI deployment across data centre environments.


