AI workloads are reshaping the data centre. As back-end traffic scales and racks densify, the interconnect choices you make today will determine the performance, efficiency, and scalability of tomorrow’s AI infrastructure.
In this fast, focused 30-minute live tech talk, Siemon’s experts will share a practical, cabling-led view to help you plan smarter and deploy faster.
Drawing on field experience and insights from large-scale AI deployments, the session will give you clear context and actionable guidance before your next design, upgrade, or AI back-end project begins.
• AI market overview & nomenclature: A clear look at scale-up vs scale-out networks and where each fits in AI planning.
• Reference designs & deployment sizes: Common GPU pod approaches (including air-cooled and liquid-to-chip) and what they mean for density and footprint.
• AI network connection points: Critical interconnect considerations for high-performance AI back-end networks.
• AI network cabling considerations: What to evaluate when selecting cables for demanding 400G/800G workloads.
• Cabling options that improve efficiency: Real-world examples of how architecture choices affect deployment efficiency, including a 1024-GPU comparison.
• A clear understanding of high-density interconnect options.
• Insight into proven deployment strategies and the trade-offs that matter.
• Confidence to make informed decisions that scale with AI workloads.
Speaker: Ryan Harris, Director, Systems Engineering (High-Speed Interconnect), Siemon
Date: Thursday, 2 October 2025
Time: 2:00–2:30 PM BST | 3:00–3:30 PM CEST
This is the must-see tech talk for anyone planning, designing, or deploying high-density AI data centres. Don’t miss your chance to get the insight that can accelerate your next project and keep your infrastructure ready for the demands ahead.
Register now to secure your spot.