Artificial Intelligence in Data Centre Operations


Singtel collaborates with NVIDIA to bring AI to Singapore
Singtel has joined the NVIDIA Partner Network Cloud Programme and will bring NVIDIA’s full-stack AI platform to enterprises across the region. This NVIDIA-powered AI Cloud will be hosted by Singtel’s Nxera regional data centre business, which is developing a new generation of sustainable, hyper-connected, AI-ready data centres.

Bill Chang, CEO of Nxera and Singtel's Digital InfraCo unit, says, “We are pleased to collaborate with AI leader, NVIDIA, to deliver AI infrastructure services, democratising access for enterprises, start-ups, government agencies and research organisations to leverage the power of AI sustainably within our purpose-built AI data centres. This tie-up provides an easy on-ramp for enterprises across all industries which can accelerate their development of generative AI, large language models, AI finetuning and other AI workloads. Together with our sustainable AI data centres, 5G network platform and our AI cluster with NVIDIA, these form part of our next-generation digital infrastructure to support AI adoption and digital transformation in Singapore and the region.”

Ronnie Vasishta, SVP for Telecom at NVIDIA, says, “Our collaboration with Singtel will combine technologies and expertise to facilitate the development of robust AI infrastructure in Singapore and throughout the region. In addition to supporting the Singapore government’s AI strategy, this will empower enterprises, regional customers and start-ups with advanced AI capabilities.”

Singtel's upcoming hyper-connected, green 58MW data centre Tuas in Singapore will be one of the first AI-ready data centres when it comes into operation in early 2026. Data centre Tuas will offer a high-density environment suited to AI workloads and will operate at a Power Usage Effectiveness (PUE) of 1.23 at full load, making it one of the most efficient in the industry. Besides data centre Tuas, Singtel is developing two other data centre projects in Indonesia and Thailand. The NVIDIA GPU clusters will run in existing upgraded data centres initially and can be expanded into the new AI data centres when they are ready for service.
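For readers less familiar with the metric, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, so a PUE of 1.23 means roughly 19% of the facility's power goes to cooling and other overheads. A minimal sketch of the arithmetic (Python, with an illustrative IT load; the 58MW figure above is the facility's stated capacity, not a confirmed IT load):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

def overhead_share(pue_value: float) -> float:
    """Fraction of total facility power consumed by non-IT overhead (cooling, power distribution, lighting)."""
    return (pue_value - 1.0) / pue_value

if __name__ == "__main__":
    it_load_kw = 10_000.0            # hypothetical 10 MW IT load, for illustration only
    total_kw = it_load_kw * 1.23     # what a PUE of 1.23 implies for total facility draw
    print(f"PUE = {pue(total_kw, it_load_kw):.2f}")        # 1.23
    print(f"Overhead share = {overhead_share(1.23):.1%}")  # ~18.7% of total facility power
```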

Growth of AI creates unprecedented demand for global data centres
As the global economy continues to rapidly adopt Artificial Intelligence (AI), the infrastructure that supports these systems must keep pace. Consumers and businesses are expected to generate twice as much data in the next five years as all the data created over the past 10 years. This growth presents both an opportunity and a challenge for real estate investors, developers and operators. JLL’s Data Centers 2024 Global Outlook explores how data centres need to be designed, operated and sourced to meet the evolving needs of the global economy.

With the growing demands of AI, data centre storage capacity is expected to grow from 10.1 zettabytes (ZB) in 2023 to 21.0ZB in 2027, a five-year compound annual growth rate of 18.5%. Not only will this increased storage generate a need for more data centres, but generative AI’s greater energy requirements, ranging from 300 to 500MW+, will also require more energy-efficient designs and locations. The need for more power will require data centre operators to increase efficiency and work with local governments to find sustainable energy sources to support data centre needs.

“As the data centre industry grapples with power challenges and the urgent need for sustainable energy, strategic site selection becomes paramount in ensuring operational scalability and meeting environmental goals,” says Jonathan Kinsey, EMEA Lead and Global Chair, Data Centre Solutions, JLL. “In many cases, existing grid infrastructure will struggle to support the global shift to electrification and the expansion of critical digital infrastructure, making it increasingly important for real estate professionals and developers to work hand in hand with partners to secure adequate future power.”

Sustainable data centre design and operations solutions
AI-specialised data centres look different from conventional facilities and may require operators to plan, design and allocate power resources based on the type of data processed or the stage of generative AI development. As the amount of computing equipment installed and operated continues to increase with AI demand, heat generation will surpass current standards. Since cooling typically accounts for roughly 40% of an average data centre’s electricity use, operators are shifting from traditional air-based cooling methods to liquid cooling. Providers have shown that liquid cooling can deliver significant power reductions, as high as 90%, while improving capability and space efficiency.

“In addition to location and design considerations, data centre operators are starting to explore alternative power sourcing strategies for onsite power generation including small modular reactors (SMRs), hydrogen fuel cells and natural gas,” says Andy Cvengros, Managing Director, US Data Centre Markets, JLL. “With power grids becoming effectively tapped out and transformers having more than three-year lead times, operators will need to innovate.”

Global investment in data centres and power
To support these requirements, critical investment in power infrastructure is needed across the globe. In Europe, one-third of the grid infrastructure is over 40 years old, requiring an estimated €584bn of investment by 2030 to meet the European Union’s green goals. In the United States, meeting energy transition goals to upgrade the grid and feed more renewable energy into the power supply will require an estimated $2 trillion. Data centres’ rapid growth is also putting pressure on limited energy resources in many countries.
In Singapore, for example, the government enacted a moratorium to temporarily halt construction in certain regions to carefully review new data centre proposals and ensure alignment with the country’s sustainability goals. The global energy conundrum presents both opportunities and challenges to commercial real estate leaders with a stake in the data centre sector. Generative AI will continue to fuel demand for specialised and redesigned data centres, and developers and operators who can provide sustainable computing power will reap the rewards of the data-intense digital economy.
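As a back-of-envelope check on the storage-growth figure cited earlier in this article, the standard compound annual growth rate formula is CAGR = (end/start)^(1/years) - 1. The sketch below (Python, illustrative only) applies it to the 10.1ZB and 21.0ZB endpoints; the exact result depends on the base-year convention used, so it differs slightly from JLL's published 18.5%.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of yearly intervals."""
    return (end / start) ** (1.0 / years) - 1.0

if __name__ == "__main__":
    start_zb, end_zb = 10.1, 21.0   # data centre storage capacity, 2023 and 2027 (JLL figures)
    for years in (4, 5):
        print(f"{years}-year CAGR: {cagr(start_zb, end_zb, years):.1%}")
    # Four intervals give ~20.1%, five give ~15.8%; the published 18.5% presumably
    # reflects JLL's own base-year convention and interim data points.
```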

Expereo launches an AI-driven and fully internet-based solution
Expereo has launched its Enhanced Internet service, its first AI-driven solution that continually monitors the 100,000 networks that make up the internet and predicts the best-performing route for companies’ network traffic.

Empowering cloud performance
Despite large investments in cloud-first strategies and SaaS applications, many enterprises still experience inconsistent network performance that affects employees’ productivity and experience. Enhanced Internet tackles this issue by improving the consistency and quality of application performance, enabling businesses to get the most from their investments in cloud and SaaS strategies.

Companies typically spend approximately €500 per employee, per month, on cloud-based applications. Despite this investment, application performance can suffer when network traffic is affected by latency, jitter and poorly performing routes. This can have serious implications, such as a decline in employee productivity and poor user experience, which can significantly impact the company’s financial performance.

Leveraging AI to improve efficiencies
Enhanced Internet’s AI capabilities mean that customers can experience the agility and accessibility of the public internet, combined with the reliability and consistent performance levels typically expected from a private network or MPLS solution. Expereo’s proprietary AI software does this by combining AI with machine learning to intelligently and proactively route network traffic over the best possible path. For customers, this extends to predictive routing for specific situations: by learning the most common traffic paths for their applications, the system is ready to predict the path before the traffic is sent. What businesses now have at their disposal is a self-healing network that can navigate past latency and packet loss caused by network congestion.

Complete visibility from a single login
Paired with Expereo’s customer experience platform, expereoOne, businesses have complete visibility of the health of their network and transparency into how Enhanced Internet has improved their application performance, all from a single login. In addition to performance data, incident management, order status and invoices at site level, expereoOne displays latency and packet loss statistics, the percentage of times routes have been changed and overall quality-of-service statistics personalised by the top five cloud destinations for their business.

Sander Barens, Chief Product Officer for Expereo, comments, “Unpredictable network performance may not seem, at first, like an urgent threat to your business’ success. That is until you realise the increased demands and pressure it puts on IT teams, the damaging impact it has on business efficiency and hours of productivity downtime. With cloud applications now laying the groundwork for so many modern businesses, connecting to the public internet with the best possible application user experience is the next logical step for global businesses looking to future-proof their operations.

“At Expereo, we’re already using our latest AI solution, Enhanced Internet, to lift several customers into the internet age of enhanced connectivity, with very positive results, and have big plans to elevate even more global enterprises into this new era of AI internet connectivity over the coming year.”
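Expereo has not published the internals of its routing engine, but the general idea of selecting a path from measured performance can be illustrated with a toy scorer. The sketch below is purely hypothetical: the metrics, weights and path names are invented for illustration and are not Expereo's algorithm.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float       # measured round-trip latency
    jitter_ms: float        # variation in latency
    packet_loss_pct: float  # observed packet loss

def path_score(m: PathMetrics) -> float:
    """Composite score where lower is better. Weights are arbitrary, for illustration only."""
    return m.latency_ms + 2.0 * m.jitter_ms + 50.0 * m.packet_loss_pct

def best_path(candidates: list[PathMetrics]) -> PathMetrics:
    """Pick the candidate route with the lowest composite score."""
    return min(candidates, key=path_score)

if __name__ == "__main__":
    candidates = [
        PathMetrics("transit-A", latency_ms=42.0, jitter_ms=3.0, packet_loss_pct=0.1),
        PathMetrics("transit-B", latency_ms=55.0, jitter_ms=1.0, packet_loss_pct=0.0),
        PathMetrics("peering-C", latency_ms=38.0, jitter_ms=6.0, packet_loss_pct=0.4),
    ]
    print("Selected route:", best_path(candidates).name)
```

A production system would of course also predict future path quality from historical measurements rather than only reacting to current ones, which is the behaviour the article attributes to Enhanced Internet.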

Juniper Networks unveils its AI-Native Networking Platform
Juniper Networks has announced its AI-Native Networking Platform, purpose-built to leverage AI to assure the best end-to-end operator and end-user experiences. Trained on seven years of insights and data science development, Juniper’s platform was designed from the ground up to assure that every connection is reliable, measurable and secure for every device, user, application and asset.

Juniper’s platform unifies all campus, branch and data centre networking solutions with a common AI engine and the Marvis Virtual Network Assistant (VNA). This enables end-to-end AI for IT Operations (AIOps) to be used for deep insight, automated troubleshooting and seamless end-to-end networking assurance, elevating IT teams’ focus from maintaining basic network connectivity to delivering exceptional and secure end-to-end experiences for students, staff, patients, guests, customers and employees. The platform provides the simplest and most assured day zero, day one and day two operations, resulting in up to 85% lower operational expenditure than traditional solutions, the elimination of up to 90% of network trouble tickets, 85% fewer IT onsite visits and up to a 50% reduction in network incident resolution times.

“AI is the biggest technology inflection point since the internet itself, and its ongoing impact on networking cannot be understated. At Juniper, we have seen first-hand how our game changing AIOps has saved thousands of global enterprises significant time and money while delighting the end user with a superior experience. Our AI-Native Networking Platform represents a bold new direction for Juniper, and for our industry. By extending AIOps from the end user all the way to the application, and across every network domain in between, we are taking a big step toward making network outages, trouble tickets and application downtime things of the past," says Rami Rahim, Chief Executive Officer, Juniper Networks.

Within the new AI-Native Networking Platform, Juniper is introducing several new products that advance the experience-first mission, including simpler, high-performance data centre networks specifically designed for AI training and inference. The company is also expanding its AI Data Centre solution, which consists of a spine-leaf data centre architecture with a foundation of QFX switches and PTX routers operated by Juniper Apstra. With this, Juniper takes much of the complexity out of AI data centre networking design, deployment and troubleshooting, allowing customers to do more with fewer IT resources. The solution also delivers unsurpassed flexibility to customers, avoiding vendor lock-in with silicon diversity, multivendor switch management and a commitment to open, standards-based Ethernet fabrics.

VAST Data forms strategic partnership with Genesis Cloud
VAST Data has announced a strategic partnership with Genesis Cloud. Together, VAST and Genesis Cloud aim to make AI and accelerated cloud computing more efficient, scalable and accessible to organisations across the globe.

Genesis Cloud helps businesses optimise their AI training and inference pipelines by offering performance and capacity for AI projects at scale while providing enterprise-grade features. The company is using the VAST Data Platform to build a comprehensive set of AI data services. With VAST, Genesis Cloud will lead a new generation of AI initiatives and Large Language Model (LLM) development by delivering highly automated infrastructure with exceptional performance and hyperscaler efficiency.

“To complement Genesis Cloud’s market-leading compute services, we needed a world-class partner at the data layer that could withstand the rigors of data-intensive AI workloads across multiple geographies,” says Dr Stefan Schiefer, CEO at Genesis Cloud. “The VAST Data Platform was the obvious choice, bringing performance, scalability and simplicity paired with rich enterprise features and functionality. Throughout our assessment, we were incredibly impressed not just with VAST’s capabilities and product roadmap, but also their enthusiasm around the opportunity for co-development on future solutions.”

Key benefits for Genesis Cloud with the VAST Data Platform include:
- Multitenancy enabling concurrent users across public cloud: VAST allows multiple, disparate organisations to share access to the VAST DataStore, enabling Genesis Cloud to allocate orders for capacity as needed while delivering unparalleled performance.
- Enhanced security in cloud environments: By implementing a zero-trust security strategy, the VAST Data Platform provides superior security for AI/ML and analytics workloads with Genesis Cloud customers, helping organisations achieve regulatory compliance and maintain the security of their most sensitive data in the cloud.
- Simplified workloads: Managing the data required to train LLMs is a complex data science process. Using the VAST Data Platform’s high-performance, single-tier and feature-rich capabilities, Genesis Cloud is delivering data services that simplify and streamline data set preparation to better facilitate model training.
- Quick and easy to deploy: The intuitive design of the VAST Data Platform simplifies the complexities traditionally associated with other data management offerings, providing Genesis Cloud with a seamless and efficient deployment experience.
- Improved GPU utilisation: By providing fast, real-time access to data across public and private clouds, VAST eliminates data loading bottlenecks to ensure high GPU utilisation, better efficiency and, ultimately, lower costs for the end customer.
- Future-proof investment with robust enterprise features: The VAST Data Platform consolidates storage, database and global namespace capabilities that offer unique productisation opportunities for service providers.

AI and data centres: Why AI is so resource hungry
By Ed Ansett, Founder and Chairman, i3 Solutions Group

As of the end of 2023, any forecast of how much energy generative AI will require is inexact. Headlines tend towards guesstimates of five, 10 or 30 times the power needed for AI, and enough power to run hundreds of thousands of homes or more. Meanwhile, reports in specialist publications talk of power densities rising to 50kW or 100kW per rack. Why is generative AI so resource hungry? What moves are being made to calculate its potential energy cost and carbon footprint? Or, as one research paper puts it, what is the 'huge computational cost of training these behemoths'?

Today, much of this information is not readily available. Analysts have produced their own estimates for specific workload scenarios, but with few disclosed numbers from the cloud hyperscalers at the forefront of model building, there is very little hard data to go on at this time. Where analysis has been conducted, the carbon cost of AI model building from training to inference has produced some sobering figures. According to a report in the Harvard Business Review, researchers have argued that training a ‘single large language deep learning model’, such as OpenAI’s GPT-4 or Google’s PaLM, is estimated to use around 300 tons of CO2. Other researchers calculated that training a medium-sized generative AI model using a technique called 'neural architecture search' consumed electricity equivalent to 626,000 tons of CO2 emissions.

So, what’s going on to make AI so power hungry? Is it the data set, that is, the volume of data? The number of parameters used? The transformer model? The encoding, decoding and fine-tuning? Or the processing time? The answer is, of course, a combination of all of the above.

Data
It is often said that GenAI Large Language Models (LLMs) and Natural Language Processing (NLP) require large amounts of training data. However, measured in terms of traditional data storage, this is not actually the case. For example, ChatGPT used Common Crawl data. Common Crawl says of itself that it is the primary training corpus in every LLM and that it supplied 82% of the raw tokens used to train GPT-3: “We make wholesale extraction, transformation and analysis of open web data accessible to researchers. Over 250bn pages spanning 16 years. Three to five billion new pages added each month.” It is thought that GPT-3 was trained on 45TB of Common Crawl plaintext, filtered down to 570GB of text data. The corpus is hosted on AWS for free as Common Crawl's contribution to open-source AI data.

But storage volumes, that is, the billions of web pages or data tokens that are scraped from the web, Wikipedia and elsewhere, then encoded, decoded and fine-tuned to train ChatGPT and other models, should have no major impact on a data centre. Similarly, the terabytes or petabytes of data needed to train a text-to-speech, text-to-image or text-to-video model should put no extraordinary strain on the power and cooling systems of a data centre built to host IT equipment storing and processing hundreds or thousands of petabytes of data. An example in the text-to-image space is the Large-scale AI Open Network (LAION), a German project whose data sets contain billions of images. One of its data sets, LAION-400M, is a 10TB web data set; another, LAION-5B, contains 5.85bn CLIP-filtered text-image pairs.

One reason that training data volumes remain a manageable size is that it has been the fashion amongst the majority of AI model builders to use Pre-Trained Models (PTMs) rather than training models from scratch.
Two examples of PTMs that are becoming familiar are Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) series, as in ChatGPT.

Parameters
Another measure of AI training that is of interest to data centre operators is the parameter count. Parameters are used by generative AI models during training; broadly, the greater the number of parameters, the greater the accuracy of the prediction of the desired outcome. GPT-3 was built on 175bn parameters, and the number of parameters is already rising rapidly. Wu Dao, a Chinese LLM, used 1.75 trillion parameters in an early version and, as well as being an LLM, also provides text-to-image and text-to-video generation. Expect the numbers to continue to grow. With no hard data available, it is reasonable to surmise that the computational power required to run a model with 1.75 trillion parameters is going to be significant. As we move into more AI video generation, the data volumes and number of parameters used in models will surge.

Transformers
Transformers are a type of neural network architecture developed to solve the problem of sequence transduction, or neural machine translation: any task that transforms an input sequence into an output sequence. As input data moves through the stacked transformer layers, each layer passes its output on to the next, improving the prediction of what comes next. This helps improve speech recognition, text-to-speech transformation and more.

How much is enough power? What researchers, analysts and the press are saying
A report by S&P Global titled 'POWER OF AI: Wild predictions of power demand from AI put industry on edge' quotes several sources. "Regarding US power demand, it's really hard to quantify how much demand is needed for things like ChatGPT," says David Groarke, Managing Director at Indigo Advisory Group. "In terms of macro numbers, by 2030 AI could account for 3% to 4% of global power demand. Google said right now AI is representing 10% to 15% of their power use, or 2.3TWh annually."

S&P Global continues, “Academic research conducted by Alex de Vries, a PhD candidate at the VU Amsterdam School of Business and Economics, cites research by semiconductor analysis firm SemiAnalysis. In a commentary published on 10 October in the journal Joule, it is estimated that using generative AI, such as ChatGPT, in each Google search would require more than 500,000 of Nvidia's A100 HGX servers, totaling 4.1 million graphics processing units, or GPUs. At a power demand of 6.5kW per server, that would result in daily electricity consumption of 80GWh and annual consumption of 29.2TWh.”

A calculation of the actual power used to train AI models was offered by RI.SE. It says, “Training a super-large language model like GPT-4, with 1.7 trillion parameters and using 13 trillion tokens or word snippets, is a substantial undertaking. OpenAI has revealed that it cost them $100 million and took 100 days, utilising 25,000 NVIDIA A100 GPUs. Servers with these GPUs use about 6.5kW each, resulting in an estimated 50GWh of energy usage during training.”

This is important because the energy used by AI is rapidly becoming a topic of public discussion. Data centres are already on the map and ecologically focused organisations are taking note.
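The back-of-envelope arithmetic behind the two estimates above is straightforward to reconstruct. A short sketch follows (Python; the eight-GPUs-per-server figure is my assumption, inferred from the 4.1 million GPU and 500,000+ server numbers quoted, not a figure stated in either source):

```python
# De Vries / SemiAnalysis scenario: generative AI in every Google search.
gpus = 4_100_000
gpus_per_server = 8                  # assumption: A100 HGX servers carry 8 GPUs each
servers = gpus / gpus_per_server     # ~512,500 servers
server_kw = 6.5                      # power demand per server, as quoted
daily_gwh = servers * server_kw * 24 / 1e6       # kW * h -> kWh, then to GWh
annual_twh = daily_gwh * 365 / 1e3               # GWh/day -> TWh/year
print(f"Search scenario: {daily_gwh:.0f} GWh/day, {annual_twh:.1f} TWh/year")  # ~80 GWh, ~29.2 TWh

# RI.SE estimate for training a GPT-4-class model over 100 days.
training_gpus = 25_000
training_servers = training_gpus / gpus_per_server   # ~3,125 servers
training_days = 100
training_gwh = training_servers * server_kw * 24 * training_days / 1e6
print(f"Training run: ~{training_gwh:.0f} GWh")      # ~49 GWh, in line with the ~50 GWh cited
```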
According to 8billiontrees, “There are no published estimates as of yet for the AI industry’s total footprint, and the field of AI is exploding so rapidly that an accurate number would be nearly impossible to obtain. Looking at the carbon emissions from individual AI models is the gold standard at this time. The majority of the energy is dedicated to powering and cooling the hyperscale data centres, where all the computation occurs.”

Conclusion
As we wait for the numbers to emerge on past and existing power use for ML and AI, what is clear is that once models get into production and use, we will be at the exabyte and exaflop scale of computation. For data centre power and cooling, it is then that things become really interesting, and more challenging.

Foundation stone laid for CtrlS’ GIFT City Datacenter at Gandhinagar
CtrlS Datacenters, a fast-growing Rated-4 data centre provider, has held the ground-breaking ceremony for its greenfield data centre in GIFT City, Gujarat, with Bhupendrabhai Patel, Chief Minister of Gujarat, laying the foundation stone. CtrlS Datacenters plans to invest over Rs 250 crore and create 1,000 direct and indirect jobs in the ecosystem across multiple phases. CtrlS Datacenters was selected after careful evaluation of various data centre companies, based on its ability to develop an ecosystem, its unique business model for creating jobs, and its Rated-4 infrastructure. Gujarat will get its first Rated-4 data centre, complete with a full managed services portfolio.

Sridhar Pinnapureddy, Chairman of CtrlS Datacenters, says, “The State of Gujarat is one of the fastest-growing economic hubs in India, making it a strategic destination for CtrlS' ongoing expansion. We are excited to bring our expertise to the State. Located in GIFT City, the data centre will be easily accessible to all major clusters of the State. GIFT City is a global financial hub and home to several large international and national BFSI companies and is an ideal location for us.” He adds, “We are thankful to the Gujarat Government and GIFT City authorities for extending all the support for our project. CtrlS Gandhinagar 1 DC will serve as an integral part of the larger digital infrastructure ecosystem, enabling the digital growth needs and aspirations of BFSI and other industries in the region.”

Speaking at the ceremony, Gujarat's Chief Minister, Bhupendrabhai Patel, said, “Amidst global interest in connecting with India, GIFT City stands as a beacon, drawing major global financial organisations. Welcoming Asia's largest Rated-4 data centre, CtrlS Datacenters, to Gujarat, reflects our commitment. The Gujarat state government pledges comprehensive support. I am confident that CtrlS' presence will inspire more companies to join the thriving ecosystem of GIFT City in the days ahead."

CtrlS Datacenters’ foray and expansion into the State will further boost the digital infrastructure ecosystem in the region. GIFT City houses large banks, insurers, intermediaries, exchanges, trading companies, clearing companies, financial services companies, IT, ITeS and others. The region serves as a hub for several financial activities, including offshore banking, capital markets, offshore asset management, offshore insurance, ancillary services, IT, ITeS and BPO services.

CtrlS Datacenters is trusted by banks, telecom operators, financial services companies and e-commerce players, amongst others. The company offers the industry's best uptime SLA of 99.995%, combined with fault-tolerant Rated-4 data centre facilities, the industry's lowest design PUE of 1.3, carrier-neutral facilities and faster deployment. In addition, CtrlS Datacenters will extend its group company Cloud4C’s managed services, providing an edge to financial institutions operating in GIFT City. Cloud4C is already working with several large financial services customers worldwide, as well as with multinational banks. CtrlS Datacenters recently announced a $2 billion investment plan and has identified three strategic investment areas over the next six years: an augmented footprint of hyperscale data centres custom-built for AI and cloud workloads; achieving net zero; and augmented team strength and capabilities.
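To put the 99.995% uptime SLA in context, an availability percentage converts directly into a permitted downtime budget: at that level it works out to roughly 26 minutes per year. A quick sketch of the conversion (Python, illustrative only):

```python
def allowed_downtime_minutes(availability_pct: float, days: float = 365.0) -> float:
    """Downtime budget implied by an availability SLA, in minutes over the period."""
    return (1.0 - availability_pct / 100.0) * days * 24 * 60

if __name__ == "__main__":
    for sla in (99.9, 99.99, 99.995):
        print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} minutes of downtime per year")
    # 99.995% works out to roughly 26 minutes of permitted downtime per year.
```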

Crusoe announces data centre expansion with atNorth in Iceland
atNorth, a Nordic colocation, high-performance computing and artificial intelligence service provider, has announced its collaboration with Crusoe Energy Systems to colocate Crusoe Cloud GPUs in atNorth’s ICE02 data centre in Iceland. This is Crusoe’s first project in Europe, and the partnership advances Crusoe’s mission to align the future of computing with the future of the climate by powering Crusoe’s high-performance computing infrastructure with clean energy sources.

“I’m thrilled that our quest to source low carbon power has led us to Iceland,” says Cully Cavness, Crusoe’s Co-Founder and President. “This partnership with atNorth allows us to bring the concentrated energy demand of compute infrastructure directly to the source of clean, renewable geothermal and hydro energy.”

“It is very important to atNorth that we are collaborating with companies that share our approach to sustainability,” says E. Magnús Kristinsson, CEO of atNorth. “Crusoe’s commitment to maximise their compute while minimising their environmental impact made them a perfect fit.”

The atNorth ICE02 site leverages more than 80MW of power, benefiting from the sustainable geothermal and hydro energy produced in Iceland. The country also benefits from low-latency networks and fully redundant connectivity to customer bases in North America and Europe via multiple undersea fibre optic cables.

“AI and machine learning are driving the demand for data centres to grow at a record rate,” says Chris Dolan, Chief Data Centre Officer of Crusoe. “We’re excited about our initial commitment to atNorth and look forward to potentially expanding capacity even more in the future.”

The news follows atNorth’s recent acquisition of Gompute, a provider of High Performance Computing (HPC) and data centre services, and the announcement of three new sites: FIN04 in Kouvola, Finland; FIN02 in Helsinki, Finland; and DEN01 in Copenhagen, Denmark.

AI & Partners review the recent political approval of the EU AI Act
The recently concluded Artificial Intelligence Act by the European Union (EU AI Act) stands as a significant milestone, solidifying the EU's global leadership in the regulation of AI technologies. Crafted through collaboration between Members of the European Parliament (MEPs) and the Council, these comprehensive regulations are designed to ensure the responsible development of AI, with a steadfast commitment to upholding fundamental rights, democracy and environmental sustainability.

Key provisions and safeguards
The Act introduces a set of prohibitions on specific AI applications that pose potential threats to citizens' rights and democratic values. Notably, it prohibits biometric categorisation systems based on sensitive characteristics, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on personal characteristics, and AI systems that manipulate human behaviour. Exceptions for law enforcement are outlined, permitting the use of biometric identification systems in publicly accessible spaces under strict conditions, subject to judicial authorisation and limited to specific crime categories. The Act emphasises targeted searches for serious crimes, the prevention of terrorist threats, and the identification of individuals suspected of specific crimes.

Obligations for high-risk AI systems
Identified high-risk AI systems, with potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law, are subject to clear obligations. These include mandatory fundamental rights impact assessments, applicable to sectors such as insurance and banking. High-risk AI systems influencing elections and voter behaviour are also covered, ensuring citizens' right to launch complaints and receive explanations for decisions impacting their rights.

Guardrails for general AI systems
To accommodate the diverse capabilities of general-purpose AI (GPAI) systems, transparency requirements have been established. These involve technical documentation, compliance with EU copyright law and detailed summaries of the content used for training. For high-impact GPAI models with systemic risk, additional obligations are introduced, such as model evaluations, mitigation of systemic risks, adversarial testing, incident reporting, cyber security measures and energy efficiency reporting.

Support for innovation and SMEs
Acknowledging the importance of fostering innovation and preventing undue pressure on smaller businesses, the Act promotes regulatory sandboxes and real-world testing. National authorities can establish these mechanisms to facilitate the development and training of innovative AI solutions before market placement.

Extra-territorial application and political position
The Act extends its reach beyond EU borders, applying to any business, irrespective of location, that engages with the EU. This underscores the EU's commitment to global standards for AI regulation. In its political position, the EU took a 'pro-consumer' stance, a decision that, while applauded for prioritising consumer rights, drew varying reactions from tech firms and innovation advocates who might have preferred a different balance between regulation and freedom.

Sanctions and entry into force
The Act introduces substantial fines for non-compliance, ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the infringement and the size of the company.
Co-rapporteur Brando Benifei highlighted the legislation's significance, emphasising the Parliament's commitment to ensuring rights and freedoms are central to AI development. Co-rapporteur Dragos Tudorache underscored the EU's pioneering role in setting robust regulations, protecting citizens and SMEs, and guiding AI development in a human-centric direction. During a joint press conference, the lead MEPs, together with Carme Artigas, Secretary of State for Digitalisation and AI, and Commissioner Thierry Breton, expressed the importance of the Act in shaping the EU's digital future. They emphasised the need for correct implementation, ongoing scrutiny and support for new business ideas through sandboxes.

Next steps
The agreed text is awaiting formal adoption by both Parliament and Council to become EU law. Committees within Parliament will vote on the agreement at an upcoming meeting.

Conclusion
The Artificial Intelligence Act represents a groundbreaking effort by the EU to balance innovation with safeguards, ensuring the responsible and ethical development of AI technologies. By addressing potential risks, protecting fundamental rights and supporting innovation, the EU aims to lead the world in shaping the future of artificial intelligence. The Act's successful implementation will be crucial in realising this vision, and ongoing scrutiny will ensure continued alignment with the EU's commitment to rights, democracy and technological progress.

Vertiv and Intel join forces to accelerate AI adoption  
Vertiv has announced that it is collaborating with Intel to provide a liquid cooling solution that will support the revolutionary new Intel Gaudi3 AI accelerator, scheduled to launch in 2024. AI applications and high-performance computing generate higher amounts of heat, and organisations are increasingly turning to liquid cooling solutions for more efficient and eco-friendly cooling options.

The Intel Gaudi3 AI accelerator will enable both liquid-cooled and air-cooled servers, supported by Vertiv pumped two-phase (P2P) cooling infrastructure. The liquid-cooled solution has been tested up to 160kW of accelerator power using facility water from 17°C up to 45°C (62.6°F to 113°F). The air-cooled solution has been tested up to a 40kW heat load and can be deployed in warm-ambient data centres up to 35°C (95°F). This medium-pressure, direct P2P, refrigerant-based cooling solution will help customers implement heat reuse, warm-water cooling, free-air cooling and reductions in power usage effectiveness (PUE), water usage effectiveness (WUE) and total cost of ownership (TCO).

“The Intel Gaudi3 AI accelerator provides the perfect solution for a Vertiv and Intel collaboration,” says John Niemann, SVP Global Thermal Line, Vertiv. “Vertiv continues to expand our broad liquid cooling portfolio, resulting in our ability to support leaders of next generation AI technologies, like Intel. Vertiv helps customers accelerate the adoption of AI quickly and reliably, while also helping them to achieve sustainability goals.”

“The compute required for AI workloads has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today,” says Dr Devdatta Kulkarni, Principal Engineer and Lead Thermal Engineer on this project at Intel. “To support increasing thermal design power and heat flux for next-generation accelerators, Intel has worked with Vertiv and other ecosystem partners to enable an innovative cooling solution that will be critical in helping customers meet critical sustainability goals.”
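As a rough illustration of what rejecting 160kW to facility water implies, the required water flow follows from the basic heat-balance relation Q = m_dot × cp × ΔT. The sketch below (Python) assumes a hypothetical 10°C temperature rise across the facility-water loop; that ΔT is my assumption for illustration, not a published Vertiv or Intel design parameter.

```python
def water_flow_kg_per_s(heat_kw: float, delta_t_c: float, cp_kj_per_kg_k: float = 4.186) -> float:
    """Mass flow of water needed to absorb heat_kw with a delta_t_c temperature rise (Q = m_dot * cp * dT)."""
    return heat_kw / (cp_kj_per_kg_k * delta_t_c)

if __name__ == "__main__":
    heat_kw = 160.0     # tested accelerator heat load cited in the article
    delta_t_c = 10.0    # assumed facility-water temperature rise (not from the article)
    flow = water_flow_kg_per_s(heat_kw, delta_t_c)
    # For water, 1 kg/s is roughly 1 L/s, so this is about 3.8 litres of facility water per second.
    print(f"~{flow:.1f} kg/s of facility water for {heat_kw:.0f} kW at a {delta_t_c:.0f} C rise")
```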


