Artificial Intelligence


AI and data centres: Why AI is so resource hungry
By Ed Ansett, Founder and Chairman, i3 Solutions Group

At the end of 2023, forecasts of how much energy generative AI will require remain inexact. Headlines tend towards guesstimates of five, 10 or 30 times the power needed for AI, and enough electricity to run hundreds of thousands of homes or more. Meanwhile, reports in specialist publications talk of power densities rising to 50kW or 100kW per rack. Why is generative AI so resource hungry? What moves are being made to calculate its potential energy cost and carbon footprint? Or, as one research paper puts it, what is the 'huge computational cost of training these behemoths'?

Today, much of this information is not readily available. Analysts have produced their own estimates for specific workload scenarios, but with few disclosed numbers from the cloud hyperscalers at the forefront of model building, there is very little hard data to go on at this time. Where analysis has been conducted, the carbon cost of AI model building, from training through to inference, has produced some sobering figures. According to a report in the Harvard Business Review, researchers have argued that training a 'single large language deep learning model', such as OpenAI's GPT-4 or Google's PaLM, produces an estimated 300 tons of CO2. Other researchers calculated that training a medium-sized generative AI model using a technique called 'neural architecture search' consumed energy equivalent to 626,000 pounds of CO2 emissions.

So, what is going on to make AI so power hungry? Is it the data set, that is, the volume of data? The number of parameters used? The transformer model? The encoding, decoding and fine-tuning? Or the processing time? The answer is, of course, a combination of all of the above.

Data

It is often said that GenAI Large Language Models (LLMs) and Natural Language Processing (NLP) require large amounts of training data. However, measured in terms of traditional data storage, this is not actually the case. For example, ChatGPT used Common Crawl data. Common Crawl describes itself as the primary training corpus in every LLM, and says it supplied 82% of the raw tokens used to train GPT-3: "We make wholesale extraction, transformation and analysis of open web data accessible to researchers. Over 250bn pages spanning 16 years. Three to five billion new pages added each month." It is thought that GPT-3 was trained on 45TB of Common Crawl plaintext, filtered down to 570GB of text data. The corpus is hosted on AWS for free as a contribution to open source AI data.

But storage volumes, the billions of web pages or data tokens that are scraped from the web, Wikipedia and elsewhere, then encoded, decoded and fine-tuned to train ChatGPT and other models, should have no major impact on a data centre. Similarly, the terabytes or petabytes of data needed to train a text-to-speech, text-to-image or text-to-video model should put no extraordinary strain on the power and cooling systems of a data centre built to host IT equipment storing and processing hundreds or thousands of petabytes of data. An example of a text-to-image project is the Large-scale Artificial Intelligence Open Network (LAION), a German initiative with billions of images. One of its data sets, LAION-400M, is a 10TB web data set; another, LAION-5B, contains 5.85bn CLIP-filtered text-image pairs.

One reason that training data volumes remain a manageable size is that it has been the fashion amongst the majority of AI model builders to use pre-trained models (PTMs) instead of models trained from scratch.
Two examples of PTMs that are becoming familiar are Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) series, as in ChatGPT.

Parameters

Another measure of AI training that is of interest to data centre operators is the parameter count. Parameters are the values a generative AI model learns during training; broadly, the greater the number of parameters, the greater the accuracy with which the model predicts the desired outcome. GPT-3 was built on 175bn parameters, but the numbers are already rising rapidly: the first version of Wu Dao, a Chinese LLM that also offers text-to-image and text-to-video generation, used 1.75 trillion parameters. Expect the numbers to continue to grow. With no hard data available, it is reasonable to surmise that the computational power required to run a model with 1.75 trillion parameters will be significant. As AI video generation grows, the data volumes and the number of parameters used in models will surge.

Transformers

Transformers are a type of neural network architecture developed to solve the problem of sequence transduction, or neural machine translation; that is, any task that transforms an input sequence into an output sequence. Rather than looping data back through previous layers as earlier recurrent networks do, transformer layers use attention to weigh every element of the input sequence against every other, and stacking these layers improves the prediction of what comes next. This helps improve speech recognition, text-to-speech transformation, and more.

How much is enough power? What researchers, analysts and the press are saying

A report by S&P Global titled 'POWER OF AI: Wild predictions of power demand from AI put industry on edge' quotes several sources. "Regarding US power demand, it's really hard to quantify how much demand is needed for things like ChatGPT," says David Groarke, Managing Director at Indigo Advisory Group. "In terms of macro numbers, by 2030 AI could account for 3% to 4% of global power demand. Google said right now AI is representing 10% to 15% of their power use, or 2.3TWh annually."

S&P Global continues: "Academic research conducted by Alex de Vries, a PhD candidate at the VU Amsterdam School of Business and Economics, cites research by semiconductor analysis firm SemiAnalysis. In a commentary published on 10 October in the journal Joule, it is estimated that using generative AI, such as ChatGPT, in each Google search would require more than 500,000 of Nvidia's A100 HGX servers, totalling 4.1 million graphics processing units, or GPUs. At a power demand of 6.5kW per server, that would result in daily electricity consumption of 80GWh and annual consumption of 29.2TWh."

A calculation of the power used to train AI models was offered by RISE, which says: "Training a super-large language model like GPT-4, with 1.7 trillion parameters and using 13 trillion tokens, or word snippets, is a substantial undertaking. OpenAI has revealed that it cost them $100 million and took 100 days, utilising 25,000 NVIDIA A100 GPUs. Servers with these GPUs use about 6.5kW each, resulting in an estimated 50GWh of energy usage during training."

This matters because the energy used by AI is rapidly becoming a topic of public discussion. Data centres are already on the map, and ecologically focused organisations are taking note.
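The headline figures above rest on straightforward arithmetic, and it is worth seeing how quickly they compound. The short Python sketch below reproduces both of the quoted estimates from the cited inputs (6.5kW per A100 HGX server, roughly 500,000 servers or 4.1 million GPUs for the search scenario, and 25,000 GPUs over 100 days for the training scenario). The assumptions of eight GPUs per server and of servers drawing that power continuously are ours, so treat the output as an order-of-magnitude check rather than a measurement.

```python
# Back-of-envelope reproduction of the energy estimates quoted above.
# Inputs come from the cited commentary and report; results are rough orders of magnitude.

SERVER_POWER_KW = 6.5        # quoted power draw of one Nvidia A100 HGX server
GPUS_PER_SERVER = 8          # assumed HGX configuration (4.1m GPUs / ~500,000 servers)

# Scenario 1: generative AI in every Google search (SemiAnalysis / de Vries, Joule)
servers = 4_100_000 / GPUS_PER_SERVER                      # ~512,500 servers
daily_gwh = servers * SERVER_POWER_KW * 24 / 1e6           # kW x hours -> kWh -> GWh
annual_twh = daily_gwh * 365 / 1_000
print(f"Search scenario: {daily_gwh:.0f} GWh/day, {annual_twh:.1f} TWh/year")
# -> roughly 80 GWh/day and ~29 TWh/year, matching the quoted figures

# Scenario 2: training a GPT-4-class model (RISE estimate)
training_servers = 25_000 / GPUS_PER_SERVER                # 3,125 servers
training_gwh = training_servers * SERVER_POWER_KW * 24 * 100 / 1e6   # 100 days of training
print(f"Training scenario: ~{training_gwh:.0f} GWh over 100 days")
# -> roughly 50 GWh, in line with the RISE figure
```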
According to 8billiontrees: "There are no published estimates as of yet for the AI industry's total footprint, and the field of AI is exploding so rapidly that an accurate number would be nearly impossible to obtain. Looking at the carbon emissions from individual AI models is the gold standard at this time. The majority of the energy is dedicated to powering and cooling the hyperscale data centres, where all the computation occurs."

Conclusion

As we wait for the numbers on past and present power use for ML and AI to emerge, what is clear is that once models move into production and widespread use, we will be at the exabyte and exaflop scale of computation. For data centre power and cooling, that is when things become really interesting, and more challenging.

Foundation stone laid for CtrlS’ GIFT City Datacenter at Gandhinagar
CtrlS Datacenters, a fast-growing Rated-4 data centre provider, has held the ground-breaking ceremony for its Greenfield data centre in GIFT City, Gujarat, with Bhupendrabhai Patel, Chief Minister of Gujarat, laying the foundation stone. CtrlS Datacenters plans to invest over Rs 250 crore and create 1,000 direct and indirect jobs in the ecosystem across multiple phases. CtrlS Datacenters was selected after a careful evaluation of various data centre companies, based on its ability to develop an ecosystem, its unique business model for creating jobs, and its Rated-4 infrastructure. Gujarat will be getting its first Rated-4 data centre, complete with a full managed services portfolio. Sridhar Pinnapureddy, Chairman of CtrlS Datacenters, says, "The State of Gujarat is one of the fastest-growing economic hubs in India, making it a strategic destination for CtrlS' ongoing expansion. We are excited to bring our expertise to the State. Located in GIFT City, the data centre will be easily accessible to all major clusters of the State. GIFT City is a global financial hub and home to several large international and national BFSI companies and is an ideal location for us." He adds, "We are thankful to the Gujarat Government and GIFT City authorities for extending all the support for our project. CtrlS Gandhinagar 1 DC will serve as an integral part of the larger digital infrastructure ecosystem, enabling the digital growth needs and aspirations of BFSI and other industries in the region." Speaking at the ceremony, Gujarat's Chief Minister, Bhupendrabhai Patel, said, "Amidst global interest in connecting with India, GIFT City stands as a beacon, drawing major global financial organisations. Welcoming Asia's largest Rated-4 data centre, CtrlS Datacenters, to Gujarat reflects our commitment. The Gujarat state government pledges comprehensive support. I am confident that CtrlS' presence will inspire more companies to join the thriving ecosystem of GIFT City in the days ahead." CtrlS Datacenters' foray and expansion into the State will further boost the digital infrastructure ecosystem in the region. GIFT City houses large banks, insurers, intermediaries, exchanges, trading companies, clearing companies, financial services companies, IT, ITeS and others. The region serves as a hub for several financial activities, including offshore banking, capital markets, offshore asset management, offshore insurance, ancillary services, IT, ITeS and BPO services. CtrlS Datacenters is trusted by banks, telecom operators, financial services companies, and e-commerce players, amongst others. The company offers the industry's best uptime SLA of 99.995%, combined with fault-tolerant Rated-4 data centre facilities, the industry's lowest design PUE of 1.3, carrier-neutral facilities, and faster deployment. In addition, CtrlS Datacenters will extend its group company Cloud4C's managed services, providing an edge to financial institutions operating in GIFT City. Cloud4C is already working with several large financial services customers worldwide, as well as with multinational banks. CtrlS Datacenters has recently announced a $2 billion investment plan and has identified three strategic investment areas over the next six years: an augmented footprint of hyperscale data centres custom-built for AI and cloud workloads; achieving net zero; and augmented team strength and capabilities.

Crusoe announces data centre expansion with atNorth in Iceland
atNorth, a Nordic colocation, high-performance computing, and artificial intelligence service provider, has announced a collaboration with Crusoe Energy Systems to colocate Crusoe Cloud GPUs in atNorth's ICE02 data centre in Iceland. This is Crusoe's first project in Europe, and the partnership advances Crusoe's mission to align the future of computing with the future of the climate by powering Crusoe's high-performance computing infrastructure with clean energy sources. "I'm thrilled that our quest to source low carbon power has led us to Iceland," says Cully Cavness, Crusoe's Co-Founder and President. "This partnership with atNorth allows us to bring the concentrated energy demand of compute infrastructure directly to the source of clean, renewable geothermal and hydro energy." "It is very important to atNorth that we are collaborating with companies that share our approach to sustainability," says E. Magnús Kristinsson, CEO of atNorth. "Crusoe's commitment to maximise their compute while minimising their environmental impact made them a perfect fit." The atNorth ICE02 site draws on more than 80MW of power, benefiting from the sustainable geothermal and hydro energy produced in Iceland. The country also benefits from low latency networks and fully redundant connectivity to customer bases in North America and Europe via multiple undersea fibre optic cables. "AI and machine learning are driving the demand for data centres to grow at a record rate," says Chris Dolan, Chief Data Centre Officer of Crusoe. "We're excited about our initial commitment to atNorth and look forward to potentially expanding capacity even more in the future." The news follows atNorth's recent acquisition of Gompute, a provider of High Performance Computing (HPC) and data centre services, and the announcement of three new sites: FIN04 in Kouvola, Finland; FIN02 in Helsinki, Finland; and DEN01 in Copenhagen, Denmark.

AI & Partners review the recent political approval of the EU AI Act
The recently concluded Artificial Intelligence Act by the European Union (EU AI Act) stands as a significant milestone, solidifying the EU's global leadership in the regulation of AI technologies. Crafted through collaboration between Members of the European Parliament (MEPs) and the Council, these comprehensive regulations are designed to ensure the responsible development of AI, with a steadfast commitment to upholding fundamental rights, democracy, and environmental sustainability.

Key provisions and safeguards

The Act introduces a set of prohibitions on specific AI applications that pose potential threats to citizens' rights and democratic values. Notably, it prohibits biometric categorisation systems based on sensitive characteristics, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on personal characteristics, and AI systems manipulating human behaviour. Exceptions for law enforcement are outlined, permitting the use of biometric identification systems in publicly accessible spaces under strict conditions, subject to judicial authorisation, and limited to specific crime categories. The Act emphasises targeted searches for serious crimes, prevention of terrorist threats, and the identification of individuals suspected of specific crimes.

Obligations for high-risk AI systems

Identified high-risk AI systems, with potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law, are subjected to clear obligations. This includes mandatory fundamental rights impact assessments applicable to sectors like insurance and banking. High-risk AI systems influencing elections and voter behaviour are also covered, ensuring citizens' right to launch complaints and receive explanations for decisions impacting their rights.

Guardrails for general AI systems

To accommodate the diverse capabilities of general-purpose AI (GPAI) systems, transparency requirements have been established. This involves technical documentation, compliance with EU copyright law, and detailed summaries about the content used for training. For high-impact GPAI models with systemic risk, additional obligations such as model evaluations, mitigation of systemic risks, adversarial testing, reporting on incidents, cyber security measures, and energy efficiency reporting are introduced.

Support for innovation and SMEs

Acknowledging the importance of fostering innovation and preventing undue pressure on smaller businesses, the Act promotes regulatory sandboxes and real-world testing. National authorities can establish these mechanisms to facilitate the development and training of innovative AI solutions before market placement.

Extra-territorial application and political position

The Act extends its reach beyond EU borders, applying to any business, irrespective of its location, when it engages with the EU. This underscores the EU's commitment to global standards for AI regulation. In its political position, the EU took a 'pro-consumer' stance, a decision that, while applauded for prioritising consumer rights, drew varying reactions from tech firms and innovation advocates who might have preferred a different balance between regulation and freedom.

Sanctions and entry into force

The Act introduces substantial fines for non-compliance, ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the severity of the infringement and the size of the company.
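To make the sanction ceilings concrete, here is a minimal sketch of how they scale with a company's global turnover. It assumes the 'fixed amount or percentage of global turnover, whichever is higher' formulation widely reported for the provisional agreement, and the €10bn turnover used below is purely hypothetical.

```python
# Illustrative sketch of the EU AI Act fine ceilings; assumes the reported
# 'whichever is higher' rule and uses a hypothetical global turnover figure.

def fine_ceiling(global_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Maximum fine: the higher of the fixed cap or the percentage of global turnover."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)

turnover = 10_000_000_000  # assumed €10bn global annual turnover

# Most serious tier quoted above (prohibited AI practices): €35 million or 7% of turnover
print(f"Top tier ceiling:    €{fine_ceiling(turnover, 35_000_000, 0.07):,.0f}")   # €700,000,000

# Lowest tier quoted above: €7.5 million or 1.5% of turnover
print(f"Lowest tier ceiling: €{fine_ceiling(turnover, 7_500_000, 0.015):,.0f}")   # €150,000,000
```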
Co-rapporteur Brando Benifei highlighted the legislation's significance, emphasising the Parliament's commitment to ensuring rights and freedoms are central to AI development. Co-rapporteur Dragos Tudorache underscored the EU's pioneering role in setting robust regulations, protecting citizens and SMEs, and guiding AI development in a human-centric direction. During a joint press conference, the lead MEPs, joined by Carme Artigas, Spain's Secretary of State for Digitalisation and AI, and Commissioner Thierry Breton, expressed the importance of the Act in shaping the EU's digital future. They emphasised the need for correct implementation, ongoing scrutiny, and support for new business ideas through sandboxes.

Next steps

The agreed text is awaiting formal adoption by both Parliament and Council to become EU law. Committees within Parliament will vote on the agreement in an upcoming meeting.

Conclusion

The Artificial Intelligence Act represents a groundbreaking effort by the EU to balance innovation with safeguards, ensuring the responsible and ethical development of AI technologies. By addressing potential risks, protecting fundamental rights, and supporting innovation, the EU aims to lead the world in shaping the future of artificial intelligence. The Act's successful implementation will be crucial in realising this vision, and ongoing scrutiny will ensure continued alignment with the EU's commitment to rights, democracy, and technological progress. For more information on AI & Partners, click here.

Vertiv and Intel join forces to accelerate AI adoption  
Vertiv has announced that it is collaborating with Intel to provide a liquid cooling solution that will support the revolutionary new Intel Gaudi3 AI accelerator, scheduled to launch in 2024. AI applications and high-performance computing generate higher amounts of heat, and organisations are increasingly turning to liquid cooling solutions for more efficient and eco-friendly cooling options. The Intel Gaudi3 AI accelerator will be available in both liquid-cooled and air-cooled servers, supported by Vertiv pumped two-phase (P2P) cooling infrastructure. The liquid-cooled solution has been tested up to 160kW of accelerator power using facility water from 17°C up to 45°C (62.6°F to 113°F). The air-cooled solution has been tested up to a 40kW heat load and can be deployed in warm-ambient data centres at up to 35°C (95°F). This medium-pressure, direct, P2P refrigerant-based cooling solution will help customers implement heat reuse, warm water cooling and free air cooling, and achieve reductions in power usage effectiveness (PUE), water usage effectiveness (WUE) and total cost of ownership (TCO). "The Intel Gaudi3 AI accelerator provides the perfect solution for a Vertiv and Intel collaboration," says John Niemann, SVP Global Thermal Line, Vertiv. "Vertiv continues to expand our broad liquid cooling portfolio, resulting in our ability to support leaders of next generation AI technologies, like Intel. Vertiv helps customers accelerate the adoption of AI quickly and reliably, while also helping them to achieve sustainability goals." "The compute required for AI workloads has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today," says Dr Devdatta Kulkarni, Principal Engineer and Lead Thermal Engineer on this project at Intel. "To support increasing thermal design power and heat flux for next-generation accelerators, Intel has worked with Vertiv and other ecosystem partners to enable an innovative cooling solution that will be critical in helping customers meet critical sustainability goals."
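The PUE and WUE reductions mentioned above are easiest to picture as simple ratios. The sketch below illustrates how the two metrics are defined and why lowering cooling overhead moves them; the IT load, overhead percentages and water volume are illustrative assumptions, not Vertiv or Intel test data.

```python
# Illustrative PUE/WUE calculation; input figures are assumptions, not vendor test data.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

def wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water used per kWh of IT energy."""
    return annual_water_litres / annual_it_energy_kwh

it_load_kw = 1_000.0  # assumed IT load

# Assumed air-cooled baseline: cooling and power distribution add ~50% overhead
print(f"Air-cooled PUE:    {pue(it_load_kw * 1.50, it_load_kw):.2f}")   # 1.50

# Assumed liquid-cooled case: overhead falls to ~15%
print(f"Liquid-cooled PUE: {pue(it_load_kw * 1.15, it_load_kw):.2f}")   # 1.15

# Assumed 2,000,000 litres of water per year against a year of IT energy
print(f"WUE: {wue(2_000_000, it_load_kw * 8_760):.2f} L/kWh")           # ~0.23
```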

Vertiv trends see intense focus on AI enablement and energy management
Intense, urgent demand for artificial intelligence (AI) capabilities and the duelling pressure to reduce energy consumption, costs and greenhouse gas emissions loom large over the data centre industry heading into 2024. The proliferation of AI, as Vertiv predicted two years ago, along with the infrastructure and sustainability challenges inherent in AI-capable computing, can be felt across the industry and throughout the 2024 data centre trends forecast from Vertiv. "AI and its downstream impact on data centre densities and power demands have become the dominant storylines in our industry," says Vertiv CEO Giordano Albertazzi. "Finding ways to help customers both support the demand for AI and reduce energy consumption and greenhouse gas emissions is a significant challenge requiring new collaborations between data centres, chip and server manufacturers, and infrastructure providers." These are the trends Vertiv's experts expect to dominate the data centre ecosystem in 2024:

AI sets the terms for new builds and retrofits: Surging demand for artificial intelligence across applications is pressuring organisations to make significant changes to their operations. Legacy facilities are ill-equipped to support widespread implementation of the high-density computing required for AI, with many lacking the required infrastructure for liquid cooling. In the coming year, more and more organisations are going to realise half-measures are insufficient and opt instead for new construction, increasingly featuring prefabricated modular solutions that shorten deployment timelines, or large-scale retrofits that fundamentally alter their power and cooling infrastructure. Such significant changes present opportunities to implement more eco-friendly technologies and practices, including liquid cooling for AI servers, applied in concert with air-cooled thermal management to support the entire data centre space.

Expanding the search for energy storage alternatives: New energy storage technologies and approaches have shown the ability to intelligently integrate with the grid and deliver on a pressing objective: reducing generator starts. Battery energy storage systems (BESS) support extended runtime demands by shifting the load as necessary and for longer durations, and can integrate seamlessly with alternative energy sources, such as solar or fuel cells. This minimises generator use and reduces their environmental impact. BESS installations will be more common in 2024, eventually evolving to fit 'bring your own power' (BYOP) models and delivering the capacity, reliability and cost-effectiveness needed to support AI-driven demand.

Enterprises prioritise flexibility: While cloud and colocation providers aggressively pursue new deployments to meet demand, organisations with enterprise data centres are likely to diversify investments and deployment strategies. AI is a factor here as organisations wrestle with how best to enable and apply the technology while still meeting sustainability objectives. Businesses may start to look to on-premise capacity to support proprietary AI, and edge application deployments may be impacted by AI tailwinds. Many organisations can be expected to prioritise incremental investment, leaning heavily on prefabricated modular solutions, and service and maintenance to extend the life of legacy equipment. Such services can provide ancillary benefits, optimising operation to free up capacity in maxed-out computing environments and increasing energy efficiency in the process.
Likewise, organisations can reduce Scope 3 carbon emissions by extending the life of existing servers rather than replacing and scrapping them.

The race to the cloud faces security hurdles: Gartner projects global spending on public cloud services to increase by 20.4% in 2024, and the mass migration to the cloud shows no signs of abating. This puts pressure on cloud providers to increase capacity quickly to support demand for AI and high performance compute, and they will continue to turn to colocation partners around the world to enable that expansion. For cloud customers moving more and more data offsite, security is paramount, and according to Gartner, 80% of CIOs plan to increase spending on cyber/information security in 2024. Disparate national and regional data security regulations may create complex security challenges as efforts to standardise continue.

New course from The Data Lab to boost understanding of AI
AI technology is evolving at an incredible pace, with a raft of new solutions continually being brought to market. But while the benefits that AI could provide are clear, many business leaders are at risk of being overwhelmed by the practical application of AI for their organisation. To address these concerns, The Data Lab has launched a brand-new, online, self-paced course designed to provide business leaders with the practical skills to use AI responsibly within their organisations. The course, Driving Value from AI, has been designed to address the primary questions leaders across all sectors have about AI. The four-week course, delivered over 14 hours, is entirely non-technical, meaning anyone can benefit from participating in it, regardless of their existing knowledge of data and technology. Bringing together expert insights from practitioners across Scotland, and guided by Strategy Advisor and Coach, Craig Paterson, the course will empower learners to better understand how AI could benefit their organisation in a practical training format. Anna Ashton Scott, Programme Manager for Professional Development at The Data Lab, who led the course development team, says, "Business leaders are trying their best to understand and keep up with the ever-changing AI landscape. They may feel embarrassed or vulnerable asking questions about AI and worried about its impact on their employees, organisations and livelihoods. When building the 'Driving Value from AI' programme, we wanted to make it as practical as possible. For anyone who signs up, they'll immediately see how they can translate their knowledge into practical benefits for their organisation, and remove any hesitation they may have had around AI." AI tools are already being found to benefit organisations: a study by Stanford University and MIT found that the technology can increase productivity by 14%. Separately, data from Anthem showed that companies do better by taking an incremental rather than a transformative approach to developing and implementing AI, and by focusing on augmenting human capabilities rather than replacing them entirely. Anna Ashton Scott adds, "What we can achieve with AI will keep evolving and changing, so business leaders must also be nimble and responsive to emerging ethical responsibilities. Leaders must take action to ensure that ethical use of AI is built into operational plans. By doing so, they not only protect the organisation but also provide reassurance to their customers and stakeholders. This new course will ensure that all who participate can take relevant learnings into their organisations and act immediately." Until 22 December 2023, those interested in the course can gain access to a 50% discount using the code EARLYBIRD50. To register or find out more about the course, visit the website.

Research reveals that 95% of security leaders are calling for AI cyber regulations
Research from RiverSafe has revealed that 95% of businesses are urgently advocating for AI cyber regulations, ahead of November's AI Safety Summit. The report, titled 'AI Unleashed: Navigating Cyber Risks Report' and conducted by Censuswide, surveyed 250 cyber security leaders on their attitudes towards the impact of AI on cyber security. Three in four (76%) of the businesses surveyed revealed that the implementation of AI within their operations has been halted due to the substantial cyber risks associated with the technology. Security concerns have also prompted 22% of organisations to prohibit their staff from using AI chatbots, highlighting the deep-rooted apprehension regarding AI's potential vulnerabilities. To manage risks, two-thirds (64%) of respondents have increased their cyber budgets this year, demonstrating a commitment to bolstering their cyber security defences. Suid Adeyanju, CEO at RiverSafe, says, "While AI has many benefits for businesses, it is clear that cyber security leaders are facing the brunt of the risks. AI-enabled attacks can increase the complexity of security breaches, exposing organisations to data incidents, and we still have not explored the full extent of the risks that AI can pose. Rushing into AI adoption without first prioritising security is a perilous path, so striking a delicate balance between technological advancement and robust cyber security is paramount." Almost two-thirds of businesses (63%) expect a rise in data loss incidents, while nearly one in five (18%) respondents admitted that their businesses had suffered a serious cyber breach this year, emphasising the urgency of robust cyber security measures. A link to the full report can be found here.

VAST Data and Lambda partner for Gen AI training
VAST Data and Lambda have announced a strategic partnership that will enable the world's first hybrid cloud experience dedicated to AI and deep learning workloads. Together, Lambda and VAST are building an NVIDIA GPU-powered accelerated computing platform for generative AI across public and private clouds. Lambda has selected the VAST Data Platform, the data platform designed from the ground up for the AI era, to power Lambda's on-demand GPU cloud, providing customers with the fastest and most optimised GPU deployments for Large Language Model (LLM) training and inference workloads in the market. Lambda customers will also have access to the VAST DataSpace, a global namespace to store, retrieve, and process data with high performance across hybrid cloud deployments. "Lambda is committed to partnering with the most innovative AI infrastructure companies in the market to engineer the fastest, most efficient, and most highly optimised GPU-based deep learning solutions available," says Mitesh Agrawal, Head of Cloud and COO at Lambda. "The VAST Data Platform enables Lambda customers with private cloud deployments to burst swiftly into Lambda's public cloud as workloads demand. Going forward, we plan to integrate all of the features of VAST's Data Platform to help our customers get the most value from their GPU cloud investments and from their data." Lambda chose the VAST Data Platform for its balance of delivering:

Simplified AI infrastructure: The NVIDIA DGX SuperPOD certification of the VAST Data Platform allows Lambda to simplify data management and improve data access across its private cloud clusters.

HPC performance with enterprise simplicity: Its highly performant architecture is built for AI workloads, allowing for faster training of LLMs and preventing bottlenecks in order to extract the maximum performance from GPUs.

Data insights and management: The VAST DataBase offers structured and unstructured data analytics and insights that can be rolled out easily.

Data security: It provides multiple security layers across the VAST Data Platform, including encryption, immutable snapshots and auditability, providing customers with a zero-trust configuration for data in flight and at rest in a multi-tenant cloud environment.

Flexible scalability: It also simplifies multi-site and hybrid cloud operations to allow customers to easily scale to hundreds of petabytes and beyond as they grow.

"Lambda and VAST are bound by a shared vision to deliver the most innovative, performant, and scalable infrastructure purpose-built from day one with AI and deep learning in mind," says Renen Hallak, Founder and CEO of VAST Data. "We could not be happier to partner with a company like Lambda, who are at the forefront of AI public and private cloud architecture. Together, we look forward to providing organisations with cloud solutions and services that are engineered for AI workloads, offering faster LLM training, more efficient data management, and enabling global collaboration to help customers solve some of the world's most pressing problems."

Macquarie Data Centres calls on enterprises to build stronger AI foundations
Macquarie Technology Group's Head of Private Cloud, Jonathan Staff, has called on enterprises and technology providers to do more to lay down the right foundations on which to build AI, warning that cutting corners will come with major risks down the line. Speaking at the Future of Tech, Innovation and Work Conference in Sydney, the technology expert told business leaders they need to approach their AI strategy with a holistic, long-term lens. "Businesses everywhere are scrambling to figure out how they can leverage AI and make sure they stay ahead of the competition. But in this mad rush to the finish line, we're seeing lots of companies fail to invest in the right foundations needed to scale in the future," says Jonathan. "AI is a huge investment, and there is a lot at stake. It is expensive to 'lift and shift' these operations once they're set up, so getting it right from the get-go is so important." Jonathan highlights the challenge of securing the right infrastructure to maximise efficiency, a key priority for Macquarie Data Centres and a factor it says many overlook when embarking on their AI journey. In response to AI's greater demands for power, cooling and specialised technology, the data centre company is focusing on providing the high-density environments required by these power-hungry AI engines. The company has recently revealed plans to supercharge its next and largest facility, IC3 Super West, which is being purpose-built for the AI era. The Sydney-based data centre will offer AI-ready environments and be flexible enough to accommodate technologies such as advanced GPUs and liquid cooling. Macquarie recently secured a 41% increase in power to IC3 Super West, bringing the total load of its campus to 63MW. The industry veteran also stressed the importance of making sure new AI tools are properly integrated into a company's existing systems. "Organisations need to think carefully about how the AI is going to talk to your existing systems. If you build a new AI tool as a siloed project, and it takes off, then you're going to have huge problems, and probably huge costs, trying to retrofit it and incorporate it into pre-existing systems. Organisations need to lay the right foundations to make sure everything is connected now, and that all the systems will provide enough runway to scale and grow quickly in the future." Jonathan calls on the industry to prepare for an AI-driven future and think about how it can adapt to capitalise on the opportunities. However, he also stressed that this needs to be done in a way that is compliant with current and future data regulations. "AI is currently the wild west, but you can expect regulation around sovereignty and data compliance to get tighter in many countries. Businesses need to choose partners that have the right certifications and policies in place to ensure compliance now and into the future."


