Features


NetApp optimises VMware environments with new capabilities
NetApp, an intelligent data infrastructure company, has announced new capabilities that support VMware Cloud Foundation deployments. Mutual customers will be able to leverage NetApp solutions to right-size their IT environments and run VMware workloads efficiently at scale.

For more than a decade, NetApp and VMware have collaborated to ensure the success of their joint customers and to help them unlock the full value of their VMware investments. During that time, NetApp has been a key engineering design partner with VMware and continues to drive innovation in highly available, scalable, and performant storage as a design partner for its Next-Generation vSphere Virtual Volumes (vVols). Now, NetApp is announcing new capabilities that will enable joint customers to run their VMware deployments more efficiently.

“NetApp and Broadcom are working together to take the uncertainty out of hybrid cloud environments,” explains Jonsi Stefansson, Senior Vice President and Chief Technology Officer at NetApp. “More than 20,000 customers rely on NetApp to support their VMware workloads. NetApp's continued close collaboration with Broadcom following the acquisition of VMware ensures our solutions seamlessly interoperate so our mutual customers can leverage a single intelligent data infrastructure to operate their VMware workloads more efficiently.”

NetApp is helping to optimise costs, simplify operations, and increase flexibility for customers running VMware environments by offering:

• Expanded support for VMware Cloud Foundation (VCF): NetApp and Broadcom customers will now be able to simplify their VCF hybrid cloud environments by using NetApp ONTAP software for all storage requirements, including standard and consolidated architectures. The latest release of ONTAP Tools for VMware (OTV) will support SnapMirror active sync to provide symmetric active-active data replication for NetApp storage systems running VMware workloads. SnapMirror active sync allows customers to operate more efficiently by offloading data protection from their virtualised compute and improving data availability (a hedged configuration sketch follows this list).
• New capabilities for Azure VMware Solution (AVS): To support customers extending or migrating their vSphere workloads to the cloud, customers can now leverage Spot Eco by NetApp with AVS reserved instances to get the most value out of their deployments. Using Spot Eco to manage AVS reserved instances while also using Azure NetApp Files to offload data storage can reduce compute costs significantly.
• Enhanced VM Optimisation features for NetApp Cloud Insights: NetApp is introducing Cloud Insights VM Optimisation, expanding its comprehensive solution for optimising virtual environments, including VMware. Cloud Insights VM Optimisation will give customers tools to reduce costs by increasing VM density, run storage at the best price-to-performance ratio for their environment, and monitor their entire environment to ensure availability, performance, and adherence to configuration best practices across the entire stack.

To help customers optimise the compute, memory, and storage resources of their VMware environments, NetApp is also offering a free 30-day trial of Cloud Insights so they can migrate to the new VMware software subscriptions as cost-effectively as possible.
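As an illustration of what the SnapMirror active sync bullet above might look like in practice, here is a minimal Python sketch that drives ONTAP's documented REST endpoint for SnapMirror relationships. The cluster address, credentials, volume paths, and policy name are assumptions for illustration, not a verified recipe; consult NetApp's documentation for the real workflow.

```python
import requests

# Illustrative only: cluster address, credentials, paths, and the policy
# name below are assumptions, not a verified NetApp procedure.
ONTAP = "https://cluster.example.com"
AUTH = ("admin", "secret")  # hypothetical credentials

# POST /api/snapmirror/relationships is ONTAP's REST endpoint for creating
# SnapMirror relationships; an automated-failover policy is the mechanism
# behind SnapMirror active sync (the exact policy name is an assumption).
payload = {
    "source": {"path": "svm_site_a:vol_vmfs_ds01"},
    "destination": {"path": "svm_site_b:vol_vmfs_ds01_dst"},
    "policy": {"name": "AutomatedFailOverDuplex"},  # symmetric active-active
}

resp = requests.post(
    f"{ONTAP}/api/snapmirror/relationships",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only shortcut; verify TLS certificates in production
)
resp.raise_for_status()
print("Relationship request accepted:", resp.json())
```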
These offerings follow last month’s release of enhancements to the NetApp BlueXP disaster recovery service, which provides guided workflows to design and execute automated disaster recovery plans for VMware workloads across hybrid cloud environments, with newly added support for VMFS datastores.

“As organisations modernise infrastructure with VMware Cloud Foundation, they want to know that the services upon which they rely from industry leaders such as NetApp will continue to work seamlessly and deliver the value they have come to expect,” says Paul Turner, Vice President of Products, VCF Division at Broadcom. “Having NetApp as a close collaborator helps our mutual customers deploy innovative data and storage services on top of their private cloud platform, and ensure they are getting the most value out of their VMware environments.”

“We have made Microsoft Azure the cloud of choice for VMware environments, and offer fast and cost-effective solutions enabling many customers to move their VMware workloads to the cloud,” says Brett Tanzer, Vice President of Product Management at Microsoft. “As VMware customers navigate changes to operating virtualised environments, we have given our customers a way to lock in secure and predictable pricing over multiple years. NetApp's data management and cloud observability capabilities help our customers ensure those deployments are delivering the return on investment they need.”

“In an ever more complicated world of cloud, data, and infrastructure operations, IT teams are increasingly looking for holistic platforms over point solutions,” notes Scott Sinclair, Practice Director, Enterprise Strategy Group. “These joint updates from NetApp and Broadcom enable customers to use NetApp’s intelligent data infrastructure to consolidate multiple data operations onto a single platform with industry-leading data management and CloudOps capabilities. That will help customers drive greater operational and infrastructure efficiencies that reduce the total cost of ownership for their VMware investments.”

For more from NetApp, click here.

DataVita launches alternative to public cloud services
DataVita, Scotland’s largest data centre and cloud services provider, has launched National Cloud, a platform that addresses the challenges many organisations face when using the public cloud and supports the wider use of artificial intelligence (AI). National Cloud is designed for workloads that don’t fit well into the traditional public cloud, and it offers full transparency and predictability over costs – with no hidden fees or egress charges – removing the risk of the potentially unbounded costs that come with existing pay-as-you-go public cloud models.

The service is delivered from some of the most sustainable data centres in the UK, offering the lowest digital emissions of any cloud provider in the country, DataVita claims. It also ensures data residency within the UK – whereas many public cloud operators use overseas facilities – addressing compliance and security concerns for public services and regulated industries. Built on open architecture and interoperable systems, National Cloud allows integration and movement between different products, which reduces vendor lock-in risks and enables hybrid cloud capabilities. It is purpose-built to handle complex workloads, supporting the requirements of medium and large enterprises, tech start-ups, and public sector organisations. DataVita has forged a strategic partnership with Hewlett Packard Enterprise (HPE) to deliver the new platform, combining the agility of a specialised cloud provider with the robust technology, industry experience, and simplicity of an established market leader.

Danny Quinn, MD of DataVita, says, "National Cloud is our answer to the recurring issues organisations face with public cloud services. We're actively seeking out workloads that public clouds can't efficiently support, particularly those driven by AI's growing demands and organisations requiring intricate hybrid cloud architectures. Our platform offers the performance, security, and scalability needed for these intensive applications, all while providing cost predictability and sustainability.

"Available across the UK, we're already seeing significant interest from organisations looking to escape the limitations of public cloud services and gain more control over their critical workloads and data. Our focus on transparency, sustainability, and expertly tailored solutions sets us apart in the market."

Xavier Poisson, Global Vice President for Service Providers & Colocation Providers at HPE, adds, “The National Cloud combines HPE GreenLake cloud’s leading capabilities to manage, observe, automate and secure complex cloud environments, with DataVita’s experience as a leading provider of cloud and connectivity services. With this new platform, customers across the UK will have access to a trusted infrastructure that allows them to stay in full control of their data and derive value from it.”

For more from DataVita, click here.

Feature - Data centre growth requires sustainable thinking
The development of AI is having a huge impact on almost every industry, and none more so than data centres. Global data centre electricity demand is expected to double by 2026 due to the growth in AI. So, how do we ensure our data centres are operating as efficiently as possible? Russell Dailey, Global Business Development Manager, Data Centres at Distech Controls, explains.

We are generating more and more data in all aspects of our lives, whether through our business operations, the use of social media, or even our shopping habits with the growth of e-commerce. Our new dependence on web services and digital infrastructure requires a greater number of data centres, and we need them to operate more reliably and efficiently than ever before. According to the International Energy Agency (IEA), data centres used 460 terawatt hours of electricity in 2022, and it expects this figure to double in just four years: data centres could be using a total of 1,000 terawatt hours annually by 2026.

This demand for electricity has a lot to do with the growth in AI technology. In a similar way to how the growth of e-commerce drove uptake of large industrial warehouses, AI is expected to more than double the need for global data centre storage capacity by 2027, according to JLL’s Data Centres 2024 Global Outlook. As data centres contribute substantially to global electricity consumption, more facilities are seeking to adopt enhanced sustainability strategies. To achieve net zero emissions targets or other environmental objectives, data centre companies must invest heavily in energy efficiency measures. A Building Management System (BMS) can form the cornerstone of these efforts, providing insights into energy usage and helping to reduce unnecessary energy waste through enhanced operational efficiency. Data centres are unique buildings, and a BMS within this environment requires careful planning and implementation.

Let’s be open

In the past, building systems have traditionally been proprietary and less flexible than open systems. Proprietary systems speak different languages, resulting in incomplete visibility, data, and reliability, and they leave you tied to one, often expensive, service provider. That is changing, however: open systems are becoming ever more popular in commercial buildings and have numerous benefits for data centres. Open systems offer monitoring and analytics at the local controller, reducing network complexity and increasing redundancy and security. With Distech Controls, operators can keep their facility at optimal performance through a proven IP-based solution that creates a more secure and flexible network, enabling easy integration of systems with a wide range of IT and business applications. Distech Controls’ commitment to open protocols and industry IT standards, combined with its best-in-breed technology, creates a sustainable foundation that supports and evolves with a building system’s life cycle.

Efficient and forward thinking

Open systems also affect the sustainability of a data centre. They can bring everything together in a cohesive and centralised fashion, allowing users to visualise information, assess relationships, establish benchmarks, and then optimise energy efficiency accordingly. Distech Controls’ solutions meet even the most demanding data centre control requirements (even remotely) via fully programmable controls and advanced graphical configuration capabilities.
They leverage technology such as RESTful APIs, BACnet/IP, connected controllers, and unified systems to help future-ready your data centre as technology continues to advance.

The importance of security

The smarter buildings become, the greater the importance of cyber security. There are some fundamentals that building owners and system integrators need to consider when it comes to the security of their BMS. As a starting point, the devices, or operational technology (OT), should be on a different network to the IT system, as they have separate security requirements and different people need to access them. As an example, contractors overseeing BMS devices do not need access to HR information. Each device should be locked down securely so it can only communicate in the way that is required: there should be no unnecessary inbound or outbound traffic from the devices. This links neatly to monitoring. It is vital to monitor the devices after installation and commissioning to ensure there is no untoward traffic that could threaten a building's or a company’s security.

Some manufacturers, such as Distech Controls, are ensuring their products are secure straight out of the box. Security features are built directly into hardware and software, such as TLS 256-bit encryption, a built-in HTTPS server, and HTTPS certificates. For instance, the ECLYPSE APEX incorporates a secure boot and additional physical security measures to help overcome today’s security challenges. Distech Controls’ solutions are specified by leading web service providers because of their high resiliency, flat IP system architecture, and open protocol support. They also incorporate the right technologies to comply with the most stringent cybersecurity standards, as well as RESTful API / MQTT for OT/IT interoperability purposes. These attributes give data centre operators, integrators, and contractors the freedom to choose best-in-class solutions for their data centre’s infrastructure management services. These advanced features enable significant operational efficiency improvements and energy cost reductions for data centre owners and managers.

AI technology is already having a revolutionary effect on business and our personal lives. At Distech Controls, we are utilising its capabilities, and it is clear that this revolution is going to require more data centres. We need to look at ways to make these specialist buildings as efficient as possible. Utilising an intelligent and open BMS is essential.
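To make the lockdown-and-monitor guidance above concrete, here is a minimal, entirely hypothetical Python sketch (not tied to any Distech Controls product) that checks BMS/OT flow records against a per-device allow-list and flags unexpected traffic. The addresses and flows are invented; 47808 is the standard BACnet/IP port.

```python
# Hypothetical example: flag OT traffic that falls outside a device's
# allow-list. In practice, flow records would come from a firewall or
# network tap; the devices and flows here are invented for illustration.

ALLOWED = {
    # device address: set of (destination, port) pairs it may talk to
    "10.20.0.11": {("10.20.0.1", 47808)},                      # controller -> supervisor
    "10.20.0.12": {("10.20.0.1", 47808), ("10.20.0.2", 443)},  # controller -> supervisor/API
}

flows = [
    {"src": "10.20.0.11", "dst": "10.20.0.1", "port": 47808},   # expected
    {"src": "10.20.0.11", "dst": "203.0.113.9", "port": 443},   # unexpected egress
]

for flow in flows:
    allowed = ALLOWED.get(flow["src"], set())
    if (flow["dst"], flow["port"]) not in allowed:
        # A real system would raise an alert to operators rather than print.
        print(f"ALERT: {flow['src']} -> {flow['dst']}:{flow['port']} not on allow-list")
```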

Malaysia’s first 5G orchestration platform announced
Maxis has partnered with Singtel to introduce Malaysia's first all-in-one platform for 5G network, edge computing, cloud, and services orchestration, built on Singtel’s Paragon platform for telco networks. The platform will make 5G-Advanced (5G-A) and 5G technology, edge and multi-cloud computing more accessible to Malaysian businesses and accelerate digital transformation across verticals such as manufacturing, logistics, healthcare, and public services.

The partnership was formalised during the Malaysia Commercialisation Year (MCY) Summit 2024, an event organised by the Ministry of Science, Technology and Innovation (MOSTI) to drive the national commercialisation ecosystem. The Summit was officiated by Guest of Honour YAB Dato’ Seri Anwar Ibrahim, Prime Minister of Malaysia. The Memorandum of Understanding (MoU) exchange took place at the Maxis booth between Goh Seow Eng, CEO of Maxis, and Manoj Prasanna Kumar, Chief Technology Officer of Singtel Digital InfraCo.

Made available in Malaysia through Maxis’ enterprise arm, Maxis Business, the platform will enable on-demand edge computing services, providing customers with access to low-latency computing, GPU as a Service (GPUaaS), and storage. With its multi-access edge computing (MEC) capabilities, data from end-users and devices can be processed at the edge. This, combined with Artificial Intelligence (AI), provides real-time processing and intelligent decision-making. Paragon offers a powerful marketplace available to both Maxis customers and partners, making it easy for enterprises to create network slices on-demand and deploy mission-critical 5G applications on MEC – all with the click of a button.

Goh Seow Eng, CEO of Maxis, says, "Our collaboration addresses customer needs by providing a unified 5G platform that simplifies orchestration across network and cloud environments. The platform enables greater access, speed and flexibility for businesses to deploy and manage 5G and cloud computing solutions seamlessly.

"This will help them to focus on their core business, further strengthening their competitiveness globally. As the country’s leading integrated telco, we look forward to accelerating 5G-A and 5G adoption and the digitalisation of Malaysian businesses as the preferred digital business partner.”

Bill Chang, CEO of Singtel Digital InfraCo, adds, "We’ve seen a strong shift in demand from enterprises for 5G and edge computing capabilities to accelerate their digital transformation. Singtel has been leading the charge, forging strategic partnerships with telcos globally, with Paragon being instrumental in this adoption.

"Paragon enables faster monetisation of 5G infrastructure by reducing the complexity for telcos of delivering and scaling 5G use cases. We are pleased to partner with Maxis to mutually expand and deepen the service opportunities of 5G and edge monetisation in Malaysia.”

The platform’s capabilities can benefit businesses through a wide range of enterprise applications across different verticals that require high-speed data processing, such as real-time analytics, mixed reality, and autonomous systems. The platform will be locally hosted and deployed in Malaysia to cater to the cybersecurity and data sovereignty requirements of Malaysian businesses. Platforms built and powered by Paragon have now been successfully deployed in four ASEAN markets.
With Maxis bringing the capability exclusively to Malaysia, multinational companies will be able to operate with ease, with a common experience and unified architecture across the region. The deployment is timely given Malaysia’s upcoming ASEAN chairmanship in 2025, when the nation will play a more active role leading the regional bloc’s economic and digital aspirations. Maxis says that the partnership demonstrates its commitment to bringing innovation to the market and enabling Industry 4.0 transformation through real-world use cases. In addition to 5G and cloud-native solutions, this commitment includes leading the development of next-generation solutions in Malaysia around AI and Generative AI, IoT, and 5G-Advanced technology.

Singtel Paragon is a comprehensive solution that enables enterprises to connect to the 5G network and rapidly deploy their edge computing applications and services securely on the telco’s infrastructure, reducing time-to-market and shortening the innovation curve. Paragon will also lower the barriers to adopting MEC services, unlocking efficiency and business innovation for enterprises. For more from Singtel, click here.
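Singtel has not published Paragon's programmatic interface, so the sketch below is entirely hypothetical: it only illustrates, in Python, what the "network slice on demand" idea described above might reduce to. Every endpoint, field, and value here is invented.

```python
import requests

# Entirely hypothetical orchestration API -- Paragon's real interface is not
# public. This only illustrates the 'slice on demand' concept in the article.
API = "https://orchestrator.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# A low-latency slice plus an edge workload for, say, factory machine vision.
slice_request = {
    "name": "factory-vision-qc",
    "latency_ms": 10,                 # target latency for the slice
    "throughput_mbps": 500,
    "edge_site": "kuala-lumpur-mec-01",
    "workload_image": "registry.example.com/qc-inference:1.4",
}

resp = requests.post(f"{API}/slices", json=slice_request, headers=HEADERS)
resp.raise_for_status()
print("Slice provisioned:", resp.json().get("id"))
```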

RSM increases network resilience with Macquarie
Macquarie Telecom, part of Macquarie Technology Group, has signed an agreement with RSM Australia covering both SD-WAN network and voice services. The partnership first began in late 2019, amid a period of significant growth for RSM. Following the implementation of cutting-edge SD-WAN technology across its 32 locations, including a significant presence in regional Australia, RSM has seen a remarkable transformation in network performance and user experience. This success drove RSM to extend the partnership in 2021 to include SIP trunking – the use of VoIP – to maintain telephone service across the large and far-flung organisation.

RSM, a global provider of accounting and professional services established over 100 years ago, realised its existing MPLS network was not the best solution to support its rapidly expanding business – particularly in terms of resiliency, capacity, and cost. The previous network, providing private connectivity between RSM’s locations and the cloud, did not deliver the expected resiliency. This inadequacy, coupled with the high cost and a challenging relationship with the previous provider, prompted RSM to seek a more reliable, cost-effective, and customer-focused solution. The company recognised the potential of SD-WAN technology to greatly improve network bandwidth and resiliency, and went to market for a partner to deliver this solution. Macquarie Telecom emerged as the top contender following a rigorous due diligence process, and the roll-out began in late 2019, just prior to the onset of the COVID-19 pandemic.

“Despite the challenges posed by the pandemic, Macquarie Telecom’s proactive and responsive approach ensured the deployment continued smoothly,” says Andre Bracetti, IT Manager at RSM. Deployment was completed by mid-2020, significantly enhancing RSM’s network performance and reliability. This success was further highlighted by RSM’s recent decision to extend the partnership.

RSM’s commitment to reaching regional areas has been a key component in its decision to initially engage, and now extend its partnership with, Macquarie Telecom. According to Andre, a crucial part of the firm’s history and strategy is to be closer to its clients, particularly those in regional areas: “SD-WAN has completely transformed the service we can provide our clients, from Bunbury to Ballarat and everywhere in between. Previously, we faced limited bandwidth and frequent outages, especially in our regional offices. With the new technology, we’ve increased capacity and resiliency, reducing IT outages and significantly improving network reliability.”

Previously, some of RSM’s regional offices had access to very limited bandwidth, which often led to poor user experiences. With SD-WAN, a secondary link was added to every location, which improved not only capacity but also reliability, drastically improving user experience. Client experience is a top priority for RSM, and Andre notes that Macquarie Telecom’s shared passion for customer experience made the partnership a perfect fit. He comments, “We have what we refer to as a ‘priority one’ event. This means any type of disruption that impacts multiple people in multiple locations. We chart the trends of priority one events over time and there is an obvious downturn in network outage events since Macquarie Telecom implemented SD-WAN.
Even when we’re alerted to an issue, with the resiliency that Macquarie Telecom SD-WAN provides, there’s no impact on user experience.”

The partnership has allowed RSM to focus on strategic initiatives rather than managing network stability issues. This shift has enabled the team to concentrate on improving system performance, enhancing security, improving digital literacy, and supporting business acquisitions and office moves. Andre continues, “The role of an ideal partner in innovation is important. We chose Macquarie Telecom because of its competitive edge and the reliability of its service. This partnership has minimised distractions and allowed us to focus on strategic growth and client satisfaction.”

Aaron Tighe, Western Australia State Manager at Macquarie Telecom, adds, “RSM is deeply committed to maintaining world-class customer service. At Macquarie, we seek to make a difference in markets that are overcharged and underserved, with customer experience magic at the centre. It is a privilege to support RSM as the company excels in a challenging market.”

Looking ahead, RSM continues to prioritise innovation and expanding its regional reach, with a recent practice established in Gosford and others on the horizon. The partnership with Macquarie Telecom will play a crucial role in this journey, providing the technological foundation necessary to explore new opportunities and enhance service offerings. For more from Macquarie, click here.

A sustainable future for data centres
By Ruari Cairns, Director of Risk Management and European Operations, True (powered by Open Energy Market).

In recent years, the data centre industry has witnessed significant growth and innovation, with notable developments such as Google's £1 billion investment in a new data centre in Hertfordshire and Octopus Energy’s commitment to utilising energy from processing centres to heat swimming pools. These advancements underscore the industry’s critical role in supporting our increasingly digitalised world. Since 2010, demand for digital services has increased rapidly, with the number of global internet users more than doubling and internet traffic increasing 25-fold, according to the International Energy Agency (IEA). Data centres serve as the backbone of digital infrastructure, providing the storage, processing, and connectivity needed not only for day-to-day activity, but also to enable businesses to innovate, improve efficiency, and stay competitive in an increasingly digital age.

The environmental impact of data centres

Data centres are undeniably energy-intensive operations. A report from the IEA reveals that data centres and data transmission networks collectively accounted for approximately 330 Mt CO2 equivalent in 2020. Despite their environmental footprint, data centres are indispensable. Innovation is therefore vital to decarbonise the sector and make it more energy efficient.

Challenges surrounding sustainability

Encouragingly, major corporations are placing greater emphasis on environmental credentials. A prime example is Google Cloud, which has committed to achieving a carbon-neutral footprint and transitioning to completely carbon-free energy across all its global data centres by 2030. This ambitious target highlights the growing demand for sustainable solutions in the sector. However, regardless of the growing emphasis, data centres face significant challenges in achieving these green objectives. Balancing energy procurement during operational ramp-up periods and navigating regulatory complexities pose strategic hurdles. The insatiable demand for data presents capacity issues and strains on energy availability, necessitating innovation and collaboration for a greener future. Looking ahead to the next 12-18 months, the industry will face not only pressures around sustainability but potentially also capacity constraints and energy availability challenges.

Power purchase agreements

To achieve sustainability goals, data centres will adopt various strategies in 2024, including securing long-term power purchase agreements (PPAs) for renewable energy procurement. With PPAs, data centres enter into agreements with renewable energy providers to ensure a consistent and sustainable source of electricity. According to a report by BloombergNEF, corporate PPAs for renewable energy reached a record 23.7 gigawatts in 2020, with data centres among the key sectors driving this growth. This trend is likely to continue, with more data centres looking for reliable green energy sources.

Energy audits

It is also important that data centres perform energy audits more regularly. Energy audits identify areas of high energy consumption and help data centres implement energy-efficient measures. This can include optimising server utilisation, upgrading to energy-efficient hardware, and implementing advanced cooling technologies. According to a study commissioned by the US Department of Energy, energy audits can lead to energy savings of up to 30%.
Infrastructure design

Data centres will continue to prioritise energy efficiency in their infrastructure design into 2024. This includes using energy-efficient servers, cooling systems, and power distribution units. Advanced cooling technologies, such as liquid cooling, will also gain traction to improve energy efficiency. Advanced systems and infrastructure can help data centres optimise cooling operations by adjusting cooling levels based on heat loads and demand. This minimises overcooling and ensures resources are used more efficiently. In some cases, cooling systems use up to 40% of the total energy a data centre needs. By implementing advanced, green cooling technologies, data centres can make substantial and critical energy and carbon savings.

Onsite generation

Due to growing concerns about grid capacity, data centres will increasingly invest in on-site energy generation technologies, primarily solar and wind turbines, as these are easily tailored to each location’s needs. By generating their own clean energy, data centres can reduce their dependence on the grid and minimise their carbon footprint. Highlighting this point, Google recently signed its first PPA in Ireland for a 58-megawatt solar site to help its offices and data centre in Ireland reach 60% carbon-free energy in 2025. The trend will continue this way as more data centres seek the security that comes with onsite generation.

Conclusion

As the digital revolution accelerates, the sustainability of data centres becomes key. By prioritising sustainability, navigating regulatory challenges, and adopting strategic energy procurement strategies, data centres can pave the way for a greener and more resilient future. Collaboration between industry stakeholders and government support will be pivotal in driving collective progress towards a sustainable data centre ecosystem.

Vultr and Run:ai deliver advanced NVIDIA GPU orchestration
Vultr, the privately held cloud computing platform, today announced that Run:ai, a leader in AI optimisation and orchestration, is the latest ecosystem partner to join its Cloud Alliance. Run:ai’s advanced AI workload orchestration platform, coupled with Vultr’s robust, scalable cloud infrastructure – including Vultr Cloud GPUs, accelerated by NVIDIA computing technologies, and Vultr Kubernetes Engine – provides the enhanced computational power needed to accelerate AI initiatives across Vultr’s global network of 32 cloud data centre locations.

As businesses across industries look to deploy their AI initiatives, they often grapple with scaling AI training jobs, fragmented AI development tools, and long queue times for AI experimentation. The partnership between Vultr and Run:ai addresses these challenges, offering a solution that enhances resource utilisation, supports rapid AI deployment, and provides customisable scalability through integrated infrastructure and advanced AI workload orchestration.

“Enterprises around the world are vying to deploy transformative, AI-driven solutions,” says Sandeep Brahmarouthu, Head of Global Business Development at Run:ai. “Our partnership with Vultr will give these organisations a comprehensive solution suite designed to address the technical challenges of AI project development. Now, businesses are empowered with unparalleled adaptability, performance, and control, setting a new standard for AI in today’s rapidly evolving digital landscape.”

Vultr’s scalable infrastructure supports unified AI stack management, thanks to seamless integrations with existing Cloud Alliance partners Qdrant and Console Connect. Qdrant, a high-performance vector database with retrieval-augmented generation (RAG) capabilities, manages and queries large volumes of vector data, enhancing tasks like similarity search and recommendation systems. Console Connect facilitates private, high-speed networking to ensure secure, low-latency data transfer between these components, optimising the overall AI/ML pipeline. Now, Run:ai has become the newest member of the Cloud Alliance with its advanced AI workload orchestration platform. This integrated stack, centred around Vultr, provides a robust, scalable, and efficient solution for handling the most demanding AI/ML workloads. As a result, customers can benefit from:

• Enhanced GPU utilisation – Maximise GPU efficiency with Run:ai’s dynamic scheduling and fractional GPU capabilities, reducing idle times and optimising resource use on Vultr’s scalable infrastructure.
• Accelerated AI development – Speed up AI development cycles with Vultr’s high-performance cloud infrastructure and Run:ai’s comprehensive orchestration platform, reducing time to market for AI models.
• Simplified lifecycle management – Streamline the entire AI lifecycle from development to deployment with integrated tools for dynamic resource allocation, efficient training, and comprehensive monitoring.
• Cost-effective operations – Minimise operational costs with Vultr’s affordable cloud solutions and Run:ai’s efficient resource management, ensuring economical AI project execution.
• Robust security and compliance – Bolster the security and compliance of AI workloads with advanced features like role-based access control and detailed audit logs, backed by Vultr’s secure infrastructure.

Kevin Cochrane, CMO of Vultr, comments, “We are committed to giving our customers the best-of-breed technologies needed to help achieve their business goals.
By partnering with Run:ai, we’ve provided a unique solution tailored specifically for AI workloads. Our integrated platform will not only ensure high performance and cost efficiency for customers worldwide, but also give them the agility needed to navigate the evolving demands of modern AI environments.” Vultr is a member of the NVIDIA Partner Network. For more from Vultr, click here.
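As a concrete taste of the Qdrant piece of the stack described above, the toy below uses the open-source qdrant-client Python library to index a couple of vectors and run a similarity search. The collection, vectors, and payloads are invented; a real RAG pipeline would generate embeddings with a model rather than hand-writing them.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# Local in-process instance for illustration; a deployment would point at a
# hosted Qdrant endpoint instead.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Toy vectors; an embedding model would produce these in practice.
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.1, 0.9, 0.1, 0.0], payload={"text": "GPU scheduling guide"}),
        PointStruct(id=2, vector=[0.8, 0.1, 0.0, 0.1], payload={"text": "object storage pricing"}),
    ],
)

hits = client.search(collection_name="docs", query_vector=[0.2, 0.8, 0.1, 0.0], limit=1)
print(hits[0].payload)  # nearest neighbour by cosine similarity
```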

Can AI limit the environmental damage it’s responsible for?
Data centres, expected to account for 6% of the world’s carbon footprint by 2030, are undergoing a period of transformation, driven by the rise of AI and the pressing need to combat climate change. With such rapid growth comes unforeseen environmental impacts, highlighting the significance of applying AI technologies to optimise energy use. It is undeniable that the data-intensive workloads generated by AI will see power consumption soar to unprecedented levels. However, the technology itself can help develop the next generation of data centres that are both high-capacity and more sustainable. According to Julien Deconinck, Managing Director at DAI Magister, environmental concerns are driving the development of innovative AI solutions that optimise energy usage in data centres while reducing operating costs.

Julien explains, “Over the next five years, the amount of data generated will surpass the total produced in the past decade, necessitating a significant expansion of storage capacity in data centres worldwide. Another key factor contributing to this rising energy demand is the escalating computational power required for AI training, which is doubling every six months.

“Tech giants, recognising the scale of the problem and their significant contribution to it, are racing to mitigate the environmental impact of their operations. These companies face mounting pressure to reduce their carbon footprint and meet neutrality targets.

“Most data centres aim to operate in a ‘steady state’, striving to maintain consistent and predictable energy consumption over time to manage costs and ensure reliable performance. As a result, they’re dependent on the local electricity grid, where outputs can fluctuate significantly. AI-driven solutions offer enormous potential to address these challenges by optimising energy usage and predicting and managing demand more effectively.

“Integrating renewable energy sources like solar and wind into the grid can improve data centre sustainability, but this presents challenges due to their variable availability. AI addresses this by forecasting renewable energy availability using weather data and predictive analytics. This enables data centres to shift non-critical workloads to peak renewable energy production periods, maximising the use of clean energy and reducing reliance on fossil fuels.

“When assessing the efficiency of a facility, the power usage effectiveness (PUE) measure serves as a crucial metric. By monitoring and adjusting operational parameters in real time, AI sensors autonomously adjust power supply voltages, reducing consumption without compromising performance.

“AI algorithms analysing usage patterns and optimising workload distribution further reduce the energy waste associated with inadequate server management and inconsistent allocation. The optimisation of computing resources in data centres minimises the need for, and use of, excess capacity, both lowering operating costs and maximising performance capabilities.”

AI can also pre-empt system issues that can lead to breakdowns or long-term disruption. “AI sensors are facilitating predictive maintenance by analysing real-time data to detect anomalies or deviations in consumption patterns. Once identified, AI systems alert operators to the issue, preventing the activation of energy-intensive emergency cooling systems.
“Integration of AI sensors is further beneficial in thermal modelling, enabling dynamic adjustments to systems that account for high-intensity computing tasks and external temperature fluctuations by predicting potential hotspots within the facility, based on collected data.”

Julien concludes, “Together, AI and green technologies are set to revolutionise data centre operations by allowing them to manage larger capacities while reducing their carbon footprint. This not only supports sustainability objectives but also safeguards the transition to low-carbon, high-capacity data centres as demand for data storage and processing continues to surge with the rise of AI.”
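A quick worked example of the PUE metric Julien mentions: PUE is defined as total facility energy divided by IT equipment energy, so lower overhead (cooling, power distribution) pushes the ratio towards 1.0. The numbers below are made up purely to show the arithmetic.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Made-up numbers: 1,000 kW of IT load plus 500 kW of cooling/other overhead.
print(pue(1500.0, 1000.0))  # 1.5
# If smarter cooling control trims overhead from 500 kW to 350 kW:
print(pue(1350.0, 1000.0))  # 1.35
```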

Black Box named a 'Partner of the Year' by Juniper
Black Box, a global provider of innovative communication and technology solutions, has been recognised as Worldwide GSI AIDE 2023 Partner of the Year by Juniper Networks, a leader in secure AI-native networks. Each year, Juniper Networks recognises partners based on their ability to drive innovative AI-native business solutions, providing exceptional customer and user experiences while achieving their financial goals. Black Box was recognised as the 2023 winner in the Global Solutions Integrator (GSI) category for its ability to modernise automated cloud-based network solutions integrating Juniper Networks' AI-native technologies.

Juniper's 2023 Partner of the Year Awards are hosted as part of the Juniper Partner Advantage Programme. The programme not only recognises partners for their outstanding performance in delivering digital transformation to customers, but also helps partners build, sustain and grow their Juniper Practice with the right support and tools to leverage the next generation of networking solutions.

Gordon Mackintosh, GVP of Partner Organisation, Juniper Networks, comments, "We are thrilled to recognise Black Box as the Worldwide GSI AIDE 2023 Partner of the Year. The company's commitment to driving innovative, AI-native business solutions has been exemplary, and its ability to modernise automated cloud-based network solutions integrating our AI-native technologies is truly commendable.

"This recognition is a testament to its outstanding performance in delivering exceptional customer and user experiences while achieving its financial goals. We look forward to continuing our partnership and supporting Black Box in leveraging the next generation of networking solutions as part of our Juniper Partner Advantage Programme."

Sanjeev Verma, CEO of Black Box, adds, "Being recognised as the Juniper Worldwide GSI AIDE Partner of the Year for 2023 is a testament to Black Box's unwavering dedication to technological advancement and industry leadership.

"This accolade underscores our strategic alignment with Juniper Networks, a true trailblazer in AI and networking, and reinforces our commitment and contribution to shaping the future of technology. More than just recognition, it is a powerful affirmation of the value Black Box places on its partnerships."

Additionally, Black Box is proud to announce its new designation as a Global Elite Plus Partner in the Juniper Partner Advantage Programme. This prestigious status highlights Black Box's capabilities and commitment to delivering innovative solutions on a global scale. The Global Elite Plus designation further cements Black Box's position as a leader in the industry, the company says, enhancing its ability to provide its service and expertise to clients worldwide. For more from Black Box, click here.

VAST Data Platform certified for NVIDIA cloud partners
VAST Data, the AI data platform company, today announced that the VAST Data Platform has been certified as a high-performance storage solution for NVIDIA Partner Network cloud partners. VAST Data says that this certification underscores its position as a leading data platform provider for AI cloud infrastructure, and that it further strengthens the company's collaboration with NVIDIA in building out next-generation AI factories.

VAST provides a unified set of storage and data services that empower cloud service providers (CSPs) to offer a comprehensive catalogue of data-centric offerings that are deeply integrated with NVIDIA technologies. Designed to meet the stringent requirements of large-scale AI cloud infrastructure, the VAST Data Platform supports training and fine-tuning AI models of all sizes and modes, ranging from multimodal and small language models with fewer than 10 billion parameters to the world’s largest models consisting of over one trillion parameters.

“This certification with NVIDIA builds on VAST’s already tremendous success with CSPs as the de facto AI data platform solution for large-scale cloud infrastructure,” says John Mao, Vice President, Technology Alliances at VAST Data. “With the VAST Data Platform independently validated and certified through the NVIDIA Partner Network, organisations can more confidently and securely deploy their AI models at unprecedented scale across thousands of GPUs.”

With seamless scalability and industry-leading uptime, the VAST Data Platform gives service providers capabilities that span the complete AI pipeline – from multi-protocol data ingest, accelerated data pre-processing for feature engineering, and high-performance storage for model training with fast checkpoints and restores, to edge data management for model serving and end-to-end cataloguing to meet audit and compliance requirements. The VAST Data Platform offers CSPs:

● Service provider-grade reliability: the VAST Data Platform helps observed production systems achieve 99.9999% availability (see the quick arithmetic below).
● Secure multi-tenancy: with a zero-trust framework, per-tenant data encryption, flexible network segmentation, and robust audit capabilities, VAST enables CSPs to deliver secure cloud services at scale.
● Fine-grained workload isolation: with granular quality-of-service policies that prevent multi-tenant I/O contention, the VAST Data Platform helps CSPs ensure numerous customers have the performance and data access they need for AI workloads from a single cluster.
● Infrastructure cost savings: service providers and enterprises alike can consolidate storage silos with an affordable single all-flash solution and eliminate the overhead of data copies and sprawl.
● Improved operational efficiency: VAST offers robust APIs and SDKs, and all day-two operations (such as upgrades) can be performed online, leading to a better customer experience with fewer admins required.

The VAST Data Platform serves as the comprehensive software infrastructure required to capture, catalogue, refine, enrich, and preserve data through real-time deep data analysis and learning. This comprehensive approach empowers AI clouds to offer a diverse range of data services to their customers, further enhancing their AI capabilities. For more from VAST Data, click here.
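For context on the "99.9999% availability" figure flagged in the list above, six nines of availability corresponds to roughly half a minute of downtime per year. The arithmetic is generic, not VAST-specific:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_per_year(availability: float) -> float:
    """Expected downtime in seconds per year for a given availability fraction."""
    return (1.0 - availability) * SECONDS_PER_YEAR

print(downtime_per_year(0.999999))  # ~31.6 seconds a year (six nines)
print(downtime_per_year(0.999))     # ~8.8 hours a year (three nines), for contrast
```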


