Saturday, May 24, 2025

Artificial Intelligence


DataStax launches hyper-converged data platform
DataStax, a generative AI data company, today announced the upcoming launch of DataStax HCDP (Hyper-Converged Data Platform), alongside the upcoming release of DataStax Enterprise (DSE) 6.9. Both products enable customers to add generative AI and vector search capabilities to their self-managed enterprise data workloads. The company describes DataStax HCDP as "the future of self-managed data"; the platform is designed for modern data centres and Hyper-Converged Infrastructure (HCI) to support the breadth of data workloads and AI systems. It supports powerful on-premises enterprise data systems built to AI-enable data, and is ideal for enterprise operators and architects who want to take advantage of data centre modernisation investments to reduce operating costs and operational overhead.

HCDP brings OpenSearch and Apache Pulsar for generative AI RAG knowledge retrieval

The combination of OpenSearch's widely adopted enterprise search capabilities with the high-performance vector search of the DataStax cloud-native, NoSQL Hyper-Converged Database enables users to retrieve data in highly flexible ways, catalysing the ability to get RAG and knowledge applications into production. "With more than 2,000 contributors, including upwards of 200 maintainers spanning 19 organisations, OpenSearch has built a vibrant, growing community and is used by companies around the world," says Mukul Karnik, General Manager, Search Services at AWS. "We've seen a growing number of companies building AI-powered capabilities using OpenSearch, and RAG workloads in particular can benefit from OpenSearch's extensive full-text search capabilities.
We are excited to have DataStax join the community and offer OpenSearch as part of HCDP, and we look forward to seeing how their customers use it to build a range of new generative AI-powered services and applications."

Hyper-converged streaming (HCS), built with Apache Pulsar, is designed to provide data communications for modern infrastructure. With native support for inline data processing and embedding, HCS brings vector data to the edge, allowing for faster response times and enabling event data for better contextual generative AI experiences. With the ability to consolidate workloads from legacy applications, HCS modernises application communications and enables users to operate across multiple cloud environments, or between on-premises and cloud, providing a high-performance, scalable data communications layer that simplifies data integration across all environments. HCDP provides rapid provisioning and powerful, intuitive data APIs built around the DataStax one-stop GenAI stack for enterprise retrieval-augmented generation (RAG), and it's all built on the proven, open-source Apache Cassandra platform.

DataStax Enterprise 6.9 brings generative AI capabilities to existing on-premises data

DataStax Enterprise 6.9 offers enterprise operators and architects with massive on-premises Cassandra and DSE workloads the fastest upgrade path to lower total cost of ownership and operational overhead, while adding powerful GenAI capabilities to their existing on-premises data. Its vector add-on brings vector search to both existing DSE and lift-and-shift Cassandra workloads, and the addition of DataStax Mission Control makes operating on-premises data workloads dramatically easier, with cloud-native operations and observability on Kubernetes. "We're looking forward to DSE 6.9 for its advancements in real-time digital banking services. Currently, we use DSE for critical tasks like data offloading and real-time data services.
Its full-text search enhances customer engagement without adding to core system loads," notes Marcin Dobosz, Director of Technology at Neontri. "With features like the Vector Add-On and DataStax Mission Control, DSE 6.9 aligns with our goal of modernising technology stacks for financial institutions, enhancing productivity, search relevance, and operational efficiency to better meet clients' needs."

Ed Anuff, Chief Product Officer, DataStax, concludes, "We're delivering the future of self-managed data for modern infrastructure and AI systems, and enabling enterprises to get the most out of their data centre modernisation and total cost of ownership. In addition to supporting all of the new HCI investments on the most powerful systems in new AI data and edge clouds, we're also enabling users to benefit from powerful integrations with leading technologies like OpenSearch and Apache Pulsar."
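The retrieval pattern described above pairs full-text (keyword) relevance with vector similarity. As a rough illustration of that general idea only, and not of DataStax's or OpenSearch's actual implementation (the documents, vectors, and blend weight below are invented for the sketch), a hybrid retriever can rank documents on a weighted blend of the two scores:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query terms that appear in the document text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_retrieve(query, query_vec, docs, alpha=0.5, k=2):
    """Rank documents by a weighted blend of vector and keyword scores."""
    scored = [
        (alpha * cosine(query_vec, d["vec"])
         + (1 - alpha) * keyword_score(query, d["text"]), d["text"])
        for d in docs
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

docs = [
    {"text": "cassandra stores wide rows", "vec": [1.0, 0.0]},
    {"text": "vector search finds neighbours", "vec": [0.0, 1.0]},
    {"text": "pulsar streams events", "vec": [0.5, 0.5]},
]
top = hybrid_retrieve("vector search", [0.0, 1.0], docs, k=1)
```

In production systems, the keyword side would typically be a BM25 score from a search engine and the vector side an approximate-nearest-neighbour lookup, but the blending step works the same way.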

"We’ve got to talk about AI"
Centiel, a global Uninterruptible Power Supply (UPS) manufacturer, looks at the role of AI in relation to data centres while exploring some of the challenges and opportunities.

There is much talk about how data centres will deal with AI (machine learning). One major challenge is that AI requires three times the power of normal servers. Facilities will also need extra space to house the additional equipment required to manage and power the data. We are already seeing AI in use in our daily lives: on LinkedIn we are invited to use AI to help us write a social media post, and ChatGPT has caught the imagination of copywriters across the world. However, this is just the start. AI's capability will grow exponentially over the next few years.

This phenomenon is set against a society already struggling to provide enough energy and infrastructure for existing services. Data centres are already placing a huge strain on the national grid, and we also have EV chargers and additional homes to add to it. We must look at alternatives.

At Centiel, members of our team have been at the forefront of technology for many years, bringing to market the first three-phase transformerless UPS and the first, second, third and fourth generations of true modular, hot-swappable three-phase UPS. Most recently, we turned our attention to helping data centres achieve net-zero targets. Following four years' development, we recently launched StratusPower, a UPS that provides complete peace of mind in relation to power availability while helping data centres become more sustainable. StratusPower shares all the benefits of our award-winning three-phase, true modular UPS CumulusPower, including 'nine nines' (99.9999999%) availability to effectively eliminate system downtime, class-leading 97.6% online efficiency to minimise running costs, and true 'hot-swap' modules to eliminate human error in operation, but now also includes long-life components to improve sustainability.
Large data centres can need 100MW or more of power to support their load, and in the future they will need to generate their own power through renewables such as photovoltaic arrays, perhaps located on the roofs of their buildings, or wind farms located in adjacent fields. Currently, mains AC power is rectified to create a DC bus that is used to charge batteries and provide an input to an inverter. But what about a future where the DC bus can be supplied from mains power and/or renewable sources? There is little doubt that future grid instability and unreliability will need to be corrected by the use of renewables, and StratusPower is ready to meet this future. The product is built with the future of renewables in mind, ensuring stability and reliability as we transition to cleaner energy sources. With a power density greater than 1MW per m², StratusPower also offers a compact solution.

AI is set to put more strain on infrastructure and power supplies which are already struggling to cope. However, the technology is now available to harness alternative energy sources, and our team of trusted advisors at Centiel can discuss the options to help future-proof facilities in the face of the AI revolution. We've got to talk about AI, so why not give Centiel's team a call? To learn more, visit www.centiel.co.uk. For more from Centiel, click here.
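The '9 nines' availability figure quoted above translates directly into allowable downtime per year, which is easy to sanity-check with plain arithmetic (nothing here is Centiel-specific):

```python
def annual_downtime_seconds(availability_pct):
    """Expected downtime per (non-leap) year at a given availability level."""
    seconds_per_year = 365 * 24 * 3600  # 31,536,000 seconds
    return seconds_per_year * (1 - availability_pct / 100)

# 'Nine nines' permits roughly 32 milliseconds of downtime per year,
# versus around five minutes for the more common 'five nines'.
nine_nines = annual_downtime_seconds(99.9999999)
five_nines = annual_downtime_seconds(99.999)
```

Each extra nine cuts the downtime budget by a factor of ten, which is why availability claims at this level depend on redundant, hot-swappable modules rather than any single device.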

Digital Realty partners with Oracle to accelerate AI growth
Digital Realty, a global provider of cloud and carrier-neutral data centre, colocation, and interconnection technologies, has announced a collaboration with Oracle to accelerate the growth and adoption of artificial intelligence (AI) among enterprises. The strategic collaboration aims to develop hybrid integrated innovations that address data gravity challenges, expedite time to market for enterprises deploying next-generation AI services, and unlock data and AI-based business outcomes. As part of this collaboration, Oracle will deploy critical GPU-based infrastructure in a dedicated Digital Realty data centre in Northern Virginia. This deployment, which leverages PlatformDIGITAL - Digital Realty’s open, purpose-built global data centre solution - will cater to a wide range of enterprises and AI customers, helping them to address critical infrastructure challenges, including those experienced with NVIDIA and AMD deployments. By leveraging the expertise and resources of both Oracle and Digital Realty, customers with mixed requirements can benefit from a tailored and efficient offering that meets their specific needs for GPU-based infrastructure. This announcement further strengthens Digital Realty's existing partnership with Oracle, which currently encompasses multiple deployments across the globe, 11 Oracle Cloud Infrastructure (OCI) FastConnect points of presence, a global ServiceFabric presence, and a recent Oracle Solution Centre deployment in one of Digital Realty’s Frankfurt data centres. In the fourth quarter of last year, Digital Realty successfully implemented an OCI Dedicated Region deployment for a major financial services customer, showcasing the potential of the collaboration between Oracle and Digital Realty in meeting enterprise customers' hybrid cloud requirements. 
Patrick Cyril, Global Vice President, Technical Sales & Customer Excellence – Revenue Operations, Oracle, says, “We’re excited to be working with Digital Realty to bring innovative solutions to the market that empower our enterprise customers’ workloads and their ecosystems to harness the boundless possibilities of AI. Together, we're not just pioneering technology; we're unlocking a future where every challenge is met with unparalleled innovation and every opportunity is maximised.” Chris Sharp, Chief Technology Officer at Digital Realty, comments, “We’re delighted to build upon our relationship with Oracle and enable the next generation of hybrid and private AI adoption among enterprises. Together, we're bringing the extensive capabilities of the cloud to enterprises’ private data sets through secure interconnection, unlocking new data-driven business outcomes.” Digital Realty states that its collaboration with Oracle marks a significant step forward in the advancement of AI technologies. Furthermore, the company hopes that by combining their expertise and resources, together they can revolutionise the AI landscape and empower enterprises to unlock the full potential of their data. For more from Digital Realty, click here.

STACKIT and Dremio partner to pioneer data sovereignty
Dremio, the unified lakehouse platform for self-service analytics and AI, and STACKIT, a data-sovereign cloud provider in Europe and part of the IT organisation within Schwarz Group, today announced a strategic partnership that provides European organisations with the first fully managed, cloud-based modern lakehouse offering capable of meeting today’s stringent data residency requirements. Unveiled at Dremio’s annual user conference, Subsurface Live, this collaboration reportedly marks a significant milestone in STACKIT's mission to expand its expertise and product range in data and AI across Europe. Data residency requirements are the legal and regulatory stipulations that dictate where a company's data must physically reside. This means that physical servers hosting a business's data need to be located within a specified country or region to comply with local laws. With this partnership, STACKIT not only embraces open standards like Apache Iceberg, but also reaffirms its commitment to empowering customers with data sovereignty and freedom from vendor lock-in. By leveraging Dremio's unified lakehouse platform, STACKIT is set to overhaul data management by enabling organisations to transition seamlessly from traditional data lakes to high-performance and agile data lakehouses. Walter Wolf, Board Member of Schwarz IT, comments, “Our approach to data sovereignty hinges on leveraging open standards to facilitate seamless integration with applications spanning various business domains. 
Dremio serves as a solid foundation for this endeavour due to its emphasis on open formats such as Apache Iceberg.”

Key benefits of the STACKIT and Dremio partnership include:
● Lower total cost of ownership: Customers can expect up to an 80% reduction in costs associated with analytics and AI projects, thanks to Dremio's efficient data processing capabilities.
● Faster time to market for analytics and AI: With Dremio, analytics and AI projects will see a significant boost in productivity, enabling organisations to complete projects five to ten times faster than conventional methods.
● Improved discoverability and governance via the Iceberg Data Catalogue: Leveraging Git-inspired versioning, Dremio provides a robust data catalogue that supports data integrity, governance and traceability throughout the entire data lifecycle.
● Flexibility and scalability: With the ability to run on any infrastructure, Dremio offers unparalleled flexibility and scalability, allowing customers to adapt to changing business needs seamlessly.

With its open lakehouse architecture, Dremio allows users to access and process data independently on the STACKIT platform through a data lakehouse service that ensures data protection and promotes data sovereignty. Together, the two companies enable businesses to derive valuable insights from sensitive data while adhering to the highest standards of data privacy and security. Thanks to the partnership, STACKIT customers can expect a significant cost reduction, flexible and consumption-based billing options, and overall affordability in their data residency and sovereignty efforts. Andreas Vogels, Central Europe Lead at Dremio, comments, "The collaboration between Dremio and STACKIT not only empowers organisations with the freedom to scale their data operations seamlessly but also ensures they can derive actionable insights from their data without constraints, no matter where the data resides.
By leveraging Dremio's cloud-native architecture and STACKIT's commitment to digital sovereignty, customers can unlock the full potential of their data while maintaining control and flexibility in their cloud strategy.” Customers will be able to access the new data lakehouse service via a private preview, with support from STACKIT's data and AI consulting team to guide them through the initial steps. For more from Dremio, click here.
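Data residency requirements, as defined earlier in the article, ultimately come down to constraining which regions a workload may physically run in. A minimal sketch of that filtering step follows; the policy names and region codes are invented for illustration and are not part of any STACKIT or Dremio API:

```python
# Hypothetical residency policies mapping a regulation to permitted regions.
RESIDENCY_POLICY = {
    "gdpr-eu": {"eu-central-1", "eu-west-1"},
    "swiss-banking": {"ch-zurich-1"},
}

def compliant_regions(policies, available_regions):
    """Return the available regions that satisfy every applicable policy."""
    allowed = set(available_regions)
    for policy in policies:
        allowed &= RESIDENCY_POLICY[policy]
    return sorted(allowed)

# A GDPR-bound workload may only be provisioned in the EU regions on offer.
regions = compliant_regions(["gdpr-eu"], ["eu-central-1", "us-east-1"])
```

In practice a sovereign cloud enforces this at the platform level rather than in application code, but the intersection logic is the same: every applicable regulation further narrows the set of permissible deployment locations.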

New micro modular data centre system with AI features
Vertiv, a global provider of critical digital infrastructure and continuity solutions, today introduced the Vertiv SmartAisle 3, a micro modular data centre system that utilises the power of Artificial Intelligence (AI), providing enhanced intelligence and enabling efficient operations within the data centre environment. Now available in Southeast Asia, Australia and New Zealand, the SmartAisle 3 can be configured for up to 120kW of total IT load and is ideal for a wide range of industry applications, including banking, healthcare, government, and transportation. Building on the previous Vertiv SmartAisle technology, the SmartAisle 3 is a fully-integrated data centre ecosystem consisting of racks, uninterruptible power supply (UPS), thermal management and monitoring, and physical security. The latest iteration of the SmartAisle comes with AI functionality and self-learning features that help significantly optimise the micro data centre's operational and energy efficiency. Each carriage, or rack cabinet, has a Smart Power Output Device (POD), which seamlessly manages power distribution to rack PDUs and serves as a monitoring gateway overseeing carriage conditions including temperature, humidity and door status. With built-in cabling integration and front and rear carriage sensors, the SmartAisle 3 also eliminates the hassle of complex on-site cabling installation and saves on data centre white space. Moreover, the SmartAisle 3 further reduces the complexity of on-site setup with its one-click networking feature, which effortlessly configures the data centre system. It also has an AI self-learning function that intelligently monitors and adjusts temperature depending on the operating environment, helping to achieve energy savings of as much as 20% compared to systems without AI features, while maintaining optimum operating conditions.
Cheehoe Ling, Vice President, Product Management at Vertiv Asia, explains, “As demand for data-intensive applications continues to rapidly grow, many businesses are requiring their data centre infrastructure to be deployed quickly and efficiently, and to be as scalable as possible. Vertiv enriched the Vertiv SmartAisle 3 with AI features to help our customers simplify their data centre operations, so they can have greater flexibility in their business operations and achieve their energy efficiency goals." “The speed at which AI is evolving not only demands a workforce that is technically proficient, but also a digital economy built on infrastructure designed to scale,” says Faraz Ali, Product Manager IMS at Vertiv Australia and New Zealand. “By integrating AI directly into data centre operations, we’re giving businesses across Australia and New Zealand the opportunity to keep up with their AI ambitions. The new system’s automated intelligence makes light work of balancing critical infrastructure at optimum energy efficiency and power resilience, offering time back to critical talent, while promoting scalability as workload requirements expand.” With an intuitive 15-inch touch screen control panel and an option to upgrade to a 95-inch local door display, the Vertiv SmartAisle 3 provides enhanced system visibility and assists in troubleshooting to enable the system to operate at peak efficiency. The SmartAisle 3 includes the Vertiv Liebert APM modular uninterruptible power supply (UPS) system, the Vertiv Liebert CRV 4 in-row cooling, and the Vertiv Liebert RDU 501 intelligent monitoring system. It also comes pre-installed with a flexible busbar system that streamlines the overall system design by reducing power distribution installation complexity. The SmartAisle 3 is part of Vertiv’s growing portfolio of flexible, fully-integrated modular solutions.
With its combination of Environmental Management System (EMS), AI functionality and 'carriage' type rack architecture, the SmartAisle 3 can help customers boost operational efficiency through comprehensive intelligent monitoring and management, ease of installation, quick deployment time and highly efficient operating conditions, thus allowing customers to better adapt to the needs of today’s diverse compute environments. For more from Vertiv, click here.
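The temperature-aware adjustment described above can be illustrated, very loosely, with a simple feedback rule: creep the cooling setpoint up while every rack inlet has thermal headroom (warmer setpoints cut cooling energy), and back off quickly when any inlet runs hot. This is a toy sketch of the general principle only, not Vertiv's algorithm, and all thresholds are invented:

```python
def adjust_setpoint(current_setpoint_c, inlet_temps_c,
                    max_safe_inlet_c=27.0, step_c=0.5):
    """Nudge the cooling setpoint based on the hottest rack inlet.
    Illustrative only; thresholds and step sizes are made up."""
    hottest = max(inlet_temps_c)
    if hottest > max_safe_inlet_c:
        # An inlet is over the safe limit: cool down aggressively.
        return current_setpoint_c - 2 * step_c
    if hottest < max_safe_inlet_c - 2.0:
        # Comfortable headroom everywhere: relax cooling to save energy.
        return current_setpoint_c + step_c
    # Within the dead band: leave the setpoint alone.
    return current_setpoint_c

# All inlets well below the limit, so the setpoint rises from 22.0 to 22.5.
new_setpoint = adjust_setpoint(22.0, [23.1, 24.0, 24.5])
```

A self-learning system would additionally tune the thresholds and step sizes from observed behaviour over time, which is where the claimed energy savings come from.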

AI and sustainability: Is there a problem? 
By Patrick Smith, Field CTO EMEA, Pure Storage

AI can do more and more. Think of any topic and an AI or genAI tool can effortlessly generate an image, video or text. Yet the environmental impact of, say, generating a video by AI is often forgotten. For example, generating one image by AI consumes about the same amount of power as charging your mobile phone. A relevant fact when you consider that more and more organisations are betting on AI. After all, training AI models requires huge amounts of data, and massive data centres are needed to store it all. In fact, there are estimates that AI servers (in an average scenario) could consume in the range of 85 to 134TWh of power annually by 2027. This is equivalent to the total amount of energy consumed in the Netherlands in a year. The message is clear: AI consumes a lot of energy and will, therefore, have a clear impact on the environment.

Does AI have a sustainability problem?

To create a useful AI model, a number of things are needed, including training data, sufficient storage space and GPUs. Each component consumes energy, but GPUs consume by far the largest amount of power. According to researchers at OpenAI, the amount of computing power used has been doubling every 3.4 months since 2012. This is a huge increase that is likely to continue, given the popularity of various AI applications, and it is having a growing impact on the environment.

Organisations wishing to incorporate AI should therefore carefully weigh its added value against its environmental impact. While it's unlikely a decision-maker would put off a project or initiative, this is about having your cake and eating it: looking at the bigger picture and picking technology which meets both AI and sustainability goals. In addition, the underlying infrastructure and the GPUs themselves need to become more energy-efficient.
At its recent GTC user conference, NVIDIA highlighted exactly this, paving the way for more to be achieved with each GPU at greater efficiency.

Reducing the impact of AI on the environment

Three industries are central to training and deploying an AI model: the storage industry, the data centre industry, and the semiconductor industry. To reduce AI's impact on the environment, steps need to be taken in each of these sectors to improve sustainability.

The storage industry and the role of flash storage

In the storage industry, concrete steps can be taken to reduce the environmental impact of AI. An example is all-flash storage, which is significantly more energy-efficient than traditional disk-based storage (HDD). In some cases, all-flash solutions can deliver a 69% reduction in energy consumption compared to HDD. Some vendors are even going beyond off-the-shelf SSDs and developing their own flash modules, allowing the array's software to communicate directly with the flash. This makes it possible to maximise the capabilities of the flash and achieve even better performance, energy usage and efficiency; in other words, data centres require less power, space and cooling.

Data centre power efficiency

Data centres can take a sustainability leap with better, more efficient cooling techniques and by making use of renewable energy. Many organisations, including the EU, are looking at Power Usage Effectiveness (PUE) as a metric: how much power goes into a data centre versus how much is actually used by the IT equipment inside. While reducing PUE is a good thing, it's a blunt and basic tool which doesn't account for, or reward, the efficiency of the technology installed within the data centre.

The semiconductor industry

The demand for energy is insatiable, not least because semiconductor manufacturers, especially of the GPUs that form the basis of many AI systems, are making their chips increasingly powerful.
For instance, 25 years ago, a GPU contained one million transistors, was around 100mm² in size and did not use much power. Today, newly announced GPUs contain 208 billion transistors and consume 1,200W of power per GPU. The semiconductor industry needs to become more energy-efficient. This is already happening, as highlighted at the recent NVIDIA GTC conference, with CEO Jensen Huang saying that, due to advancements in the chip manufacturing process, GPUs are actually doing more work and so are more efficient despite the increased power consumption.

Conclusion

It's been clear for years that AI consumes huge amounts of energy and can therefore have a negative environmental impact. The demand for AI-generated programmes, projects, videos and more will keep growing in the coming years. Organisations embarking on an AI initiative need to carefully measure the impact of their activities. Especially with increased scrutiny on emissions and ESG reporting, it's vital to understand the repercussions of AI's energy consumption in detail and mitigate wherever possible. Initiatives such as moving to more energy-efficient technology, including flash storage, or improving data centre capabilities can reduce the impact. Every sector involved in AI can and should take concrete steps towards a more sustainable course. It is important to keep investing in the right areas to combat climate change.
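Two of the figures in this piece are easy to sanity-check in code: PUE is simply total facility energy divided by IT-equipment energy, and the OpenAI observation of compute doubling every 3.4 months compounds dramatically. The example values below are illustrative only:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy.
    1.0 is the theoretical ideal; real facilities are always above it."""
    return total_facility_kwh / it_equipment_kwh

def compute_growth(months, doubling_period_months=3.4):
    """Growth factor if compute demand doubles every 3.4 months."""
    return 2 ** (months / doubling_period_months)

# A facility drawing 1.5 GWh in total to power 1.0 GWh of IT load has a
# PUE of 1.5; at a 3.4-month doubling period, compute demand grows more
# than elevenfold in a single year.
example_pue = pue(1_500_000, 1_000_000)
one_year_growth = compute_growth(12)
```

The growth calculation is the crux of the author's argument: even large per-device efficiency gains are quickly overwhelmed if demand keeps compounding at that rate.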

Mathpix joins DataVerge colocation facility to support AI workflows
Mathpix, an AI-powered document automation and scientific communication company, has joined DataVerge, a carrier-neutral interconnection facility in Brooklyn. DataVerge surpassed Mathpix's criteria, which included robust and redundant power, a fast connection to AWS US-East-1, scalability, and proximity to its Brooklyn, New York headquarters, making it the colocation facility of choice. As more companies rely on AI for their business, colocation and data centres must deliver ever-greater levels of uninterrupted power and connectivity to support high-density AI workloads. Though many companies with a thriving presence in the New York metropolitan area are now seeking to reap the benefits of AI, few New York area data centres are equipped with the abundance of power required to meet their AI needs. In addition to power, DataVerge, the largest interconnection facility in Brooklyn, New York, offers more than 50,000ft² of customisable data centre space, along with secure IT infrastructure, rapid deployment, connection to more than 30 carriers, and 24/7 access and support. Mathpix enables organisations to quickly and accurately convert PDFs and other types of documents, including handwritten text and images, into searchable, exportable and machine-readable text used in large language models (LLMs) and other applications. According to Nico Jimenez, CEO of Mathpix, “DataVerge enables us to colocate our own servers, which are equipped with our own GPUs. This setup provides us the opportunity to select the hardware we need to build and configure our servers so that they significantly reduce latency, and at a considerably lower price point than what the hyperscalers charge for colocation.” “AI is essential to how many New York area companies run their businesses,” says Jay Bedovoy, CTO of DataVerge.
“Our colocation and data centres provide them with uninterrupted power and connectivity, as well as the scalability, high performance and proximity needed to avoid latency issues, which is critical for AI and other real-time interaction in decision-making applications.” 

Pure Storage accelerates enterprise AI adoption with NVIDIA
Pure Storage has announced new validated reference architectures for running generative AI use cases, including a new NVIDIA OVX-ready validated reference architecture. In collaboration with NVIDIA, the company is arming global customers with a proven framework to manage the high-performance data and compute requirements they need to drive successful AI deployments. Building on this collaboration with NVIDIA, Pure Storage claims that it delivers the latest technologies to meet the rapidly growing demand for AI across today's enterprises. New validated designs and proofs of concept include:
● Retrieval-Augmented Generation (RAG) pipeline for AI inference
● Certified NVIDIA OVX server storage reference architecture
● Vertical RAG development
● Expanded investment in its AI partner ecosystem

Industry significance: Today, the majority of AI deployments are dispersed across fragmented data environments, from the cloud to ill-suited (often legacy) storage solutions. Yet these fragmented environments cannot support the performance and networking requirements needed to fuel AI data pipelines and unlock the full potential of enterprise data. As enterprises further embrace AI to drive innovation, streamline operations, and gain a competitive edge, the demand for robust, high-performance, and efficient AI infrastructure has never been stronger. Pioneering enterprise AI deployments, particularly among a rapidly growing set of Fortune 500 enterprise customers, Pure Storage claims to provide a simple, reliable, and efficient storage platform for enterprises to fully leverage the potential of AI, while reducing the associated risk, cost, and energy consumption.

Vultr revolutionises global AI deployment with inference
Vultr has announced the launch of Vultr Cloud Inference, a new serverless platform that revolutionises AI scalability and reach by offering global AI model deployment and AI inference capabilities. Today's rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for inference-optimised cloud infrastructure platforms with both global reach and scalability to ensure consistently high performance, and it is driving a shift in priorities as organisations increasingly focus on inference spending as they move their models into production. But with bigger models comes increased complexity: developers are being challenged to optimise AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency. With that in mind, Vultr created Vultr Cloud Inference. The platform will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions. Users can simply bring their own model, trained on any platform, cloud, or on-premises environment, and it can be seamlessly integrated and deployed on Vultr's global NVIDIA GPU-powered infrastructure. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures businesses can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives. “Training provides the foundation for AI to be effective, but it's inference that converts AI’s potential into impact. As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimised to meet the world’s inference needs,” says J.J. Kardwell, CEO of Vultr’s parent company, Constant.
“The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.” With the capability to self-optimise and auto-scale globally in real time, Vultr Cloud Inference ensures AI applications provide consistent, cost-effective, low-latency experiences to users worldwide. Moreover, its serverless architecture eliminates the complexities of managing and scaling infrastructure, delivering:
● Flexibility in AI model integration and migration
● Reduced AI infrastructure complexity
● Automated scaling of inference-optimised infrastructure
● Private, dedicated compute resources

“Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide,” says Matt McGrigg, Director of Global Business Development, Cloud Partners at NVIDIA. “The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally.” As AI continues to push the limits of what’s possible and change the way organisations think about cloud and edge computing, the scale of infrastructure needed to train large AI models and to support globally distributed inference has never been greater. Vultr Cloud Inference is now available for early access via registration here.
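Much of the value of globally distributed inference comes from routing each request to a nearby region that meets a latency budget. A minimal sketch of that routing decision follows; the region names and latency figures are made up for illustration and this is not Vultr's API:

```python
def pick_region(measured_latency_ms, max_latency_ms=100.0):
    """Choose the lowest-latency region within the latency budget.

    measured_latency_ms maps a region identifier to the round-trip
    latency (in milliseconds) measured from the client.
    """
    within = {region: ms for region, ms in measured_latency_ms.items()
              if ms <= max_latency_ms}
    if not within:
        raise ValueError("no region meets the latency budget")
    return min(within, key=within.get)

# A client in Europe sees Amsterdam at 12 ms, so requests route there;
# Singapore is excluded entirely for exceeding the 100 ms budget.
region = pick_region({"ams": 12.0, "sgp": 180.0, "ewr": 75.0})
```

A serverless inference platform performs this kind of selection automatically (typically via anycast or DNS-based routing), which is what removes the burden of managing distributed infrastructure from the developer.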

Treske calls on experts to bring technology skills to regional Australia
Australian data centre infrastructure partner Treske has called on AI and data centre industry experts to bring skills and education to Australia’s regional areas, helping businesses across the country capitalise on the AI era.

“In non-metro areas, Australia’s technology skills gap is more than a local issue – it's a national challenge, which needs greater attention,” says Daniel Sargent, Managing Director and Founder of Treske. “By delivering education right to where it's needed, we’re aiming to help these regional communities thrive in the fast-paced digital economy, and ultimately, make sure the whole country benefits from the AI revolution – not just the major cities.”

For its part, Treske will launch a first-of-its-kind event series in which experts on AI critical infrastructure will come together to discuss regional preparedness for AI. The series will commence in Newcastle this March and explore the dynamic terrain of critical infrastructure and how it can be designed and built to bolster AI’s return on investment (ROI) in often under-resourced areas.

“Regions typically feel like they need to go to the city to be part of the cloud or AI. And although it’s promising to see more and more colocation data centres taking shape in rural areas, many are still missing the hybrid cloud approach, which requires on-premises infrastructure and on-the-ground skills availability,” continues Daniel. “Regional businesses want to adopt AI-powered tech – think internet-of-things (IoT) sensors in local council car parks or autonomous vehicles operating in mine sites or on farms – but they don’t have the data centre infrastructure available, nor close enough to the action. 
This means the efficiency and financial ROI these technologies are intended to yield aren’t being demonstrated, which is holding back businesses’ responsiveness to market demands.”

According to BCG, 70% of Australian businesses have yet to deliver on their digital transformation efforts – a critical first step in effectively implementing AI. In fact, globally, 95% of organisations have an AI strategy in place, but only 14% are ready to fully integrate it into their businesses. Although several factors are slowing adoption, AI ultimately requires secure, fast access to data to deliver results, which poses a complex challenge for critical infrastructure resilience and scalability.

Joining Daniel on the panel are critical infrastructure experts Robert Linsdell, General Manager A/NZ and APAC at Ekkosense; Rob Steel, Channels and Projects Manager at Powershield; Mark Roberts, Asia Pacific IT Business Leader at Rittal; and Adam Wright, Director and Founder at Ecogreen Electrical and Plumbing. Specialists in their fields, these individuals are motivated to ensure Australia’s regions aren’t left out of education and development opportunities, and will deliver insights to the Newcastle and regional NSW audience on:

- Power-ready AI adoption: Regional businesses should prepare themselves to grow alongside the AI demands of today and tomorrow. This is a matter of resiliency, and being prepared in the regions demands energy-efficient, reliable on-premises data centre infrastructure – the panel will offer insight into how this can be achieved with a toolkit of racking, uninterruptible power supply (UPS) units, precision cooling, and management systems. The panel will also discuss the need for government grants – resembling Australia’s solar power installation incentives – to lift local digital infrastructure resilience and readiness.

- Overcoming the far-flung skills challenge: In addition to the value of nearby industry education events, the panel will discuss how important it is to see tertiary skilled courses rolled out through state TAFEs, which generally have more presence in the regions. The panel will also examine how robust infrastructure deployments can take pressure off local skills shortages.

- Remote AI wins: The panel will share examples of how AI backed by resilient infrastructure has delivered growth-changing outcomes in industries such as agriculture, local government, healthcare, mining, and manufacturing.

On the back of the event, Treske is set to launch its UPS 101 guide – a handbook for IT resellers, managers, and system administrators covering the questions to ask and issues to consider when deploying a UPS for their site or customer. “The UPS is often seen as a box of batteries, which reinstates IT power when grid power fails,” continues Daniel. “But it is so much more, and the wrong design can cost businesses thousands of dollars.”
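The guide itself is not reproduced in the article, but one of the most common UPS design questions it alludes to – how long a given battery bank will carry an IT load – can be sketched with a standard back-of-envelope formula. The efficiency and depth-of-discharge figures below are typical planning assumptions, not values from Treske's guide:

```python
def ups_runtime_minutes(battery_wh: float, load_w: float,
                        inverter_efficiency: float = 0.90,
                        usable_fraction: float = 0.80) -> float:
    """Rough UPS runtime estimate.

    battery_wh: total battery capacity in watt-hours.
    load_w: the IT load in watts.
    inverter_efficiency and usable_fraction (depth of discharge) are
    illustrative planning assumptions, not vendor-specified figures.
    """
    usable_wh = battery_wh * usable_fraction * inverter_efficiency
    return usable_wh / load_w * 60.0


# e.g. a 2,000 Wh battery bank feeding a 1,200 W rack:
# 2000 * 0.80 * 0.90 = 1440 usable Wh, 1440 / 1200 = 1.2 h = 72 minutes
print(ups_runtime_minutes(2000, 1200))
```

Even this crude estimate shows why design matters: halving the assumed usable fraction halves the runtime, which is exactly the kind of sizing mistake the guide warns can cost businesses thousands of dollars.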


