Friday, June 13, 2025

Artificial Intelligence


AI security and data availability to underpin 2025 tech trends
AI has continued to be transformative throughout 2024, with accelerating adoption by enterprises and a growing number of use cases. According to experts from data platform Nasuni, the AI boom will continue in 2025, but will be defined by three key pillars:

1. 2025 will bring a new era of security maturity - the ability to protect and quickly recover data assets underpins every other business process in an AI-first world
2. Data readiness will be central to AI success - data will no longer just support AI; it will shape and limit the scope of what AI can achieve
3. Enterprises will strive to find the real ROI in AI - 2025 will usher in a more measured approach to AI investment, as organisations become increasingly focused on quantifiable ROI

Discussing these predictions, Russ Kennedy, Chief Evangelist at Nasuni, says, “In 2025, data will be more valuable than ever as enterprises leverage AI to power their operations. However, as data’s value grows, so does its appeal to increasingly sophisticated threat actors. This new reality will continue driving organisations to rethink their security frameworks, making data protection and rapid recovery the backbone of any AI strategy. Attackers are evolving, using AI to create more insidious methods, like embedding corrupted models and targeting AI frameworks directly, which makes rapid data recovery as vital as data protection itself.

“Businesses will need to deploy rigorous measures not only to prevent attacks, but to ensure that if the worst happens, they can quickly restore their AI-driven processes. 2025 will bring a new era of security maturity, one where the ability to protect and quickly recover data assets underpins every other business process in an AI-first world.”

Jim Liddle, Chief Innovation Officer, Data Intelligence and AI at Nasuni, comments, “As we look toward 2025, data will no longer just support AI – it will shape and limit the scope of what AI can achieve.
A robust data management strategy will be essential, especially as AI continues advancing into unstructured data. For years, companies have successfully leveraged structured data for insights, but unstructured data – such as documents, images, and embedded files – has remained largely untapped. The continued advancements in AI’s ability to process the different types of unstructured data that reside within an enterprise are exciting, but they also require organisations to know what data they have and how and where it’s being used.

“2025 will mark the era of ‘data readiness’ for AI. Companies that strategically curate and manage their data assets will see the most AI-driven value, while those lacking a clear data strategy may struggle to move beyond the basics. A data-ready strategy is the first step for any enterprise looking to maximise AI’s full potential in the coming years.”

Nick Burling, Senior Vice President, Product at Nasuni, adds, “2025 will usher in a more measured approach to AI investment, as organisations will be increasingly focused on quantifiable ROI. While AI can deliver immense value, its high operational costs and resource demands mean that companies need to be more selective with their AI projects. Many enterprises will find that running data-heavy applications, especially at scale, requires not just investment but careful cost management. Edge data management will be a critical component, helping businesses to optimise data flow and control expenses associated with AI.

“For organisations keen on balancing innovation with budgetary constraints, cost efficiency will drive AI adoption. Enterprises will focus on using AI strategically, ensuring that every AI initiative is justified by clear, measurable returns.
In 2025, we’ll see businesses embrace AI not only for its transformative potential, but for how effectively it can deliver sustained, tangible value in an environment where budgets continue to be tightly scrutinised.” For more from Nasuni, click here.

Infinidat introduces RAG workflow deployment architecture
Infinidat, a provider of enterprise storage solutions, has introduced a new Retrieval-Augmented Generation (RAG) workflow deployment architecture to enable enterprises to fully leverage generative AI (GenAI). The company states that this dramatically improves the accuracy and relevance of AI models with up-to-date, private data from multiple company data sources, including unstructured data and structured data, such as databases, from existing Infinidat platforms. With Infinidat’s RAG architecture, enterprises utilise Infinidat’s existing InfiniBox and InfiniBox SSA enterprise storage systems as the basis to optimise the output of AI models, without the need to purchase any specialised equipment. Infinidat also provides the flexibility of using RAG in a hybrid multi-cloud environment, with InfuzeOS Cloud Edition, making the storage infrastructure a strategic asset for unlocking the business value of GenAI applications for enterprises.

“Infinidat will play a critical role in RAG deployments, leveraging data on InfiniBox enterprise storage solutions, which are perfectly suited for retrieval-based AI workloads,” says Eric Herzog, CMO at Infinidat. “Vector databases that are central to obtaining the information to increase the accuracy of GenAI models run extremely well in Infinidat’s storage environment. Our customers can deploy RAG on their existing storage infrastructure, taking advantage of the InfiniBox system’s high performance, low latency, and unique Neural Cache technology, enabling delivery of rapid and highly accurate responses for GenAI workloads.”

RAG augments AI models using relevant and private data retrieved from an enterprise’s vector databases. Vector databases are offered by a number of vendors, such as Oracle, PostgreSQL, MongoDB and DataStax Enterprise. These are used during the AI inference process that follows AI training.
As part of a GenAI framework, RAG enables enterprises to auto-generate more accurate, more informed and more reliable responses to user queries. It enables AI learning models, such as a Large Language Model (LLM) or a Small Language Model (SLM), to reference information and knowledge that is beyond the data on which it was trained. It not only customises general models with a business’s most up-to-date information, but it also eliminates the need to continually re-train AI models, which is resource intensive.

“Infinidat is positioning itself the right way as an enabler of RAG inferencing in the GenAI space,” adds Marc Staimer, President of Dragon Slayer Consulting. “Retrieval-augmented generation is a high value proposition area for an enterprise storage solution provider that delivers high levels of performance, 100% guaranteed availability, scalability, and cyber resilience that readily apply to LLM RAG inferencing. With RAG inferencing being part of almost every enterprise AI project, the opportunity for Infinidat to expand its impact in the enterprise market with its highly targeted RAG reference architecture is significant.”

Stan Wysocki, President at Mark III Systems, remarks, “Infinidat is bringing enterprise storage and GenAI together in a very important way by providing a RAG architecture that will enhance the accuracy of AI. It makes perfect sense to apply this retrieval-augmented generation for AI to where data is actually stored in an organisation’s data infrastructure. This is a great example of how Infinidat is propelling enterprise storage into an exciting AI-enhanced future.”

Inaccurate or misleading results from a GenAI model, referred to as 'AI hallucinations', are a common problem that has held back the adoption and broad deployment of AI within enterprises.
An AI hallucination may present inaccurate information as 'fact', cite non-existent data, or provide false attribution – all of which tarnish AI and expose a gap that calls for the continual refinement of data queries. A focus on AI models, without a RAG strategy, tends to rely on a large amount of publicly available data, while under-utilising an enterprise’s own proprietary data assets.

To address this major challenge in GenAI, Infinidat is making its architecture available for enterprises to continuously refine a RAG pipeline with new data, thereby reducing the risk of AI hallucinations. By enhancing the accuracy of AI model-driven insights, Infinidat is helping to advance the fulfilment of the promise of GenAI for enterprises.

Infinidat’s solution can encompass any number of InfiniBox platforms and enables extensibility to third-party storage solutions via file-based protocols such as NFS. In addition, to simplify and accelerate the rollout of RAG for enterprises, Infinidat integrates with cloud providers, using its InfuzeOS Cloud Edition for AWS and Azure to make RAG work in a hybrid cloud configuration. This complements the work that hyperscalers are doing to build out LLMs on a larger scale to do the initial training of the AI models. The combination of AI models and RAG is a key component for defining the future of generative AI. For more from Infinidat, click here.
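To make the RAG flow described above concrete, here is a minimal, self-contained sketch of the retrieve-then-augment step. It stands in for a real deployment: the "vector database" is a toy in-memory list, and retrieval uses simple word overlap instead of a real embedding model and vector store, so the shapes of the steps (retrieve private context, prepend it to the prompt, then call the LLM) are what matters, not the components.

```python
import re

def tokens(text):
    """Toy stand-in for an embedding: a set of lowercase word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, store, k=1):
    """Return the k documents most similar to the query (word overlap here;
    a real system would use vector similarity against a vector database)."""
    q = tokens(query)
    ranked = sorted(store, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, store):
    """Augment the user query with retrieved private context before the LLM call."""
    context = "\n".join(retrieve(query, store))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy "private data" that the base model was never trained on.
docs = [
    "Q3 revenue for the storage division was 4.2 million.",
    "The cafeteria menu changes every Monday.",
]
prompt = build_prompt("What was the storage division revenue in Q3?", docs)
```

Because the relevant internal document is injected at query time, the model can answer from current private data without retraining, which is the hallucination-reducing mechanism the article describes.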

Datadog Monitoring for OCI now widely available
Datadog, a monitoring and security platform for cloud applications and a member of Oracle PartnerNetwork, has announced the general availability of Datadog Monitoring for Oracle Cloud Infrastructure (OCI), which enables Oracle customers to monitor enterprise cloud-native and traditional workloads on OCI with telemetry in context across their infrastructure, applications and services. With this launch, Datadog is helping customers migrate with confidence from on-premises to cloud environments, execute multi-cloud strategies and monitor AI/ML inference workloads.

Datadog Monitoring for Oracle Cloud Infrastructure helps customers:

- Gain visibility into OCI and hybrid environments: Teams can collect and analyse metrics from their OCI stack by using Datadog's integrations for over 20 major OCI services and more than 750 other technologies. In addition, customers can visualise the performance of OCI cloud services, on-premises servers, VMs, databases, containers and apps in near-real time with customisable, drag-and-drop, and out-of-the-box dashboards and monitors.
- Monitor AI/ML inference workloads: Teams can monitor and receive alerts on the usage and performance of GPUs, investigate root causes, monitor operational performance and evaluate the quality, privacy and safety of LLM applications.
- Get code-level visibility into applications: Real-time service maps, AI-powered synthetic monitors and alerts on latency, exceptions, code-level errors, log issues and more give teams deeper insight into the health and performance of their applications, including those using Java.

“With this announcement, Datadog enables Oracle customers to unify monitoring of OCI, on-premises environments and other clouds in a single pane of glass for all teams,” says Yrieix Garnier, VP of Product at Datadog.
“This helps teams migrate to the cloud and execute multi-cloud strategies with confidence, knowing that they can monitor services side-by-side, visualise performance data during all stages of a migration and immediately identify service dependencies.” For more from Datadog, click here.
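The "single pane of glass" idea above boils down to normalising telemetry from different environments under common tags so the same service can be compared side by side during a migration. A minimal sketch of that aggregation (the metric shapes and source names are illustrative, not Datadog's actual API):

```python
from collections import defaultdict
from statistics import mean

# Latency samples tagged by source environment and by service, so one view
# compares the same service across OCI and on-premises infrastructure.
samples = [
    {"source": "oci",     "service": "checkout", "latency_ms": 120},
    {"source": "on_prem", "service": "checkout", "latency_ms": 180},
    {"source": "oci",     "service": "search",   "latency_ms": 45},
]

def latency_by_service(samples):
    """Average latency per service, merged across all source environments."""
    grouped = defaultdict(list)
    for s in samples:
        grouped[s["service"]].append(s["latency_ms"])
    return {svc: mean(vals) for svc, vals in grouped.items()}

unified = latency_by_service(samples)  # checkout averaged across both environments
```

Grouping on the service tag rather than the source is what lets dependencies and performance be tracked "side-by-side" while workloads move between environments.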

Custocy partners with Enea for AI-based NDR integration
Custocy, a pioneer in artificial intelligence (AI) technologies for cybersecurity, is to embed Enea Qosmos deep packet inspection (DPI) and intrusion detection (IDS) software libraries in its AI-powered network detection and response (NDR) platform. This integration will enable Custocy to improve accuracy and performance and support product differentiation through detailed traffic visibility and streamlined data inspection.

Custocy uses layered, multi-temporal AI functions to detect immediate threats as well as persistent attacks. This approach streamlines the work of security analysts through attack path visualisation, improved prioritisation, workflow support and a radical reduction in the number of false-alarm alerts (‘false positives’). By integrating Enea software into its solution, Custocy will have the exceptional traffic data it needs to extend and accelerate this innovation while meeting extreme performance demands.

Enea’s DPI engine, the Enea Qosmos ixEngine, is the most widely embedded DPI engine in the cybersecurity industry. While it has long played a vital role in a wide range of security functions, it is increasingly valued by security leaders today for the value it brings to AI innovation. With market-leading recognition of more than 4,500 protocols and delivery of 5,900 metadata types, including unique indicators of anomaly, Qosmos ixEngine provides invaluable fuel for AI innovators like Custocy.

In addition, the Enea Qosmos Threat Detection SDK delivers a two-fold improvement in product performance by eliminating double packet processing for DPI and IDS, optimising resources and streamlining overheads. And thanks to Enea Qosmos ixEngine’s packet acquisition and parsing library, parsing speed is accelerated while traffic insights are vastly expanded to fuel next-generation threat detection and custom rule development. These enhancements are important, as demand for high-performing NDR solutions has never been higher.
NDR plays a pivotal role in detecting unknown and advanced persistent threats (APTs), a challenge certain to become even more daunting as threat actors adopt AI tools and techniques. Custocy is well-positioned to help private and public organisations meet this challenge with a unique technological core built on AI that has earned the company a string of awards, the latest being Product of the Year at Cyber Show Paris.

Jean-Pierre Coury, SVP of Enea’s Embedded Security Business Group, comments, “Custocy has developed its solution from the ground up to exploit the unique potential of AI to enhance advanced threat detection and security operations. AI is truly woven into the company's DNA, and I look forward to the additional value it will deliver to its customers as they leverage the enhanced data foundation delivered by Enea software to support their continuous AI innovation.”

Custocy CEO, Sebastien Sivignon, adds, “We are thrilled to join forces with Enea to offer our customers the highest level of network intrusion detection. The Enea Qosmos ixEngine is the industry gold standard for network traffic data. It offers a level of accuracy and depth conventional DPI and packet sniffing tools cannot match. Having such a rich source of clean, well-structured, ready-to-use data will enable Custocy to dramatically improve its performance, work more efficiently and devote maximum time to AI model innovation.”
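The performance gain from "eliminating double packet processing" can be pictured simply: each packet is parsed once, and the same parsed record feeds both the DPI classifier and the IDS rule matcher, instead of each engine re-parsing the raw bytes. A toy sketch of that single-pass structure (the packet format, parser, and rules here are invented for illustration and bear no relation to Enea's actual APIs):

```python
def parse(packet):
    """Toy parser: split a 'proto|payload' string once into a structured record."""
    proto, _, payload = packet.partition("|")
    return {"proto": proto, "payload": payload}

# Illustrative IDS rules operating on the already-parsed record.
IDS_RULES = [
    ("suspicious-cmd", lambda rec: "rm -rf" in rec["payload"]),
]

def inspect(packets):
    """Single-pass inspection: one parse per packet feeds both engines."""
    protocols, alerts = [], []
    for pkt in packets:
        rec = parse(pkt)                   # parsed exactly once
        protocols.append(rec["proto"])     # DPI: protocol visibility
        for name, match in IDS_RULES:      # IDS: detection rules on same record
            if match(rec):
                alerts.append(name)
    return protocols, alerts

protos, alerts = inspect(["http|GET /index", "ssh|rm -rf /tmp/x"])
```

Sharing one parse between the two stages is where the claimed two-fold improvement comes from: at line rate, parsing dominates cost, so halving the number of parses roughly doubles throughput.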

Singtel and Nscale partner to unlock GPU capacity
Singtel and Nscale, a fully vertically integrated AI cloud platform, have announced a strategic partnership that will unlock both companies’ GPU capacity across Europe and Southeast Asia. The collaboration aims to meet the growing global demand from enterprises for generative AI, high-performance computing and data-intensive workloads.

Singtel will leverage Nscale’s AMD and NVIDIA GPU capacity in Europe for Singtel’s customer workloads across key markets in the region. This ensures that Singtel can deliver on high-volume requirements on demand and maintain service excellence, especially when additional capacity is needed. Correspondingly, Nscale will be able to tap into Singtel’s NVIDIA H100 Tensor Core GPU capacity in the Southeast Asian region for its customers’ workloads through an integration with Singtel’s patented orchestration platform, Paragon. Furthermore, as Singtel’s regional data centre arm, Nxera, expands in the region, its sustainable AI-ready data centres will provide the necessary data centre capacity to support large-scale deployment of Nscale GPU capacity.

This partnership will allow Singtel and Nscale to build out a more comprehensive GPU-as-a-Service (GPUaaS) offering globally, ensuring that their customers benefit from the flexibility of a wider geographic footprint and robust infrastructure support. This will also drive greater utilisation in their respective GPU clusters.

Bill Chang, CEO of Singtel’s Digital InfraCo and Nxera, says, “As we continue to augment our GPUaaS offerings, we are forging a series of strategic partnerships to grow our ecosystem and broaden our service availability for our customers. Our partnership with Nscale will allow our customers to tap into their high-performance GPU resources on demand, unlocking new possibilities for innovation and efficiency.
Our commitment to delivering cost-effective solutions, backed by our state-of-the-art data centres, ensures businesses can access high-performance GPU resources quickly and seamlessly.”

Josh Payne, Nscale Founder and CEO, adds, “Nscale is the vertically integrated GPU cloud building the global infrastructure backbone for generative AI. Our sustainable AI-ready data centres, together with our GW pipeline of data centre capacity, uniquely position us to deliver sustainable AI infrastructure at any scale for customers worldwide. Through this strategic partnership, Nscale will provide Singtel customers with unmatched access to sustainable, high-performance, and cost-effective AI compute to accelerate enterprise generative AI in the region and beyond.”

Singtel previously announced in February that it will be launching its GPUaaS later this year, providing enterprises with access to NVIDIA’s AI computing power. This will enable them to deploy AI at scale quickly and cost-effectively to accelerate growth and innovation. Singtel also recently announced a partnership with Vultr in the US and a strategic partnership with Bridge Alliance that will bring its GPUaaS offerings to enterprises across Southeast Asia. Singtel’s GPUaaS will be expanded to run in new sustainable, hyper-connected, AI-ready data centres by Nxera across Singapore, Thailand, Indonesia and Malaysia when they begin operations from mid-2025 onwards.

Nscale's strategic partnership with Singtel follows a number of recent announcements, including a partnership with Open Innovation AI to deliver 30,000 GPUs of consumption to the Middle Eastern market, integrating Nscale’s powerful GPU infrastructure with Open Innovation AI’s orchestration, data science tools and frameworks.
Additionally, Nscale recently acquired Kontena, a leader in high-density modular data centres and AI data centre solutions, further enhancing its ability to provide high-performance, cost-effective AI infrastructure to meet the growing demands of the generative AI market. For more from Singtel, click here.

Is poor data quality the biggest barrier to efficiency?
Employing data specialists, selecting the right tech and understanding the value of a patient and meticulous approach to validation are all fundamental elements of an effective data strategy, according to STX Next, a global leader in IT consulting. Recent research shows that data is an asset that many organisations undervalue, with businesses generating over $5.6 billion in annual global revenue losing a yearly average of $406 million as a direct result of low-quality data. Bad data primarily impacts company bottom lines by acting as the bedrock of underperforming business intelligence reports and AI models – set up or trained on inaccurate and incomplete data – that produce unreliable responses, which businesses then use as the basis for important decisions.

According to Tomasz Jędrośka, Head of Data Engineering at STX Next, significant work behind the scenes is required for organisations to be confident in the data at their disposal. Tomasz says, “Approaches to data quality vary from company to company. Some organisations put a lot of effort into curating their data sets, ensuring there are validation rules and proper descriptions next to each attribute. Others concentrate on rapid development of the data layer with very little focus on eventual quality, lineage and data governance.

“Both approaches have their positives and negatives, but it’s worth remembering that data tends to outlive all other layers of the application stack. Therefore, if data architecture isn’t designed correctly there could be issues downstream. This often stems from aggressive timelines set by management teams, as projects are rushed to facilitate unrealistic objectives, leading to a less than desirable outcome.

“It’s important to remember that the data world is no longer recognisable from where we were 20 years ago. Whereas before we had a handful of database providers, now development teams may pick one of a whole host of data solutions that are available.
“Businesses should carefully consider the requirements of the project and potential future areas that it might cover, and use this information to select a database product suitable for the job. Specialist data teams can also be extremely valuable, with organisations that invest heavily in highly skilled and knowledgeable personnel more likely to succeed.

“An integral aspect of why high-quality data is important in today’s business landscape is because companies across industries are rushing to train and deploy classical ML as well as GenAI models. These models tend to multiply whatever issues they encounter, with some AI chatbots even hallucinating when trained on a perfect set of source information. If data points are incomplete, mismatched, or even contradictory, the GenAI model won’t be able to draw satisfactory conclusions from them.

“To prevent this from happening, data teams should analyse the business case and the roots of ongoing data issues. Too often, organisations aim to tactically fix problems and then allow the original issue to grow bigger and bigger.

“At some point, a holistic analysis of the architectural landscape needs to be done, depending on the scale of the organisation and its impact, in the shape of a lightweight review or a more formalised audit where recommendations are then implemented. Fortunately, modern data governance solutions can mitigate a lot of the pain connected with such a process and in many cases make it smoother, depending on the size of the technical debt.”

Tomasz concludes, “Employees who trust and rely on data insights work far more effectively, feel more supported and drive improvements in efficiency. Business acceleration powered by a data-driven decision-making process is a true signal of a data-mature organisation, with such traits differentiating companies from rivals.” For more from STX Next, click here.
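The incomplete, mismatched, or contradictory data points Tomasz describes can be caught with simple validation rules before records ever reach a BI report or a model. A minimal sketch of such checks (the field names and rules are invented for illustration):

```python
def validate(record):
    """Return a list of data-quality issues found in a record:
    missing required fields, or contradictory values (end before start)."""
    issues = []
    for field in ("customer_id", "start_date", "end_date"):
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    start, end = record.get("start_date"), record.get("end_date")
    # ISO dates compare correctly as strings, so this catches contradictions.
    if start and end and end < start:
        issues.append("end_date precedes start_date")
    return issues

clean = {"customer_id": "C1", "start_date": "2024-01-01", "end_date": "2024-06-01"}
bad   = {"customer_id": "",   "start_date": "2024-06-01", "end_date": "2024-01-01"}
```

Running checks like these at ingestion, rather than patching symptoms downstream, is the "fix the root issue" approach the quote argues for: a model trained only on records where `validate(record) == []` cannot multiply errors that were filtered out at the gate.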

Cloudian and Lenovo announce AI data lake platform
Cloudian and Lenovo today announced the general availability of a new Cloudian HyperStore AI data lake platform that reportedly delivers new levels of performance and power efficiency. Built on Lenovo ThinkSystem SR635 V3 all-flash servers with AMD EPYC 9454P processors, the new solution demonstrated performance of 28.7 GB/s reads and 18.4 GB/s writes from a cluster of six power-efficient, single-processor servers, delivering a 74% power efficiency improvement versus a HDD-based system in Cloudian testing.

AI workloads demand scalable, secure solutions to meet the performance and capacity requirements of next-generation workloads. Cloudian’s limitlessly scalable, parallel-processing architecture – proven with popular AI and data analytics tools including PyTorch, TensorFlow, Kafka, and Druid – accelerates AI in capacity-intensive use cases such as media, finance, and life sciences. The system’s single-processor architecture not only delivers superior performance with just one socket, but also amplifies power efficiency, a metric that is emerging as a key concern as power consumption for generative AI is forecast to increase at an annual average of 70% through 2027, according to Morgan Stanley.

Lenovo combines Cloudian’s high-performance, AI-ready data platform software with its all-flash Lenovo ThinkSystem SR635 V3 servers and 4th Gen AMD EPYC processors to deliver an exceptionally high-performance, efficient and scalable data management solution for AI and data analytics.

“There’s a big focus on the AI boom in Australia, New Zealand and across APAC, and it’s easy to see why when bodies like the CSIRO say the Australian market alone could be worth close to A$500 billion (£258.6bn) in the next few years,” says James Wright, Managing Director Asia Pacific and Japan, Cloudian. “But there’s a storage and infrastructure layer that companies and government agencies need to power the data-hungry workloads central to AI’s performance and functionality.
What’s out there now simply won’t cut it. Imagine trying to power the mobile applications we use today with the simple mobile phones we had 20 years ago – it wouldn’t work, and it’s no different at the infrastructure level, particularly with AI in play.

“Cloudian’s data lake software on Lenovo’s all-flash servers simply breaks through the limitations we’ve had in terms of performance and power efficiency to power and secure modern applications and data workflows. These are the breakthroughs we need to drive AI, and particularly sovereign AI, which the CSIRO and many industry and government stakeholders are calling for in Australia.”

Michael Tso, CEO and Co-Founder at Cloudian, adds, “Lenovo’s industry-leading servers with AMD EPYC processors perfectly complement Cloudian’s high-performance data platform software. Together, they deliver the limitlessly scalable, performant, and efficient foundation that AI and data analytics workloads require. For organisations looking to innovate or drive research and discovery with AI, ML, and HPC, this solution promises to be transformative.”

Built for mission-critical, capacity-intensive workloads, the platform features exabyte scalability, industry-leading S3 API compatibility, military-grade security, and Object Lock for ransomware protection.

“Combining Lenovo’s high-performance all-flash AMD EPYC CPU-based servers with Cloudian's AI data lake software creates a solution that can handle the most demanding AI and analytics workloads,” notes Stuart McRae, General Manager, Lenovo. “This partnership enables us to offer our customers a cutting-edge, scalable, and secure platform that will help them accelerate their AI initiatives and drive innovation.”

Kumaran Siva, Corporate Vice President, Strategic Business Development, AMD, comments, “AI workloads demand a lot from storage.
Our 4th Gen AMD EPYC processors, together with Lenovo's ThinkSystem servers and Cloudian's AI data lake software, deliver the performance and scalability that AI users need. The single socket, AMD EPYC CPU-based Lenovo ThinkSystem SR635 V3 platform provides outstanding throughput combined with excellent power and rack efficiency to accelerate AI innovation.” Proven in over 800 enterprise-scale deployments worldwide, Cloudian on-premises AI data lakes help organisations securely turn information into insight and develop proprietary AI models while fully addressing data sovereignty requirements. The combined Lenovo/AMD/Cloudian solution is available now from Lenovo and from authorised resellers. For more from Cloudian, click here.
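As a quick sanity check on the headline figures, the six-node cluster numbers quoted above imply per-node throughput of roughly 4.8 GB/s reads and 3.1 GB/s writes (the cluster totals come from the article; no per-node or wattage figures were published, so only this simple division is derived here):

```python
# Per-node throughput implied by the six-node cluster figures in the article.
cluster_read_gbs = 28.7    # GB/s aggregate reads, six-node cluster
cluster_write_gbs = 18.4   # GB/s aggregate writes
nodes = 6

per_node_read = cluster_read_gbs / nodes    # ~4.78 GB/s per server
per_node_write = cluster_write_gbs / nodes  # ~3.07 GB/s per server
```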

STX Next teams with appliedAI for new management suite
STX Next, a global IT consulting specialist, has partnered with Europe’s largest initiative for applied artificial intelligence to deliver the appliedAI Management Suite, a management product that increases AI value creation and facilitates EU AI Act compliance. The solution makes risk assessment and AI portfolio optimisation easier, and will drive progress for the next generation of AI initiatives.

Through a centralised dashboard, the service seeks to increase return on investment whilst minimising the time, resources and costs associated with meeting compliance requirements. It sets out clear guidance and interpretation of regulatory guidelines to support organisations in navigating the complexities of the AI landscape. It targets AI and legal departments, aiming to make Europe a pioneer in AI and a strong competitor in the global market.

The final product includes tools such as a feasibility calculator that evaluates factors like data readiness, model deployment, and overall deployment ease on a one-to-five scale. Other features include a risk classifier for early-stage AI use cases that supports remodelling and modifying potential solutions. The management suite can integrate with an organisation’s existing IT infrastructure to support data management and analysis, collecting from multiple sources including MLOps platforms, experiment-tracking tools, and data catalogues.

The team at STX Next worked with appliedAI to develop the suite using a range of technologies, including Python, Django, Kubernetes and Google Cloud. The partnership was struck due to STX Next’s wide-ranging expertise in software development and IT applications, as well as the company’s comprehensive AI experience.
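The feasibility calculator described above rates factors such as data readiness, model deployment, and deployment ease on a one-to-five scale. A sketch of how such ratings might be combined into an overall score (the equal-weight mean and the factor names are illustrative assumptions, not appliedAI's actual method):

```python
def feasibility_score(ratings):
    """Combine one-to-five factor ratings into an overall score (the mean).

    ratings: dict mapping factor name -> score in 1..5.
    """
    if not ratings:
        raise ValueError("at least one factor rating is required")
    if not all(1 <= v <= 5 for v in ratings.values()):
        raise ValueError("ratings must be on a one-to-five scale")
    return sum(ratings.values()) / len(ratings)

# Hypothetical use case rated on the article's three example factors.
use_case = {"data_readiness": 4, "model_deployment": 3, "deployment_ease": 5}
score = feasibility_score(use_case)
```

A dashboard could then rank a portfolio of AI use cases by this score, which is the kind of portfolio optimisation the suite is described as supporting.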
Commenting on the collaboration, Ronald Binkofski, CEO at STX Next, says, “Since the introduction of the EU AI Act in March 2024, many businesses have sought effective new ways to manage their AI projects to ensure compliance while maximising their potential value.

“When the AI boom arrived, concerns were quickly raised around its governance and the protection of users. The new regulations were introduced by the EU to ensure that AI systems used in the region are safe, transparent, traceable, non-discriminatory and environmentally friendly, and as such they present complex but important challenges for businesses.

“Our partnership with appliedAI is a result of our shared support for the progression of responsible innovation. Our collaboration should make establishing the next series of AI projects a much less laborious process, and will contribute to a safer and fairer future for AI. Added together, this will make it much easier for new projects to reach their full potential.”

For more from STX Next, click here.

Iron Mountain launches new digital experience platform
Global information management specialist, Iron Mountain, today announced the availability of the Iron Mountain InSight Digital Experience Platform (DXP), a secure software-as-a-service (SaaS) platform. Customers can use the platform to access, manage, govern, and monetise physical and digital information. AI-powered self-service tools automate workflows, enable audit-ready compliance, and make data accessible and useful. Through InSight DXP’s unique intelligent document processing, which extracts, classifies, and enriches information with speed and accuracy, customers can quickly turn physical and digital unstructured information into structured, actionable data they can use in their integrated business applications and as the basis for their AI initiatives.

Research conducted across six countries with 700 IT and data decision-makers indicates that most (93%) of the respondents’ organisations already use generative AI in some way. An overwhelming majority of those surveyed (96%) agree that a unified asset strategy for managing both digital and physical assets is critical to the success of generative AI initiatives.

The modular InSight DXP platform includes secure generative AI-powered chat, enabling fast access to data trapped within documents. It can be used to quickly query data and documents in a secure, isolated environment separate from publicly available generative AI applications. Its low-code solution designer allows anyone to use its intuitive drag-and-drop features to intelligently process and unlock intelligence from content.
Nathan Booth, Product Manager, Amazon MGM Studios, says, “Iron Mountain InSight DXP’s generative AI integration with content management has the potential to provide us with enhanced visibility into our assets, allowing us to make quicker and easier business decisions.”

InSight DXP is focused on improving the business-to-employee (B2E) experience, which can greatly enhance how Iron Mountain customers create meaningful experiences for their employees while increasing productivity.

IDC’s Amy Machado, Senior Research Manager, Enterprise Content and Knowledge Management Strategies, says, “Iron Mountain’s ability to help its customers build models and secure data within Large Language Models (LLMs) with low-code tools can change the way organisations access and monetise documents, both digital and physical.”

Narasimha Goli, Senior Vice President, Chief Technology and Product Officer, Iron Mountain Digital Solutions, adds, “We are thrilled to introduce InSight DXP, our next-generation SaaS platform, which is the foundation for AI-readiness. InSight DXP transforms your information - whether physical or digital, structured or unstructured - from unrealised potential to actionable power. This innovative platform empowers businesses to harness the full capabilities of their data, driving intelligent decisions and unlocking new growth opportunities.”

Mithu Bhargava, Executive Vice President and General Manager, Iron Mountain Digital Solutions, notes, “We see InSight DXP as a critical platform to help our customers get their information ready for use in generative AI and other AI-powered applications that drive operational efficiency and enhance customer experience.
With unified asset management, information governance, workflow automation, and intelligent document processing tools, our customers can efficiently manage information across physical and digital assets."

InSight DXP is a flexible platform on which customers can quickly design, build, and publish solutions, ranging from industry-specific offerings - such as Digital Auto Lending for banking and Health Information Exchange for healthcare - to cross-industry solutions such as Digital Human Resources and Invoice Processing. These pre-built, customisable solutions are intended to give customers and system integrators a head start on their unique configuration needs through pre-built connectors, workflows, document types, metadata, retention rules, and AI prompts.

For more from Iron Mountain, click here.

PagerDuty builds upon generative AI capabilities
PagerDuty, a global leader in digital operations management, is building upon its previous generative AI (genAI) capabilities with PagerDuty Advance, which is embedded across the PagerDuty Operations Cloud platform for Incident Management, AIOps, Automation, and Customer Service Operations customers. With PagerDuty Advance, organisations can accelerate digital transformation initiatives - from operations centre modernisation to automation standardisation and incident management transformation - elevating their operational excellence. The evolution of PagerDuty Advance empowers responder teams to work faster and smarter by using genAI to surface relevant context or automate at every step of the incident lifecycle.

"Global IT disruptions and outages are becoming the new normal due to organisations' technical debt and a rush to harness the power of generative AI," says Jeffrey Hausman, Chief Product Development Officer at PagerDuty. "These are contributing factors to a greater number of outages which last longer and are more costly.

"Building upon our genAI offerings, PagerDuty Advance provides customers with generative AI solutions that help them scale teams by surfacing contextual insights and automating time-consuming tasks at every step of the incident lifecycle. Organisations can take the next step in unlocking the full potential of AI and automation across the digital enterprise with the help of PagerDuty."

PagerDuty Advance includes AI-powered capabilities built to streamline manual work across the incident lifecycle, including:

PagerDuty Advance Assistant for Slack - A genAI chatbot that provides helpful insight at every step of the incident lifecycle, from event to resolution, directly from Slack. Using simple prompts, responders can quickly get a summary of the key information about an incident. It can also anticipate common diagnostic questions and suggest troubleshooting steps, resulting in faster resolution.

PagerDuty Advance for Status Updates - This feature leverages AI to auto-generate an audience-specific status update draft in seconds, offering key insights on events, progress, and challenges. It helps to streamline communication while saving cycles on deciding what to say to whom, allowing teams to focus on the real work of resolution.

PagerDuty Advance for Automation Digest - Part of the Actions Log, this feature summarises the most important results from running automation jobs in one place. Responders can make informed decisions based on diagnostic results and even load the output as key values into variables in Event Orchestration for dynamic automation.

PagerDuty Advance for Postmortems - Once an incident is resolved, the user can elect to generate a postmortem review, accelerating the otherwise time-consuming task of collecting all available data around the incident at hand (including logs, metrics, and relevant Slack conversations). In addition to highlighting key findings, the AI-generated postmortem includes recommended next steps to prevent future issues and indicates areas for improvement.

AI-Generated Runbooks - AI-generated runbooks accelerate automation development and deployment, even among non-technical teams. Operators and developers can quickly translate plain-English prompts into runbook automations or leverage pre-engineered prompts as a starting point.

Interviews with early-access customers revealed that PagerDuty Advance for Status Updates can save up to 15 minutes per responder per incident. Given that enterprise organisations average five responders responsible for status updates per incident and 60 incidents per month, PagerDuty Advance can save at least 75 hours a month - more than nine business days.
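The savings figure quoted above follows from straightforward arithmetic, sketched below. Note that the eight-hour business day is an assumption the article implies but does not state:

```python
# Back-of-the-envelope check of the quoted time-savings claim.
minutes_per_responder_per_incident = 15  # "up to 15 minutes" per the article
responders_per_incident = 5              # enterprise average cited above
incidents_per_month = 60                 # monthly average cited above

total_minutes = (minutes_per_responder_per_incident
                 * responders_per_incident
                 * incidents_per_month)
hours_saved = total_minutes / 60         # 4500 minutes is 75 hours

# Assumption: an 8-hour business day (not stated explicitly in the article).
business_days = hours_saved / 8          # 9.375, i.e. "more than nine business days"

print(hours_saved, business_days)
```

Running the sketch reproduces the article's figures: 75 hours per month, or a little over nine business days.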
Snow Tempest, Research Manager for IT Service Management at IDC, notes, “According to IDC research, customers are looking for AI-enhanced, data-driven IT service and operations practices that reduce resolution times, prevent issues, and improve end user experiences. As digital products and services increasingly drive revenue and operations, organisations can expand their advantage by successfully implementing tools that enable rapid, context-aware response to urgent issues.” For more from PagerDuty, click here.


