Tuesday, April 22, 2025

Software & Applications


R&M introduces latest version of its DCIM software
R&M, a globally active developer and provider of high-end infrastructure solutions for data and communications networks, is now offering Release 5 of its DCIM software, inteliPhy net. With Release 5, inteliPhy net becomes a digital architect for data centres: computer rooms can be flexibly designed according to the demand, application, size, and category of the data centre. Planners can position infrastructure modules intuitively on an arbitrary floor plan using drag-and-drop, and inteliPhy net produces detailed 2D and 3D visualisations that are also suitable for project presentations.

With inteliPhy net, racks, rack rows and enclosures can be inserted, structured and moved with just a few clicks, R&M tells us. Patch panels, PDUs, cable ducts and pre-terminated trunk cables can be added, adapted and connected virtually just as quickly. The software finds optimal routes for the trunk cables and calculates the cable lengths.

inteliPhy net contains an extensive library of templates for the entire infrastructure, such as racks, patch panels, cables and power supplies. Models for active devices with data on weight, size, ports, slots, feed connections, performance and consumption are also included. Users can configure metamodels and save them for future planning. During planning, inteliPhy net generates an inventory list that can be used directly for cost calculations and orders. The planning process results in a database containing a digital twin of the computer room. It serves as the basis for the entire Data Centre Infrastructure Management (DCIM), which is the main function of inteliPhy net.

R&M also now offers ready-made KPI reports with zero-touch configuration for inteliPhy net. Users can link the reports with environmental, monitoring, infrastructure, and operating data to monitor the efficiency of the data centre. Customisable dashboards and automated KPI analyses help them to regulate power consumption and temperatures more precisely, and to utilise resources better.

Another new feature is the interaction of inteliPhy net with R&M's packaging-saving service. Customers can, for example, configure Netscale 48 patch panels individually with inteliPhy net. R&M assembles the patch panels completely ready for installation and delivers them in a single package. The concept saves a considerable amount of individual packaging for small parts, reducing raw material consumption, waste and the time required for installation. For more from R&M, click here.
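The automated trunk-cable routing and length calculation described above can be pictured as a shortest-path search over the duct layout. The sketch below is purely illustrative and is not R&M's algorithm; the duct graph, rack names and slack allowance are invented for the example.

```python
# Illustrative sketch only: route a trunk cable along a duct graph and estimate its
# length. Node names, distances and the slack rule are hypothetical example values.
import heapq

def shortest_route(graph, start, end):
    """Dijkstra over a duct graph: nodes are junctions/racks, edge weights are metres."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, metres in graph.get(node, []):
            nd = d + metres
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[end]

# Hypothetical duct graph between two rack rows (distances in metres).
ducts = {
    "rack_A01": [("junction_1", 2.0)],
    "junction_1": [("junction_2", 6.5), ("rack_A01", 2.0)],
    "junction_2": [("rack_B07", 3.0), ("junction_1", 6.5)],
    "rack_B07": [("junction_2", 3.0)],
}

route, length = shortest_route(ducts, "rack_A01", "rack_B07")
SLACK = 1.5  # assumed service-loop allowance at each end, in metres
print(route, round(length + 2 * SLACK, 1), "m trunk cable")
```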

New partnership will integrate digital twin software with DCIM platform
Schneider Electric has partnered with IT services provider DC Smarter, the German specialist for augmented reality solutions, to integrate DC Smarter's digital twin software, DC Vision, within Schneider Electric's Data Centre Infrastructure Management (DCIM) platform, EcoStruxure IT Advisor. Available immediately for purchase via Schneider Electric, DC Vision utilises data from EcoStruxure IT and combines it with augmented reality (AR) to create a comprehensive software solution that helps optimise infrastructure performance and increase operational resilience.

By integrating these technologies and consolidating data from DCIM, IT Service Management (ITSM) and Building Management System (BMS) technologies within the digital twin, data centre operators can gain granular insights into the health and status of their infrastructure, and make informed decisions to increase performance, sustainability and reliability. Further, by removing conventional IT silos, the digital twin allows owners, operators and IT managers to visualise the data centre environment using real-time information, creating a comprehensive digital replica which leverages all relevant performance data from the data centre.

DC Vision, integrated with Schneider Electric's EcoStruxure IT Advisor, also provides key opportunities for seamless collaboration between on-site workers and remote technicians. Its Remote Assist feature, for example, makes it possible for experts and colleagues to interact with the platform in real time and share a common AR view of their critical environment, supporting collective problem-solving and the processing of complex tasks regardless of location.

Schneider Electric's digital twin technology is at the core of DC Vision's advanced capabilities. With Schneider Electric's EcoStruxure IT Advisor solution, a virtual image of the physical data centre can be created, including the necessary database. The virtual 3D representation allows precise monitoring and analysis of the entire IT infrastructure. Additionally, the integration of AR software provides IT personnel with easy-to-understand real-time status information and actionable, contextual instructions, which help to quickly identify faults and properly handle complex service tasks. For more from Schneider Electric, click here.
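The consolidation of DCIM, ITSM and BMS data into one digital twin view boils down to merging records about the same physical asset from several systems. The Python sketch below is a generic illustration of that idea only; the field names, asset IDs and source dictionaries are hypothetical and do not reflect DC Vision's or EcoStruxure IT's actual data model.

```python
# Generic sketch: merge per-asset records from DCIM, ITSM and BMS sources into one
# consolidated "twin" view keyed by asset ID. All data shown is invented.
from collections import defaultdict

dcim = {"rack-12": {"u_used": 34, "power_kw": 6.2}}
itsm = {"rack-12": {"open_tickets": 1}}
bms = {"rack-12": {"inlet_temp_c": 24.5}}

def build_twin(*sources):
    twin = defaultdict(dict)
    for source in sources:
        for asset_id, fields in source.items():
            twin[asset_id].update(fields)  # later sources add or override fields
    return dict(twin)

print(build_twin(dcim, itsm, bms))
# {'rack-12': {'u_used': 34, 'power_kw': 6.2, 'open_tickets': 1, 'inlet_temp_c': 24.5}}
```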

VictoriaMetrics and CMS team up to monitor the universe
VictoriaMetrics has announced its role assisting the monitoring tasks of the Compact Muon Solenoid (CMS) experiment at the European laboratory for particle physics, CERN.

Tailor-made monitoring solutions

The CMS experiment is one of four particle physics detectors built at the Large Hadron Collider (LHC). Located deep underground at the border of Switzerland and France, the project is currently focused on experiments investigating standard model physics, extra dimensions and dark matter. The computing infrastructure needed to deal with the multi-petabyte data sets produced by CMS requires best-in-class systems to monitor workload and data management, data transfers, and the submission of production requests. The CMS experiment has long relied on scalable, open source solutions to satisfy its real-time and historical monitoring needs. However, after encountering storage and scalability issues with long-term monitoring solutions such as Prometheus and InfluxDB, the CMS monitoring team began the search for alternatives.

Edging out existing technology

The CMS monitoring team engaged VictoriaMetrics following a Medium post by CTO and Co-Founder Aliaksandr Valialkin, which benchmarked VictoriaMetrics against other popular monitoring systems, and was won over by the detail on display. "We were searching for alternative solutions following performance issues with Prometheus and InfluxDB. VictoriaMetrics' walkthrough of use cases and concise detail gave us excellent insight into how they could help us. The solution's backwards compatibility with Prometheus made implementation into the CMS monitoring cluster as smooth and seamless as possible," says V. Kuznetsov of Cornell University, a member of the CMS collaboration.

Initially implementing VictoriaMetrics as backend storage for Prometheus, the CMS monitoring team progressed to using the solution as front-end storage to replace InfluxDB and Prometheus, which had the added benefit of removing the cardinality issues experienced with InfluxDB. Since installing VictoriaMetrics, the CMS monitoring team has had zero issues with cardinality or with operating the software. The team gained added confidence in the open source flexibility of VictoriaMetrics after seamlessly implementing new features for vmalert, the solution's alerting component.

"Working with CMS to monitor the experiment's computing infrastructure is a great honour for the team here. The number of use cases for monitoring and observability is growing exponentially, and seeing our tech applied to cutting-edge science is testament to how critical monitoring has become. Our open source, community-driven model is and will be at the core of our offering, granting us the flexibility to serve projects as complex as CMS infrastructure in the future," says Roman Khavronenko, Co-Founder of VictoriaMetrics.
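Part of what makes such a migration smooth is that VictoriaMetrics keeps Prometheus API compatibility, so existing PromQL queries can simply be pointed at its HTTP endpoint. Below is a minimal sketch of an instant query against that endpoint; the hostname and metric name are assumptions for illustration, not details of the CMS deployment.

```python
# Minimal sketch: run a PromQL instant query against VictoriaMetrics' Prometheus-
# compatible /api/v1/query endpoint. Host and metric name are example assumptions.
import requests

VM_URL = "http://victoriametrics:8428"  # default single-node HTTP port

def instant_query(expr):
    """Return the result vector for a PromQL expression."""
    resp = requests.get(f"{VM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Example: rate of failed transfer jobs over the last 5 minutes (hypothetical metric).
for series in instant_query('rate(cms_transfer_failures_total[5m])'):
    print(series["metric"], series["value"])
```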

Uptime Institute finds downtime consequences worsening as efforts to curb outage frequency fall short
The digital infrastructure sector is struggling to achieve a measurable reduction in outage rates and severity, and the financial consequences and overall disruption from outages are steadily increasing, according to Uptime Institute, which today released the findings of its 2022 annual Outage Analysis report.

"Digital infrastructure operators are still struggling to meet the high standards that customers expect and service level agreements demand – despite improving technologies and the industry's strong investment in resiliency and downtime prevention," says Andy Lawrence, Founding Member and Executive Director, Uptime Institute Intelligence. "The lack of improvement in overall outage rates is partly the result of the immensity of recent investment in digital infrastructure, and all the associated complexity that operators face as they transition to hybrid, distributed architectures," comments Lawrence. "In time, both the technology and operational practices will improve, but at present, outages remain a top concern for customers, investors, and regulators. Operators will be best able to meet the challenge with rigorous staff training and operational procedures to mitigate the human error behind many of these failures."

Uptime's annual outage analysis is unique in the industry, and draws on multiple surveys, information supplied by Uptime Institute members and partners, and its database of publicly reported outages. Key findings include:

• High outage rates haven't changed significantly. One in five organisations report experiencing a 'serious' or 'severe' outage (involving significant financial losses, reputational damage, compliance breaches and, in some severe cases, loss of life) in the past three years, marking a slight upward trend in the prevalence of major outages. According to Uptime's 2022 Data Centre Resiliency Survey, 80% of data centre managers and operators have experienced some type of outage in the past three years – a marginal increase over the norm, which has fluctuated between 70% and 80%.

• The proportion of outages costing over $100,000 has soared in recent years. Over 60% of failures result in at least $100,000 in total losses, up substantially from 39% in 2019. The share of outages that cost upwards of $1 million increased from 11% to 15% over that same period.

• Power-related problems continue to dog data centre operators. Power-related outages account for 43% of outages that are classified as significant (causing downtime and financial loss). The single biggest cause of power incidents is uninterruptible power supply (UPS) failures.

• Networking issues are causing a large portion of IT outages. According to Uptime's 2022 Data Centre Resiliency Survey, networking-related problems have been the single biggest cause of all IT service downtime incidents – regardless of severity – over the past three years. Outages attributed to software, network and systems issues are on the rise due to complexities from the increasing use of cloud technologies, software-defined architectures and hybrid, distributed architectures.

• The overwhelming majority of human error-related outages involve ignored or inadequate procedures. Nearly 40% of organisations have suffered a major outage caused by human error over the past three years. 85% of these incidents stem from staff failing to follow procedures or from flaws in the processes and procedures themselves.

• External IT providers cause most major public outages. The more workloads that are outsourced to external providers, the more these operators account for high-profile, public outages. Third-party, commercial IT operators (including cloud, hosting, colocation, telecommunication providers, etc.) account for 63% of all publicly reported outages that Uptime has tracked since 2016. In 2021, commercial operators caused 70% of all outages.

• Prolonged downtime is becoming more common in publicly reported outages. The gap between the beginning of a major public outage and full recovery has stretched significantly over the last five years. Nearly 30% of these outages in 2021 lasted more than 24 hours, a disturbing increase from just 8% in 2017.

• Public outage trends suggest there will be at least 20 serious, high-profile IT outages worldwide each year. Of the 108 publicly reported outages in 2021, 27 were serious or severe. This ratio has been consistent since the Uptime Intelligence team began cataloguing major outages in 2016, indicating that roughly one-fourth of publicly recorded outages each year are likely to be serious or severe.

Kao Data to host Speechmatics' HPC and AI supercomputer
Kao Data has announced it has secured a new customer contract with Speechmatics, which provides highly accurate speech-to-text capabilities to global enterprise businesses. Speechmatics' new high-performance computing (HPC) deployment includes NVIDIA A100 GPUs, housed within an advanced Supermicro computing infrastructure, which will be hosted at Kao Data's Harlow campus. The supercomputer will allow Speechmatics to expand its GPU-accelerated neural network research and development, supporting increasing customer demand for its leading speech recognition technology.

Kao Data was chosen for its standing as the UK's premier location for HPC and AI, in addition to its commitments to sustainability and energy efficiency. Speechmatics' high-density supercomputer will benefit from bespoke colocation and an SLA-backed PUE of 1.2, and will be powered by 100% renewable energy.

Recognised in both 2020 and 2021 by the FT1000 as one of Europe's fastest-growing companies, and named within NVIDIA's prestigious Inception Program, Speechmatics exploits deep learning technology to deliver highly accurate and ultra-fast speech-to-text technology. The company utilises NVIDIA's acclaimed GPU hardware to further its aim of understanding every voice, regardless of demographic, accent, or dialect. Speechmatics' supercomputer will work in tandem with its hyperscale cloud deployments, providing a pioneering example of fluent, hybrid HPC in action. Central to this capability are Megaport's hyperscale connectivity solutions at Kao Data's Harlow campus, which provide seamless connectivity and on-ramps between the on-premises supercomputer and Speechmatics' instances within Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

"Powering modern machine learning research and development requires an advanced computing infrastructure which only becomes more demanding as you scale," says Will Williams, VP Machine Learning, Speechmatics. "As we continue to develop our technological edge through the use of self-supervised learning in our models, it's crucial to ensure our data centre provider can scale with our demands, but in a sustainable way. Kao Data was the obvious choice for us as the best HPC data centre operator in the UK, with the benefit of being powered by 100% renewable energy."

"The opportunity to support Speechmatics in this crucial phase of the company's expansion is an exciting prospect for our organisation, and further underpins Kao Data's position as the UK's preeminent provider of sustainable data centres for HPC and AI," says Lee Myall, CEO, Kao Data. "We look forward to working closely with Speechmatics to help power and support their unique approach of applying neural networks to speech recognition."

Visibility is vital if we are to improve safety and trust in open source
When it comes to securing open source, 'visibility and transparency are vital', writes Vivian Dufour, CEO of Meterian, and when a vulnerability is discovered, the ability to respond fast saves time, resources and reputations.

Recent high-profile cyber security incidents have reinforced the importance of cleaning up the open source software supply chain. From Heartbleed to the Apache Software Foundation's Log4j vulnerability, these highly publicised incidents have exposed the threats associated with the software supply chain. Open source security vulnerabilities are nothing new. Heartbleed was a security bug in the OpenSSL cryptography library that affected many systems. Similarly, Log4Shell is a severe vulnerability; however, in the case of Log4j, the number of affected systems could well run into the billions. Many cybersecurity experts have characterised Log4Shell as the single biggest, most critical vulnerability of the last decade. How do we ensure source code, build, and distribution integrity to achieve effective open source security management?

Open source under the microscope

Technology companies have been using open source for years as it speeds up innovation and time to market. Indeed, most major software developments include open source software – including software used by the national security community. Open source software brings unique value, but it also has unique security challenges. It is used extensively, yet the responsibility for ongoing security maintenance is carried by a community of dedicated volunteers. These security incidents have demonstrated that the use of open source is so ubiquitous that no company can blindly continue in business-as-usual mode. Recent research showed that 73% of applications scanned have at least one vulnerability.

The known unknown

The concept of known knowns, known unknowns and unknown unknowns has been widely used as a risk assessment methodology. When it comes to cybersecurity and the voracious appetite of threat actors for exploiting vulnerabilities, it is a useful analogy. Let's take Apache Log4j as an example. Companies often create products by assembling open source and commercial software components. Almost all software will have some form of ability to journal activity, and Log4j is a very common component used for this.

Java logger Log4j 2 – A zero-day vulnerability

On 9 December 2021, the zero-day vulnerability in the Java logger Log4j 2, known as Log4Shell, sent shockwaves across organisations as security teams scrambled to patch the flaw. Left unfixed, the flaw lets attackers break into systems, steal passwords and logins, extract data, and infect networks with malicious software, causing untold damage, not least to brand reputations. However, herein lies the problem: how do you quickly patch what you don't know you have? Often, in the race to innovate, the first thing sacrificed is documentation. Without it, how does a company know if Log4j is integrated within its application estate, let alone whether it has previously been patched?

Improving safety and trust when speed is of the essence

If we are to increase safety and trust in software, we must improve transparency and visibility across the entire software supply chain. Companies should have the ability to automatically identify open source components in order to monitor and manage security risks from publicly disclosed vulnerabilities. A software bill of materials (SBOM) should be a minimum for any project or development. Without such visibility of all component parts, security teams cannot manage risk and will be unaware of, and potentially exposed to, dangers lurking in their software.

Innovating securely

As organisations continue to innovate at pace in order to reduce time to market, the reliance on open source software continues to increase. However, when the security of a widely used open source component or application is compromised, every company, every country, and every community is impacted. Organisations can and should take advantage of the many benefits that open source software can deliver, but they must not do so blindly. Ensuring you know the exact make-up of your technology stack, including all the component parts, is an important first step. Choosing discovery tools that have the widest comprehensive coverage is important, and so too is the flexibility to grade alerts so that only the most pressing threats are highlighted. This avoids 'alert fatigue' and enables security teams to focus resources where they matter most, putting organisations in a good position to act fast when vulnerabilities are discovered. Hackers faced with stronger security defences will continue to turn their attention to the weaker underbelly of the software supply chain. Now is the time for organisations to implement integrated and automated tooling to gain comprehensive risk control of components in their open source software supply chain. Only by increasing visibility, coverage of known unknowns and transparency can companies stay one step ahead.
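As a concrete illustration of the SBOM idea discussed above, the sketch below inventories declared components from a simple manifest and checks them against a list of known advisories. The manifest format, file name and advisory data are hypothetical; real software composition analysis tools (Meterian's included) also resolve transitive dependencies and pull advisories from live vulnerability feeds.

```python
# Illustrative sketch of a component inventory check. Advisory data and the
# "components.txt" manifest are hypothetical examples, not a real SCA tool.
ADVISORIES = {("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell)"}

def parse_components(path):
    """Read 'name==version' lines from a manifest into (name, version) pairs."""
    components = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                components.append((name.strip(), version.strip()))
    return components

def report(components):
    for name, version in components:
        finding = ADVISORIES.get((name, version))
        status = f"VULNERABLE: {finding}" if finding else "no known advisory"
        print(f"{name} {version}: {status}")

# In-memory demo; a real run would use report(parse_components("components.txt")).
report([("log4j-core", "2.14.1"), ("openssl", "3.0.13")])
```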

Google launches Last Mile Fleet Solution and Cloud Fleet Routing API to meet COVID-driven challenges
Google has announced the launch of Last Mile Fleet Solution from Google Maps Platform and Cloud Fleet Routing API from Google Cloud – two new solutions to help fleet operators improve delivery success and optimise fleet performance. The pandemic has accelerated ecommerce adoption and strained supply chains, while consumer expectations around delivery speed and visibility have reached an all-time high. At the same time, last-mile delivery is estimated to make up more than half of total shipping costs. This means last-mile fleet operators have to work harder to create better consumer experiences and improve their operations. Last Mile Fleet Solution and Cloud Fleet Routing API help fleet operators address these challenges through an integrated suite of mapping, routing, and analytics capabilities.

Cloud Fleet Routing API focuses on the route planning phase of delivery and allows operators to perform advanced fleet-wide optimisation, enabling them to determine the allocation of packages to delivery vans and the sequencing of delivery tasks. Natively integrated with Google Maps routes data, Cloud Fleet Routing API can solve simple route planning requests in near-real-time, and scale to the most demanding of workloads with parallelised request batching. Across this spectrum, customers can specify a variety of constraints, such as time windows, package weights, and vehicle capacities. Cloud Fleet Routing runs on the cleanest cloud in the industry and can help carriers meet sustainability targets by reducing distance travelled, the number of delivery vans, and the CO2 output from computing.

Last Mile Fleet Solution focuses on delivery execution and allows fleet operators to optimise across every stage of the last-mile delivery journey, from ecommerce order to doorstep delivery. The solution also helps businesses create exceptional delivery experiences for consumers and provides drivers the tools they need to perform at their best when completing tasks throughout the day. It builds on the On-demand Rides & Deliveries mobility solution from Google Maps Platform, which is already used by leading ride-hailing and on-demand delivery operators around the world.

When implemented together, Last Mile Fleet Solution and Cloud Fleet Routing API allow businesses to optimise across every stage of the last-mile delivery journey, with features such as:

• Address capture to help obtain an accurate address and location for each pickup or delivery.
• Route optimisation to help ensure drivers are provided with routes that optimise around the fleet's constraints, including delivery time windows, and adapt based on real-time traffic.
• Driver routing and navigation to deliver a seamless driver experience and improve route compliance with in-app navigation powered by Google Maps.
• Shipment tracking to keep consumers updated with live, day-of shipment tracking, including up-to-date location and arrival times of customer packages.
• Fleet performance to enable visibility into real-time route progress and shipment insights for operations teams.

"The pandemic further accelerated both e-commerce and the number of deliveries, which were already growing rapidly. The increased strain on delivery networks, plus many other factors like driver shortages, poor address data, factory closures, and an increase in fuel prices, have impacted delivery time and success," says Hans Thalbauer, Managing Director, Global Supply Chain & Logistics Industries, Google Cloud.
"With Google Maps Platform's Last Mile Fleet Solution and Cloud Fleet Routing API, we're making it easier for delivery fleet operators to address these issues and create seamless experiences for consumers, drivers, and fleet managers."

"Our promise to customers is that we'll deliver their goods within 120 minutes of receiving their order, and efficient route planning and navigation is indispensable in helping us achieve that. We chose Google Maps Platform because no other provider supports us so well with data – from distance and travel time data for planning to real-time data while driving," comments Thomas Manthey, Head of Engineering, Warehouse and Logistics, flaschenpost SE.

"At Paack, we are obsessed with helping some of the largest e-commerce retailers in Europe create exceptional delivery experiences for the millions of orders they receive each month. To scale quickly, we adopted Last Mile Fleet Solution and Cloud Fleet Routing, which enable our drivers and fleet managers to maintain peak efficiency and go beyond our 98% on-time, first-time delivery rates," says Olivier Colinet, Chief Technology & Product Officer, Paack Logistics.

To learn more about Last Mile Fleet Solution and Cloud Fleet Routing API, register for Google's upcoming Cloud Spotlight event, "Shining a light on supply chain and logistics."
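For readers unfamiliar with the underlying planning problem, the sketch below shows, in deliberately simplified form, the kind of decision the route planning phase automates: assigning packages to vans under a capacity constraint and sequencing each van's stops. This greedy heuristic is not Google's Cloud Fleet Routing API or its algorithm, and all the data is invented; the real service additionally handles time windows, live traffic and much larger fleets.

```python
# Simplified, hypothetical sketch of package-to-van assignment and stop sequencing.
# Not Google's API: a greedy capacity fill plus nearest-stop-first ordering.
import math

def plan(depot, stops, van_capacity_kg):
    """Fill vans up to capacity in order, then sequence each van's stops nearest-first."""
    vans, current, load = [], [], 0.0
    for stop in stops:
        if load + stop["weight_kg"] > van_capacity_kg and current:
            vans.append(current)
            current, load = [], 0.0
        current.append(stop)
        load += stop["weight_kg"]
    if current:
        vans.append(current)

    routes = []
    for van in vans:
        route, position, remaining = [], depot, list(van)
        while remaining:
            nxt = min(remaining, key=lambda s: math.dist(position, s["xy"]))
            remaining.remove(nxt)
            route.append(nxt["id"])
            position = nxt["xy"]
        routes.append(route)
    return routes

stops = [
    {"id": "pkg-1", "xy": (2.0, 1.0), "weight_kg": 8.0},
    {"id": "pkg-2", "xy": (1.0, 4.0), "weight_kg": 12.0},
    {"id": "pkg-3", "xy": (5.0, 2.0), "weight_kg": 7.0},
]
print(plan(depot=(0.0, 0.0), stops=stops, van_capacity_kg=20.0))
# [['pkg-1', 'pkg-2'], ['pkg-3']]
```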

Secure-IC launches a unique cybersecurity lifecycle management platform
After recently announcing its capital raise to support its "Chip to Cloud" vision, Secure-IC has announced the launch of a unique cybersecurity lifecycle management platform for connected objects: the Securyzr integrated Security Services Platform (iSSP). For over a decade, Secure-IC has been providing the electronics industries with its protection technologies, namely the Securyzr iSE (integrated Secure Elements) and silicon IPs, which are embedded into hundreds of millions of electronic chips for smartphones, computers, automobiles, smart meters, cloud servers and more.

The semiconductor industry is now aware that integrated Secure Elements (iSE) are the solution to protect their System-on-Chips across the supply chains of vertical industries (automotive, industrial IoT and OT, AI, telecom, New Space). The iSE typically enables secure provisioning, and therefore proactively fights against master compromising, malware/Trojan insertion, overbuilding, etc. After deployment, the Edge product must remain on the safe and secure side; even better, security must be maintained at the desired level despite the fact that adversaries are becoming increasingly sophisticated. Nowadays, fleets of connected devices have an operational need to maintain a relevant level of security over time, monitor their status and securely update their firmware.

Therefore, beyond the physical security of electronic chips, Secure-IC intends to cover the entire security lifecycle of connected objects and embedded systems, from their design through the management of fleets of deployed devices up to their decommissioning. That is why Secure-IC has developed the Securyzr integrated Security Services Platform (iSSP), to enable its customers and partners to securely supply, deploy and manage a fleet of devices from the cloud and be provided with added-value security services, as well as compliance with standards.

The Securyzr iSSP is composed of the existing Securyzr iSE, Secure-IC's Root of Trust on the embedded Edge side, with a software agent providing connectivity from chip to cloud (and back), and the newly launched Securyzr Server. The solution will be able to run on both public and private clouds and will come with a user-friendly web interface and a software bridge to the devices, allowing heterogeneous fleets of devices to be managed. The Securyzr Server manages the different services for the platform and the business applications it hosts:

• Key provisioning, to securely provision the chip devices with secret keys across the supply chain;
• Firmware Update (FOTA/FUOTA), to securely provide chips with their software and then update them physically or over the air, in order to maintain their security level;
• Devices Monitoring and cyber intelligence, to provide a proactive security service, retrieving cyber security logs from the chips, analysing them and sending instructions back to the chip fleet if necessary;
• Devices Identity, to guarantee trust from the chip to the cloud, to the devices, users and data, through device multi-factor authentication that provides resistance against impersonation, replay, and the consequences of an initial compromise.

The security of the systems can be easily visualised through a Security Digital Twin.
This solution, which meets the requirements of security certification schemes, has been developed for pilot projects already deployed in various applications and industries, and will support Secure-IC's customers in addressing the challenges they face during the design, operation and lifecycle of their secure IoT fleets, ensuring trusted data.
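The firmware update service described above ultimately rests on a verify-before-install step on the device. The sketch below illustrates that generic step only; it is not Secure-IC's protocol, and it substitutes a symmetric HMAC tag for the hardware-backed asymmetric signature a real root of trust such as the Securyzr iSE would verify. Key material, file contents and names are example values.

```python
# Generic verify-before-install sketch for a secure firmware update. HMAC stands in
# for a real asymmetric signature check; all values are hypothetical examples.
import hashlib
import hmac

DEVICE_KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # example provisioned key

def verify_and_stage(image: bytes, tag: bytes) -> bool:
    """Accept the new firmware image only if its authentication tag matches."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # reject and report the failure to the fleet server
    # A real device would now write the image to the inactive slot and mark it
    # pending for the next boot, rolling back if the new image fails to start.
    return True

firmware = b"\x7fELF...example-image"
good_tag = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()
print(verify_and_stage(firmware, good_tag))      # True: tag matches
print(verify_and_stage(firmware, b"\x00" * 32))  # False: tampered or wrong tag
```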

Pressing sector challenges necessitate immediate priorities
Ensuring viable green energy technologies and alleviating risks from fluctuating power and unplanned downtime remain vital priorities for data centre professionals building hyperscale and edge data centres, according to new insights from Aggreko. Detailed in a series of reports – The Inside View – industry professionals explain the challenges they face in the management and maintenance of data centres, following a number of in-depth interviews.

Aggreko's two reports look in detail at concerns around power provision and risk management in the booming hyperscale and edge data centre markets, from those who run the day-to-day operations of the facilities. To form the reports, Aggreko interviewed 10 key industry professionals, including energy managers, facilities managers, managing directors, consultants and those involved in the day-to-day operations of a hyperscale facility. The reports identify issues around facilities not always having the right provision for consistent power in the wake of growing national grid strain, and around implementing maintenance strategies that pre-empt disruptive events rather than respond post-incident. They also highlight issues around sustainable energy sources during the construction phase of both edge and hyperscale data centres, including the practicality of implementing green technologies cost-effectively and at scale.

"Data centre construction continues to boom throughout Europe, but with more demand comes more questions about what issues may arise as additional facilities are built," says Billy Durie, Global Sector Head for Data Centres at Aggreko. "In such an in-demand market, it is crucial that key stakeholders working within it are aware of ongoing and upcoming trends, and nowhere is this more apparent than in utilities provision.

"Put simply, if there is no contingency plan in place for power provision or appropriate maintenance strategies, data centre uptime can be threatened. Yet alongside this, sustainability poses a similarly existential issue, especially with the regulatory landscape expected to tighten further in the coming years. In such a challenging situation, the sector needs to have suppliers that can provide agile and effective equipment solutions."

Alongside power and equipment upkeep concerns, The Inside View reports examine issues professionals working on hyperscale and edge facilities face when scaling up existing data centres to meet skyrocketing demand. Issues raised include ageing infrastructure and how it can cope with powering modern IT equipment, and how to expand existing facilities quickly and effectively.

"Our research has clearly shown that the data centre market is continuing to struggle with ever-increasing latency requirements, balancing inefficient existing plant against rapidly advancing data-intensive technologies. As well as this, existing insufficient energy infrastructure and unclear regulations can result in situations where stakeholders are not willing to upgrade existing equipment to meet pressing challenges. However, as these new reports demonstrate, if the sector is to meet decarbonisation demands while meeting growing service levels, innovative strategies must be explored around the provision of key site utilities."

To download Aggreko's new data centre reports, The Inside View, click here.

Low-code/no-code patents rise to transform software development for digital transformation
Software development in the digital era is often challenged by skilled developer shortages, technical debt, and shadow IT. Low-code/no-code (LCNC) platforms hold a promise to fix these problems and accelerate enterprise digital transformation. With their growing wide-scale acceptance, patent grants in the space recorded 19.6% growth between 2015 and 15 December 2021, reveals GlobalData.

Darshana Naranje, Senior Disruptive Tech Analyst at GlobalData, notes: "A variety of IT technologies have evolved over the last decade and therefore IT teams are required to rely on experts to embrace digitalization. Moreover, conventional custom scripts often fail to match the speed and agility of business requirements. This has led to significant growth in LCNC patent filings and grants over the last five years, where technology, followed by financial services, were the sweet spots across industries."

Research focus areas in technology include LCNC platform development for content generation, web hosting, enterprise resource planning (ERP) systems, dialogue systems, game software, and spreadsheet-based software, while financial services encompassed LCNC platform development for financial transactions, investments, and customer relationship management (CRM) systems. GlobalData's FutureTech Series report, 'Codeless Tomorrow: Can Low-Code/No-Code Platforms Revolutionize Application Development in Digital Age?', highlights the key trends in patent filings and grants by industry.

Technology

• US11069340B2: Existing speech generation systems do not allow non-expert administrators to modify the architecture and dialogue patterns. Microsoft has patented a system that allows non-engineer administrators to modify the existing dialogue system without programming. It includes a knowledge system and learning model that can be used to annotate a user voice using simple language, in audio and video recording systems.

• KR101815561B1: Conventional ERP systems have limitations in tackling rapid modification and renewal of enterprise business processes. Korean startup Bizentro has patented an ERP system which enables its clients to perform the required business process customisation in the existing system without any coding expertise.

Financial Services

• US11017053B2: Useful and desired information can often be lost in massive data sets. CRM software is used to track customer information and audit sales opportunities; however, it lacks business logic and templates. Callidus Software (an SAP subsidiary) has patented a CRM system that includes a content repository and a communication portal developer to manage customer data that can be created without coding.

• US10817662B2: Electronic devices can communicate different financial and legal transactions using web services. However, web services require complex design, development, and deployment, which is an expensive and time-consuming process. US-based startup Kim Technologies has patented a system which enables the automation of data collection, validation, and execution without coding.

Naranje concludes: "The COVID-19 crisis has made businesses more open to agile and innovative LCNC tools. Democratizing the patents of these technologies will become an attractive proposition to minimize the barriers between technology creation and adoption, ultimately benefitting the drive for digital transformation."


