Tuesday, April 29, 2025

Machine Learning


Datadog Monitoring for OCI now generally available
Datadog, a monitoring and security platform for cloud applications and a member of Oracle PartnerNetwork, has announced the general availability of Datadog Monitoring for Oracle Cloud Infrastructure (OCI), which enables Oracle customers to monitor enterprise cloud-native and traditional workloads on OCI with telemetry in context across their infrastructure, applications and services. With this launch, Datadog is helping customers migrate with confidence from on-premises to cloud environments, execute multi-cloud strategies and monitor AI/ML inference workloads.

Datadog Monitoring for Oracle Cloud Infrastructure helps customers:

- Gain visibility into OCI and hybrid environments: Teams can collect and analyse metrics from their OCI stack by using Datadog's integrations for over 20 major OCI services and more than 750 other technologies. In addition, customers can visualise the performance of OCI cloud services, on-premises servers, VMs, databases, containers and apps in near-real time with customisable, drag-and-drop and out-of-the-box dashboards and monitors.
- Monitor AI/ML inference workloads: Teams can monitor and receive alerts on the usage and performance of GPUs, investigate root causes, monitor operational performance and evaluate the quality, privacy and safety of LLM applications.
- Get code-level visibility into applications: Real-time service maps, AI-powered synthetic monitors and alerts on latency, exceptions, code-level errors, log issues and more give teams deeper insight into the health and performance of their applications, including those using Java.

“With this announcement, Datadog enables Oracle customers to unify monitoring of OCI, on-premises environments and other clouds in a single pane of glass for all teams,” says Yrieix Garnier, VP of Product at Datadog. “This helps teams migrate to the cloud and execute multi-cloud strategies with confidence, knowing that they can monitor services side-by-side, visualise performance data during all stages of a migration and immediately identify service dependencies.”

Panasas helps drive academic research initiatives
Panasas has announced that a number of leading academic research institutions are using storage solutions from the Panasas ActiveStor portfolio to support their modern high-performance computing (HPC) environments. Along with many other university research institutions, the Minnesota Supercomputing Institute (MSI), the UC San Diego Centre for Microbiome Innovation (CMI) and LES MINES ParisTech have trusted Panasas to deliver the data storage and management capabilities needed to support the HPC and AI workloads fuelling their research advancement.

Modern HPC environments fast-tracking academic research

According to Hyperion Research, total HPC spending reached $37bn in 2022 and is projected to exceed $52bn in 2026. A key growth driver is that an increasing number of organisations and countries see the value of innovation and of investing in R&D to advance society, grow revenues, reduce costs and remain competitive. High-performance data storage is the lifeblood of an academic research infrastructure: these institutions cannot afford lost data, which could set a research programme back by months or even years. Because they simultaneously run and analyse data sets across multiple HPC and AI/ML applications with varied IO patterns and file sizes, they require an always-on storage solution that can manage these bandwidth-intensive, mixed workloads while scaling to support growth.

The Panasas ActiveStor portfolio of modern HPC storage solutions provides the reliability research teams need to keep their projects on track while eliminating the management burdens associated with roll-your-own and open-source storage systems. Panasas solutions currently power innovative research in agriculture, astrophysics, bioinformatics, climate research, computational chemistry, genome analysis, geophysics, high energy physics, machine learning, materials science, molecular biology and more. Here are a few examples of how Panasas is helping advance research within some of the most prestigious academic institutions across the globe:

- The Minnesota Supercomputing Institute (MSI) has relied on Panasas storage for many years to support its growing HPC and AI research initiatives. Having recently added 10PB of storage, the institute uses ActiveStor Ultra appliances running the Panasas PanFS parallel file system in a single-tier architecture that makes it easy to manage the unique mix of workloads that multiple research groups run every day.
- The UC San Diego Centre for Microbiome Innovation (CMI) has used Panasas to build the data foundation it needed for continued research excellence, including the power to expedite data exploration and discovery with better control, usability and optimal uptime.
- TU Freiberg is utilising Panasas as its parallel scratch file system.
- The Rutherford Appleton Laboratory (RAL) team consolidated its storage onto Panasas ActiveStor and gained the scalability, performance and manageability its workflows demanded, freeing researchers to spend time working on the data rather than managing and moving it. Today, a single IT manager spends half their time supporting 1,500 nodes.

Over 1,300 business leaders declare AI as a force for good
More than 1,300 experts have signed an open letter to collectively emphasise the positive potential of artificial intelligence (AI) and allay concerns about its impact on humanity. Coordinated by BCS, the open letter aims to challenge the pessimism surrounding AI and promote a more optimistic perspective. According to Rashik Parmar, CEO of BCS, the overwhelming support for the letter demonstrates the UK tech community's resolute belief that AI should be viewed as a "beneficial force" rather than a "nightmare scenario of evil robot overlords".

The letter stands in opposition to the recent letter signed by influential figures, such as Elon Musk, which called for a pause in developing powerful AI systems, citing the perceived "existential risk" posed by super-intelligent AI. The BCS signatories include experts from businesses, academia, public institutions and think tanks, and their collective expertise and insights highlight the myriad positive applications of AI.

Sheila Flavell CBE, COO of FDM Group, says, “AI can play a key role in supercharging digital transformation strategies, helping organisations leverage their data to better understand their business and customers. As the UK continues to show its commitment to developing AI for good, it will help increase Britain’s position as a tech superpower and positively bolster the economy as its usage becomes widespread. In order to harness the full power of AI, the UK needs to develop a cohort of AI-skilled workers to oversee its development and deployment, so it is important for organisations to encourage new talent, such as graduates and returners, to engage in education courses in AI to lead this charge.”

Hema Purohit, a specialist in digital health and social care for BCS, emphasises AI's ability to enable early detection of serious illnesses, like cardiac disease or diabetes, during eye tests. To further support Britain’s position as a global exemplar for high-quality, ethical and inclusive AI practices, UK Prime Minister Rishi Sunak will host a global summit on AI regulation this autumn.

Challenges are emerging, including the potential automation of up to 300 million jobs, which has prompted some companies to pause hiring for specific roles, but these must be approached pragmatically. Regulation will be a vital safeguard against the misuse of AI, in place of hasty and unregulated proliferation. As the world grapples with the powers of AI, these expert voices will provide valuable insights and perspectives to guide its responsible development and implementation.

GovAssure, cyber security and NDR
By Ashley Nurcombe, Senior Systems Engineer UK&I, Corelight

We live in a world of escalating digital threats to government IT systems. The public sector has recorded more global incidents and data breaches than any other sector over the past year, according to a recent Verizon study. That is why it is heartening to see the launch of the new GovAssure scheme, which mandates stringent annual cyber security audits of all government departments, based on a National Cyber Security Centre (NCSC) framework. Now the hard work starts. As government IT and security leads begin to work through the strict requirements of the Cyber Assessment Framework (CAF), they will find network detection and response (NDR) increasingly critical to these compliance efforts.

Why we need GovAssure

GovAssure is the government's response to surging threat levels in the public sector. It is not hard to see why the sector is such an attractive target. Government entities hold a vast range of lucrative citizen data which could be used to carry out follow-on identity fraud. Government services are also a big target for extortionists looking to hold departments hostage with disruptive ransomware. And there is plenty of classified information for foreign powers to go after in pursuit of geopolitical advantage.

Contrary to popular belief, most attacks are financially motivated (68%) rather than nation-state attempts at espionage (30%). That means external, organised crime gangs are the biggest threat to government security. However, internal actors account for nearly a third (30%) of breaches, and collaboration between external parties and government employees or partners accounts for 16% of data breaches. When the cause of insider risk is malicious intent rather than negligence, it can be challenging to spot, because staff may be using legitimate access rights and going to great lengths to achieve their goals without being noticed.

Phishing and social engineering are still among threat actors' most popular attack techniques. They target distracted and/or poorly trained employees to harvest government logins and/or personal information. Credentials are gathered in an estimated third of government breaches, while personal information is taken in nearly two-fifths (38%). Arguably, the shift to hybrid working has created more risk here, as staff admit to being more distracted when working from home (WFH), and personal devices and home networks may be less well protected than their corporate counterparts.

The growing cyber attack surface

Several other threat vectors are frequently probed by malicious actors, including software vulnerabilities. New Freedom of Information data reveals that a worrying number of government assets are running outdated software that vendors no longer support. Connected Internet of Things (IoT) devices are an increasingly popular target, especially those with unpatched firmware or factory-default/easy-to-guess passwords. Such devices can be targeted to gain a foothold in government networks and/or to sabotage smart city services.

Finally, the government has a significant supply chain risk management challenge. Third-party suppliers and partners are critical to efficiently delivering government services, but they also expand the attack surface and introduce additional risk, especially if third parties are not properly and continuously vetted for security risks. Take the recent ransomware breach at Capita, an outsourcing giant with billions of pounds of government contracts. Although investigations are still ongoing, as many as 90 of the firm's clients have already reported data breaches due to the attack.

What the CAF demands

In this context, GovAssure is a long-overdue attempt to enhance government resilience to cyber risk. In fact, Government Chief Security Officer Vincent Devine describes it as a "transformative change" in the government's approach to cyber that will deliver better visibility of the challenges, set clear expectations for departments and empower security professionals to strengthen the investment case. Yet delivering assurance will not be easy. The CAF lists 14 cyber security and resilience principles, plus guidance on using and applying them. These range from risk and asset management to data, supply chain and system security, network resilience, security monitoring and much more. One thing is clear: visibility into network activity is a critical foundational capability on which to build CAF compliance programmes.

How NDR can help

NDR (network detection and response) tools provide this visibility, enabling teams to map assets better, ensure the integrity of data exchanges with third parties, monitor compliance and detect threats before they have a chance to impact the organisation. Although the CAF primarily focuses on finding known threats, government IT leaders should consider going further, with NDR tooling designed to go beyond signature-based detection to spot unknown but potentially malicious behaviour. Such tools might use machine learning algorithms to learn what regular activity looks like in order to better spot the signs of compromise (a minimal sketch of this idea appears at the end of this piece). If they do, IT leaders should avoid purchasing black-box tools that do not allow flexible querying or that provide results without showing their rationale; such tools can add opacity and assurance/compliance headaches. Open-source tools based on Zeek may offer a better and more reasonably priced alternative.

Ultimately, there are plenty of challenges for departments looking to drive GovAssure programmes. Limited budgets, in-house skills gaps, complex cyber threats and a growing compliance burden will all take their toll. But by reaching out to private sector security experts, there is a way forward. For many, that journey will begin with NDR to safeguard sensitive information and critical infrastructure.
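As a rough illustration of the baseline modelling mentioned above, the sketch below learns what "normal" network connections look like and flags outliers. It is not Corelight's or any vendor's method; it assumes Zeek-style connection records have already been exported to a CSV (the file name and column names here are hypothetical) and uses scikit-learn's IsolationForest as one simple way to model regular activity.

```python
# A minimal sketch, assuming Zeek-style connection logs exported to CSV with
# hypothetical columns (duration, orig_bytes, resp_bytes). Illustrative only;
# not any NDR product's actual detection logic.
import pandas as pd
from sklearn.ensemble import IsolationForest

conns = pd.read_csv("conn_log.csv")  # assumed export of a conn.log
features = conns[["duration", "orig_bytes", "resp_bytes"]].fillna(0)

# Fit an unsupervised model on a window of known-good traffic.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# Score connections: -1 marks outliers worth an analyst's attention.
conns["anomaly"] = model.predict(features)
print(conns[conns["anomaly"] == -1].head())
```

In practice the feature set, training window and alert thresholds would need far more care, but the shape of the workflow, learn a baseline and surface deviations, is the point being made above.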

Chalmers University of Technology selects Lenovo and NVIDIA for national supercomputer
Lenovo has announced that Chalmers University of Technology is using Lenovo and NVIDIA technology infrastructure to power its large-scale computing resource, Alvis. The project has seen the delivery and implementation of a clustered computing system for artificial intelligence (AI) and machine learning (ML) research, in what is Lenovo's largest HPC (high performance computing) cluster for AI and ML in the Europe, Middle East and Africa region.

Alvis (Old Norse for ‘all-wise’) is a national supercomputer resource within the Swedish National Infrastructure for Computing (SNIC). It began in 2020 and has since grown to a capacity that can tackle larger research tasks on a broader scale. Financed by the Knut and Alice Wallenberg Foundation, the computer system is supplied by Lenovo and located at Chalmers University of Technology in Gothenburg, home to the EU's largest research initiative, the Graphene Flagship.

The collaborative project allows any Swedish researcher who needs to improve their mathematical calculations and models to take advantage of Alvis' services through SNIC's application system, regardless of the research field. This supports researchers who are already utilising machine learning to analyse complex problems, as well as those investigating the use of machine learning to solve issues within their respective fields, with the potential to lead to ground-breaking academic research in areas such as quantum computing and data-driven research for healthcare and science.

“The Alvis project is a prime example of the role of supercomputing in helping to solve humanity’s greatest challenges, and Lenovo is both proud and excited to be selected as part of it,” says Noam Rosen, EMEA Director, HPC & AI at Lenovo Infrastructure Solutions Group. “Supported by Lenovo’s performance-leading technology, Alvis will power research and use machine learning across many diverse areas with a major impact on societal development, including environmental research and the development of pharmaceuticals. This computing resource is truly unique, built on the premise of architectures for different AI and machine learning workloads with sustainability in mind, helping to save energy and reduce carbon emissions by using our pioneering warm water-cooling technology.”

“The first pilot resource for Alvis has already been used by more than 150 research projects across Swedish universities. By making it larger, and opening the Alvis system to all Swedish researchers, Chalmers and Lenovo are playing an important role in providing a national HPC ecosystem for future research,” comments Sverker Holmgren, Director of Chalmers e-Infrastructure Commons, which hosts the Alvis system.

Powering energy-saving AI infrastructure

Chalmers has chosen to implement a scalable cluster with a variety of Lenovo ThinkSystem servers to deliver the right mix of NVIDIA GPUs to its users in a way that prioritises energy savings and workload balance. This includes the Lenovo ThinkSystem SD650-N V2 to deliver the power of NVIDIA A100 Tensor Core GPUs, and the NVIDIA-Certified ThinkSystem SR670 V2 for NVIDIA A40 and T4 GPUs.

“The work we’re doing with Chalmers University and its Alvis national supercomputer will give researchers the power they need to simulate and predict our world,” says Rod Evans, EMEA director of high-performance computing at NVIDIA. “Together, we’re giving the scientific community tools to solve the world’s greatest supercomputing challenges – from forecasting weather to drug discovery.”

The storage architecture delivers a new 7.8-petabyte Ceph solution, to be integrated into the existing storage environment at Chalmers. NVIDIA Quantum 200Gb/s InfiniBand provides the system with low-latency, high-throughput networking and smart in-network computing acceleration engines. With these high-speed infrastructure capabilities, users have almost 1,000 GPUs, mainly NVIDIA A100 Tensor Core, including over 260,000 processing cores and over 800 TFLOPS of compute power to drive a faster time to answer in their research.

In addition, Alvis leverages Lenovo’s Neptune™ liquid-cooling technologies to deliver unparalleled compute efficiency. Initially, full air cooling was proposed for the project, but Chalmers instead decided to deploy Lenovo Neptune warm water-cooling capabilities to reduce long-term operational costs and deliver a ‘greener’ AI infrastructure. As a result, the university anticipates significant energy savings thanks to the efficiencies of water cooling.

Xydus hits 600% hyper growth helping manage virtual workforces
Xydus has announced its rebranded emergence after a record-breaking year which saw sales increase six-fold as it closed its largest-ever deals. Previously known as Paycasso, Xydus has built its technology platform and credibility over more than a decade by working with some of the world's most trusted brands, such as PwC, EY, TransUnion, Equifax, Philip Morris International, Irish Life and, notably, the UK's National Health Service, Europe's largest employer.

Hybrid working fuels growth

A focus on identity technology perfectly positioned Xydus' cloud services and helped the team win new deals as employee and consumer engagement moved massively to digital during lockdown. The shift to flexible working supercharged already buoyant demand for Xydus' (formerly Paycasso's) digital identity software, as employers transitioned to managing remote workforces who require digital access and consumers increased their use of e-commerce. Demand was particularly strong from organisations with large, geographically dispersed consumer markets and significant workforces across multiple jurisdictions, such as healthcare and financial services providers in the US and Europe.

How Xydus works

The Xydus platform uses technologies such as facial recognition, biometrics, machine learning and multi-layered encryption, allowing workers to enrol and then reuse their individual identity across the enterprise in seconds. Previously this could take hours, days or weeks. The advantage for employers is a seamless onboarding experience and secure access to corporate systems and access points without the need to deploy overstretched IT or HR resources. For consumers, it simplifies and streamlines their experience as they access today's thousands of digital services.

Xydus CEO Russell King comments: “Recent years have seen massive disruption to the working patterns and online behaviours of billions of consumers and employers, making Xydus more relevant than ever. A decade ago, faced with a digitally expanding world, we could not think of a much bigger challenge to address than core identity. It is this consistent focus in addressing the challenges in identity management which is now paying dividends for Xydus. This last year’s spectacular sales growth reflects how the need to solve identity issues for this expansive world is only going to grow.”

Hunting Anomalies: combining CyberSec with Machine learning
In psychology, the Capgras delusion is the (unfounded) belief that a close person has been replaced by an identical impostor. Under the spell of this delusion, people feel that something “isn’t right” about the person they know. Everything looks as it is supposed to, but it just doesn’t feel right.

Whenever I receive what turns out to be a “good” phishing email, I get much the same experience. It looks legitimate, but often something feels “off”. I have seen so many attempts over the years that the small irregularities or suspicious arrivals trigger a warning signal. It's an anomaly in a familiar place. However, we can't expect everyone in IT to have the same experience. That's why cybersecurity exists. We implement systems to protect users from malicious actors who will try anything to gain access or information from their victims. Manually noticing phishing or other malicious attempts takes experience. Often, emails, websites and apps are crafted in such a way that they look almost identical to the real thing, yet they are a little different. There is a field that deals well with detecting such small differences: machine learning.

A bright future for CyberSec

I don't even mean “the future” in the exact sense of the word. Machine learning (ML) has been showing off its muscles in cybersecurity for quite some time now. Back in 2018, Microsoft stopped a potential outbreak of Emotet through clever use of ML in both local and cloud systems. Philosophically, cybersecurity is a perfect candidate for machine learning, as models are predictive. These predictions are derived from massive amounts of data (a common criticism of current ML). After the models are trained, they make predictions on data points that are very similar, but not identical, to the training data.

Most malicious attacks depend on a similar approach. They have to fool a human user in order to execute some action, so they must look as similar to something legitimate as possible; otherwise they will be ignored even by the less tech-savvy. Additionally, many new renditions of malware are simple mutations of the same code. Since we've been dealing with malicious code for several decades now, there's enough of it out there to create good training sets for machine learning. We also have plenty of innocuous code for anything else we might need.

A common threat: Domain Generation Algorithms

Domain Generation Algorithms (DGAs) have been a long-standing threat in cybersecurity. Nearly everyone who has been in this field has had some experience with DGAs. There are numerous benefits for the attacker, making it a popular vector of attack. One of the primary benefits of DGA attacks is that the perpetrator can flood DNS with thousands of randomly generated domains. Of those thousands, only one would be the real command-and-control (C&C) server, resulting in significant issues for any expert trying to find the source. Additionally, since DGAs are mostly seed-based, the attacker can know which domain to register beforehand. A common and very popular way to generate seeds is to use dates, making it easy to predict which domains to register ahead of time.

However, DGAs produce URLs that look nothing like a regular website (with some exceptions, as randomness can lead to something that may seem like a pattern). For example, Sisron's DGA produced these samples: mjgwndiwmjea.org, mdcwntiwmjea.net, mdywntiwmjea.net. There are two important drawbacks. Firstly, it makes it easier to spot that something is amiss, even for someone outside of cybersec.
Secondly, after just a few domains, it's clear (to humans) that they are being generated. However, here's the fun part: automating fool-proof (or even something close to fool-proof) DGA detection is unbelievably difficult. Rule-based approaches are prone to failure and false positives, or are too slow. DGAs were (and in large part still are) a real pain for any cybersecurity professional. Luckily, machine learning has already allowed us to make great strides in improving detection methods. Akamai has developed a reportedly very sophisticated and successful model, and for smaller players in the market there are plenty of libraries and frameworks for the same purpose.

Machine learning for other avenues

Cybersecurity is in a fortunate position: there are millions of data points, and more are produced every day. Unlike other fields, domain-specific data that can be easily labelled will continue being created for the foreseeable future. And if DGAs can be “solved” (I use this word with some caution) through machine learning, other attack methods almost certainly can be as well.

A great application for machine learning is phishing. Besides being the most popular vector of attack, it's also the one that most prominently uses impersonation and fabrication. Every (good) phishing website or email looks a lot like it's supposed to. However, there will always be some discrepancy: an unusual link here, a grammatical error there; there's always something waiting to be uncovered. After some extensive logistic regression model training, such a tool should be able to output a phishing probability and assign a specific website to a class (a toy sketch of this kind of classifier appears at the end of this article). While acquiring data for these models might be a little challenging, there are some public data sets available (e.g. PhishTank, used by the authors of one such study) to ease the process.

Conclusion

The applications I have mentioned only skim the surface. Machine learning models can be applied to probably every sphere of cybersecurity. Malware detection, OSINT, email protection and others can be tackled effectively through proper use of ML. Thus, cybersecurity is in a unique position: the broad nature of cyberattacks generates large amounts of data, and that data is the foundation for ML-based solutions. ML will not solve everything, especially highly tailored attacks, but it will immensely raise the bar that attackers need to overcome. Cybersecurity should therefore be thought of as the avant-garde of machine learning applications.

We'll be discussing more ML-based solutions at Oxycon, a free annual web scraping conference whose agenda covers both technical and business topics around data collection.

Author: Juras Juršėnas, COO at Oxylabs.io.
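As a hedged, toy-scale sketch of the classification idea discussed above (character-level features feeding a logistic regression that outputs a probability), the snippet below trains on a handful of domains. The DGA-labelled samples reuse the Sisron examples quoted in the article plus one made-up string; a real system would need a large labelled corpus, and the same pipeline shape could be pointed at phishing URLs instead of domains.

```python
# Toy sketch only: tiny inline dataset, illustrative labels, no claim to
# reproduce Akamai's or anyone else's production DGA detector.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

domains = ["google.com", "wikipedia.org", "bbc.co.uk", "oracle.com",
           "mjgwndiwmjea.org", "mdcwntiwmjea.net", "mdywntiwmjea.net",
           "qxkzjpvruwty.info"]          # last entry is a made-up example
labels = [0, 0, 0, 0, 1, 1, 1, 1]        # 0 = legitimate, 1 = generated

# Character bigrams/trigrams capture the "looks wrong" quality a human spots.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(domains, labels)

# Output a probability rather than a hard verdict, as suggested above.
for d in ["mdgwntiwmjea.net", "openstreetmap.org"]:
    print(d, round(model.predict_proba([d])[0][1], 3))
```

The probability output is what makes this pattern useful operationally: thresholds can be tuned to trade false positives against missed detections rather than relying on brittle hand-written rules.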

The best practices for building an AI Serving Engine
By Yiftach Schoolman, Redis Labs Co-founder and CTO

One of the most critical steps in any operational machine learning (ML) pipeline is artificial intelligence (AI) serving, a task usually performed by an AI serving engine. AI serving engines evaluate and interpret data in the knowledge base, handle model deployment and monitor performance. They represent a whole new world in which applications will be able to leverage AI technologies to improve operational efficiencies and solve significant business problems.

AI Serving Engine for Real Time: Best Practices

I have been working with Redis Labs customers to better understand their challenges in taking AI to production and how they need to architect their AI serving engines. To help, we've developed a list of best practices:

- Fast end-to-end serving: If you are supporting real-time apps, you should ensure that adding AI functionality to your stack will have little to no effect on application performance.
- No downtime: As every transaction potentially includes some AI processing, you need to maintain a consistent standard SLA, preferably at least five-nines (99.999%) for mission-critical applications, using proven mechanisms such as replication, data persistence, multi-availability-zone/rack deployment, Active-Active geo-distribution, periodic backups and auto-cluster recovery.
- Scalability: Driven by user behaviour, many applications are built to serve peak use cases, from Black Friday to the big game. You need the flexibility to scale the AI serving engine out or in based on your expected and current loads.
- Support for multiple platforms: Your AI serving engine should be able to serve deep-learning models trained on state-of-the-art platforms like TensorFlow or PyTorch. In addition, machine-learning models like random forest and linear regression still provide good predictability for many use cases and should be supported by your AI serving engine.
- Easy to deploy new models: Most companies want the option to frequently update their models according to market trends or to exploit new opportunities. Updating a model should be as transparent as possible and should not affect application performance.
- Performance monitoring and retraining: Everyone wants to know how well the model they trained is executing and to be able to tune it according to how well it performs in the real world. Make sure the AI serving engine supports A/B testing to compare the model against a default model. The system should also provide tools to rank the AI execution of your applications. (A generic sketch of no-downtime model swaps and A/B routing follows after this list.)
- Deploy everywhere: In most cases it's best to build and train in the cloud and be able to serve wherever you need to, for example in a vendor's cloud, across multiple clouds, on-premises, in hybrid clouds or at the edge. The AI serving engine should be platform-agnostic, based on open-source technology, and have a well-known deployment model that can run on CPUs, state-of-the-art GPUs, high-performance inference engines and even Raspberry Pi devices.
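To make two of these practices concrete, transparent model updates without downtime and A/B testing against a default model, here is a minimal, generic sketch. It is not any vendor's serving API; the "models" are plain Python callables standing in for real TensorFlow or PyTorch models, and the traffic split is a simple random fraction chosen for illustration.

```python
# A minimal sketch, assuming models are plain callables; illustrative only.
import random
from typing import Callable

class ServingEngine:
    def __init__(self, default_model: Callable, ab_fraction: float = 0.1):
        self.default_model = default_model
        self.candidate_model = None
        self.ab_fraction = ab_fraction  # share of traffic sent to candidate

    def deploy_candidate(self, model: Callable) -> None:
        # Attribute assignment is atomic in CPython, so requests keep using
        # whichever model they already resolved; no restart is required.
        self.candidate_model = model

    def promote_candidate(self) -> None:
        # Make the candidate the new default once its metrics look good.
        if self.candidate_model is not None:
            self.default_model = self.candidate_model
            self.candidate_model = None

    def predict(self, features):
        use_candidate = (self.candidate_model is not None
                         and random.random() < self.ab_fraction)
        model = self.candidate_model if use_candidate else self.default_model
        return model(features), ("candidate" if use_candidate else "default")

# Usage with stand-in models: deploy a candidate, route a slice of traffic.
engine = ServingEngine(default_model=lambda x: sum(x) * 0.5)
engine.deploy_candidate(lambda x: sum(x) * 0.6)
print(engine.predict([1.0, 2.0, 3.0]))
```

In a production engine the routing decision, per-model metrics and promotion step would be handled by the serving platform itself, but the shape of the workflow, deploy alongside, compare, then promote, matches the practices listed above.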

Schneider Electric partners with Immersive Labs to launch Cyber Academy
Schneider Electric has partnered with Immersive Labs to provide an industry-leading virtual Cyber Academy. Through the academy, Schneider Electric aims to enable businesses to be more secure by training employees at all levels and monitoring workplace capabilities. The focus extends beyond preparing companies to react to threats, instead proactively identifying deficiencies and improving preparedness throughout organisations.

The Schneider Electric Cyber Academy provides on-demand access to world-class cybersecurity training, onboarding and platform orientation, a gamified entry-level skills platform from Immersive Labs, instant access to over 200 labs and monitoring of improvements in workforce capability. The training system also gives companies the opportunity to battle-test teams against simulations of emerging threats. These include scenarios based on cyber incidents such as the Norsk Hydro attack, against which operations teams can test their crisis management processes. By asking users to apply their skills to real-world challenges and find real-world solutions, teams leave this training with realistic learning that they can put into action.

Cyber attacks are a constant risk, with malicious actors constantly evolving their modes of attack. The traditional approach of yearly, company-wide training is no longer sufficient. The Schneider Cyber Academy is continually updated to stay relevant and ensure employees have an understanding of current threats. The academy provides bespoke education programmes to target potential gaps, with built-in scoring mechanisms to encourage development and engagement. Users are given a set number of tasks based on their role and, as time is a precious commodity for all businesses, participants can complete the training sessions at their own pace. Workers only have to complete the sessions relevant to them, thanks to the academy's ability to tailor programmes to specific employees. This approach ensures that training remains useful for each individual, maintaining employee engagement.

“People are the first and most susceptible line of defence against cyber-attacks. It is therefore more important than ever to ensure your teams are trained to deal with the plethora of threats they face,” says Victor Lough, Cyber Security and Advanced Digital Services Business Lead at Schneider Electric. “The Schneider Electric Cyber Academy enables training to continue remotely and be time-efficient for all involved. With more employees than ever working from home, ensuring security protocols and training are up-to-date must be a key priority not just now, but for the foreseeable future.”

Increasing insight with remote monitoring
Mantracourt has released SensorSpace, an umbrella platform for remotely monitoring all the key operational parameters across an industrial site. It can be used to remotely monitor the status of any system equipped with a T24 wireless system, from hydraulic pressures and force measurements to temperatures on a food processing line. This insight can help site managers improve decision-making when it comes to implementing maintenance strategies and investing in new systems.

SensorSpace is suited to a variety of industrial sectors, including agriculture, food and beverage manufacturing, construction and civil engineering. It can be easily set up to automatically send SMS, email and app-based alerts in the event of any problems, like an unexpected change in a system's operational parameters — this could be anything from pressure loss on a CO2 gas cylinder in an agricultural setting to a sudden drop in temperature in the pre-heating process at a manufacturing facility.

“SensorSpace is a fully customisable cloud-based platform, tailored to the needs of the end user,” explains Matt Nicholas, product design manager at Mantracourt. “The dashboard can be as simple or as complicated as you need it to be, from a simple numerical display of operational parameters to in-depth overlays, charts and graphs showing live and historical data.”

“The platform provides a place to store and mine all your performance data for up to three years, allowing asset managers to gain insights that were not possible before. For example, site managers in the agricultural sector can use SensorSpace to overlay data for factors like temperature, soil moisture levels and crop yields over time to improve decision-making, including investment in things like heat lamps and irrigation systems.”

SensorSpace is a bespoke cloud-based platform that allows customers to log in from anywhere in the world using a desktop or mobile device and remotely monitor system status and asset performance information relayed by Mantracourt's T24 wireless telemetry system.

“The T24 system offers sensor manufacturers the chance to add wireless capability to their products and has become increasingly popular in recent years because of its modular design and ease of set-up,” continues Nicholas. “Combining it with the SensorSpace platform allows manufacturers to significantly enhance their data capture and analysis capabilities.”

“It is becoming clear that remote monitoring offers the most convenient and cost-effective way of managing system performance. This is particularly true in a post-pandemic world, with many businesses forced to adjust to a new digital reality much sooner than they might have planned.”

“It has been a common theme for site managers to worry about the status of their key systems during lockdown. Now, all they need to do is log in to their SensorSpace platform to get the reassurance they need that everything is working as intended.”
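For readers unfamiliar with how parameter-based alerting of this kind typically works, here is a small, generic sketch. It is not SensorSpace's or the T24 system's actual API; the rule values are invented and the print-based notifier is a placeholder for where SMS, email or app notifications would be wired in.

```python
# A generic illustration of a threshold-alert check; all names and values
# here are assumptions, not part of any Mantracourt product interface.
from dataclasses import dataclass

@dataclass
class AlertRule:
    parameter: str
    low: float
    high: float

def check_reading(rule: AlertRule, value: float, notify) -> None:
    # Fire a notification whenever a reading leaves its expected band,
    # e.g. pressure loss on a gas cylinder or a temperature drop.
    if value < rule.low or value > rule.high:
        notify(f"{rule.parameter} out of range: {value} "
               f"(expected {rule.low}-{rule.high})")

rule = AlertRule(parameter="pre-heat temperature (C)", low=55.0, high=75.0)
check_reading(rule, 48.2, notify=print)
```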


