Thursday, April 24, 2025

Artificial Intelligence


Alibaba Cloud launches ModelScope platform and new solutions
Alibaba Cloud started its annual Apsara Conference by announcing the launch of ModelScope, an open-source Model-as-a-Service (MaaS) platform that comes with hundreds of AI models, including large pre-trained models, for global developers and researchers. During its flagship conference, Alibaba Cloud also introduced a range of serverless database products and upgraded its integrated data analytics and intelligent computing platform to help customers further achieve business innovation through cloud technologies. "Cloud computing has given rise to a fundamental revolution in the way computing resources are organised, produced and put to commercial use, while shifting the paradigm of software development and speeding up the integration of the cloud and endpoint terminals," says Jeff Zhang, President of Alibaba Cloud Intelligence. "As more customers speed up their cloud adoption, we have been upgrading our cloud-based resources, services and tools to become serverless, more intelligent and digitalised in order to lower the barrier for companies to adopt new technologies and capture more opportunities in the cloud era."

MaaS to create a transparent and inclusive technology community
The ModelScope platform has been launched with over 300 ready-to-deploy AI models developed over the past five years by Alibaba DAMO Academy (DAMO), Alibaba's global research initiative. These models cover fields ranging from computer vision to natural language processing (NLP) and audio. The platform also includes more than 150 state-of-the-art (SOTA) models, recognised globally as the best in their respective fields for achieving the top results on a given task.
Also made available on the platform are Alibaba’s proprietary large pre-trained models, such as Tongyi, a five-billion-parameter model capable of turning text into images, and OFA (One-For-All), a six-billion-parameter pre-trained model that excels at cross-modal tasks such as image captioning and visual question answering. Independent developers have also contributed dozens of models to the open-source platform to date. As an open-source community, ModelScope aims to make developing and running AI models easier and more cost-effective. Developers and researchers can test the models online for free and get the results within minutes. They can also develop customised AI applications by fine-tuning existing models, and run the models online backed by Alibaba Cloud, deploy them on other cloud platforms, or run them in a local setting. The launch underscores DAMO’s ongoing commitment to transparent and inclusive technology: by reducing the threshold for building and running AI models, it enables universities and smaller companies to easily use AI in their research and business respectively. The community is expected to grow further, with more quality models from DAMO, research-institute partners and third-party developers arriving on the platform in the near future.

New and upgraded solutions to increase computing efficiency
Staying ahead of the emerging trend of serverless software development, Alibaba Cloud is making its key cloud products serverless, enabling customers to concentrate on product development and deployment without worrying about managing servers and infrastructure. Essentially, Alibaba Cloud’s updated products focus on turning computing power into an on-demand capability for users. Examples include the cloud-native database PolarDB, the cloud-native data warehouse AnalyticDB (ADB) and ApsaraDB for Relational Database Service (RDS).
Leveraging Alibaba Cloud’s serverless technologies, customers enjoy automatic scaling with extreme elasticity based on actual workloads, and a pay-as-you-go billing model that reduces costs. Automatic elastic scaling can take effect in as little as one second. The updated database products can help businesses in the Internet industry reduce their costs by 50% on average compared with traditional products. Alibaba Cloud currently offers more than 20 key serverless products in total and is making more product categories serverless. Alibaba Cloud also upgraded ODPS (Open Data Platform and Services), its self-developed integrated data analytics and intelligent computing platform, to provide companies with diversified data processing and analytics services. The platform handles both online and offline data simultaneously in one system, giving businesses with complex workloads the analytics they need for decision-making at reduced cost and increased efficiency. ODPS has refreshed global records for big data performance, according to recent results from the Transaction Processing Performance Council (TPC), the industry council that sets the standards for transaction processing and database benchmarking. Evaluated on a 100TB data benchmark, the ODPS MaxCompute Cluster attained the top performance score for the sixth consecutive year, and the ODPS Hologres Cluster also posted a record-breaking result in the TPC-H 30,000GB decision support benchmark test. To drive workload collaboration between the cloud and local hardware, Alibaba Cloud announced the launch of the Wuying Architecture, showcasing its applications on the Wuying Cloudbook. The Cloudbook, built on the dedicated architecture, is designed to help users access virtually unlimited computing power on the cloud in a more secure and agile manner, while supporting collaboration and flexibility in the workplace.
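The cost advantage of serverless, pay-as-you-go billing over fixed provisioning can be sketched with a back-of-envelope comparison. All rates and the workload profile below are illustrative assumptions, not Alibaba Cloud pricing:

```python
# Back-of-envelope comparison of provisioned vs pay-as-you-go database costs.
# The rate and workload figures are illustrative assumptions only.

HOURS_PER_MONTH = 730

def provisioned_cost(peak_capacity_units: float, rate_per_unit_hour: float) -> float:
    """Fixed provisioning: you pay for peak capacity around the clock."""
    return peak_capacity_units * rate_per_unit_hour * HOURS_PER_MONTH

def serverless_cost(hourly_usage_units: list, rate_per_unit_hour: float) -> float:
    """Pay-as-you-go: you pay only for the capacity actually consumed each hour."""
    return sum(hourly_usage_units) * rate_per_unit_hour

# A bursty workload: idles at 2 capacity units, spikes to 20 units for 2 hours a day.
daily_usage = [20.0] * 2 + [2.0] * 22
monthly_usage = daily_usage * 30

rate = 0.10  # assumed $ per capacity-unit-hour
fixed = provisioned_cost(peak_capacity_units=20, rate_per_unit_hour=rate)
elastic = serverless_cost(monthly_usage, rate_per_unit_hour=rate)

print(f"provisioned: ${fixed:,.2f}/month")
print(f"serverless:  ${elastic:,.2f}/month")
print(f"saving: {100 * (1 - elastic / fixed):.0f}%")
```

For this assumed, strongly bursty profile the saving exceeds the 50% average quoted above; the gap shrinks as utilisation approaches peak capacity.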

atNorth joins the WEKA Innovation Network
atNorth has announced that it has joined the WEKA Innovation Network (WIN) Program to bring the WEKA Data Platform for AI to its customers. WEKA is on a mission to replace decades of data infrastructure compromise with a subscription software-based data platform purpose-built to support modern performance-intensive workloads like artificial intelligence (AI) and machine learning (ML). The WEKA Data Platform for AI delivers the radical simplicity, epic performance and infinite scale required to support enterprise AI workloads in virtually any location. Whether on-premises, in the cloud, at the edge or bursting between platforms, WEKA accelerates every step of the enterprise AI data pipeline - from data ingestion, cleansing and modelling to training, validation and inference. atNorth’s High-Performance Computing as a Service (HPCaaS) and colocation hosting solutions help enterprises meet their increasing needs for resilient and reliable data centre operations, optimised to support any workload with increased capacity, speed and scale. By partnering with WEKA, atNorth can provide its customers with a flexible, high-performance storage solution that meets the most demanding workloads such as AI, and that can grow and scale alongside their business needs. “WEKA is delighted to welcome atNorth to the WIN program as a valued partner,” says Jonathan Martin, President at WEKA. “Our global channel ecosystem is a critical conduit to bring the transformative power of the WEKA Data Platform for AI to organisations around the world, and we are committed to their success. We look forward to working with atNorth to help take their customers’ enterprise AI initiatives to the next level.” “Today’s workloads are changing at breakneck speed across enterprises of all sizes, and the need to support these applications in an on-demand, as-needed way has never been greater,” comments Guy d’Hauwers, Global Director HPC and AI, atNorth.
“Businesses that require such high-performance computing need agile solutions that can scale up and down to accommodate bursts of activity, and that can equally scale in line with business growth - cost-effectively, efficiently and sustainably. Partnering with WEKA is a no-brainer for us - we share the same values and approach. With WEKA, we can offer a more robust, scalable, end-to-end solution to our customers that accommodates their needs today and tomorrow.”

Getting an edge: how AI at the edge will pay off for your business
By Uri Guterman, Head of Product & Marketing, Hanwha Techwin Europe

We’ve become used to seeing artificial intelligence (AI) in almost every aspect of our lives. It used to be that putting AI to work involved huge server rooms and required vast amounts of computing power and, inevitably, a significant investment in energy and IT resources. Now, more tasks are being done by devices placed across our physical world, ‘at the edge.’ By not needing to stream raw data back to a server for analysis, AI at the edge, or ‘edge AI’, is set to make AI even more ubiquitous in our world. It also holds huge benefits for the video surveillance industry.

Sustainability benefits
AI at the edge has several benefits compared to server-based AI. Firstly, there are reduced bandwidth needs and costs, as less data is transmitted back to a server. Cost of ownership decreases, and there can also be important sustainability gains, as a large server room no longer has to be maintained. Energy savings can also be realised in the device itself, as carrying out AI tasks locally can require significantly less energy than sending data back to a server.

Cost efficiencies
With edge AI devices, unlike a cloud-based computing model, there isn’t usually a recurrent subscription fee, avoiding the price increases that can come with one. Focusing on edge devices also enables end-users to invest in their own infrastructure.

Greater scalability
Cameras using edge AI can make a video installation more flexible and scalable, which is particularly helpful for organisations that wish to deploy a project in stages. More AI cameras and devices can be added to the system as and when needs evolve, without the end-user having to commit to large servers with expensive GPUs and significant bandwidth from the start.
Improved operational performance and security
Because video analytics occurs at the edge (on the device), only the metadata needs to be sent across the network, which also improves cybersecurity: with no sensitive data in transit, there is nothing for hackers to intercept. Processing is done at the edge, so no raw data or video streams need to be sent over a network. As analysis is done locally on the device, edge AI eliminates delays in communicating with the cloud or a server. Responses are sped up, which means tasks like automatically focusing cameras on an event, granting access, or triggering an intruder alert can happen in near real-time. Additionally, running AI on a device can improve the accuracy of triggers and reduce false alarms. People counting, occupancy measurement, queue management and more can all be carried out with a high degree of accuracy thanks to edge AI using deep learning. This can improve the efficiency of operator responses and reduce frustration, as operators don’t have to respond to false alarms. AI cameras can also run multiple video analytics on the same device - another efficiency improvement that means operators can easily deploy AI to alert for potential emergencies or intrusions, detect safety incidents, or track down suspects, for example.

Video quality improvements
What’s more, using AI at the edge improves the quality of the video captured. Noise reduction can be carried out locally on a device and, using AI, can specifically reduce noise around objects of interest, such as a person moving in a detected area. Features such as BestShot ensure operators don’t have to sift through lots of footage to find the best angle of a suspect. Instead, AI delivers the best shot immediately, helping to reduce reaction times and speed up post-event investigations. This has the added benefit of saving storage and bandwidth, as only the best-shot images are streamed and stored.
AI-based compression technology also works by applying a low compression rate to the objects and people detected and tracked by AI, whilst applying a high compression rate to the remaining field of view - this minimises network bandwidth and data storage requirements.

Using the metadata
Edge AI cameras can provide metadata to third-party software through an API (application programming interface). This means that system integrators and technology partners can use it as a first means of AI classification, then apply additional processing to the classified objects with their own software - adding another layer of analytics on top.

Resilience
There is no single point of failure when using AI at the edge. The AI can continue to operate even if a network or cloud service fails. Triggers can still be actioned locally, or sent to another device, with recordings and events sent to the back end when connections are restored. AI is processed in near real-time on edge devices instead of being streamed back to a server or a remote cloud service, which prevents unstable network connections from delaying analytics.

Benefits for installers
For installers specifically, offering edge AI as part of your installations helps you stand out in the market by offering solutions for many different use cases. Out-of-the-box solutions are extremely attractive to end-users who don’t have the time or resources to set up video analytics manually. AI cameras like the ones in the Wisenet X Series and P Series work straight from the box, so there’s no need for video analytics experts to fine-tune the analytics. Installers don’t have to spend valuable time configuring complex server-side software. Of course, this also has a knock-on positive impact on training time and costs.
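The "metadata only" pattern described above can be illustrated with a toy example. The event fields and traffic figures below are hypothetical assumptions, not a Hanwha/Wisenet schema: the point is simply that a serialised detection event is a few hundred bytes, versus a continuous multi-megabit video stream.

```python
import json

# Hypothetical detection event a camera might emit instead of raw video.
# Field names and values are illustrative; real cameras define their own schema.
event = {
    "camera_id": "entrance-01",
    "timestamp": "2025-04-24T09:15:02Z",
    "object_class": "person",
    "confidence": 0.97,
    "bounding_box": {"x": 312, "y": 148, "w": 80, "h": 210},
}

payload = json.dumps(event).encode("utf-8")

# Rough bandwidth comparison, assuming a 2 Mbit/s compressed video stream
# and 10 metadata events per second.
video_bytes_per_s = 2_000_000 / 8
metadata_bytes_per_s = len(payload) * 10

print(f"one event: {len(payload)} bytes")
print(f"metadata: {metadata_bytes_per_s:.0f} B/s vs video: {video_bytes_per_s:.0f} B/s")
print(f"reduction: ~{video_bytes_per_s / metadata_bytes_per_s:.0f}x")
```

A third-party system consuming such events over an API can run its own analytics on the classified objects without ever touching the raw footage, which is exactly the layering the article describes.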
Looking ahead
The future of AI at the edge looks bright, too, with more manufacturers looking at ways to broaden the classification carried out by AI cameras, and even moving towards using AI cameras as a platform that lets system integrators and software companies create their own AI applications to run on cameras. Right now, it’s an area well worth exploring for both end-users and installers, because of the huge efficiency, accuracy and sustainability gains that edge AI promises.

Xydus hires top tech talent to solve the digital identity crisis
Xydus has announced two key technical leadership hires, following a record-breaking 600% revenue increase in 2021. To ensure its identity solution is ready to flex to any challenge the future throws at it, Xydus has brought in Jos Aussems as Chief Information Security Officer and Chris Covell as Senior VP of Engineering. In response to global demand, Xydus spent the past six months reorganising and modernising its Agile IT department to improve efficiency, performance and resilience. The redesign stopped engineers working in siloed teams, splitting the department in two: Enterprise Engineering, which covers all client-facing parts of the technology such as mobile, web and EMS, supported by Backend Engineering. Within these teams Xydus now uses a ‘buddy system’, pairing developers and engineers together to ensure cross-training. This future-proofs the technology, as no one person holds the keys to the IP. Russell King, CEO and Founder of Xydus, says, “Digital identity is in a crisis today. The global scale of the problem is the reason we’ve spent a significant amount of resources on adjusting our engineering practice. Now we need the right senior leadership in place to manage the new team dynamic, making sure our product solves today’s digital identity crisis, and tomorrow’s.” During Jos’ 17-year career at PwC, he was instrumental in the development of its world-class audit process for information security, moving from senior roles in tech consultancy to management of one of the leading cyber teams in the industry. In his new role on the Xydus leadership team, he holds responsibility for the critical task of keeping the company’s enterprise and customer data locked down, and will oversee the two new engineering teams, Enterprise Engineering and Backend Engineering.
Commenting on his new position, Jos says, “I’m moving from a world-renowned consultancy to a scaleup creating what will become world-renowned identity products and solutions. It’s a challenge I welcome, particularly as joining Xydus reunites me with former colleagues and peers as we establish world-class security capabilities to protect the sensitive data of our customers.” Following over a decade working with TransUnion and Call Credit, Chris leaves his current role as head of engineering for an internet-of-things startup to join Xydus. He will serve as the technical lead for Xydus’s class-leading products and services, building world-class, never-seen-before products that help customers and employees seamlessly manage digital identities. Russell continues, “Whether you’re a business, an employee or a customer, identity is not only important, it’s essential. But most products available use biased AI algorithms, ask for too much Personally Identifiable Information (PII) and often also store that PII. Today’s world of digital transactions and currency means identity is more valuable than money. Technology should protect that identity, not use it to make money. That’s what we’re on a mission to fix, and it’s what this impressive technical leadership team can deliver.”

Chalmers University of Technology selects Lenovo and NVIDIA for national supercomputer
Lenovo has announced that Chalmers University of Technology is using Lenovo and NVIDIA technology infrastructure to power its large-scale computing resource, Alvis. The project has seen the delivery and implementation of a clustered computing system for artificial intelligence (AI) and machine learning (ML) research - Lenovo’s largest HPC (High-Performance Computing) cluster for AI and ML in the Europe, Middle East and Africa region. Alvis (Old Norse for ‘all-wise’) is a national supercomputer resource within the Swedish National Infrastructure for Computing (SNIC). It began operating in 2020 and has since grown into a resource capable of solving larger research tasks on a broader scale. Financed by the Knut and Alice Wallenberg Foundation, the computer system is supplied by Lenovo and located at Chalmers University of Technology in Gothenburg, home to the EU’s largest research initiative, the Graphene Flagship. The collaborative project allows any Swedish researcher who needs to improve their mathematical calculations and models to take advantage of Alvis’ services through SNIC’s application system, regardless of research field. This supports researchers who are already utilising machine learning to analyse complex problems, as well as those investigating the use of machine learning to solve issues within their respective fields, with the potential to lead to ground-breaking academic research in areas such as quantum computing and data-driven research for healthcare and science. “The Alvis project is a prime example of the role of supercomputing in helping to solve humanity’s greatest challenges, and Lenovo is both proud and excited to be selected as part of it,” says Noam Rosen, EMEA Director, HPC & AI at Lenovo Infrastructure Solutions Group.
“Supported by Lenovo’s performance-leading technology, Alvis will power research and machine learning across many diverse areas with a major impact on societal development, including environmental research and the development of pharmaceuticals. This computing resource is truly unique, built on the premise of different architectures for different AI and machine learning workloads, and designed with sustainability in mind, helping to save energy and reduce carbon emissions by using our pioneering warm-water-cooling technology.” “The first pilot resource for Alvis has already been used by more than 150 research projects across Swedish universities. By making it larger, and opening the Alvis systems to all Swedish researchers, Chalmers and Lenovo are playing an important role in providing a national HPC ecosystem for future research,” comments Sverker Holmgren, Director of Chalmers e-Infrastructure Commons, which hosts the Alvis system.

Powering energy-saving AI infrastructure
Chalmers has chosen to implement a scalable cluster with a variety of Lenovo ThinkSystem servers to deliver the right mix of NVIDIA GPUs to its users in a way that prioritises energy savings and workload balance. This includes the Lenovo ThinkSystem SD650-N V2, which delivers the power of NVIDIA A100 Tensor Core GPUs, and the NVIDIA-Certified ThinkSystem SR670 V2 for NVIDIA A40 and T4 GPUs. “The work we’re doing with Chalmers University and its Alvis national supercomputer will give researchers the power they need to simulate and predict our world,” says Rod Evans, EMEA director of high-performance computing at NVIDIA. “Together, we’re giving the scientific community tools to solve the world’s greatest supercomputing challenges - from forecasting weather to drug discovery.” The storage architecture delivers a new 7.8-petabyte Ceph solution, to be integrated into the existing storage environment at Chalmers.
NVIDIA Quantum 200Gb/s InfiniBand provides the system with low-latency, high-throughput networking and smart in-network computing acceleration engines. With these high-speed infrastructure capabilities, users have access to almost 1,000 GPUs - mainly NVIDIA A100 Tensor Core GPUs - along with over 260,000 processing cores and over 800 TFLOPS of compute power to drive a faster time to answer in their research. In addition, Alvis leverages Lenovo’s Neptune™ liquid-cooling technologies to deliver unparalleled compute efficiency. Full air cooling was initially proposed for the project, but Chalmers instead decided to deploy Lenovo Neptune warm-water-cooling capabilities to reduce long-term operational costs and deliver a ‘greener’ AI infrastructure. As a result, the university anticipates significant energy savings thanks to the efficiency of water cooling.

Xydus hits 600% hypergrowth helping manage virtual workforces
Xydus has announced its rebranded emergence after a record-breaking year that saw sales increase six-fold as it closed its largest-ever deals. Previously known as Paycasso, Xydus built its technology platform and credibility over more than a decade by working with some of the world’s most trusted brands, such as PwC, EY, TransUnion, Equifax, Philip Morris International, Irish Life and, notably, the UK’s National Health Service, Europe’s largest employer.

Hybrid working fuels growth
Its focus on identity technology perfectly positioned Xydus’ cloud services and helped the team win new deals as employee and consumer engagement moved massively to digital during lockdown. The shift to flexible working supercharged already buoyant demand for Xydus’ (formerly Paycasso’s) digital identity software, as employers transitioned to managing remote workforces who require digital access and consumers increased their use of e-commerce. Demand was particularly strong among organisations with large, geographically dispersed consumer markets and significant workforces across multiple jurisdictions, such as healthcare and financial services in the US and Europe.

How Xydus works
The Xydus platform uses technologies such as facial recognition, biometrics, machine learning and multi-layered encryption to allow workers to enrol and reuse their individual identity across the enterprise in seconds - a process that previously could take hours, days or weeks. The advantage for employers is a seamless onboarding experience and secure access to corporate systems and access points without the need to deploy overstretched IT or HR resources. For consumers, it simplifies and streamlines their experience as they access the thousands of digital services available today. Xydus CEO Russell King comments: “Recent years have seen massive disruption to the working patterns and online behaviours of billions of consumers and employers, making Xydus more relevant than ever.
A decade ago, faced with a digitally expanding world, we could not think of a bigger challenge to address than core identity. It is this consistent focus on addressing the challenges of identity management that is now paying dividends for Xydus. This last year’s spectacular sales growth reflects how the need to solve identity issues for this expansive world is only going to grow.”

Scotland’s data sector is “inspirational”, says Tim Campbell MBE
Leading figures from across Scotland’s data science and AI community joined The Data Lab at the country’s biggest data and AI recruitment event, Data Talent 2022, in Glasgow on 15 March. More than 250 students and graduates got the chance to connect and discuss career options with representatives from more than 20 of Scotland’s leading employers currently actively recruiting for roles in AI and data science, including large public and private organisations such as Public Health Scotland, NatWest, Accenture and ScotRail, as well as other data and AI specialists. Delegates also had the chance to participate in employer-led workshops - delivered by Royal London, NHS National Services Scotland, PHS, ForthPoint, MBN Solutions and The Data Lab - to find out more about what a career in data entails, as well as having access to an on-site photographer to create new headshots for LinkedIn, aimed at helping to increase their employability. Delegates also heard from a distinguished line-up of keynote speakers, who shared compelling thoughts about the future of the industry and the need to share data and best practice to help Scotland achieve its ambitions. Speaking during his closing address, Tim Campbell MBE, winner of 2005’s The Apprentice, gave an inspirational talk applauding Scotland’s ambitious data and AI strategy and encouraging students and graduates to embrace the opportunity the skills and data revolution is offering the next generation of talent entering the workplace. Other speakers included Jamie Hepburn, Scotland’s Minister for Further Education, Higher Education and Science, who discussed the challenges and opportunities presented by the AI and data revolution; and Manira Ahmad, Chief Officer of Public Health Scotland, who discussed how data sharing is crucial to helping improve health inequality across Scotland, using real-life examples to bring it to life.
Tim Campbell MBE says: “The world of data is a hugely exciting part of our economy - and one that is only growing bigger every year. What Scotland is doing in terms of AI and data is inspirational, and all the graduates and students in this room should feel incredibly valued and fortunate, as there is such a need for these types of skills in the UK right now. There are a lot of events that inspire hope, but it’s rare to see one like Data Talent joining hope and opportunity at the same time.” Brian Hills, Interim CEO of The Data Lab, comments: “This year’s Data Talent was a fantastic showcase of the breadth and depth of Scotland’s data and AI community. Combining stellar speakers - from both the public and private sectors - with workshops and the chance to get up close to potential employers to find out more about what a career in data science and AI looks like is a win-win for both employers and potential employees. “I’d like to thank everyone involved in running the event for helping to make it a special return to Glasgow after a two-year hiatus due to the pandemic. As Scotland’s largest talent and recruitment event in data and AI, it is incredibly reassuring to see such a rich pipeline of talent coming through, helping Scotland cement its place as a world leader in these innovative industries.”

Lenovo delivers AI at the edge to drive business transformation
Lenovo Infrastructure Solutions Group (ISG) has announced the expansion of the Lenovo ThinkEdge portfolio with the introduction of the new ThinkEdge SE450 server, delivering an artificial intelligence (AI) platform directly at the edge to accelerate business insights. The ThinkEdge SE450 advances intelligent edge capabilities with best-in-class, AI-ready technology that provides faster insights and leading computing performance in more environments, accelerating real-time decision-making at the edge and unleashing full business potential. “As companies of all sizes continue to work on solving real-world challenges, they require powerful infrastructure solutions to help generate faster insights that inform competitive business strategies, directly at edge sites,” says Charles Ferland, Vice President and General Manager, Edge Computing and Communication Service Providers at Lenovo ISG. “With the ThinkEdge SE450 server and in collaboration with our broad ecosystem of partners, Lenovo is delivering on the promise of AI at the edge, whether it’s enabling greater connectivity for smart cities to detect and respond to traffic accidents or addressing predictive maintenance needs on the manufacturing line.”

Accelerate business insights at the edge
Edge computing is at the heart of digital transformation for many industries as they seek to optimise how data is processed directly at its point of origin. Gartner estimates that 75% of enterprise-generated data will be processed at the edge by 2025, and that 80% of enterprise IoT projects will incorporate AI by 2022. Lenovo customers are using edge-driven data sources for immediate decision-making on factory floors, retail shelves, city streets and telecommunication mobile sites. Lenovo’s complete ThinkEdge portfolio goes beyond the data centre to deliver the ultimate edge computing experience. “Expanding our cloud to on-premise enables faster data processing while adding resiliency, performance and enhanced user experiences.
As an early testing partner, our current deployment of Lenovo’s ThinkEdge SE450 server is hosting a 5G network delivered on edge sites and introducing new edge applications to enterprises,” comments Khaled Al Suwaidi, Vice President Fixed and Mobile Core at Etisalat. “It gives us a compact, ruggedised platform with the necessary performance to host our telecom infrastructure and deliver applications, such as e-learning, to users.”

Enhance performance, scalability and security
Designed to overcome the limitations of server locations, Lenovo’s ThinkEdge SE450 delivers real-time insights with enhanced compute power and flexible deployment capabilities that can support multiple AI workloads while allowing customers to scale. It meets the demands of a wide variety of critical workloads with a unique, quieter, go-anywhere form factor, featuring a shorter depth that allows it to be easily installed in space-constrained locations. The GPU-rich server is purpose-built to meet the requirements of vertically specific edge environments, with a ruggedised design that withstands a wider operating temperature range, as well as high levels of dust, shock and vibration in harsh settings. As one of the first NVIDIA-Certified Edge systems, Lenovo’s ThinkEdge SE450 leverages NVIDIA GPUs for enterprise and industrial edge AI applications, providing maximum accelerated performance. Security at the edge is crucial, and Lenovo enables businesses to navigate the edge-to-cloud frontier confidently, using resilient, better-secured infrastructure solutions designed to mitigate security risks and data threats. The ThinkEdge portfolio provides a variety of connectivity and security options that are easily deployed and more securely managed in today’s remote environments, including a new locking bezel to help prevent unauthorised access and robust security features to better protect data.
The ThinkEdge SE450 is built on the latest 3rd Gen Intel Xeon Scalable processor with Intel Deep Learning Boost technologies, featuring all-flash storage for running AI and analytics at the edge, and is optimised for delivering intelligence. It has been verified by Intel as an Intel Select Solution for vRAN. This pre-validated solution takes the guesswork out of the evaluation and procurement process by meeting strictly defined hardware and software configuration requirements and rigorous system-wide performance benchmarks, speeding deployment and lowering risk for communications service providers. Edge site locations are often unmanned and hard to reach; the ThinkEdge SE450 is therefore automatically installed and managed with Lenovo Open Cloud Automation (LOC-A) and easily configured with Lenovo XClarity Orchestrator software. Remote access to the server, via completely out-of-band wired or wireless access, avoids unnecessary trips to edge locations.

AI-ready solutions at the edge
Through an agile hardware development approach with partners and customers, the Lenovo ThinkEdge SE450 is the culmination of multiple prototypes, with live trials running real workloads in telecommunication, retail and smart city settings. The AI-ready ThinkEdge SE450 server is designed specifically to enable a vast ecosystem of partners, making it easier for customers to deploy edge solutions. As enterprises build out their hybrid infrastructures from the cloud to the edge, it is the perfect extension for the on-premise cloud, currently supporting Microsoft, NVIDIA, Red Hat and VMware technologies. Providing a complete portfolio of edge servers, AI-ready storage and solutions, Lenovo's offerings are also available as a service through Lenovo TruScale, which easily extends workloads from the edge to the cloud in a consumption-based model.

Aerospike future-proofs Criteo's AI digital advertising platform
Aerospike has announced that Criteo has selected Aerospike to deliver a digital transformation project that significantly improves the scale and performance of its global commerce media platform. The project will also see Criteo reduce its server count by 80%, achieving millions of dollars of cost savings per year and reducing CO2 emissions. Criteo is a leading global technology company that combines commerce data and intelligence to deliver digital advertising that enables marketers and media owners to drive commerce outcomes. Prior to Aerospike, Criteo combined an open-source NoSQL database with caching solutions. The Criteo platform needs to match an advertiser's content with an internet user’s interests 950 billion times a day and respond in just 50 milliseconds. These data loads and requests are continually increasing, as fast responses help advertisers reach more consumers and grow their online sales. To meet both current and future petabyte-scale demands, Criteo turned to the Aerospike Real-time Data Platform, which will cut its server count by more than 80 per cent for millions of dollars in annual IT savings. “Our data demands are growing exponentially, and we need a real-time data platform that can handle our current requirements but ensure that we can adapt and scale for what lies ahead,” says Diarmuid Gill, CTO of Criteo. “The Aerospike real-time data platform takes us to the next level in terms of scale and performance and provides us with a standardised platform that ensures our customers can meet the rapid response times they demand, both now and in the future.” “In order to meet the latency needs of the Ad Tech industry, every other solution needs a cache to sit on top,” explains Geoff Clark, Vice President EMEA, Aerospike. “That’s where Aerospike is different. Aerospike runs on DRAM and Flash, making it faster, more powerful and far less of a drain on server resources.
If Criteo were to double its data requirement tomorrow, it would still need fewer than a quarter of the servers it had before. This further demonstrates the ability of the Aerospike real-time data platform to perform powerfully at scale and still deliver significant cost savings to business,” adds Clark.
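The headline figures above imply a striking average request rate. A quick back-of-the-envelope sketch (the fleet size used here is purely hypothetical, not a Criteo figure) shows what 950 billion matches per day and an 80% server reduction work out to:

```python
# Back-of-the-envelope arithmetic for the figures quoted above:
# 950 billion ad matches per day, and an 80% cut in server count.

MATCHES_PER_DAY = 950e9
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Average request rate implied by the daily volume.
avg_qps = MATCHES_PER_DAY / SECONDS_PER_DAY
print(f"average rate: {avg_qps / 1e6:.1f} million requests/second")
# -> average rate: 11.0 million requests/second

def servers_after_reduction(before: int, reduction: float = 0.80) -> int:
    """An 80% reduction leaves one fifth of the original fleet."""
    return round(before * (1 - reduction))

# Hypothetical fleet size, for illustration only.
print(servers_after_reduction(10_000))  # -> 2000
```

Each of those roughly 11 million requests per second must also come back within the 50-millisecond budget the article cites, which is why avoiding a separate caching tier matters at this scale.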

Pure Storage expands as-a-service offerings to support business outcomes
Pure Storage has announced a further expansion of its robust subscription services portfolio. With FlashStack delivered as-a-Service, customers can leverage Pure and Cisco’s AI-based, software-defined infrastructure with flexible consumption economics. For applications running on Kubernetes, Portworx Cloud Consumption aligns spend with actual hours of usage. Demand for flexible, pay-as-you-go consumption has never been greater, as customers come to expect the same cloud consumption model and experience whether their solutions and infrastructure are on-premises, at the edge, or in the cloud. Launched in 2018 as the first true service consumption model for storage, Pure as-a-Service has seen incredible growth and adoption across all industries and geographies, delivering flexibility, transparency and simplicity while satisfying performance and capacity Service Level Agreements (SLAs) with proactive monitoring and non-disruptive upgrades. By continuing to expand its as-a-Service offerings - which unify block, file, fast-file and object storage across all tiers of performance, and private and public clouds, into a single data-storage subscription - Pure is not only meeting customers’ need for flexibility but providing the foundation of a true hybrid-cloud experience. “Pure is committed to delivering the gold standard of as-a-Service solutions in the storage industry. By continuing to expand the breadth and depth of our subscription services portfolio, we are giving our customers the flexibility and transparency to scale with confidence, knowing that Pure will meet the SLAs needed to power their business,” comments Prakash Darji, VP and GM, Digital Experience Business Unit, Pure Storage.

FlashStack as-a-Service

Delivered as-a-Service, FlashStack from Pure Storage and Cisco is an AI-based, software-defined modern data infrastructure that integrates on-premises and multi-cloud landscapes.
Discretely scalable and holistically managed, FlashStack delivers a full-stack solution for critical apps that is always-on and future-proof. Customers can slash deployment and administration time and risk with pre-tested, validated reference architectures for popular workloads. By delivering a modern operational model, FlashStack helps customers stay ahead of business demand and protect applications and data on premises, at the edge, or in the cloud.

Portworx Cloud Consumption

Portworx, the “gold standard” in Kubernetes storage for the enterprise according to GigaOm Research, allows customers to operate and scale their modern enterprise apps consistently on any cloud or data center with the performance, security and data protection they are used to. Because cloud-native applications running on Kubernetes are highly dynamic, estimating the exact number of servers needed to run them is a significant challenge, which leads to overprovisioning and higher costs. To solve this challenge, Portworx is introducing flexibility that allows customers' consumption of Portworx to match the ebbs and flows of their applications. Previously, customers would purchase one or more annual Portworx licenses, each of which could be run on any single server. If they needed to scale beyond the number of licenses purchased, however, they would have to contact sales to purchase additional licenses. With the new flexible pricing, customers purchase a number of hours rather than a license tied to a single server. These hours can be consumed at any rate throughout the year, depending on how many servers they need to run, similar to how public cloud reserved-instance pricing works. “As enterprises increasingly desire as-a-Service solutions and a cloud-like experience, Pure has been an incredible partner in delivering robust and differentiated services to our joint clients.
We are excited to see Pure’s continued expansion of its subscription services portfolio, now with FlashStack as-a-Service and Portworx Cloud Consumption,” says Juan Orlandini, Chief Architect, Cloud + Data Center Transformation, Insight.
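The hours-based model described above can be sketched in a few lines. This is a minimal illustration of the pricing concept only, assuming a simple drawdown pool; the class and method names are hypothetical and not part of any Portworx API:

```python
# Minimal sketch of an hours-based consumption model: a pool of
# purchased hours is drawn down by however many servers run in a
# given period, instead of capping the fleet at a fixed number of
# per-server licenses. Names here are hypothetical, for illustration.

class HourPool:
    def __init__(self, purchased_hours: float):
        self.remaining = purchased_hours

    def consume(self, servers: int, hours: float) -> float:
        """Draw down the pool for `servers` nodes running for `hours` each."""
        used = servers * hours
        if used > self.remaining:
            raise RuntimeError("hour pool exhausted; top up the subscription")
        self.remaining -= used
        return self.remaining

# Example: 100,000 purchased hours; a bursty week on 50 servers
# followed by a quiet week on 10, with no per-server license ceiling.
pool = HourPool(100_000)
pool.consume(servers=50, hours=7 * 24)          # burst week: 8,400 hours
left = pool.consume(servers=10, hours=7 * 24)   # quiet week: 1,680 hours
print(left)  # -> 89920.0
```

The point of the design is that the same pool absorbs both weeks: under the old per-server licensing, the burst week would have required 50 licenses that then sit idle during the quiet week.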


