Artificial Intelligence


STT GDC and Firmus to build sustainable AI factories
ST Telemedia Global Data Centres (STT GDC) has announced a significant investment in a global venture with Firmus Technologies. The venture will be based in Singapore and will launch a GPU-centric Infrastructure-as-a-Service (IaaS) offering focused on deep learning AI and visual computing workloads, to be known as Sustainable Metal Cloud (SMC).

SMC will deliver bare-metal access to high-performance AI clusters built on some of the world's most advanced workload accelerators, including GPUs and high-speed networking from NVIDIA, for energy-efficient computing. It will leverage Firmus' proprietary, scaled, immersion-cooled platform, the HyperCube, to deliver AI factories that are at once sustainable, scalable, high-performance and cost-effective. Within the HyperCube, Firmus will operate a fleet of high-performance servers provided by OEM partners including Supermicro, unlocking access to world-class AI tools and hardware in a highly available way.

SMC is being launched in Singapore, India and Australia in 2023, with the Singapore availability zone (SIN01) expected to be live in H2 2023. The combination of Firmus' platform and STT GDC's highly efficient data centre infrastructure will result in AI workloads running with a lower PUE, lower CO2 emissions and higher petaflops per watt (illustrated in the sketch at the end of this article).

"The compound growth in forecasted energy consumption is an existential threat to the data centre sector. Data centre operators and their customers must be prepared to embrace and support new cooling solutions, and expand the services they offer beyond traditional air-cooled colocation, if they are to host AI GPU platforms in a sustainable manner. The evolution of the data centre into the AI factories of the future will fundamentally change the way all infrastructure operators think about the design and operation of their facilities. STT GDC's foresight and long-term vision made them the ideal global partner for Firmus' highly developed solution," says Ted.

"The future of data centres will rely on the ability to provide both exceptional performance and highly sustainable services at scale. From our beginnings in Singapore almost 10 years ago, we have now scaled the business to cover 10 geographies. We are immensely pleased to enhance our core colocation offering to include the latest GPU-based bare-metal services, empowering our customers with access to the next generation of high-performance computing, which will be so critical in the AI revolution. These high-performance services are a key component of the critical infrastructure needed to support the plethora of AI use cases that will be critical to businesses, governments and society in years to come," says Bruno Lopez, President and Group Chief Executive, ST Telemedia Global Data Centres.
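To put the two efficiency metrics cited above into context, here is a minimal illustrative sketch in Python. All of the figures are assumptions chosen for illustration, not numbers from the announcement.

```python
# Illustrative only: the figures below are assumptions, not numbers from the
# STT GDC/Firmus announcement.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

def pflops_per_kw(sustained_pflops: float, total_facility_kw: float) -> float:
    """Useful compute delivered per kilowatt of total facility power."""
    return sustained_pflops / total_facility_kw

# Hypothetical comparison: the same 1,000 kW of IT load delivering 2 PFLOPS of
# sustained compute, hosted in an air-cooled hall versus an immersion-cooled one.
print(f"Air-cooled:       PUE {pue(1500, 1000):.2f}, {pflops_per_kw(2.0, 1500):.5f} PFLOPS/kW")
print(f"Immersion-cooled: PUE {pue(1100, 1000):.2f}, {pflops_per_kw(2.0, 1100):.5f} PFLOPS/kW")
```

A lower PUE means less of the facility's power is spent on overheads such as cooling, which is what drives the higher performance-per-watt claimed for the immersion-cooled approach.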

AMD announces its future vision for data centre and AI
AMD has announced its vision for the future of the data centre and pervasive AI, showcasing the products, strategy and ecosystem partners that will shape the future of computing. The 'Data Center and AI Technology Premiere' highlighted the next phase of data centre innovation. The company was joined on stage by executives from Amazon Web Services, Citadel, Hugging Face, Meta, Microsoft Azure and PyTorch to showcase the technology partnerships that will bring the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data centre strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads, and announced new public instances and internal deployments with the largest cloud providers," says AMD Chair and CEO, Dr Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data centre, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimised for our hardware."

AMD was joined by AWS to preview the next-generation Amazon Elastic Compute Cloud (Amazon EC2) M7a instances, powered by 4th Gen AMD EPYC ('Genoa') processors. AMD also introduced the 4th Gen AMD EPYC 97X4 processors, formerly codenamed 'Bergamo'. With 128 Zen 4c cores per socket, these processors deliver leading vCPU density, performance and efficiency for applications that run in the cloud. Meta discussed how these processors are well suited to its mainstay applications such as Instagram and WhatsApp.

AMD also unveiled an x86 server CPU featuring AMD 3D V-Cache technology and shared its AI platform strategy, giving customers a cloud-to-edge-to-endpoint portfolio of hardware products, backed by deep industry software collaboration, to develop scalable and pervasive AI solutions. The AMD Instinct MI300X accelerator is one of the most advanced accelerators for generative AI, and was presented alongside software ecosystem momentum with partners PyTorch and Hugging Face. It is based on the next-gen AMD CDNA 3 accelerator architecture and supports up to 192GB of HBM3 memory, providing the compute and memory efficiency needed for large language model training and inference in generative AI workloads.

Finally, AMD showcased the ROCm software ecosystem for data centre accelerators, highlighting its readiness and collaborations with industry leaders to bring together an open AI software ecosystem. PyTorch discussed the work to fully upstream the ROCm software stack, an integration that gives developers an array of AI models that are compatible and ready to use on AMD accelerators. Hugging Face also announced that it will optimise its models on AMD platforms.
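A minimal sketch helps show what "fully upstreamed" means in practice: ROCm builds of PyTorch expose the familiar CUDA-style device API, so existing model code typically runs on AMD accelerators without changes. The model and tensor sizes below are placeholders, not taken from AMD's announcement.

```python
# A minimal sketch: on a ROCm build of PyTorch, torch.cuda.* is backed by
# HIP/ROCm, so the usual CUDA-style device-selection idiom works unchanged
# on AMD Instinct accelerators. The model and shapes are placeholders.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("ROCm (HIP) build:", getattr(torch.version, "hip", None) is not None)

model = torch.nn.Linear(1024, 1024).to(device)   # placeholder model
batch = torch.randn(8, 1024, device=device)      # placeholder input batch
with torch.no_grad():
    out = model(batch)
print(out.shape, out.device)
```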

DEAC Latvia discusses the impact of AI on data centre operations
In a quarter of a century, data centres have transformed from a niche industry into one of the most important parts of people's daily lives. Although many do not realise it, the pictures they take, the content of their favourite entertainment services and even online shops are stored in modern data centres that occupy areas the size of sports arenas. Everything that happens on the internet is connected to data centres. Consequently, global challenges and new opportunities directly affect this industry. It is important to understand how disruptions in the production of digital infrastructure, cyber security issues, the entry of artificial intelligence into business and other current events affect the development of data centres.

Data storage facilities become more expensive

Undoubtedly, the industry is adversely affected by the supply problems for modern technologies that started during the pandemic. Production volumes have still not returned to previous levels, so many manufacturing industries are competing over semiconductors, chips, cables and other components. The lack of spare parts and new equipment is also affecting data centres. In addition, while equipment and car manufacturers have already adapted to the situation, data centre builders are only truly facing the problem now, as building a data centre takes several years and the equipping of new centres has only recently begun. Prices are much higher now than they were before the pandemic. As a result, data centre expansion projects are more expensive and data storage facilities can no longer be expanded as quickly as before.

Avoid wasting gigabytes

This is not good news for businesses and end consumers, as competition for space in data centres will intensify and every terabyte will have to be used more rationally. It will be especially interesting to watch what the world's biggest consumers of server power will do. These companies traditionally reserve space on data centre servers three to five years in advance. If they continue this practice to the usual extent, it will create a problem for smaller consumers. Most data centre and cloud service providers have already increased their prices, and they are expected to rise further in the second half of the year to cover the costs of using additional storage facilities in data centres. This will also place some responsibility on end users. Until now, many users have freely uploaded their images, videos and other content to cloud storage or video platforms without thinking about the number of gigabytes used, but they will have to change this habit in the near future. Service owners may be forced to set stricter limits or a higher price per gigabyte, and older content that has not been used for a long time may be deleted.

To tackle this issue, DEAC is currently building its third data centre in Riga, which will be put into operation as early as 2025. The new data centre, DC3, will have up to 1,000 racks with a total capacity of 10MW and will be certified to the requirements of the Tier III standard, which sets safety standards for the design, construction and maintenance of data centres. Although construction is planned to be completed by the end of 2024, potential customers can already reserve a place in the new data centre.
Self-production of electrical power

The new trend of generating electricity very close to a data centre could become a useful method for reducing costs in the long term. Data centres consume a lot of electrical power, because the building contains not only thousands of servers that run continuously, but also the cooling equipment needed to maintain a favourable working climate for the machines. Recent fluctuations in electricity prices have encouraged data centres to build their own power plants. The new DC3 will use electrical power produced only from renewable resources. Even the back-up electricity generators use a green solution: fuel produced from 100% renewable resources, whose chemical composition is identical to fossil diesel.

AI will help

Another solution for keeping costs under control and saving space could be artificial intelligence, which is gradually entering the data centre industry. Generative AI, capable of creating original content, will make it possible to personalise services and make rational use of available space. AI algorithms will customise language, recommend the most appropriate features for customers, perform detailed programming, create design and content, and provide customer service. At the same time, AI will make sure that processes run as efficiently as possible and do not unnecessarily waste space on the servers. All of this will contribute to reducing the costs that are currently being passed on to the relevant service providers. It must be noted that AI has only recently started to show its ability to be a useful help in business, so practical AI-powered solutions for automation may only become common in two to five years.

However, there is one area where radical changes are not expected, and that is cyber security. The surge in cyber criminal activity over the last year and a half makes it necessary for data centre employees to be maximally vigilant and adhere to the highest cyber security standards. If a data centre has received the appropriate certification, customers can be certain that their data will be taken care of: it will be protected from attacks, and backups will be available even if one of the servers stops working.

Why high-speed cable assemblies must be considered
Siemon has published a new product and application guide that provides data centre designers and operators with valuable insight into the benefits that high-speed cable assemblies can deliver in support of higher-bandwidth, low-latency applications in the data centre. Artificial intelligence, machine learning, edge computing and other emerging technologies are widely adopted in enterprise businesses today, driving the need for data centre server and storage systems to deliver speeds well beyond 10Gb/s in support of these applications. Siemon's new guide details how Direct Attached Cables (DACs) and Active Optical Cables (AOCs) can deliver the performance, reliability, scalability and power efficiencies needed to adopt these emerging technologies cost-effectively.

A key focus in the guide is data centre topology and how fixed-length DACs and AOCs can support shorter in-cabinet connections or connect servers across multiple racks in End of Row (EoR) or Middle of Row (MoR) configurations. In addition to providing a detailed overview of the cable options available to scale and support transmission speeds all the way to 400Gb/s, the guide also addresses the rising challenge of latency. With latency impacting real-time technologies such as virtual reality, high-frequency trading and blockchain, direct attach copper high-speed cable assemblies deliver distinct benefits over and above structured cabling for increased network performance. Common concerns relating to equipment density and switch port utilisation can also be addressed through high-speed cable assemblies. The guide demonstrates how DACs and AOCs can be deployed to connect a single higher-speed switch port to multiple lower-speed servers using 4 x 10GbE, 4 x 25GbE, 4 x 100GbE or 2 x 200GbE breakouts.

"Whilst there is no one single solution for connecting servers in data centres, high-speed cable assemblies offer a number of advantages and should be considered a valuable alternative to other cabling options," says Ryan Harris, Sales and Market Manager for high-speed cable assemblies at Siemon. "We have created this guide to help data centre designers and operators make informed decisions when selecting cabling for their data centre server and storage systems, especially when future technologies must be supported that rely on high-speed, low-latency cabling for optimum performance."

Alibaba Cloud launches ModelScope platform and new solutions
Alibaba Cloud started its annual Apsara Conference by announcing the launch of ModelScope, an open-source Model-as-a-Service (MaaS) platform that comes with hundreds of AI models, including large pre-trained models, for global developers and researchers. During its flagship conference, Alibaba Cloud also introduced a range of serverless database products and upgraded its integrated data analytics and intelligent computing platform to help customers further achieve business innovation through cloud technologies.

"Cloud computing has given rise to a fundamental revolution in the way computing resources are organised, produced and put to commercial use, while shifting the paradigm of software development and speeding up the integration of the cloud and endpoint terminals," says Jeff Zhang, President of Alibaba Cloud Intelligence. "As more customers speed up their cloud adoption, we have been upgrading our cloud-based resources, services and tools to become serverless, more intelligent and digitalised, in order to lower the barrier for companies to adopt new technologies and capture more opportunities in the cloud era."

MaaS to create a transparent and inclusive technology community

The ModelScope platform has been launched with over 300 ready-to-deploy AI models developed over the past five years by Alibaba DAMO Academy (DAMO), Alibaba's global research initiative. These models cover fields ranging from computer vision to natural language processing (NLP) and audio. The platform also includes more than 150 state-of-the-art (SOTA) models, recognised globally as the best in their respective fields at a given task. Also made available on the platform are Alibaba's proprietary large pre-trained models, such as Tongyi, a five-billion-parameter model capable of turning text into images, and OFA (One-For-All), a six-billion-parameter pre-trained model that excels at cross-modal tasks such as image captioning and visual question answering. Independent developers have also contributed dozens of models to the open-source platform to date.

As an open-source community, ModelScope aims to make developing and running AI models easier and more cost-effective. Developers and researchers can test the models online for free and get the results within minutes. They can also develop customised AI applications by fine-tuning existing models, and run the models online backed by Alibaba Cloud, deploy them on other cloud platforms, or run them in a local setting (a brief illustrative sketch follows at the end of this article). The launch underscores DAMO's ongoing efforts and commitment to promoting transparent and inclusive technology by lowering the threshold for building and running AI models, enabling universities and smaller companies to easily use AI in their research and business. The community is expected to grow further as more quality models become available on the platform from DAMO, partners from research institutes and third-party developers.

New and upgraded solutions to increase computing efficiency

Staying ahead of the emerging trend of serverless software development, Alibaba Cloud is making its key cloud products serverless to enable customers to concentrate on product deployment and development without worrying about managing servers and infrastructure. Essentially, Alibaba Cloud's updated products focus on turning computing power into an on-demand capability for users.
Examples include the cloud-native database PolarDB, the cloud-native data warehouse AnalyticDB (ADB) and ApsaraDB for Relational Database Service (RDS). Leveraging Alibaba Cloud's serverless technologies, customers can enjoy automatic scaling with extreme elasticity based on actual workloads and a pay-as-you-go billing model to reduce costs. On-demand elastic scaling can take effect in as little as one second. Using the updated database products can help businesses in the internet industry reduce their costs by 50% on average compared to traditional databases. Currently, Alibaba Cloud has more than 20 key serverless products in total and is making more product categories serverless.

Alibaba Cloud also upgraded ODPS (Open Data Platform and Services), its self-developed integrated data analytics and intelligent computing platform, to provide companies with diversified data processing and analytics services. The platform can handle both online and offline data simultaneously in one system, giving businesses dealing with complex workloads the analytics they need for decision-making at reduced cost and with increased efficiency. ODPS has refreshed global records for big data performance, according to recent results from the Transaction Processing Performance Council (TPC), the industry council that sets the standards for transaction processing and database benchmarking. Evaluated on a 100TB data benchmark, the ODPS MaxCompute cluster attained the top score for the sixth consecutive year, and the ODPS Hologres cluster also achieved a record-breaking result in the TPC-H 30,000GB decision support benchmark.

To drive workload collaboration between the cloud and local hardware, Alibaba Cloud has announced the launch of the Wuying Architecture, showcasing its application in the Wuying Cloudbook. The Cloudbook, built on this architecture, is designed to help users access virtually unlimited computing power on the cloud in a more secure and agile manner while supporting collaboration and flexibility in the workplace.
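Returning to ModelScope: as a rough illustration of the local deployment workflow described above, the sketch below assumes the platform's Python SDK and its pipeline interface; the model ID is a placeholder and the exact API may differ between releases.

```python
# A rough sketch of running a ModelScope model locally (assumes the modelscope
# Python SDK's pipeline interface; the model ID is a placeholder, not taken
# from the article). Install with: pip install modelscope
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build an inference pipeline for a cross-modal task such as image captioning,
# the kind of task the OFA model family is described as excelling at.
captioner = pipeline(
    task=Tasks.image_captioning,
    model="damo/ofa_image-caption_coco_large_en",  # placeholder model ID
)

# Run inference on a local image and print the generated caption.
print(captioner("example.jpg"))
```

The same pipeline call can be pointed at a fine-tuned checkpoint, which is the customisation path the platform describes.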

atNorth joins the WEKA innovation network
atNorth has announced that it has joined the WEKA Innovation (WIN) Program to bring the WEKA Data Platform for AI to its customers. WEKA is on a mission to replace decades of data infrastructure compromise with a subscription software-based data platform that is purpose-built to support modern performance-intensive workloads like artificial intelligence (AI) and machine learning (ML). The WEKA Data Platform for AI delivers the radical simplicity, epic performance, and infinite scale required to support enterprise AI workloads in virtually any location. Whether on-premises, in the cloud, at the edge or bursting between platforms, WEKA accelerates every step of the enterprise AI data pipeline - from data ingestion, cleansing and modelling, to training validation or inference. atNorth’s High Performance Computing as a service (HPCaaS) and colocation hosting solutions help enterprises meet their increasing needs for resilient and reliable data centre operations, optimised to support any workload with increased capacity, speed and scale. By partnering with WEKA, atNorth can provide its customers with a flexible performance storage solution that meets the most demanding workloads such as AI and that can also grow and scale alongside their business needs. “WEKA is delighted to welcome atNorth to the WIN program as a valued partner,” says Jonathan Martin, President at WEKA. “Our global channel ecosystem is a critical conduit to bring the transformative power of the WEKA Data Platform for AI to organisations around the world and we are committed to their success. We look forward to working with atNorth to help take their customers’ enterprise AI initiatives to the next level.” “Today’s workloads are changing at breakneck speed across enterprises of all sizes and the need to support these applications in an on-demand, as-needed way is greatly demanding,” comments Guy d’Hauwers, Global Director HPC and AI, atNorth. “Businesses that require such high-performance computing need agile solutions that can scale up and down to accommodate bursts of activity and that can equally scale in line with business growth, cost effectively, efficiently, and sustainably. Partnering with WEKA is a no brainer for us - we share the same values and approach. With WEKA, we can offer a more robust, scalable end to end solution to our customers that accommodates their needs today and tomorrow.”

Getting an edge: how AI at the edge will pay off for your business
By Uri Guterman, Head of Product & Marketing, Hanwha Techwin Europe

We've become used to seeing artificial intelligence (AI) in almost every aspect of our lives. It used to be that putting AI to work involved huge server rooms and required vast amounts of computing power and, inevitably, a significant investment in energy and IT resources. Now, more tasks are being done by devices placed across our physical world, 'at the edge'. By not needing to stream raw data back to a server for analysis, AI at the edge, or 'edge AI', is set to make AI even more ubiquitous in our world. It also holds huge benefits for the video surveillance industry.

Sustainability benefits

AI at the edge has several benefits compared to server-based AI. Firstly, bandwidth needs and costs are reduced because less data is transmitted back to a server. Cost of ownership decreases, and there can also be important sustainability gains as a large server room no longer has to be maintained. Energy savings can also be realised in the device itself, as carrying out AI tasks locally can require significantly less energy than sending data back to a server.

Cost efficiencies

With edge AI devices, compared to a cloud-based computing model, there isn't usually a recurrent subscription fee, avoiding the price increases that can come with it. Focusing on edge devices also enables end users to invest in their own infrastructure.

Greater scalability

Cameras using edge AI can make a video installation more flexible and scalable, which is particularly helpful for organisations that wish to deploy a project in stages. More AI cameras and devices can be added to the system as needs evolve, without the end user having to commit to large servers with expensive GPUs and significant bandwidth from the start.

Improved operational performance and security

Because video analytics is carried out at the edge (on the device), only the metadata needs to be sent across the network, which also improves cybersecurity as there is no sensitive data in transit for hackers to intercept. Processing is done at the edge, so no raw data or video streams need to be sent over a network. As analysis is done locally on the device, edge AI eliminates delays in communicating with the cloud or a server. Responses are sped up, which means tasks like automatically focusing cameras on an event, granting access, or triggering an intruder alert can happen in near real-time. Additionally, running AI on a device can improve the accuracy of triggers and reduce false alarms. People counting, occupancy measurement, queue management and more can all be carried out with a high degree of accuracy thanks to edge AI using deep learning. This improves the efficiency of operator responses and reduces frustration, as operators no longer have to respond to false alarms. AI cameras can also run multiple video analytics on the same device - another efficiency improvement that means operators can easily deploy AI to alert for potential emergencies or intrusions, detect safety incidents, or track down suspects, for example.

Video quality improvements

What's more, using AI at the edge improves the quality of the video captured. Noise reduction can be carried out locally on a device and, using AI, can specifically reduce noise around objects of interest, like a person moving in a detected area. Features such as BestShot ensure operators don't have to sift through lots of footage to find the best angle of a suspect.
Instead, AI delivers the best shot immediately, helping to reduce reaction times and speed up post-event investigations. This has the added benefit of saving storage and bandwidth, as only the best-shot images are streamed and stored. AI-based compression technology also applies a low compression rate to the objects and people detected and tracked by AI, whilst applying a high compression rate to the remaining field of view - this minimises network bandwidth and data storage requirements.

Using the metadata

Edge AI cameras can provide metadata to third-party software through an API (application programming interface). This means that system integrators and technology partners can use it as a first means of AI classification, then apply additional processing to the classified objects with their own software, adding another layer of analytics on top of it (a simple illustration of this pattern follows at the end of this article).

Resilience

There is no single point of failure when using AI at the edge. The AI can continue to operate even if a network or cloud service fails. Triggers can still be actioned locally, or sent to another device, with recordings and events sent to the back end when connections are restored. AI is processed in near real-time on edge devices instead of being streamed back to a server or a remote cloud service, which prevents unstable network connections from delaying analytics.

Benefits for installers

For installers specifically, offering edge AI as part of your installations helps you stand out in the market by offering solutions for many different use cases. Out-of-the-box solutions are extremely attractive to end users who don't have the time or resources to set up video analytics manually. AI cameras like the ones in the Wisenet X Series and P Series work straight from the box, so there's no need for video analytics experts to fine-tune the analytics, and installers don't have to spend valuable time configuring complex server-side software. Of course, this also has a knock-on positive impact on training time and costs.

Looking ahead

The future of AI at the edge looks bright too, with more manufacturers looking at ways to broaden the classification carried out by AI cameras and even move towards using AI cameras as a platform, allowing system integrators and software companies to create their own AI applications that run on cameras. Right now, it's an area well worth exploring for both end users and installers because of the huge efficiency, accuracy and sustainability gains that edge AI promises.
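To illustrate the 'Using the metadata' point above, here is a small, purely hypothetical sketch of how third-party software might consume classification metadata from an edge AI camera. The JSON fields, values and filenames are invented for illustration and do not represent a documented Wisenet API.

```python
# Hypothetical only: the event structure and field names below are invented to
# illustrate layering extra logic on top of a camera's own AI classification;
# they are not a documented Hanwha/Wisenet API.
import json

sample_event = json.loads("""
{
  "camera_id": "cam-entrance-01",
  "timestamp": "2023-06-01T09:15:42Z",
  "objects": [
    {"type": "person",  "confidence": 0.97, "best_shot": "bestshot_0142.jpg"},
    {"type": "vehicle", "confidence": 0.88}
  ]
}
""")

# Second-layer analytics: alert only on high-confidence person detections,
# referencing the BestShot image so operators can react without reviewing footage.
for obj in sample_event["objects"]:
    if obj["type"] == "person" and obj["confidence"] > 0.9:
        print(f"Alert from {sample_event['camera_id']}: person detected "
              f"(best shot: {obj.get('best_shot', 'n/a')})")
```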

Xydus hires top tech talent to solve the digital identity crisis
Xydus has announced two key technical leadership hires, following a record-breaking 600% revenue increase in 2021. To ensure its identity solution is ready to flex to any challenges the future throws at it, Xydus has brought in Jos Aussems as Chief Information Security Officer and Chris Covell as Senior VP of Engineering.

In response to global demand, Xydus spent the past six months reorganising and modernising its Agile IT department to improve efficiency, performance and resilience. The redesign stopped engineers working in siloed teams, splitting the organisation into two departments: Enterprise Engineering, which covers all client-facing parts of the technology such as mobile, web and EMS, supported by the backend engineering team. Within these teams Xydus now uses a 'buddy system', pairing developers and engineers together to ensure cross-training. This future-proofs the technology, as no one person holds the keys to the IP.

Russell King, CEO and Founder of Xydus, says, "Digital identity is in a crisis today. The global scale of the problem is the reason we've spent a significant amount of resources on adjusting our engineering practice. Now we need the right senior leadership in place to manage the new team dynamic, making sure our product solves today's digital identity crisis, and tomorrow's."

During Jos' 17-year career at PwC, he was instrumental in the development of its world-class audit process for information security, moving from senior roles in tech consultancy to management of one of the leading cyber teams in the industry. In his new role as part of the Xydus leadership team, he is responsible for keeping the company's enterprise and customer data locked down. He will oversee the two new engineering teams: Enterprise Engineering, which is responsible for the applications a client interacts with, and Backend Engineering.

Commenting on his new position, Jos says, "I'm moving from a world-renowned consultancy to a scaleup creating what will become world-renowned identity products and solutions. It's a challenge I welcome, particularly as joining Xydus reunites me with former colleagues and peers as we establish world-class security capabilities to protect the sensitive data of our customers."

Following more than a decade working with TransUnion and Call Credit, Chris leaves his role as head of engineering at an internet-of-things startup to join Xydus. He will serve as the technical lead for Xydus' class-leading products and services, building world-class, never-seen-before products that help customers and employees seamlessly manage digital identities.

Russell continues, "Whether you're a business, an employee or a customer, identity is not only important, it's essential. But most products available use biased AI algorithms, ask for too much Personally Identifiable Information (PII) and often also store that PII. Today's world of digital transactions and currency means identity is more valuable than money. Technology should protect that identity, not use it to make money. That's what we're on a mission to fix, and this impressive technical leadership team can deliver it."

Chalmers University of Technology selects Lenovo and NVIDIA for national supercomputer
Lenovo has announced that Chalmers University of Technology is using Lenovo and NVIDIA technology infrastructure to power its large-scale computing resource, Alvis. The project has seen the delivery and implementation of a clustered computing system for artificial intelligence (AI) and machine learning (ML) research, in what is Lenovo's largest HPC (High Performance Computing) cluster for AI and ML in the Europe, Middle East and Africa region.

Alvis (Old Norse for 'all-wise') is a national supercomputer resource within the Swedish National Infrastructure for Computing (SNIC). It began operating in 2020 and has since grown to a capacity that can tackle larger research tasks on a broader scale. Financed by the Knut and Alice Wallenberg Foundation, the computer system is supplied by Lenovo and located at Chalmers University of Technology in Gothenburg, home to the EU's largest research initiative, Graphene Flagship. The collaborative project allows any Swedish researcher who needs to improve their mathematical calculations and models to take advantage of Alvis' services through SNIC's application system, regardless of research field. This supports researchers who are already using machine learning to analyse complex problems, as well as those investigating the use of machine learning to solve issues within their respective fields, with the potential to lead to ground-breaking academic research in areas such as quantum computing and data-driven research for healthcare and science.

"The Alvis project is a prime example of the role of supercomputing in helping to solve humanity's greatest challenges, and Lenovo is both proud and excited to be selected as part of it," says Noam Rosen, EMEA Director, HPC & AI at Lenovo Infrastructure Solutions Group. "Supported by Lenovo's performance-leading technology, Alvis will power research and use machine learning across many diverse areas with a major impact on societal development, including environmental research and the development of pharmaceuticals. This computing resource is truly unique, built on the premise of architectures for different AI and machine learning workloads with sustainability in mind, helping to save energy and reduce carbon emissions by using our pioneering warm-water cooling technology."

"The first pilot resource for Alvis has already been used by more than 150 research projects across Swedish universities. By making it larger, and opening the Alvis system to all Swedish researchers, Chalmers and Lenovo are playing an important role in providing a national HPC ecosystem for future research," comments Sverker Holmgren, Director of Chalmers e-Infrastructure Commons, which hosts the Alvis system.

Powering energy-saving AI infrastructure

Chalmers has chosen to implement a scalable cluster with a variety of Lenovo ThinkSystem servers to deliver the right mix of NVIDIA GPUs to its users in a way that prioritises energy savings and workload balance. This includes the Lenovo ThinkSystem SD650-N V2, which delivers the power of NVIDIA A100 Tensor Core GPUs, and the NVIDIA-Certified ThinkSystem SR670 V2 for NVIDIA A40 and T4 GPUs. "The work we're doing with Chalmers University and its Alvis national supercomputer will give researchers the power they need to simulate and predict our world," says Rod Evans, EMEA director of high-performance computing at NVIDIA.
"Together, we're giving the scientific community tools to solve the world's greatest supercomputing challenges - from forecasting weather to drug discovery."

The storage architecture delivers a new 7.8-petabyte Ceph solution, to be integrated into the existing storage environment at Chalmers. NVIDIA Quantum 200Gb/s InfiniBand provides the system with low-latency, high-throughput networking and smart in-network computing acceleration engines. With these high-speed infrastructure capabilities, users have access to almost 1,000 GPUs, mainly NVIDIA A100 Tensor Core GPUs, along with over 260,000 processing cores and over 800 TFLOPS of compute power to drive a faster time to answer in their research.

In addition, Alvis leverages Lenovo's Neptune™ liquid-cooling technologies to deliver unparalleled compute efficiency. Full air cooling was initially proposed for the project, but Chalmers instead decided to deploy Lenovo Neptune warm-water cooling to reduce long-term operational costs and deliver a 'greener' AI infrastructure. As a result, the university anticipates significant energy savings thanks to the efficiencies of water cooling.

Xydus hits 600% hyper growth helping manage virtual workforces
Xydus has announced its rebranded emergence after a record-breaking year which saw sales increase six-fold as it closed its largest-ever deals. Previously known as Paycasso, Xydus has built its technology platform and credibility over more than a decade by working with some of the world's most trusted brands, such as PwC, EY, TransUnion, Equifax, Philip Morris International, Irish Life and, notably, the UK's National Health Service, Europe's largest employer.

Hybrid working fuels growth

A focus on identity technology perfectly positioned Xydus' cloud services and helped the team win new deals as employee and consumer engagement moved massively to digital during lockdown. The shift to flexible working supercharged already buoyant demand for Xydus' (formerly Paycasso's) digital identity software, as employers transitioned to managing remote workforces who require digital access and consumers increased their use of e-commerce. Demand was particularly strong from organisations with large, geographically dispersed consumer markets and significant workforces across multiple jurisdictions, such as healthcare and financial services, in the US and Europe.

How Xydus works

The Xydus platform uses technologies such as facial recognition, biometrics, machine learning and multi-layered encryption, which allow workers to enrol and reuse their individual identity across the enterprise in seconds; previously this could take hours, days or weeks. The advantage for employers is a seamless onboarding experience and secure access to corporate systems and access points without the need to deploy overstretched IT or HR resources. For consumers, it simplifies and streamlines their experience as they access today's thousands of digital services.

Xydus CEO, Russell King, comments: "Recent years have seen massive disruption to the working patterns and online behaviours of billions of consumers and employers, making Xydus more relevant than ever. A decade ago, faced with a digitally expanding world, we could not think of a much bigger challenge to address than core identity. It is this consistent focus on addressing the challenges in identity management which is now paying dividends for Xydus. This last year's spectacular sales growth reflects how the need to solve identity issues for this expansive world is only going to grow."


