
Cloud Computing & Storage


Lenovo introduces purpose-built AI-centric infrastructure systems
Lenovo Group has announced a comprehensive new suite of purpose-built, AI-centric infrastructure systems and innovations to advance hybrid AI from edge to cloud. The company is delivering GPU-rich, thermally efficient solutions intended for compute-intensive workloads across multiple environments and industries. In industries such as financial services and healthcare, customers are managing massive data sets that require extreme I/O bandwidth, and Lenovo is providing the IT infrastructure vital to managing that critical data. Underpinning all of these solutions is Lenovo TruScale, which provides the flexibility, scale and support for customers to on-ramp demanding AI workloads entirely as-a-service. Regardless of where customers are in their AI journey, Lenovo Professional Services says that it can simplify the AI experience as customers look to meet the new demands and opportunities that AI presents to businesses today.

“Lenovo is working to accelerate insights from data by delivering new AI solutions for use across industries delivering a significant positive impact on the everyday operations of our customers,” says Kirk Skaugen, President of Lenovo ISG. “From advancing financial service capabilities, upgrading the retail experience, or improving the efficiency of our cities, our hybrid approach enables businesses with AI-ready and AI-optimised infrastructure, taking AI from concept to reality and empowering businesses to efficiently deploy powerful, scalable and right-sized AI solutions that drive innovation, digitalisation and productivity.”

In collaboration with AMD, Lenovo is also delivering the ThinkSystem SR685a V3 8GPU server, bringing customers extreme performance for the most compute-demanding AI workloads, including generative AI (GenAI) and large language models (LLMs). The powerful innovation provides fast acceleration, large memory and the I/O bandwidth to handle the huge data sets needed for advances in the financial services, healthcare, energy, climate science and transportation industries. The new ThinkSystem SR685a V3 is designed both for enterprise private on-prem AI and for public AI cloud service providers.

Additionally, Lenovo is bringing AI inferencing and real-time data analysis to the edge with the new Lenovo ThinkAgile MX455 V3 Edge Premier Solution with AMD EPYC 8004 processors. The versatile, AI-optimised platform delivers new levels of AI, compute and storage performance at the edge with, the company claims, the best power efficiency of any Azure Stack HCI solution. Offering turnkey, seamless integration with on-prem and Azure cloud environments, Lenovo's ThinkAgile MX455 V3 Edge Premier Solution allows customers to reduce TCO through unique lifecycle management, gain an enhanced customer experience and adopt software innovations faster.

Lenovo and AMD have also unveiled a multi-node, high-performance, thermally efficient server designed to maximise performance per rack for intensive transaction processing. The Lenovo ThinkSystem SD535 V3 is a 1S/1U half-width server node powered by a single fourth-gen AMD EPYC processor, engineered to maximise processing power and thermal efficiency for workloads including cloud computing and virtualisation at scale, big data analytics, high-performance computing and real-time e-commerce transactions for businesses of all sizes.

Finally, to empower businesses and accelerate AI adoption, Lenovo has introduced, with immediate availability, Lenovo AI Advisory and Professional Services. These offer a breadth of services, solutions and platforms designed to help businesses of all sizes navigate the AI landscape and find the right innovations to put AI to work for their organisations quickly, cost-effectively and at scale, bringing AI from concept to reality.
For more from Lenovo, click here.

Axis introduces new open hybrid cloud platform
Axis Communications, a network video specialist, has introduced Axis Cloud Connect, an open hybrid cloud platform designed to provide customers with more secure, flexible, and scalable security solutions. Together with Axis devices, the platform enables a range of managed services to support system and device management and video and data delivery, and to meet high demands in cybersecurity, the company says. The video surveillance market is increasingly utilising cloud connectivity, driven by the need for remote access, data-driven insights, and scalability. While the number of cloud-connected cameras is growing at a rate of over 80% per year, the trend toward cloud adoption has shifted towards hybrid security solutions - a mix of cloud and on-premises infrastructure - using smart edge devices as powerful generators of valuable data. Axis Cloud Connect enables smooth integration of Axis devices and partner applications by offering a selection of managed services. To keep systems up to date and to ensure consistent system performance and cybersecurity, Axis takes added responsibility for hosting, delivering, and running digital services to ensure availability and reliability. The managed services enable secure remote access to live video operations and improved device management, with automated updates throughout the lifecycle. They also offer user and access management for easy and secure control of user access rights and permissions. According to Johan Paulsson, CTO, Axis Communications, “Axis Cloud Connect is a continuation of our commitment to deliver secure-by-design solutions that meet changing customer needs.
This offering combines the benefits of cloud technology and hybrid architectures with our deep knowledge of analytics, image usability, cybersecurity, and long-term experience with cloud-based solutions, all managed by our team of experts to reduce friction for our customers.”

With a focus on adding security, flexibility and scalability to its offerings, Axis has also announced that it has built the next generation of its video management system (VMS) - AXIS Camera Station - on Axis Cloud Connect. Accordingly, Axis is extending its VMS into a suite of solutions that includes AXIS Camera Station Pro, Edge and Center. The AXIS Camera Station suite is engineered to more precisely match the needs of users based on flexible options for video surveillance and device management, architecture and storage, analytics and data management, and cloud-based services.

AXIS Camera Station Edge: An easily accessible camera-to-cloud solution combining the power of Axis edge devices with Axis cloud services. It provides a cost-effective, easy-to-use, secure and scalable video surveillance offering that is reportedly easy to install and maintain, with minimal equipment on-site: only a camera with an SD card is needed, or the AXIS S30 Recorder Series, depending on requirements. This flexible and reliable recording solution offers straightforward control from anywhere, Axis claims.

AXIS Camera Station Pro: A powerful and flexible video surveillance and access management software for customers who want full control of their system. It gives them full flexibility to take control of their site and build the right solution on their private network, while also benefiting from optional cloud connectivity. It supports the complete range of Axis products and comes with all the features needed for active video management and access control, including new additions such as a web client, data insight dashboards and improved search functionality.

AXIS Camera Station Center: Provides new possibilities to effectively manage and operate hundreds or even thousands of cloud-connected AXIS Camera Station Pro or AXIS Camera Station Edge sites from one centralised location. Accessible from anywhere, it enables aggregated multi-site device management, user management and video operation. In addition, it offers corporate IT functionality such as Active Directory support, user group management and 24/7 support.

Axis Cloud Connect, the AXIS Camera Station offering and Axis devices are all built with robust cybersecurity measures to ensure compliance with industry standards and regulations. In addition, with powerful edge devices and AXIS OS working in tandem with Axis cloud capabilities, users can expect high-level performance with seamless software and firmware updates. The new cloud-based platform and the solutions built on it by both Axis and selected partners aim to create opportunity and value throughout the channel, allowing companies to utilise the cloud at a pace that makes the most sense for their evolving business needs.

For more from Axis, click here.

Cloud is key in scaling systems to your business needs
by Brian Sibley, Solutions Architect at Espria

Meeting the demands of the modern-day SMB is one of the challenges facing many business leaders and IT operators today. Traditional, office-based infrastructure was fine up until the point where greater capacity was needed than those servers could deliver, vendor support became an issue, or the needs of a hybrid workforce weren't being met. In the highly competitive SMB space, maintaining and investing in a robust and efficient IT infrastructure is one way to stay ahead of competitors.

Thankfully, with the advent of cloud offerings, a new scalable model has entered the landscape; whether it be 20 or 20,000 users, the cloud will fit all, and with it comes a much simpler, per-user cost model. This ability to integrate modern computing environments into the day-to-day workplace means businesses can stop rushing to catch up, with the invaluable peace of mind that these operations will scale up or down as required. Added to that, the potential cost savings and added value will better serve each business and help to future-proof the organisation, even on a tight budget. Unlike traditional on-premises options, cloud service solutions are almost infinitely flexible and don't require in-house maintenance.

When it comes to environmental impact and carbon footprint, data centres are often thought to be a threat, contributing to climate change; but in reality, cloud is a great option. The scalability of cloud infrastructure and the economies of scale it leverages facilitate not just cost but carbon savings too. Rather than a traditional model where a server runs in-house at 20% capacity, using power 24/7/365 and pumping out heat, cloud data centres are specifically designed to serve multiple users more efficiently, utilising white-space cooling, for example, to optimise energy consumption. The bigger players like Microsoft and Amazon are investing heavily in sustainable, on-site energy generation to power their data centres, even planning to feed excess power back into the National Grid. Simply put, it's more energy efficient for individual businesses to use a cloud offering than to run their own servers – the carbon footprint for each business using a cloud solution becomes much smaller.

With many security solutions now being cloud-based too, security doesn't need to be compromised and can be managed remotely via SOC teams, either in-house or via the security provider (where the resources are greater and the specialist expertise far deeper). Ultimately, a cloud services solution encompassing servers, storage, security and more will best serve SMBs. It's scalable, provides economies of scale and relieves in-house IT teams of many mundane yet critical tasks, allowing them to focus on more profitable activities.

For more from Espria, click here.
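The utilisation argument above can be made concrete with back-of-envelope arithmetic. All figures below (power draw, utilisation levels, PUE values) are illustrative assumptions for the sake of the sketch, not measurements from any provider:

```python
HOURS_PER_YEAR = 24 * 365

# Assumed on-prem server: 400 W drawn 24/7/365, only ~20% utilised,
# in a small server room with an assumed PUE of 1.8.
onprem_it_kwh = 0.4 * HOURS_PER_YEAR
onprem_total_kwh = onprem_it_kwh * 1.8

# The same useful work in an assumed cloud data centre running at ~65%
# utilisation with an assumed PUE of 1.2.
useful_kwh = onprem_it_kwh * 0.20
cloud_it_kwh = useful_kwh / 0.65
cloud_total_kwh = cloud_it_kwh * 1.2

saving = 1 - cloud_total_kwh / onprem_total_kwh
print(f"Energy saving for the same useful work: {saving:.0%}")  # prints 79%
```

Under these assumptions, the ratio depends only on the utilisation and PUE figures, not the server's wattage, which is why the economies-of-scale argument generalises across business sizes.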

Latest version of StarlingX cloud platform now available
Version 9.0 of StarlingX - the open source distributed cloud platform for IoT, 5G, O-RAN and edge computing - is now available. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything telecom carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them. Container-based and highly scalable, StarlingX is used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low latency, high-performance use cases.

"StarlingX adoption continues to grow as more organisations learn of the platform’s unique advantages in supporting modern-day workloads at scale; in fact, one current user has documented its deployment of 20,000 nodes and counting,” says Ildikó Váncsa, Director, Community, for the Open Infrastructure Foundation. “Accordingly, this StarlingX 9.0 release prioritises enhancements for scaling, updating and upgrading the end-to-end environment.

“Also during this release cycle, StarlingX collaborated closely with Arm and AMD for a coordinated effort towards increasing the diversity of hardware supported,” Ildikó continues. “This collaboration also includes building out lab infrastructure to continuously test the project on a diverse set of hardware platforms.”

“Across cloud, 5G, and edge computing, power efficiency and lower TCO is critical for developers in bringing new innovations to market,” notes Eddie Ramirez, Vice President of Go-To-Market, Infrastructure Line of Business, Arm. “StarlingX plays a vital role in this mission and we’re pleased to be working with the community so that developers and users can leverage the power efficiency advantages of the Arm architecture going forward.”

Additional new features and upgrades in StarlingX 9.0 include:

Transition to the Antelope version of OpenStack.

Redundant / HA PTP timing clock sources: Redundancy is an important requirement in many systems, including telecommunications. This feature makes it possible to set up multiple timing sources, synchronise from any of the available and valid hardware clocks, and maintain an HA configuration down to system clocks.

AppArmor support: This security feature makes a Kubernetes deployment (and thus the StarlingX stack) more secure by restricting what containers and pods are allowed to do.

Configurable power management: Sustainability and optimal power consumption are becoming critical in modern digital infrastructures. This feature adds the Kubernetes Power Manager to the StarlingX platform, allowing power control mechanisms to be applied to processors.

Intel Ethernet operator integration: This feature allows for firmware updates and more granular interface adapter configuration on Intel E810 Series NICs.

Learn more about these and other features of StarlingX 9.0 in the community’s release notes.

A simple approach to scaling distributed clouds

StarlingX is widely used in production among large telecom operators around the globe, such as T-Systems, Verizon, Vodafone, KDDI and more. Operators are utilising the container-based platform for their 5G and O-RAN backbone infrastructures, along with relying on the project's features to easily manage the lifecycle of the infrastructure components and services. Hardened by major telecom users, StarlingX is ideal for enterprises seeking a highly performant distributed cloud architecture. Organisations are evaluating the platform for use cases such as backbone network infrastructure for railway systems and high-performance edge data centre solutions. Managing forest fires is another new use case that has emerged and is being researched by a new StarlingX user and contributor.

OpenInfra Community Drives StarlingX Progress

The StarlingX project launched in 2018, with initial code contributed by Wind River and Intel. Active contributors to the project include Wind River, Intel and 99Cloud. Well-known users of the software in production include T-Systems, Verizon and Vodafone. The StarlingX community is actively collaborating with several other groups, such as the OpenInfra Edge Computing Group, ONAP, the O-RAN Software Community (SC), Akraino and more.
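As a sketch of what the AppArmor support mentioned above enables at the workload level, a Kubernetes pod can be confined with an AppArmor profile already loaded on the node. The pod and profile names below are illustrative, not anything shipped with StarlingX, and this uses the long-standing beta annotation syntax rather than any StarlingX-specific mechanism:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo
  annotations:
    # Confine the "app" container with a node-local AppArmor profile.
    # "deny-write-demo" is an illustrative profile name; it must already
    # be loaded on the host for the pod to be admitted.
    container.apparmor.security.beta.kubernetes.io/app: localhost/deny-write-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
```

If the named profile denies writes, any write attempt inside the container fails with a permission error, which is the kind of restriction on "what containers and pods are allowed to do" the release notes describe.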

Vultr launches Sovereign Cloud and Private Cloud to boost digital autonomy
Vultr, the world’s largest privately-held cloud computing platform, today announced the launch of Vultr Sovereign Cloud and Private Cloud. They have been introduced in response to the increased importance of data sovereignty and the growing volumes of enterprise data being generated, stored and processed in even more locations — from the public cloud to edge networks and IoT devices, to generative AI. The announcement comes on the heels of the launch of Vultr Cloud Inference, which provides global AI model deployment and AI inference capabilities leveraging Vultr’s cloud-native infrastructure spanning six continents and 32 cloud data centre locations. With Vultr, governments and enterprises worldwide can ensure all AI training data is bound by local data sovereignty, data residency, and privacy regulations. A significant portion of the world's cloud workloads are currently managed by a small number of cloud service providers in concentrated geographies, raising concerns particularly in Europe, the Middle East, Latin America, and Asia about digital sovereignty and control over data within their countries. Enterprises meanwhile must adhere to a growing number of regulations governing where data can be collected, stored, and used while retaining access to native GPU, cloud, and AI capabilities to compete on the global stage. 50% of European CXOs list data sovereignty as a top issue when selecting cloud vendors, with more than a third looking to move 25-75% of data, workloads, or assets to a sovereign cloud, according to Accenture. Vultr Sovereign Cloud and Private Cloud are designed to empower governments, research institutions, and enterprises to access essential cloud-native infrastructure while ensuring that critical data, technology, and operations remain within national borders and comply with local regulations. At the same time, Vultr provides these customers with access to the advanced GPU, cloud, and AI technology powering today’s leading AI innovators. 
This enables the development of sovereign AI factories and AI innovations that are fully compliant, without sacrificing reach or scalability. By working with local telecommunications providers, such as Singtel, and other partners and governments around the world, Vultr is able to build and deploy clouds managed locally in any region. Vultr Sovereign Cloud and Private Cloud guarantee data is stored locally, ensuring it is used strictly for its intended purposes and not transferred outside national borders or other in-country parameters without explicit authorisation. Vultr also delivers technological independence through physical infrastructure, featuring air-gapped deployments and a dedicated control plane that is under the customer’s direct control, completely untethered from the central control plane governing resources across Vultr’s global data centres. This provides complete isolation of data and processing power from global cloud resources. To further ensure local governance and administration of these resources, Vultr Sovereign Cloud and Private Cloud are managed exclusively by nationals of the host country, resulting in an audit trail that complies with the highest standards of national security and operational integrity. For enterprises, Vultr combines Sovereign and Private Cloud services with ‘train anywhere, scale everywhere’ infrastructure, including Vultr Container Registry, which enables models to be trained in one location but shared across multiple geographies, allowing customers to scale AI models on their own terms.

“To address the growing need for countries to control their own data, and to reduce their reliance on a small number of large global tech companies, Vultr will now deploy sovereign clouds on demand for national governments around the world,” says J.J. Kardwell, CEO of Vultr’s parent company, Constant.
“We are actively working with government bodies, local telecommunications companies, and other in-country partners to provide Vultr Sovereign Cloud and Private Cloud solutions globally, paving the way for customers to deliver fully compliant AI innovation at scale.” For more from Vultr, click here.

Nasuni launches Nasuni Edge for Amazon S3
Nasuni, a hybrid cloud storage provider, has announced the general availability of Nasuni Edge for Amazon Simple Storage Service (S3), a cloud-native, distributed solution that allows enterprises to accelerate data access and delivery times through a single, unified platform, while ensuring the low-latency access that is crucial for edge workloads, including cloud-based artificial intelligence and machine learning (AI/ML) applications. Amazon S3 is an object storage service from Amazon Web Services (AWS) that offers scalability, data availability, security and performance. Nasuni Edge for Amazon S3 supports petabyte-sized workloads and allows customers to run S3-compatible storage that supports select S3 APIs on AWS Outposts, in AWS Local Zones and in customers' on-premises environments.

The Nasuni cloud-native architecture is designed to improve performance and accelerate business processes. Immediate file access is essential across various industries, where remote facilities with limited bandwidth generate large volumes of data that must be quickly processed and ingested into Amazon S3. In addition, customers are increasingly looking for a unified platform for both file and object data access and protection, enabling them to address small and large-scale projects with on-prem and cloud-centric workloads through a single offering. With an influx of large volumes of data, simplified storage is a priority for any business looking to collect and quickly process data at the edge in 2024. Forrester Research expects 80% of new data pipelines in 2024 will be built for ingesting, processing and storing unstructured data.

Nasuni Edge for Amazon S3 is specifically designed for IT professionals, infrastructure managers, IT architects and ITOps teams who want to improve application performance and data-driven workflow processes. It enables application developers to read and write, using the Amazon S3 API, to a global namespace backed by an Amazon S3 bucket, from multiple endpoints located within AWS Local Zones, on AWS Outposts and in on-premises environments. Nasuni Edge for Amazon S3 enhances data access by providing local performance at the edge, multi-protocol read/write scenarios and support for more file metadata.

“Nasuni has been a long-time AWS partner, and this latest collaboration delivers the simplest solution for modernising an enterprise’s existing file infrastructure. With Nasuni Edge for Amazon S3, enterprises can support legacy workloads and take advantage of modern Amazon S3-based applications,” says David Grant, President, Nasuni. “Nasuni Edge for Amazon S3 allows an organisation to make unstructured data easily available to cloud-based AI services.”

In addition to providing fast access to a single, globally accessible namespace, the ability to ingest large amounts of data into a single location drives potential new opportunities for customer innovation and powerful new insights via integration with third-party AI services. Importantly, Nasuni’s multi-protocol support means these new data workloads are accessible from a vast range of existing applications without having to rewrite them.

Navigating enterprise approach on public, hybrid and private clouds
By Adriaan Oosthoek, Chairman at Portus Data Centers

With the rise of public cloud services offered by industry giants like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), many organisations have migrated or are considering migrating their workloads to these platforms. However, the decision is not always straightforward, as factors like cost, performance, security, vendor lock-in and compliance come into play. Increasingly, enterprises must think about which workloads belong in the public cloud, the costs involved and when a private cloud or a hybrid cloud setup might be a better fit.

Perceived benefits of public cloud

Enterprises view the public cloud as a versatile solution that offers scalability, flexibility, and accessibility. Workloads that exhibit variable demand patterns, such as certain web applications, mobile apps and development environments, are well-suited to the public cloud. The ability to quickly provision resources and pay only for what is used may make it an attractive option for some businesses and applications. Public cloud offerings also typically provide a vast array of managed services, including databases, analytics, machine learning and AI, and can enable enterprises to innovate rapidly without the burden of managing underlying infrastructure. This, too, is a key selling point of public cloud offerings.

But how have enterprises' real-life experiences of public cloud set-ups compared against these expectations? Many have found the 'pay-as-you-go' pricing model to be very expensive and to have led to unexpected cost increases, particularly if workloads and usage spike unexpectedly or if customer packages have been provisioned inefficiently. If not very carefully managed, the costs of public cloud services have a tendency to balloon quickly. Public cloud providers and enterprises that have adopted public cloud strategies are naturally seeking to address these concerns. Enterprises are increasingly adopting cloud cost management strategies, including using cost estimation tools, implementing resource tagging for better visibility, optimising instance sizes, and utilising reserved instances or savings plans to reduce costs. Cloud providers offer pricing calculators and cost optimisation recommendations to help enterprises forecast expenses and increase efficiency. Despite these efforts, the public cloud has proved to be far more expensive for many organisations than originally envisaged, and managing costs effectively in public cloud set-ups requires considerable oversight, ongoing vigilance and optimisation effort.

When private clouds make sense

There are numerous situations where a private cloud environment is a more suitable and cost-effective option. Workloads with stringent security and compliance requirements, such as those in regulated industries like finance, healthcare or government, often necessitate the control and isolation provided by a private cloud environment - hosted in a local data centre on a server that is owned by the user. Many workloads with predictable and steady resource demands, such as legacy applications or mission-critical systems, may not need the flexibility of the public cloud and could potentially incur much higher costs there over time. In such cases, a private cloud infrastructure offers much greater predictability and cost control, allowing enterprises to optimise resources based on their specific requirements. And last but not least, once workloads are in the public cloud, vendor lock-in occurs: it is notoriously expensive to repatriate workloads back out of the public cloud, mainly due to excessive data egress costs.

Hybrid cloud

It is becoming increasingly clear that most organisations will benefit most from a hybrid cloud setup. Simply put, 'horses for courses': only put those workloads that will benefit from its specific advantages into the public cloud, and keep the other workloads under their own control in a private environment. Retaining a private environment does not require an enterprise to have or run its own data centre. Rather, it can take capacity in a professionally managed, third-party colocation data centre located in the vicinity of the enterprise's own premises. Capacity in a colocation facility will generally be more resilient, efficient, sustainable and cost effective for enterprises than operating their own facilities. The private cloud infrastructure itself can also be outsourced as a private instance. This is where regional and edge data centre operators such as Portus Data Centers come to the fore.

In most cases, larger organisations will end up with a hybrid cloud IT architecture to benefit from the best of both worlds. This will require careful consideration of how to seamlessly pull those workloads together through smart networking. Regional data centres with strong network and connectivity options will be crucial to serving this demand for local IT infrastructure housing. The era when enterprises went all-in on the cloud is over. While the public cloud offers scalability, flexibility, and access to cutting-edge technologies, concerns about cost, security, vendor lock-in and compliance persist. To mitigate these concerns, enterprises must carefully evaluate their workloads and determine the most appropriate hosting environment.
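The egress-cost point can be made concrete with simple arithmetic. The data volume and per-gigabyte rate below are illustrative assumptions only; real egress pricing varies by provider, region and tier:

```python
GB_PER_TB = 1024  # binary convention used for this sketch

data_tb = 200            # assumed volume of data to repatriate
egress_rate_per_gb = 0.09  # assumed $/GB internet egress charge

# One-off bill just to move the data out, before any migration labour
# or re-platforming costs are counted.
egress_bill = data_tb * GB_PER_TB * egress_rate_per_gb
print(f"One-off repatriation egress bill: ${egress_bill:,.0f}")  # $18,432
```

Even at these modest assumed figures the exit bill is significant, which is why egress charges are so often cited as the practical mechanism of vendor lock-in.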

Own Company empowers customers to capture value from their data
Own Company, a SaaS data platform, has announced a new product, Own Discover, that reflects the company's commitment to empower every company operating in the cloud to own their own data. Own Discover expands the company's product portfolio beyond its backup and recovery, data archiving, seeding and security solutions to help customers activate their data and amplify their business. With Own Discover, businesses will be able to use their historical SaaS data to unlock insights, accelerate AI innovation, and more, in an easy and intuitive way.

Own Discover is part of the Own Data Platform, giving customers quick and easy access to all of their backed-up data in a time-series format so they can:

- Analyse their historical SaaS data to identify trends and uncover hidden insights
- Train machine learning models faster, enabling AI-driven decisions and actions
- Integrate SaaS data with external systems while maintaining security and governance

“For the first time, customers can easily access all of their historical SaaS data to understand their businesses better, and I’m excited to see our customers unleash the potential of their backups and activate their data as a strategic asset,” says Adrian Kunzle, Chief Technology Officer at Own. “Own Discover goes beyond data protection to active data analysis and insights and provides a secure, fast way for customers to learn from the past and inform new business strategies and growth.”

Vultr announces new CDN in race to be the next hyperscaler
Vultr, a privately-held cloud computing platform, has announced the launch of Vultr CDN. This content delivery service pushes content closer to the edge without compromising security. Vultr now enables global content and media caching, empowering its worldwide community with services for scaling websites and web applications.

Traditional content delivery networks are incredibly complex, leaving businesses and web developers struggling to configure, manage and optimise infrastructure cost-effectively and in a timely manner. They require immediate access to a powerful, scalable, and global content delivery network to accelerate digital content distribution and keep up with customer demand. The launch of Vultr CDN marks the next phase of the company's growth as a leading cloud computing platform. By adding global content caching and delivery to Vultr's existing cloud infrastructure, the service simplifies infrastructure operations, with pricing starting at $10/month and what the company claims are the industry's lowest bandwidth costs. For those requiring the highest-performance CPUs, Vultr also offers unique high-frequency plans powered by high clock speed CPUs and NVMe local storage, optimised for websites and content management systems.

Purpose-built for performance-driven businesses, Vultr CDN delivers a network for fast, secure and reliable content distribution and is optimised for content acceleration, API caching, image optimisation and more. Seamless integrations with Vultr Cloud Compute enable it to scale automatically and intelligently by selecting the best location for content delivery, thereby optimising user requests to save time and money. Vultr CDN is now available as a beta service, with a full release in February.

Aruba combines cloud potential with electric mobility
Aruba has announced that it is ready to enter the FIM Enel MotoE World Championship with the arrival of the Aruba Cloud MotoE Team. As both the team's manager and title sponsor, Aruba is embarking on a new journey into the world of sport. The project runs parallel to one undertaken with customers in the construction of a new cloud platform, which is now complete.

Several challenges unite cloud technologies and the motor industry. First and foremost is sustainability, a key topic for cloud technologies as businesses look for more innovative and environmentally friendly products. The virtualisation of computational resources that underlies cloud computing, for example, allows for a reduction in the number of servers and, therefore, a reduction in emissions or, when using clean energy, a saving of natural resources. Furthermore, the continuous search for performance optimisation also unites the two industries. Cloud technologies are crucial across all spheres, both at a business level and in everyday life. For this reason, cloud developers are always looking to save energy by increasing the efficiency of infrastructure and optimising the use of resources. Similarly, the MotoE team is a starting point from which Ducati can experiment and develop technologies that could, in the future, be used on road motorbikes, offering customers increasingly sustainable and clean vehicles. The international dimension of the project is also particularly exciting, as over the years, Aruba Cloud has consolidated a significant international presence, becoming a player with more than 200,000 customers served in over 150 countries. Thanks to continuous investments in the innovation of its technology stack, Aruba Cloud is also distributed across the European data centre network.

The riders of the Aruba Cloud MotoE Team will be Chaz Davies, who, after retiring from Superbike in 2021, joined the Ducati ERC team in the Endurance World Championship while acting as coach for the Aruba riders in Superbike and Supersport, and Armando Pontone, who won the National Trophy SS600 in 2021 after a stint in the Moto3 category. The team's official presentation will be held on 7 March at the Aruba Auditorium in Ponte San Pietro.

Read more latest news from Aruba here.


