News in Cloud Computing & Data Storage


Distributed cloud saves CloudReso 30% on storage costs
Cubbit, the innovator behind Europe’s first distributed cloud storage enabler, has announced that CloudReso, a France-based distributor of MSP security solutions, has reduced its cloud object storage costs by 30% thanks to Cubbit’s fully managed cloud object storage, DS3. The MSP can now offer its customers cloud storage with unequalled data sovereignty specifications, geographical resilience, and double ransomware protection (server- and client-side). Through this deployment, CloudReso has avoided hidden costs traditionally linked to S3, including egress, deletion, and bucket replication fees.

Operating across all French-speaking countries, CloudReso manages over 790 TB of data and backs up 1,590 endpoints. Its expertise covers a wide range of vertical markets, including the public sector, providing extensive technical knowledge of S3 solutions and support. With cyber-threats such as ransomware growing in sophistication and targeting both client-side and server-side vulnerabilities with unprecedented precision, Cubbit's technology has helped CloudReso protect data both client-side (Object Lock, versioning, IAM policies) and server-side (geo-distribution, encryption).

The MSP considered and assessed various public services over the years, including centralised S3 cloud storage offerings. These options incurred high fees for deleting data, and expenses multiplied with the number of sites needed for bucket replication (in CloudReso’s case, a configuration of three data centres across the Paris region). With Cubbit DS3, fixed storage costs include all the main S3 APIs together with geo-distribution capacity, enabling CloudReso to save 30% on storage costs while providing a cloud storage solution with up to 15 nines of durability. Cubbit’s flat rate also helped CloudReso quickly estimate monthly data volumes and easily predict costs and ROI, enabling the MSP to make higher margins.
The infrastructure and maintenance costs of other on-premises object storage options did not offer a perfectly secure and available solution for the MSP’s needs. A major prerequisite for CloudReso's choice of partner was the inclusion of all the key S3 APIs, including Lifecycle Configuration (for setting the lifecycle of data and objects) and Object Lock, which adds an extra layer of protection from ransomware by preventing unauthorised data modification or deletion - capabilities not delivered as comprehensively or as cost-effectively by the other providers. Cubbit, by contrast, offers an extensive range of S3 object store features at one of today's most competitive prices. With Cubbit's GDPR compliance and geo-fencing capabilities, CloudReso can now comply with regional regulations and the stringent laws affecting the industries in which its customers operate, enabling the MSP to create new revenue streams.

Gilles Gozlan, CEO at CloudReso, says, “To store our data, we've used US-based and French-based cloud storage providers for a long time, but they did not come with the geographical resilience, data sovereignty and simple pricing that Cubbit offers. This has enabled us to estimate our ROI more easily, and to generate a 30% saving on our previous costs for equivalent configuration. Cubbit will enable us to enter new data intensive, highly-regulated markets. Moreover, Cubbit has a quality and response time of support that we haven't found with any other provider. For these reasons we haven't used any other S3 provider since we started working with Cubbit.”

Richard Czech, Chief Revenue Officer at Cubbit, adds, “Together with the explosive growth of unstructured data, European organisations are now facing an increasing number of challenges: cyber-threats, data sovereignty, and unpredictable costs, to name a few. 
CloudReso has our DS3 as a true obstacle remover, and we are working together to bring it to organisations all over Europe.” Moving forward, CloudReso expects to expand adoption of Cubbit throughout all Francophone territories. Soon, CloudReso will also adopt Cubbit’s new DS3 Composer to provide its customers with on-prem, country-clustered offerings with complete sovereignty and compliance, including NIS2, GDPR, and ISO 27001. For more from Cubbit, click here.
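The S3 Object Lock and Lifecycle Configuration capabilities discussed above are exercised through standard S3 API calls. As a rough sketch, here are the request payloads an S3 client would send to enable them (pure Python for illustration; with boto3, these dicts would be passed to `put_object_lock_configuration()` and `put_bucket_lifecycle_configuration()` against the provider's S3-compatible endpoint - bucket names, endpoint and retention values below are placeholders, not real Cubbit values):

```python
# Sketch of the configuration payloads behind two key S3 APIs.
# All names and values here are illustrative placeholders.

def object_lock_config(days: int) -> dict:
    """Default retention: objects cannot be overwritten or deleted
    for `days` days after upload (COMPLIANCE mode is stricter still)."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {"Mode": "GOVERNANCE", "Days": days}
        },
    }

def lifecycle_config(expire_after_days: int) -> dict:
    """Lifecycle rule: expire noncurrent object versions after a delay,
    so versioned backups do not accumulate forever."""
    return {
        "Rules": [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": expire_after_days
                },
            }
        ]
    }

if __name__ == "__main__":
    # With boto3 (not imported here), usage would look roughly like:
    #   s3 = boto3.client("s3", endpoint_url="https://s3.example-gateway.eu")
    #   s3.put_object_lock_configuration(
    #       Bucket="backups", ObjectLockConfiguration=object_lock_config(30))
    #   s3.put_bucket_lifecycle_configuration(
    #       Bucket="backups", LifecycleConfiguration=lifecycle_config(90))
    print(object_lock_config(30))
    print(lifecycle_config(90))
```

The combination matters for ransomware resilience: versioning plus Object Lock means an attacker who gains S3 credentials still cannot destroy locked versions, while the lifecycle rule keeps storage costs bounded.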

Wasabi delivers flexible hybrid cloud storage solutions
Wasabi Technologies, the hot cloud storage company, has announced a collaboration with Dell Technologies, via the Extended Technologies Complete program, to bring affordable and innovative hybrid cloud solutions for backup, data protection, and long-term retention to customers. In a time marked by the exponential growth of data, organisations worldwide face the need for efficient and cost-effective storage solutions. The Wasabi-Dell collaboration addresses this challenge by providing flexible, efficient hybrid cloud solutions, enabling users to optimise their data management processes while simultaneously reducing overall costs.

"Dell is a clear industry leader with a broad and deep portfolio of transformative technology,” says David Friend, co-founder and chief executive officer at Wasabi Technologies. "This collaboration will extend the reach of Wasabi's cloud storage to a broader audience, catering to users in search of dependable, economical solutions for safeguarding their data archives over the long haul."

The cloud has become the preferred location for long-term data backup retention and disaster recovery. Dell’s PowerProtect Data Domain appliances natively tier data to Wasabi, enabling customers to benefit from a complete data protection solution for on-premises storage with long-term cloud retention. In addition, Wasabi integrates with Dell NetWorker CloudBoost to bring long-term retention in the cloud to existing NetWorker customers.

"The collaboration between Wasabi Technologies and Dell Technologies presents a powerful solution for organisations grappling with data growth," says Dave McCarthy, research vice president at IDC. "Organisations need a hybrid cloud infrastructure that is efficient and cost-effective, and that has the ability to scale with them during their data management journey. This collaboration meets these challenges head on."

Arista launches next generation multi-domain segmentation
Arista Networks, a provider of cloud networking solutions, has announced a significant update to its Arista MSS (Multi-Domain Segmentation Service) offerings that addresses the challenge of creating a truly enterprise-wide zero trust network. Without the need for endpoint software agents or proprietary network protocols, Arista MSS enables effective microperimeters that restrict lateral movement in campus and data centre networks, thus reducing the blast radius of security breaches such as ransomware.

Today’s distributed IT infrastructure - with work-from-anywhere, the explosion of IoT devices, and multi-cloud applications - has upended the traditional security perimeter and led to a dynamic and unpredictable attack surface. To improve their defensive posture, organisations have embarked on zero trust efforts that require granular control of both north-south and east-west communication paths. Firewalls are simply not optimised to protect against all lateral movement; attempting to use them for this would require a proliferation of security appliances, soaring costs, and an explosion of complex rule sets that still fail to contain lateral movement. To address this challenge, the Cybersecurity and Infrastructure Security Agency (CISA) “Zero Trust Maturity Model” recommends the adoption of microsegmentation for highly distributed, fine-grained enforcement through microperimeters. While many microsegmentation solutions are available on the market, both network- and endpoint-based, they struggle with operational complexity, interoperability and portability challenges, and cost, which has limited their widespread adoption across the enterprise. As a result, zero trust efforts often stall.

Arista MSS offers standards-based microsegmentation using existing network infrastructure while overcoming the challenges of existing solutions. MSS is network-agnostic and endpoint-independent. It avoids proprietary protocols and can thus seamlessly integrate into a multi-vendor network environment. 
The solution also does not require endpoint software, avoiding the portability limitations and operational complexity typical of agent-based microsegmentation solutions. "We are very impressed with the potential of Arista's MSS microperimeter segmentation technology,” says Evan Gillette, Security Engineering, Paychex. “We view this technology as highly promising and believe it has the potential to transform our approach to security and segmentation from a traditional perimeter approach to a more distributed, network-centric architecture. We are excited to be working with Arista to explore the possibilities of this innovative technology and its applications in our infrastructure.”

Arista MSS combines three capabilities that enable organisations to build microperimeters around each digital asset they seek to protect, whether in the campus or the data centre. Arista MSS enables:

▪ Stateless Wire-speed Enforcement in the Network: Arista EOS-based switches deliver a simple model for fine-grained, identity-aware microperimeter enforcement. This enforcement model is independent of endpoint type and identical across campus and data centre environments, simplifying day-two operations. Importantly, Arista MSS thus enables lateral segmentation that is often missing today and offloads the capability from firewalls that would otherwise have to be explicitly deployed for this purpose.

▪ Redirection to Stateful Firewalls: Arista MSS can seamlessly integrate with firewalls and cloud proxies from partners such as Palo Alto Networks and Zscaler for stateful network enforcement, especially for north-south and inter-zone traffic. MSS thus ensures the right traffic is sent to these critical security controls, allowing them to focus on L4-L7 stateful enforcement while avoiding unnecessary hairpinning of all other traffic.

▪ CloudVision for Microperimeter Management: Arista CloudVision powered by NetDL provides deep real-time visibility into packets, flows, and endpoint identity. 
This, in turn, enables effective east-west lateral segmentation. In addition, MSS dashboards within CloudVision ease the operator effort needed to manage the microperimeters. MSS extends Arista’s Ask AVA (Autonomous Virtual Assist) service to provide a chat-like interface for operators to navigate the dashboard data and to query and filter policy violations.

“As a bank, we are committed to delivering comprehensive financial products and solutions, while putting customers' data and security as our top priority,” says Komang Artha Yasa, Technology Division Head, OCBC. “Security is also embedded in one of our core architectural principles when designing our data centre networks. Arista MSS completes our zero trust posture by working efficiently with our firewalls to microsegment our critical payment systems. Arista's approach is easy for us to adopt since it avoids software-based agents and still gives us interoperability across our entire data centre environment.”

Arista MSS seamlessly integrates with the broader Arista Zero Trust Networking solution, including Arista CloudVision, CV AGNI™ and Arista NDR. It also integrates with industry-leading firewalls such as those from Palo Alto Networks, IT service management (ITSM) platforms such as ServiceNow, and virtualisation platforms such as VMware.

"Arista MSS has been a welcome addition to our zero trust strategy,” notes Dougal Mair, Associate Director, Networks and Security at The University of Waikato. "The ability to provide an open but secure network for many users (e.g., students, faculty, guests), IT (e.g., laptops, printers), and IoT devices (including sensors and smart lighting) in a large environment was a huge challenge at the university. Arista MSS prevents any unauthorised peer-to-peer and lateral movement on our dynamic network."

Arista MSS is in trials now, with general availability in Q3 2024. For more from Arista, click here.

Lenovo introduces purpose-built AI-centric infrastructure systems
Lenovo Group has announced a comprehensive new suite of purpose-built, AI-centric infrastructure systems and innovations to advance hybrid AI innovation from edge to cloud. The company is delivering GPU-rich, thermally efficient solutions intended for compute-intensive workloads across multiple environments and industries. In industries such as financial services and healthcare, customers are managing massive data sets that require extreme I/O bandwidth, and Lenovo is providing the IT infrastructure vital to the management of critical data. Underpinning all these solutions is Lenovo TruScale, which provides the ultimate flexibility, scale and support for customers to on-ramp demanding AI workloads completely as-a-service. Regardless of where customers are in their AI journey, Lenovo Professional Services says that it can simplify the AI experience as customers look to meet the new demands and opportunities that AI presents to businesses today.

“Lenovo is working to accelerate insights from data by delivering new AI solutions for use across industries, delivering a significant positive impact on the everyday operations of our customers,” says Kirk Skaugen, President of Lenovo ISG. “From advancing financial service capabilities, to upgrading the retail experience, to improving the efficiency of our cities, our hybrid approach enables businesses with AI-ready and AI-optimised infrastructure, taking AI from concept to reality and empowering businesses to efficiently deploy powerful, scalable and right-sized AI solutions that drive innovation, digitalisation and productivity.”

In collaboration with AMD, Lenovo is also delivering the ThinkSystem SR685a V3 8GPU server, bringing customers extreme performance for the most compute-demanding AI workloads, inclusive of GenAI and Large Language Models (LLMs). 
The powerful innovation provides fast acceleration, large memory, and the I/O bandwidth to handle the huge data sets needed for advances in the financial services, healthcare, energy, climate science, and transportation industries. The new ThinkSystem SR685a V3 is designed both for enterprise private on-prem AI and for public AI cloud service providers.

Additionally, Lenovo is bringing AI inferencing and real-time data analysis to the edge with the new Lenovo ThinkAgile MX455 V3 Edge Premier Solution with AMD EPYC™ 8004 processors. This versatile, AI-optimised platform delivers new levels of AI, compute, and storage performance at the edge with the best power efficiency of any Azure Stack HCI solution. Offering turnkey, seamless integration with on-prem and Azure cloud environments, Lenovo’s ThinkAgile MX455 V3 Edge Premier Solution allows customers to reduce TCO through unique lifecycle management, gain an enhanced customer experience, and adopt software innovations faster.

Lenovo and AMD have also unveiled a multi-node, high-performance, thermally efficient server designed to maximise performance per rack for intensive transaction processing. The Lenovo ThinkSystem SD535 V3 is a 1S/1U half-width server node powered by a single fourth-gen AMD EPYC processor, engineered to maximise processing power and thermal efficiency for workloads including cloud computing and virtualisation at scale, big data analytics, high-performance computing, and real-time e-commerce transactions for businesses of all sizes.

Finally, to empower businesses and accelerate success with AI adoption, Lenovo has introduced, with immediate availability, Lenovo AI Advisory and Professional Services, offering a breadth of services, solutions and platforms designed to help businesses of all sizes navigate the AI landscape and find the right innovations to put AI to work for their organisations quickly, cost-effectively and at scale, bringing AI from concept to reality. 
For more from Lenovo, click here.

Axis introduces new open hybrid cloud platform
Axis Communications, a network video specialist, has introduced Axis Cloud Connect, an open hybrid cloud platform designed to provide customers with more secure, flexible, and scalable security solutions. Together with Axis devices, the platform enables a range of managed services to support system and device management and video and data delivery, and to meet high demands in cybersecurity, the company says.

The video surveillance market is increasingly utilising connectivity to the cloud, driven by the need for remote access, data-driven insights, and scalability. While the number of cloud-connected cameras is growing at a rate of over 80% per year, the trend toward cloud adoption has shifted more towards the implementation of hybrid security solutions - a mix of cloud and on-premises infrastructure - using smart edge devices as powerful generators of valuable data. Axis Cloud Connect enables smooth integration of Axis devices and partner applications by offering a selection of managed services. To keep systems up to date and to ensure consistent system performance and cybersecurity, Axis takes added responsibility for hosting, delivering, and running digital services to ensure availability and reliability. The managed services enable secure remote access to live video operations and improved device management with automated updates throughout the lifecycle. The platform also offers user and access management for easy and secure control of user access rights and permissions.

According to Johan Paulsson, CTO, Axis Communications, “Axis Cloud Connect is a continuation of our commitment to deliver secure-by-design solutions that meet changing customer needs. 
This offering combines the benefits of cloud technology and hybrid architectures with our deep knowledge of analytics, image usability, cybersecurity, and long-term experience with cloud-based solutions, all managed by our team of experts to reduce friction for our customers.”

With a focus on adding security, flexibility and scalability to its offerings, Axis has also announced that it has built the next generation of its video management system (VMS) - AXIS Camera Station - on Axis Cloud Connect. Accordingly, Axis is extending its VMS into a suite of solutions that includes AXIS Camera Station Pro, Edge and Center. The AXIS Camera Station suite is engineered to more precisely match the needs of users, based on flexible options for video surveillance and device management, architecture and storage, analytics and data management, and cloud-based services.

AXIS Camera Station Edge: An easily accessible cam-to-cloud offering combining the power of Axis edge devices with Axis cloud services. It provides a cost-effective, easy-to-use, secure, and scalable video surveillance offering that is reportedly easy to install and maintain, with minimal equipment on-site: only a camera with an SD card is needed, or the AXIS S30 Recorder Series can be used, depending on requirements. This flexible and reliable recording solution offers straightforward control from anywhere, Axis claims.

AXIS Camera Station Pro: A powerful and flexible video surveillance and access management software suite for customers who want full control of their system. It ensures they have full flexibility to take control of their site and build the right solution on their private network, while also benefiting from optional cloud connectivity. It supports the complete range of Axis products and comes with all the powerful features needed for active video management and access control, including new features such as a web client, data insight dashboards, and improved search functionality. 
AXIS Camera Station Center: Providing new possibilities to effectively manage and operate hundreds or even thousands of cloud-connected AXIS Camera Station Pro or AXIS Camera Station Edge sites all from one centralised location. Accessible from anywhere, it enables aggregated multi-site device management, user management, and video operation. In addition, it offers corporate IT functionality such as Active Directory, user group management, and 24/7 support. Axis Cloud Connect, the AXIS Camera Station offering, and Axis devices are all built with robust cybersecurity measures to ensure compliance with industry standards and regulations. In addition, with powerful edge devices and the AXIS OS working in tandem with Axis cloud capabilities, users can expect high-level performance with seamless software and firmware updates. The new cloud-based platform and the solutions built upon it by both Axis and selected partners aim to create opportunity and value throughout the channel, allowing companies to utilise the cloud at a pace that makes the most sense for their evolving business needs. For more from Axis, click here.

Cloud is key in scaling systems to your business needs
by Brian Sibley, Solutions Architect at Espria

Meeting the demands of the modern-day SMB is one of the challenges facing many business leaders and IT operators today. Traditional, office-based infrastructure was fine up until the point where greater capacity was needed than those servers could deliver, vendor support became an issue, or the needs of a hybrid workforce weren’t being met. In the highly competitive SMB space, maintaining and investing in a robust and efficient IT infrastructure is one way to stay ahead of competitors.

Thankfully, with the advent of cloud offerings, a new scalable model has entered the landscape; whether it be 20 or 20,000 users, the cloud will fit all, and with it comes a much simpler, per-user cost model. This ability to integrate modern computing environments into the day-to-day workplace means businesses can now stop rushing to catch up, and with this comes the invaluable peace of mind that these operations will scale up or down as required. Added to which, the potential cost savings and added value will better serve each business and help to future-proof the organisation, even on a tight budget. Cloud service solutions are almost infinitely flexible, unlike traditional on-premises options, and won’t require in-house maintenance.

When it comes to environmental impact and carbon footprint, data centres are often thought to be a threat, contributing to climate change; but in reality, cloud is a great option. The scalability of cloud infrastructure and the economies of scale it leverages facilitate not just cost but carbon savings too. Rather than a traditional model where a server runs in-house at 20% capacity, using power 24/7/365 and pumping out heat, cloud data centres are specifically designed to serve multiple users more efficiently, utilising white-space cooling, for example, to optimise energy consumption. 
When it comes to the bigger players like Microsoft and Amazon, they are investing heavily in sustainable, on-site energy generation to power their data centres - even planning to feed excess power back into the National Grid. Simply put, it’s more energy efficient for individual businesses to use a cloud offering than to run their own servers - the carbon footprint of each business using a cloud solution becomes much smaller.

With many security solutions now being cloud-based too, security doesn’t need to be compromised and can be managed remotely via SOC teams, either in-house or through the security provider (where resources and specialist expertise are far greater). Ultimately, a cloud services solution encompassing servers, storage, security and more will best serve SMBs. It’s scalable, provides economies of scale, and relieves in-house IT teams of many mundane yet critical tasks, allowing them to focus on more profitable activities. For more from Espria, click here.

Latest version of StarlingX cloud platform now available
Version 9.0 of StarlingX - the open source distributed cloud platform for IoT, 5G, O-RAN and edge computing - is now available. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything telecom carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them. Container-based and highly scalable, StarlingX is used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low latency, high-performance use cases.

"StarlingX adoption continues to grow as more organisations learn of the platform’s unique advantages in supporting modern-day workloads at scale; in fact, one current user has documented its deployment of 20,000 nodes and counting,” says Ildikó Váncsa, Director, Community, for the Open Infrastructure Foundation. “Accordingly, this StarlingX 9.0 release prioritises enhancements for scaling, updating and upgrading the end-to-end environment. Also during this release cycle, StarlingX collaborated closely with Arm and AMD for a coordinated effort towards increasing the diversity of hardware supported. This collaboration also includes building out lab infrastructure to continuously test the project on a diverse set of hardware platforms.”

“Across cloud, 5G, and edge computing, power efficiency and lower TCO is critical for developers in bringing new innovations to market,” notes Eddie Ramirez, Vice President of Go-To-Market, Infrastructure Line of Business, Arm. “StarlingX plays a vital role in this mission and we’re pleased to be working with the community so that developers and users can leverage the power efficiency advantages of the Arm architecture going forward.”

Additional new features and upgrades in StarlingX 9.0 include:

▪ Transition to the Antelope version of OpenStack.

▪ Redundant/HA PTP timing clock sources: Redundancy is an important requirement in many systems, including telecommunications. This feature makes it possible to set up multiple timing sources, to synchronise from any of the available and valid hardware clocks, and to maintain an HA configuration down to the system clocks.

▪ AppArmor support: This security feature makes a Kubernetes deployment (and thus the StarlingX stack) more secure by restricting what containers and pods are allowed to do.

▪ Configurable power management: Sustainability and optimal power consumption are becoming critical in modern digital infrastructures. This feature adds the Kubernetes Power Manager to the StarlingX platform, allowing power control mechanisms to be applied to processors.

▪ Intel Ethernet operator integration: This feature allows for firmware updates and more granular interface adapter configuration on Intel E810 Series NICs.

Learn more about these and other features of StarlingX 9.0 in the community’s release notes.

A simple approach to scaling distributed clouds
StarlingX is widely used in production among large telecom operators around the globe, such as T-Systems, Verizon, Vodafone, KDDI and more. Operators are utilising the container-based platform for their 5G and O-RAN backbone infrastructures, along with relying on the project's features to easily manage the lifecycle of infrastructure components and services. Hardened by major telecom users, StarlingX is ideal for enterprises seeking a highly performant distributed cloud architecture. Organisations are evaluating the platform for use cases such as backbone network infrastructure for railway systems and high-performance edge data centre solutions. Managing forest fires is another new use case that has emerged and is being researched by a new StarlingX user and contributor.

OpenInfra Community Drives StarlingX Progress
The StarlingX project launched in 2018, with initial code for the project contributed by Wind River and Intel. Active contributors to the project include Wind River, Intel and 99Cloud. 
Well-known users of the software in production include T-Systems, Verizon and Vodafone. The StarlingX community is actively collaborating with several other groups such as the OpenInfra Edge Computing Group, ONAP, the O-RAN Software Community (SC), Akraino and more.
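StarlingX 9.0's AppArmor support operates at the Kubernetes layer, where an AppArmor profile is attached to each container to restrict what it may do. A minimal sketch of what that looks like in a pod manifest (plain Python emitting the JSON; the pod name, image, and profile are illustrative placeholders, and this is generic Kubernetes usage rather than StarlingX-specific configuration):

```python
import json

def apparmor_pod(name: str, image: str, profile: str = "runtime/default") -> dict:
    """Build a pod manifest whose container is confined by AppArmor.
    "runtime/default" applies the container runtime's default profile;
    a custom profile loaded on the node would be "localhost/<profile>"."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            # Pre-1.30 clusters use this per-container annotation;
            # Kubernetes 1.30+ also supports the
            # spec.containers[].securityContext.appArmorProfile field.
            "annotations": {
                f"container.apparmor.security.beta.kubernetes.io/{name}": profile
            },
        },
        "spec": {"containers": [{"name": name, "image": image}]},
    }

if __name__ == "__main__":
    print(json.dumps(apparmor_pod("web", "nginx:1.25"), indent=2))
```

With a profile applied, the kernel blocks the confined container from operations outside the profile's rules (file writes, capabilities, mounts), which is the "restricting what containers and pods are allowed to do" the release notes describe.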

Vultr launches Sovereign Cloud and Private Cloud to boost digital autonomy
Vultr, the world’s largest privately held cloud computing platform, has announced the launch of Vultr Sovereign Cloud and Private Cloud. They have been introduced in response to the increased importance of data sovereignty and the growing volumes of enterprise data being generated, stored and processed in ever more locations, from the public cloud to edge networks, IoT devices and generative AI. The announcement comes on the heels of the launch of Vultr Cloud Inference, which provides global AI model deployment and AI inference capabilities leveraging Vultr’s cloud-native infrastructure spanning six continents and 32 cloud data centre locations. With Vultr, governments and enterprises worldwide can ensure all AI training data is bound by local data sovereignty, data residency, and privacy regulations.

A significant portion of the world's cloud workloads is currently managed by a small number of cloud service providers in concentrated geographies, raising concerns - particularly in Europe, the Middle East, Latin America, and Asia - about digital sovereignty and control over data within their countries. Enterprises, meanwhile, must adhere to a growing number of regulations governing where data can be collected, stored, and used, while retaining access to native GPU, cloud, and AI capabilities to compete on the global stage. According to Accenture, 50% of European CXOs list data sovereignty as a top issue when selecting cloud vendors, with more than a third looking to move 25-75% of data, workloads, or assets to a sovereign cloud.

Vultr Sovereign Cloud and Private Cloud are designed to empower governments, research institutions, and enterprises to access essential cloud-native infrastructure while ensuring that critical data, technology, and operations remain within national borders and comply with local regulations. At the same time, Vultr provides these customers with access to the advanced GPU, cloud, and AI technology powering today’s leading AI innovators. 
This enables the development of sovereign AI factories and AI innovations that are fully compliant, without sacrificing reach or scalability. By working with local telecommunications providers, such as Singtel, and other partners and governments around the world, Vultr is able to build and deploy clouds managed locally in any region. Vultr Sovereign Cloud and Private Cloud guarantee data is stored locally, ensuring it is used strictly for its intended purposes and not transferred outside national borders or other in-country parameters without explicit authorisation.

Vultr also delivers technological independence through physical infrastructure, featuring air-gapped deployments and a dedicated control plane that is under the customer’s direct control, completely untethered from the central control plane governing resources across Vultr’s global data centres. This provides complete isolation of data and processing power from global cloud resources. To further ensure local governance and administration of these resources, Vultr Sovereign Cloud and Private Cloud are managed exclusively by nationals of the host country, resulting in an audit trail that complies with the highest standards of national security and operational integrity. For enterprises, Vultr combines Sovereign and Private Cloud services with ‘train anywhere, scale everywhere’ infrastructure, including Vultr Container Registry, which enables models to be trained in one location but shared across multiple geographies, allowing customers to scale AI models on their own terms.

“To address the growing need for countries to control their own data, and to reduce their reliance on a small number of large global tech companies, Vultr will now deploy sovereign clouds on demand for national governments around the world,” says J.J. Kardwell, CEO of Vultr’s parent company, Constant. 
“We are actively working with government bodies, local telecommunications companies, and other in-country partners to provide Vultr Sovereign Cloud and Private Cloud solutions globally, paving the way for customers to deliver fully compliant AI innovation at scale.”

Nasuni launches Nasuni Edge for Amazon S3
Nasuni, a hybrid cloud storage provider, has announced the general availability of Nasuni Edge for Amazon Simple Storage Service (S3), a cloud-native, distributed solution that allows enterprises to accelerate data access and delivery times through a single, unified platform, while ensuring the low-latency access that is crucial for edge workloads, including cloud-based artificial intelligence and machine learning (AI/ML) applications. Amazon S3 is an object storage service from Amazon Web Services (AWS) that offers scalability, data availability, security, and performance. Nasuni Edge for Amazon S3 supports petabyte-sized workloads and allows customers to run S3-compatible storage that supports select S3 APIs on AWS Outposts, AWS Local Zones, and customers’ on-premises environments.

The Nasuni cloud-native architecture is designed to improve performance and accelerate business processes. Immediate file access is essential across various industries, where remote facilities with limited bandwidth generate large volumes of data that must be quickly processed and ingested into Amazon S3. In addition, customers are increasingly looking for a unified platform for both file and object data access and protection, enabling them to address small and large-scale projects with on-prem and cloud-centric workloads through a single offering.

With an influx of large volumes of data, simplified storage is a priority for any business looking to collect and quickly process data at the edge in 2024. Forrester Research expects that 80% of new data pipelines in 2024 will be built for ingesting, processing, and storing unstructured data. Nasuni Edge for Amazon S3 is specifically designed for IT professionals, infrastructure managers, IT architects, and ITOps teams who want to improve application performance and data-driven workflow processes. 
It enables application developers to read and write via the Amazon S3 API to a global namespace backed by an Amazon S3 bucket, from multiple endpoints located within AWS Local Zones, AWS Outposts, and on-premises environments. Nasuni Edge for Amazon S3 enhances data access by providing local performance at the edge, multi-protocol read/write scenarios, and support for richer file metadata.

“Nasuni has been a long-time AWS partner, and this latest collaboration delivers the simplest solution for modernising an enterprise’s existing file infrastructure. With Nasuni Edge for Amazon S3, enterprises can support legacy workloads and take advantage of modern Amazon S3-based applications,” says David Grant, President, Nasuni. “Nasuni Edge for Amazon S3 allows an organisation to make unstructured data easily available to cloud-based AI services.”

In addition to providing fast access to a single, globally accessible namespace, the ability to ingest large amounts of data into a single location drives potential new opportunities for customer innovation and powerful new insights via integration with third-party AI services. Importantly, Nasuni’s multi-protocol support means these new data workloads are accessible from a vast range of existing applications without having to rewrite them. 
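Because every edge endpoint speaks the standard Amazon S3 API against the same bucket-backed namespace, an application can simply point its S3 client at the nearest endpoint. A minimal sketch of that idea — the endpoint URLs and bucket name below are hypothetical placeholders, not real Nasuni addresses:

```python
# Hypothetical mapping of deployment locations to S3-compatible edge
# endpoints. The URLs are illustrative placeholders only.
EDGE_ENDPOINTS = {
    "on-prem": "https://edge.corp.example.com",
    "local-zone": "https://edge-lax.example.com",
    "outpost": "https://edge-outpost.example.com",
}

def endpoint_for(location: str) -> str:
    """Return the S3-compatible endpoint URL for a given location."""
    try:
        return EDGE_ENDPOINTS[location]
    except KeyError:
        raise ValueError(f"unknown location: {location}") from None

# With boto3 (the standard AWS SDK for Python), the same read/write code
# works against any of these endpoints, because each one exposes the
# Amazon S3 API over the shared global namespace:
#
#   import boto3
#   s3 = boto3.client("s3", endpoint_url=endpoint_for("on-prem"))
#   s3.put_object(Bucket="global-namespace", Key="data/report.csv", Body=b"...")
```

The design point is that only the `endpoint_url` changes per site; application logic and bucket layout stay identical everywhere.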

Navigating the enterprise approach to public, hybrid, and private clouds
By Adriaan Oosthoek, Chairman at Portus Data Centers

With the rise of public cloud services offered by industry giants like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), many organisations have migrated or are considering migrating their workloads to these platforms. However, the decision is not always straightforward, as factors like cost, performance, security, vendor lock-in, and compliance come into play. Increasingly, enterprises must think about which workloads belong in the public cloud, the costs involved, and when a private cloud or a hybrid cloud setup might be a better fit.

Perceived benefits of public cloud

Enterprises view the public cloud as a versatile solution that offers scalability, flexibility, and accessibility. Workloads that exhibit variable demand patterns, such as certain web applications, mobile apps, and development environments, are well-suited to the public cloud. The ability to quickly provision resources and pay only for what is used may make it an attractive option for some businesses and applications. Public cloud offerings also typically provide a vast array of managed services, including databases, analytics, machine learning, and AI, and can enable enterprises to innovate rapidly without the burden of managing the underlying infrastructure. This is a key selling point of public cloud offerings.

But how have enterprises’ real-life experiences of public cloud set-ups compared against these expectations? Many have found the ‘pay-as-you-go’ pricing model to be very expensive and to have led to unexpected cost increases, particularly if workloads and usage spike unexpectedly or if customer packages have been provisioned inefficiently. If not very carefully managed, the costs of public cloud services have a tendency to balloon quickly. Public cloud providers and enterprises that have adopted public cloud strategies are naturally seeking to address these concerns. 
Enterprises are increasingly adopting cloud cost management strategies, including using cost estimation tools, implementing resource tagging for better visibility, optimising instance sizes, and utilising reserved instances or savings plans to reduce costs. Cloud providers offer pricing calculators and cost optimisation recommendations to help enterprises forecast expenses and increase efficiency. Despite these efforts, the public cloud has proved to be far more expensive for many organisations than originally envisaged, and managing costs effectively in public cloud set-ups requires considerable oversight, ongoing vigilance, and optimisation effort.

When private clouds make sense

There are numerous situations where a private cloud environment is a more suitable and cost-effective option. Workloads with stringent security and compliance requirements, such as those in regulated industries like finance, healthcare, or government, often necessitate the control and isolation provided by a private cloud environment, hosted in a local data centre on servers owned by the user. Many workloads with predictable and steady resource demands, such as legacy applications or mission-critical systems, may not need the flexibility of the public cloud and could incur much higher costs there over time. In such cases, a private cloud infrastructure offers much greater predictability and cost control, allowing enterprises to optimise resources based on their specific requirements. And last but not least, once workloads are in the public cloud, vendor lock-in occurs: it is notoriously expensive to repatriate workloads back out of the public cloud, mainly due to excessive data egress costs.

Hybrid cloud

It is becoming increasingly clear that most organisations will benefit most from a hybrid cloud setup. Simply put, ‘horses for courses’. 
Only put those workloads that will benefit from the public cloud’s specific advantages into the public cloud, and keep the other workloads within your own control in a private environment. Retaining a private environment does not require an enterprise to own or run its own data centre. Rather, it can take capacity in a professionally managed, third-party colocation data centre located in the vicinity of the enterprise’s own premises. Capacity in a colocation facility will generally be more resilient, efficient, sustainable, and cost-effective for enterprises compared to operating their own facilities. The private cloud infrastructure can also be outsourced, in a private instance. This is where regional and edge data centre operators such as Portus Data Centers come to the fore.

In most cases, larger organisations will end up with a hybrid cloud IT architecture to benefit from the best of both worlds. This will require careful consideration of how to seamlessly pull those workloads together through smart networking. Regional data centres with strong network and connectivity options will be crucial to serving this demand for local IT infrastructure housing. The era in which enterprises went all-in on the cloud is over. While the public cloud offers scalability, flexibility, and access to cutting-edge technologies, concerns about cost, security, vendor lock-in, and compliance persist. To mitigate these concerns, enterprises must carefully evaluate their workloads and determine the most appropriate hosting environment. 
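The cost trade-off behind this ‘horses for courses’ advice — elastic pay-as-you-go capacity for spiky workloads versus committed private or reserved capacity for steady ones — can be illustrated with a back-of-the-envelope calculation. All rates below are hypothetical, chosen only to show the break-even logic, not actual provider pricing:

```python
# Hypothetical hourly rates; real prices vary by provider, region,
# instance size, and commitment term.
ON_DEMAND_HOURLY = 0.10   # billed only while the instance runs
COMMITTED_HOURLY = 0.06   # reserved/private capacity, paid regardless of use
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, utilisation: float) -> float:
    """Monthly cost of one instance at a given utilisation (0.0-1.0)."""
    return hourly_rate * HOURS_PER_MONTH * utilisation

def cheaper_option(utilisation: float) -> str:
    """Compare pay-as-you-go against committed capacity for one workload."""
    on_demand = monthly_cost(ON_DEMAND_HOURLY, utilisation)
    committed = COMMITTED_HOURLY * HOURS_PER_MONTH  # fixed, use it or not
    return "committed" if committed < on_demand else "on-demand"

print(cheaper_option(0.9))  # steady, mission-critical workload -> "committed"
print(cheaper_option(0.3))  # bursty development environment    -> "on-demand"
```

At these assumed rates the break-even utilisation is 60%: below it, elastic on-demand capacity wins; above it, committed (reserved or private) capacity is cheaper — which is exactly why predictable workloads tend to migrate out of pay-as-you-go pricing.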


