Features


How data centres can prepare for 2024 CSRD reporting
by Jad Jebara, CEO of Hyperview.

The CEO of Britain's National Grid, John Pettigrew, recently highlighted the grim reality that data centre power consumption is on track to grow 500% over the next decade. The time to take collective action around implementing innovative and sustainable data centre initiatives is now - and new legislation such as the Corporate Sustainability Reporting Directive (CSRD) is the perfect North Star to guide the future of data centre reporting.

This new EU regulation will impact around 50,000 organisations, including over 10,000 non-EU entities with a significant presence in the region. The CSRD requires businesses to report their sustainability efforts in more detail, starting this year. If your organisation is affected, you'll need reliable, innovative data collection and analysis systems to meet the strict reporting requirements.

CSRD replaces older EU directives and provides more detailed and consistent data on corporate sustainability efforts. It will require thousands of companies that do business in the EU to file detailed reports on the environmental impact and climate-related risks of their operations. Many of the metrics being assessed are also analysed within additional EU-wide initiatives. For instance, the Energy Efficiency Directive (EED) requires reporting on two Information and Communication Technology (ICT) metrics that also sit within the CSRD – ITEEsv and ITEUsv – allowing for enhanced measurement of, and insight into, server utilisation, efficiency and CO2 impact.

Given the anticipated explosion in energy consumption by data centres over the next decade, CSRD will shine a spotlight on the sustainability of these facilities. For example, the law will require organisations to provide accurate data on greenhouse gas emissions, covering Scope 1, 2 and 3. The essential metrics that data centres will need to report on include:

Power usage effectiveness (PUE) – measures the efficiency of a data centre's energy consumption
Renewable energy factor (REF) – quantifies the proportion of renewable energy sources used to power data centres
IT equipment energy efficiency for servers (ITEEsv) – evaluates server efficiency, focusing on reducing energy consumption per unit of computing power
IT equipment utilisation for servers (ITEUsv) – measures the utilisation rate of IT equipment
Energy reuse factor (ERF) – measures how much waste energy from data centre operations is reused or recycled
Cooling efficiency ratio (CER) – evaluates the efficiency of data centre cooling systems
Carbon usage effectiveness (CUE) – quantifies the carbon emissions generated per unit of IT workload
Water usage effectiveness (WUE) – measures the efficiency of water consumption in data centre cooling

While power capacity effectiveness (PCE) isn't a mandatory requirement yet, it is a measure that data centres should track and report on, as it reveals the total power capacity consumed over the total power capacity built.

If you haven't already, now is the time to ensure you have processes and systems in place to capture, verify, and extract this information from your data centres. We also recommend conducting a comprehensive data gap analysis to ensure that all relevant data will be collected. It's important to understand where your value chain will fall within the scope of CSRD reporting and how that data can be utilised in reporting that's compliant with ESRS requirements.
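To make a few of these ratios concrete, here is a minimal sketch of how they are typically calculated from annual facility figures; the numbers used are illustrative assumptions, not reporting guidance or real facility data.

```python
# Minimal sketch of how a few CSRD-relevant efficiency ratios are derived.
# All input figures are illustrative assumptions, not real facility data.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy_kwh / it_energy_kwh

def ref(renewable_energy_kwh: float, total_facility_energy_kwh: float) -> float:
    """Renewable energy factor: share of facility energy from renewable sources."""
    return renewable_energy_kwh / total_facility_energy_kwh

def cue(total_co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon usage effectiveness: CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_energy_kwh

def wue(water_litres: float, it_energy_kwh: float) -> float:
    """Water usage effectiveness: litres of water consumed per kWh of IT energy."""
    return water_litres / it_energy_kwh

if __name__ == "__main__":
    # Hypothetical annual figures for a single facility.
    total_kwh, it_kwh = 12_000_000, 8_000_000
    print(f"PUE: {pue(total_kwh, it_kwh):.2f}")            # 1.50
    print(f"REF: {ref(4_800_000, total_kwh):.2f}")         # 0.40
    print(f"CUE: {cue(3_600_000, it_kwh):.2f} kgCO2/kWh")  # 0.45
    print(f"WUE: {wue(14_400_000, it_kwh):.2f} L/kWh")     # 1.80
```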
For example, reports should be machine-readable, digitally tagged and separated into four sections – General, Environmental, Social and Governance. While the immediate impact of CSRD will be on reporting practices, the hope is that, over time, the new legislation will drive change in how businesses operate. The goal is that CSRD will incentivise organisations such as data centre operators to adopt sustainable practices and technologies, such as renewable energy sources and circular economy models.

Improving sustainability of data centres

Correctly selecting and leveraging Data Centre Infrastructure Management (DCIM) tooling that offers precise and comprehensive reports on energy usage is a paramount step in understanding and driving better sustainability in data centre operations. From modelling and predictive analytics to benchmarking energy performance, data centres that utilise innovative, comprehensive DCIM toolkits are perfectly primed to maintain a competitive operational advantage while prioritising a greener data centre future.

DCIM modelling and predictive analytics tools can empower data centre managers to forecast future energy needs more accurately, in turn helping data centres to optimise operations for maximum efficiency. Modelling and predictive analytics also enable proactive planning, ensuring that energy consumption aligns with actual requirements, preventing unnecessary resource allocation and further supporting sustainability objectives.

Real-time visibility of energy usage gives data centre operators insight into usage patterns and instances of energy waste, allowing changes to be made immediately. Ultimately, eliminating inefficiencies faster means fewer emissions and less energy waste. In addition to enhancing operational efficiency, leveraging these real-time insights aligns seamlessly with emission reduction goals, supporting a more sustainable and conscious data centre ecosystem.

Utilising the right DCIM tools can also reduce energy consumption by driving higher efficiency in crucial areas such as cooling, power provisioning and asset utilisation. They can ensure critical components operate at optimal temperatures, reducing the risk of overheating and preventing energy wastage. In addition to mitigating overheating and subsequent critical failures, maintaining optimal temperatures can also improve the lifespan and performance of the equipment.

The right DCIM toolkit enables businesses to benchmark energy performance across multiple data centres and prioritise energy efficiency, while also verifying the compliance of data centres with key environmental standards and regulations like CSRD. Cutting-edge DCIM platforms also enable data centres to correctly assess their environmental impact by tracking metrics such as power usage effectiveness (PUE), carbon usage effectiveness (CUE) or water usage effectiveness (WUE). These tools facilitate the integration of renewable energy sources, such as solar panels or wind turbines, into the power supply and distribution of green data centres.

As sustainability continues to move up the corporate agenda, expect to see greater integration of DCIM with AI and ML to collect and analyse vast quantities of data from sensors, devices, applications and users. In addition to enhancing the ease of data collection, this streamlined approach aligns seamlessly with CSRD emission reduction goals, making compliance with CSRD and similar regulations much easier for data centres.
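As an illustration of the kind of real-time check such a DCIM platform might run, here is a minimal sketch that flags racks drawing significant power while their servers sit largely idle; the telemetry fields and thresholds are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: flag racks whose power draw stays high while IT load is low,
# a rough proxy for energy waste. Field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class RackSample:
    rack_id: str
    power_kw: float       # measured power draw at the rack PDU
    utilisation: float    # average CPU utilisation of servers in the rack, 0..1

def flag_energy_waste(samples: list[RackSample],
                      min_power_kw: float = 5.0,
                      max_utilisation: float = 0.2) -> list[str]:
    """Return the IDs of racks drawing significant power while mostly idle."""
    return [s.rack_id for s in samples
            if s.power_kw >= min_power_kw and s.utilisation <= max_utilisation]

samples = [
    RackSample("rack-a1", power_kw=7.2, utilisation=0.12),  # high draw, idle servers
    RackSample("rack-a2", power_kw=6.8, utilisation=0.65),
    RackSample("rack-b1", power_kw=2.1, utilisation=0.05),  # low draw, below threshold
]
print(flag_energy_waste(samples))  # ['rack-a1']
```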
Taking a proactive approach to the data gathering requirements of CSRD and implementing technologies to support better sustainability practices isn't just about compliance or reporting; it's also about incentivising data centre operators to adopt sustainable practices and technologies. Ultimately, data centres that are prepared for CSRD will also be delivering greater value for their organisations while paving the way for a more sustainable future.

Lenovo introduces purpose-built AI-centric infrastructure systems
Lenovo Group has announced a comprehensive new suite of purpose-built AI-centric infrastructure systems and innovations to advance hybrid AI innovation from edge to cloud. The company is delivering GPU-rich and thermally efficient solutions intended for compute-intensive workloads across multiple environments and industries. In industries such as financial services and healthcare, customers are managing massive data sets that require extreme I/O bandwidth, and Lenovo is providing the IT infrastructure vital to the management of critical data.

Across all these solutions is Lenovo TruScale, which provides the ultimate flexibility, scale and support for customers to on-ramp demanding AI workloads completely as-a-service. Regardless of where customers are in their AI journey, Lenovo Professional Services says that it can simplify the AI experience as customers look to meet the new demands and opportunities of AI that businesses are facing today.

“Lenovo is working to accelerate insights from data by delivering new AI solutions for use across industries, delivering a significant positive impact on the everyday operations of our customers,” says Kirk Skaugen, President of Lenovo ISG. “From advancing financial service capabilities, upgrading the retail experience, or improving the efficiency of our cities, our hybrid approach enables businesses with AI-ready and AI-optimised infrastructure, taking AI from concept to reality and empowering businesses to efficiently deploy powerful, scalable and right-sized AI solutions that drive innovation, digitalisation and productivity.”

In collaboration with AMD, Lenovo is also delivering the ThinkSystem SR685a V3 8GPU server, bringing customers extreme performance for the most compute-demanding AI workloads, inclusive of GenAI and Large Language Models (LLMs). The powerful innovation provides fast acceleration, large memory, and the I/O bandwidth to handle huge data sets needed for advances in the financial services, healthcare, energy, climate science, and transportation industries. The new ThinkSystem SR685a V3 is designed both for enterprise private on-prem AI and for public AI cloud service providers.

Additionally, Lenovo is bringing AI inferencing and real-time data analysis to the edge with the new Lenovo ThinkAgile MX455 V3 Edge Premier Solution with AMD EPYC 8004 processors. The versatile AI-optimised platform delivers new levels of AI, compute, and storage performance at the edge with the best power efficiency of any Azure Stack HCI solution. Offering turnkey, seamless integration with on-prem and Azure cloud, Lenovo's ThinkAgile MX455 V3 Edge Premier Solution allows customers to reduce TCO with unique lifecycle management, gain an enhanced customer experience, and adopt software innovations faster.

Lenovo and AMD have also unveiled a multi-node, high-performance, thermally efficient server designed to maximise performance per rack for intensive transaction processing. The Lenovo ThinkSystem SD535 V3 is a 1S/1U half-width server node powered by a single fourth-gen AMD EPYC processor, and it's engineered to maximise processing power and thermal efficiency for workloads including cloud computing and virtualisation at scale, big data analytics, high-performance computing and real-time e-commerce transactions for businesses of all sizes.
Finally, to empower businesses and accelerate success with AI adoption, Lenovo has introduced, with immediate availability, Lenovo AI Advisory and Professional Services, offering a breadth of services, solutions and platforms designed to help businesses of all sizes navigate the AI landscape, find the right innovations to put AI to work for their organisations quickly, cost-effectively and at scale, and bring AI from concept to reality. For more from Lenovo, click here.

Axis introduces new open hybrid cloud platform
Axis Communications, a network video specialist, has introduced Axis Cloud Connect, an open hybrid cloud platform designed to provide customers with more secure, flexible, and scalable security solutions. Together with Axis devices, the platform enables a range of managed services to support system and device management and video and data delivery, and to meet high demands in cybersecurity, the company says.

The video surveillance market is increasingly utilising connectivity to the cloud, driven by the need for remote access, data-driven insights, and scalability. While the number of cloud-connected cameras is growing at a rate of over 80% per year, the trend toward cloud adoption has shifted more towards the implementation of hybrid security solutions, a mix of cloud and on-premises infrastructure, using smart edge devices as powerful generators of valuable data.

Axis Cloud Connect enables smooth integration of Axis devices and partner applications by offering a selection of managed services. To keep systems up to date and to ensure consistent system performance and cybersecurity, Axis takes added responsibility for hosting, delivering, and running digital services to ensure availability and reliability. The managed services enable secure remote access to live video operations and improved device management with automated updates throughout the lifecycle. The platform also offers user and access management for easy and secure control of user access rights and permissions.

According to Johan Paulsson, CTO, Axis Communications, “Axis Cloud Connect is a continuation of our commitment to deliver secure-by-design solutions that meet changing customer needs. This offering combines the benefits of cloud technology and hybrid architectures with our deep knowledge of analytics, image usability, cybersecurity, and long-term experience with cloud-based solutions, all managed by our team of experts to reduce friction for our customers.”

With a focus on adding security, flexibility and scalability to its offerings, Axis has also announced that it has built the next generation of its video management system (VMS) - AXIS Camera Station - on Axis Cloud Connect. Accordingly, Axis is extending its VMS into a suite of solutions that includes AXIS Camera Station Pro, Edge and Center. The AXIS Camera Station suite is engineered to more precisely match the needs of users based on flexible options for video surveillance and device management, architecture and storage, analytics and data management, and cloud-based services.

AXIS Camera Station Edge: An easily accessible cam-to-cloud solution combining the power of Axis edge devices with Axis cloud services. It provides a cost-effective, easy-to-use, secure, and scalable video surveillance offering. It is reportedly easy to install and maintain, with minimal equipment on-site; only a camera with an SD card is needed, or an AXIS S30 Recorder Series device can be used, depending on the requirements. This flexible and reliable recording solution offers straightforward control from anywhere, Axis claims.

AXIS Camera Station Pro: A powerful and flexible video surveillance and access management software for customers who want full control of their system. It ensures they have full flexibility to take control of their site and build the right solution on their private network, while also benefiting from optional cloud connectivity. It supports the complete range of Axis products and comes with all the powerful features needed for active video management and access control.
It includes new features such as a web client, data insight dashboards, improved search functionality and optional cloud connectivity.

AXIS Camera Station Center: Provides new possibilities to effectively manage and operate hundreds or even thousands of cloud-connected AXIS Camera Station Pro or AXIS Camera Station Edge sites, all from one centralised location. Accessible from anywhere, it enables aggregated multi-site device management, user management, and video operation. In addition, it offers corporate IT functionality such as Active Directory, user group management, and 24/7 support.

Axis Cloud Connect, the AXIS Camera Station offering, and Axis devices are all built with robust cybersecurity measures to ensure compliance with industry standards and regulations. In addition, with powerful edge devices and AXIS OS working in tandem with Axis cloud capabilities, users can expect high-level performance with seamless software and firmware updates.

The new cloud-based platform and the solutions built upon it by both Axis and selected partners aim to create opportunity and value throughout the channel, allowing companies to utilise the cloud at a pace that makes the most sense for their evolving business needs. For more from Axis, click here.

How DCs can develop science-based goals – and succeed at them
by Anthea van Scherpenzeel, Senior Sustainability Manager at Colt DCS.

Sustainability has swiftly evolved over the last 10 years from a nice-to-have to a top business concern. The data centre industry is one that is frequently criticised for its excessive energy usage and environmental effects. For example, data centres account for about a fifth of Ireland's total power use, while global data centre and network electricity consumption in 2022 was estimated to be up to 1.3% of global final electricity demand.

It is clear that responsible roadmaps and measurable targets are needed to lessen the impact of data centres' energy use. But some in the industry really don't know where to begin. The IEA asserts that the data centre industry urgently needs to improve, citing a variety of reasons such as a lack of environmental, social, and governance (ESG) data, a lack of internal expertise, or a culture that values performance and speed over environmental credentials.

In order to effectively address environmental challenges at the rate required to realise a global net-zero economy, it is imperative that science-based targets and roadmaps are established. It is the industry's duty to move forward with the most sustainable practices possible so that ESG impacts will be as small as possible when demand rises as a result of new technology and developing markets. It is also expected that by 2027 the AI sector alone will use as much energy as the Netherlands, making it more important than ever to take steps that are consistent with the science underlying the Paris Agreement.

Being accountable under science-based targets

Science-based targets underline the short- and long-term commitment of businesses to take action on climate change. Targets are considered science-based if they align with the goals of the Paris Agreement to limit global warming to 1.5°C above pre-industrial levels. More recently, the Intergovernmental Panel on Climate Change (IPCC) published its Sixth Assessment Report, reaffirming the near-linear relationship between the increase in CO2 emissions due to human activities and future global warming.

Colt DCS resubmitted its science-based targets in 2023 in line with the latest Net Zero Standard by the Science Based Targets initiative. The targets cover a range of environmental topics, including fuel, electricity, waste and water. These goals are vital not only for the data centres themselves but also to support customers in reaching their own net-zero goals.

Science-based targets and roadmaps tell businesses how much and how quickly they need to reduce their greenhouse gas (GHG) emissions if we are to achieve a global net-zero economy and prevent the worst effects of climate change. This extends to Scope 3 emissions – often the most challenging to track and manage – where data centre and business leaders must ensure that partners are on the same path to sustainable practices. Data centre leaders must commit, develop, submit, communicate, and disclose their science-based targets to remain accountable.

Although science-based targets are vital to improve environmental impact, data centre operators must not forget to cover all three pillars of ESG in their sustainability strategies. Many organisations focus on the ‘E’, as the granularity of the data makes it easier to assess; however, social and governance prove just as important.
Whether that's connecting with local communities, safeguarding, or ensuring that governance and reporting are up to scratch, a further focus on the ‘S’ and the ‘G’ can prove a key differentiator.

Turning targets into actions

With these science-based targets in place, it's time for data centres to turn targets into actions. Smart switches to new, more sustainable materials, technologies, and energy sources reduce the impact of data centres on the planet. For instance, switching to greener fuel options, procuring renewable energy, or choosing refrigerants with a lower global warming potential goes a long way in reducing a data centre's carbon footprint.

A cultural change is also needed to ensure that sustainability becomes a vital part of business strategy. As well as shifting internal mindsets, collaboration with customers and suppliers will be crucial to meeting targets. Being on the same page, and not just thinking of sustainability as a tick-box exercise, must be at the industry's core.

Going one step further, measuring progress is an essential part of the sustainability journey. Close relationships with partners and suppliers are a key part of effective reporting to track Scope 3 emissions – not just for data centres, but also for the businesses that use them. With the incoming EU Corporate Sustainability Reporting Directive (CSRD), this data sharing will prove more important than ever to gain a holistic overview of sustainability impacts along the entire value chain.

The future for the digital infrastructure market

The carbon footprint of the digital infrastructure market could be significantly reduced if the data centre industry adopted science-based targets and actions as best practices. The ultimate objective should focus on embedded actions that begin as soon as land is obtained, rather than just reviewing daily operations. Data centres must make the site lifecycle as sustainable as possible, encompassing construction, materials, equipment, and operations. Measuring embedded carbon is essential to tracking a project's overall impact. For more from Colt DCS, click here.
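To make the "how much and how quickly" of science-based targets concrete, here is a minimal sketch of a linear emissions-reduction pathway; the base-year figure is a hypothetical placeholder and the 4.2% annual reduction is only a commonly cited 1.5°C-aligned cross-sector rate, not Colt DCS's actual target.

```python
# Minimal sketch: a linear annual reduction pathway for Scope 1 and 2 emissions.
# The base-year figure and reduction rate are illustrative assumptions only.

BASE_YEAR = 2023
BASE_EMISSIONS_TCO2E = 100_000        # hypothetical Scope 1+2 footprint
ANNUAL_LINEAR_REDUCTION = 0.042       # ~4.2% of the base year, per year

def pathway(target_year: int) -> list[tuple[int, float]]:
    """Year-by-year emissions budget from the base year to the target year."""
    steps = []
    for year in range(BASE_YEAR, target_year + 1):
        years_elapsed = year - BASE_YEAR
        budget = BASE_EMISSIONS_TCO2E * (1 - ANNUAL_LINEAR_REDUCTION * years_elapsed)
        steps.append((year, max(budget, 0.0)))
    return steps

for year, budget in pathway(2030):
    print(f"{year}: {budget:,.0f} tCO2e")
# The 2030 budget works out at roughly 70,600 tCO2e, about a 29% cut on 2023.
```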

Cloud is key in scaling systems to your business needs
by Brian Sibley, Solutions Architect at Espria.

Meeting the demands of the modern-day SMB is one of the challenges facing many business leaders and IT operators today. Traditional, office-based infrastructure was fine up until the point where greater capacity was needed than those servers could deliver, vendor support became an issue, or the needs of a hybrid workforce weren't being met. In the highly competitive SMB space, maintaining and investing in a robust and efficient IT infrastructure can be one of the ways to stay ahead of competitors.

Thankfully, with the advent of cloud offerings, a new scalable model has entered the landscape; whether it be 20 or 20,000 users, the cloud will fit all, and with it comes a much simpler, per-user cost model. This facility to integrate modern computing environments into the day-to-day workplace means businesses can now stop rushing to catch up, and with this comes the invaluable peace of mind that these operations will scale up or down as required. Added to which, the potential cost savings and added value will better serve each business and help to future-proof the organisation, even on a tight budget. Cloud service solutions are almost infinitely flexible, unlike traditional on-premises options, and won't require in-house maintenance.

When it comes to environmental impact and carbon footprint, data centres are often thought to be a threat, contributing to climate change; but in reality, cloud is a great option. The scalability of cloud infrastructure and the economies of scale it leverages facilitate not just cost but carbon savings too. Rather than a traditional model where a server runs in-house at 20% capacity, using power 24/7/365 and pumping out heat, cloud data centres are specifically designed to run and cater for multiple users more efficiently, utilising white space cooling, for example, to optimise energy consumption. When it comes to the bigger players like Microsoft and Amazon, they are investing heavily in sustainable, on-site energy generation to power their data centres, even planning to feed excess power back into the National Grid. Simply put, it's more energy efficient for individual businesses to use a cloud offering than to run their own servers – the carbon footprint for each business using a cloud solution becomes much smaller.

With many security solutions now being cloud-based too, security doesn't need to be compromised and can be managed remotely via SOC teams, either in-house or via the security provider (where the resources are greater and the expertise far more specialist).

Ultimately, a cloud services solution, encompassing servers, storage, security and more, will best serve SMBs. It's scalable, provides economies of scale and relieves in-house IT teams from many mundane yet critical tasks, allowing them to focus on more profitable activities. For more from Espria, click here.
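As a back-of-the-envelope illustration of the per-user cost model described above, here is a minimal sketch comparing a fixed on-premises server cost with a per-user cloud subscription; every figure is a hypothetical placeholder, not Espria or vendor pricing.

```python
# Back-of-the-envelope sketch: fixed on-prem server cost vs per-user cloud cost.
# Every figure below is a hypothetical placeholder, not real vendor pricing.

ON_PREM_MONTHLY = 1_200.0      # amortised hardware, power, cooling and maintenance
CLOUD_PER_USER_MONTHLY = 25.0  # flat per-user subscription

def monthly_cost(users: int) -> tuple[float, float]:
    """Return (on-prem, cloud) monthly cost for a given headcount."""
    return ON_PREM_MONTHLY, CLOUD_PER_USER_MONTHLY * users

for users in (10, 20, 50, 100):
    on_prem, cloud = monthly_cost(users)
    cheaper = "cloud" if cloud < on_prem else "on-prem"
    print(f"{users:>3} users: on-prem £{on_prem:,.0f} vs cloud £{cloud:,.0f} -> {cheaper}")
# The fixed on-prem cost is the same whether the server sits at 20% or 100%
# utilisation; the per-user model scales down as well as up.
```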

Latest version of StarlingX cloud platform now available
Version 9.0 of StarlingX - the open source distributed cloud platform for IoT, 5G, O-RAN and edge computing - is now available. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything telecom carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them. Container-based and highly scalable, StarlingX is used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low latency, high-performance use cases.

“StarlingX adoption continues to grow as more organisations learn of the platform's unique advantages in supporting modern-day workloads at scale; in fact, one current user has documented its deployment of 20,000 nodes and counting,” says Ildikó Váncsa, Director, Community, for the Open Infrastructure Foundation. “Accordingly, this StarlingX 9.0 release prioritises enhancements for scaling, updating and upgrading the end-to-end environment.

“Also during this release cycle, StarlingX collaborated closely with Arm and AMD in a coordinated effort towards increasing the diversity of hardware supported,” Ildikó continues. “This collaboration also includes building out lab infrastructure to continuously test the project on a diverse set of hardware platforms.”

“Across cloud, 5G, and edge computing, power efficiency and lower TCO are critical for developers in bringing new innovations to market,” notes Eddie Ramirez, Vice President of Go-To-Market, Infrastructure Line of Business, Arm. “StarlingX plays a vital role in this mission and we're pleased to be working with the community so that developers and users can leverage the power efficiency advantages of the Arm architecture going forward.”

Additional New Features and Upgrades in StarlingX 9.0

Transition to the Antelope version of OpenStack.

Redundant / HA PTP timing clock sources. Redundancy is an important requirement in many systems, including telecommunications. This feature makes it possible to set up multiple timing sources that synchronise from any of the available and valid hardware clocks, and to have an HA configuration down to system clocks.

AppArmor support. This security feature makes a Kubernetes deployment (and thus the StarlingX stack) more secure by restricting what containers and pods are allowed to do (see the sketch at the end of this piece).

Configurable power management. Sustainability and optimal power consumption are becoming critical in modern digital infrastructures. This feature adds the Kubernetes Power Manager to the StarlingX platform, allowing power control mechanisms to be applied to processors.

Intel Ethernet operator integration. This feature allows for firmware updates and more granular interface adapter configuration on Intel E810 Series NICs.

Learn more about these and other features of StarlingX 9.0 in the community's release notes.

A simple approach to scaling distributed clouds

StarlingX is widely used in production among large telecom operators around the globe, such as T-Systems, Verizon, Vodafone, KDDI and more. Operators are utilising the container-based platform for their 5G and O-RAN backbone infrastructures, along with relying on the project's features to easily manage the lifecycle of the infrastructure components and services. Hardened by major telecom users, StarlingX is ideal for enterprises seeking a highly performant distributed cloud architecture.
Organisations are evaluating the platform for use cases such as backbone network infrastructure for railway systems and high-performance edge data centre solutions. Managing forest fires is another new use case that has emerged and is being researched by a new StarlingX user and contributor.

OpenInfra Community Drives StarlingX Progress

The StarlingX project launched in 2018, with initial code for the project contributed by Wind River and Intel. Active contributors to the project include Wind River, Intel and 99Cloud. Well-known users of the software in production include T-Systems, Verizon and Vodafone. The StarlingX community is actively collaborating with several other groups, such as the OpenInfra Edge Computing Group, ONAP, the O-RAN Software Community (SC), Akraino and more.
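As a minimal illustration of the AppArmor support noted in the feature list above, the sketch below confines a Kubernetes pod to an AppArmor profile using the Kubernetes Python client; the profile name, image and namespace are illustrative assumptions, and this shows generic Kubernetes usage rather than StarlingX-specific tooling.

```python
# Minimal sketch: confining a pod's container with an AppArmor profile via the
# Kubernetes Python client. The profile name, image and namespace are
# illustrative assumptions; StarlingX-specific tooling is not shown.
from kubernetes import client, config

def create_confined_pod(namespace: str = "demo") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="apparmor-demo",
            # Annotation-style AppArmor confinement (pre-Kubernetes 1.30): the
            # named profile must already be loaded on the node running the pod.
            annotations={
                "container.apparmor.security.beta.kubernetes.io/app":
                    "localhost/k8s-apparmor-example-deny-write"
            },
        ),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="app", image="busybox",
                                           command=["sleep", "3600"])],
            restart_policy="Never",
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

if __name__ == "__main__":
    create_confined_pod()
```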

Vultr launches Sovereign Cloud and Private Cloud to boost digital autonomy
Vultr, the world's largest privately-held cloud computing platform, has announced the launch of Vultr Sovereign Cloud and Private Cloud. They have been introduced in response to the increased importance of data sovereignty and the growing volumes of enterprise data being generated, stored and processed in even more locations, from the public cloud to edge networks and IoT devices to generative AI.

The announcement comes on the heels of the launch of Vultr Cloud Inference, which provides global AI model deployment and AI inference capabilities leveraging Vultr's cloud-native infrastructure spanning six continents and 32 cloud data centre locations. With Vultr, governments and enterprises worldwide can ensure all AI training data is bound by local data sovereignty, data residency, and privacy regulations.

A significant portion of the world's cloud workloads are currently managed by a small number of cloud service providers in concentrated geographies, raising concerns, particularly in Europe, the Middle East, Latin America, and Asia, about digital sovereignty and control over data within those countries. Enterprises, meanwhile, must adhere to a growing number of regulations governing where data can be collected, stored, and used, while retaining access to native GPU, cloud, and AI capabilities to compete on the global stage. 50% of European CXOs list data sovereignty as a top issue when selecting cloud vendors, with more than a third looking to move 25-75% of data, workloads, or assets to a sovereign cloud, according to Accenture.

Vultr Sovereign Cloud and Private Cloud are designed to empower governments, research institutions, and enterprises to access essential cloud-native infrastructure while ensuring that critical data, technology, and operations remain within national borders and comply with local regulations. At the same time, Vultr provides these customers with access to the advanced GPU, cloud, and AI technology powering today's leading AI innovators. This enables the development of sovereign AI factories and AI innovations that are fully compliant, without sacrificing reach or scalability. By working with local telecommunications providers, such as Singtel, and other partners and governments around the world, Vultr is able to build and deploy clouds managed locally in any region.

Vultr Sovereign Cloud and Private Cloud guarantee data is stored locally, ensuring it is used strictly for its intended purposes and not transferred outside national borders or other in-country parameters without explicit authorisation. Vultr also delivers technological independence through physical infrastructure, featuring air-gapped deployments and a dedicated control plane that is under the customer's direct control, completely untethered from the central control plane governing resources across Vultr's global data centres. This provides complete isolation of data and processing power from global cloud resources. To further ensure local governance and administration of these resources, Vultr Sovereign Cloud and Private Cloud are managed exclusively by nationals of the host country, resulting in an audit trail that complies with the highest standards of national security and operational integrity.
For enterprises, Vultr combines Sovereign and Private Cloud services with ‘train anywhere, scale everywhere’ infrastructure, including Vultr Container Registry, which enables models to be trained in one location but shared across multiple geographies, allowing customers to scale AI models on their own terms.

“To address the growing need for countries to control their own data, and to reduce their reliance on a small number of large global tech companies, Vultr will now deploy sovereign clouds on demand for national governments around the world,” says J.J. Kardwell, CEO of Vultr's parent company, Constant. “We are actively working with government bodies, local telecommunications companies, and other in-country partners to provide Vultr Sovereign Cloud and Private Cloud solutions globally, paving the way for customers to deliver fully compliant AI innovation at scale.” For more from Vultr, click here.

Nasuni launches Nasuni Edge for Amazon S3
Nasuni, a hybrid cloud storage provider, has announced the general availability of Nasuni Edge for Amazon Simple Storage Service (S3), a cloud-native, distributed solution that allows enterprises to accelerate data access and delivery times while ensuring the low-latency access that is crucial for edge workloads, including cloud-based artificial intelligence and machine learning (AI/ML) applications, all through a single, unified platform.

Amazon S3 is an object storage service from Amazon Web Services (AWS) that offers scalability, data availability, security, and performance. Nasuni Edge for Amazon S3 supports petabyte-sized workloads and allows customers to run S3-compatible storage that supports select S3 APIs on AWS Outposts, AWS Local Zones, and customers' on-premises environments. The Nasuni cloud-native architecture is designed to improve performance and accelerate business processes.

Immediate file access is essential across various industries, where remote facilities with limited bandwidth generate large volumes of data that must be quickly processed and ingested into Amazon S3. In addition, customers are increasingly looking for a unified platform for both file and object data access and protection, enabling them to address small and large-scale projects with on-prem and cloud-centric workloads through a single offering. With an influx of large volumes of data, simplified storage is a priority for any business looking to collect and quickly process data at the edge in 2024. Forrester Research expects that 80% of new data pipelines in 2024 will be built for ingesting, processing, and storing unstructured data.

Nasuni Edge for Amazon S3 is specifically designed for IT professionals, infrastructure managers, IT architects, and ITOps teams who want to improve application performance and data-driven workflow processes. It enables application developers to read and write, using the Amazon S3 API, to a global namespace backed by an Amazon S3 bucket from multiple endpoints located within AWS Local Zones, AWS Outposts, and on-premises environments. Nasuni Edge for Amazon S3 enhances data access by providing local performance at the edge, multi-protocol read/write scenarios and support for more file metadata.

“Nasuni has been a long-time AWS partner, and this latest collaboration delivers the simplest solution for modernising an enterprise's existing file infrastructure. With Nasuni Edge for Amazon S3, enterprises can support legacy workloads and take advantage of modern Amazon S3-based applications,” says David Grant, President, Nasuni. “Nasuni Edge for Amazon S3 allows an organisation to make unstructured data easily available to cloud-based AI services.”

In addition to providing fast access to a single, globally accessible namespace, the ability to ingest large amounts of data into a single location drives potential new opportunities for customer innovation and powerful new insights via integration with third-party AI services. Importantly, Nasuni's multi-protocol support means these new data workloads are accessible from a vast range of existing applications without having to rewrite them.
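Because the service exposes an S3-compatible API, a standard S3 client can, in principle, be pointed at a local edge endpoint. Below is a minimal sketch using boto3; the endpoint URL, bucket name and credentials are illustrative placeholders, not documented Nasuni Edge values.

```python
# Minimal sketch: writing and reading an object through an S3-compatible endpoint
# with boto3. The endpoint URL, bucket name and credentials are illustrative
# placeholders, not documented Nasuni Edge values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://nasuni-edge.example.internal",  # hypothetical local edge endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

BUCKET = "global-namespace-bucket"  # hypothetical bucket backing the global namespace

# Write a small object at the edge...
s3.put_object(Bucket=BUCKET, Key="sensors/site-42/readings.csv",
              Body=b"timestamp,value\n2024-05-01T00:00:00Z,3.14\n")

# ...and read it back through the same API.
response = s3.get_object(Bucket=BUCKET, Key="sensors/site-42/readings.csv")
print(response["Body"].read().decode())
```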

Navigating enterprise approach on public, hybrid and private clouds
By Adriaan Oosthoek, Chairman at Portus Data Centers.

With the rise of public cloud services offered by industry giants like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), many organisations have migrated or are considering migrating their workloads to these platforms. However, the decision is not always straightforward, as factors like cost, performance, security, vendor lock-in and compliance come into play. Increasingly, enterprises must think about which workloads belong in the public cloud, the costs involved, and when a private cloud or a hybrid cloud setup might be a better fit.

Perceived benefits of public cloud

Enterprises view the public cloud as a versatile solution that offers scalability, flexibility, and accessibility. Workloads that exhibit variable demand patterns, such as certain web applications, mobile apps and development environments, are well-suited to the public cloud. The ability to quickly provision resources and pay only for what is used may make it an attractive option for some businesses and applications. Public cloud offerings also typically provide a vast array of managed services, including databases, analytics, machine learning and AI, and can enable enterprises to innovate rapidly without the burden of managing underlying infrastructure. This is also a key selling point of public cloud offerings.

But how have enterprises' real-life experiences of public cloud set-ups compared against these expectations? Many have found the ‘pay-as-you-go’ pricing model to be very expensive and to have led to unexpected cost increases, particularly if workloads and usage spike unexpectedly or if the customer packages have been provisioned inefficiently. If not very carefully managed, the costs of public cloud services have a tendency to balloon quickly.

Public cloud providers and enterprises that have adopted public cloud strategies are naturally seeking to address these concerns. Enterprises are increasingly adopting cloud cost management strategies, including using cost estimation tools, implementing resource tagging for better visibility, optimising instance sizes, and utilising reserved instances or savings plans to reduce costs. Cloud providers offer pricing calculators and cost optimisation recommendations to help enterprises forecast expenses and increase efficiency. Despite these efforts, the public cloud has proved to be far more expensive for many organisations than originally envisaged, and managing costs effectively in public cloud set-ups requires considerable oversight, ongoing vigilance and optimisation effort.

When private clouds make sense

There are numerous situations where a private cloud environment is a more suitable and cost-effective option. Workloads with stringent security and compliance requirements, such as those in regulated industries like finance, healthcare or government, often necessitate the control and isolation provided by a private cloud environment, hosted in a local data centre on a server that is owned by the user. Many workloads with predictable and steady resource demands, such as legacy applications or mission-critical systems, may not need the flexibility of the public cloud and could potentially incur much higher costs there over time. In such cases, a private cloud infrastructure offers much greater predictability and cost control, allowing enterprises to optimise resources based on their specific requirements. And last but not least, once workloads are in the public cloud, vendor lock-in occurs.
It is notoriously expensive to repatriate workloads back out of the public cloud, mainly due to excessive data egress costs.

Hybrid cloud

It is becoming increasingly clear that most organisations will benefit most from a hybrid cloud setup. Simply put, ‘horses for courses’: only put those workloads that will benefit from the public cloud's specific advantages into the public cloud, and keep the other workloads under their own control in a private environment.

Retaining a private environment does not require an enterprise to have or run its own data centre. Rather, it should take capacity in a professionally managed, third-party colocation data centre located in the vicinity of the enterprise's own premises. Capacity in a colocation facility will generally be more resilient, efficient, sustainable, and cost-effective for enterprises compared to operating their own facilities. The private cloud infrastructure can also be outsourced, in a private instance. This is where regional and edge data centre operators such as Portus Data Centers come to the fore.

In most cases, larger organisations will end up with a hybrid cloud IT architecture to benefit from the best of both worlds. This will require careful consideration of how to seamlessly pull those workloads together through smart networking. Regional data centres with strong network and connectivity options will be crucial to serving this demand for local IT infrastructure housing.

The era where enterprises went all-in on the cloud is over. While the public cloud offers scalability, flexibility, and access to cutting-edge technologies, concerns about cost, security, vendor lock-in and compliance persist. To mitigate these concerns, enterprises must carefully evaluate their workloads and determine the most appropriate hosting environment.
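To illustrate the kind of like-for-like comparison this evaluation involves, here is a back-of-the-envelope sketch of the three-year cost of a steady workload under on-demand pricing, a reserved-term discount and a fixed private-cloud charge; every rate is a hypothetical placeholder, not real provider or colocation pricing.

```python
# Back-of-the-envelope sketch: three-year cost of a steady, predictable workload
# under on-demand cloud pricing, a reserved/committed-use discount, and a fixed
# private-cloud/colocation charge. Every rate below is a hypothetical placeholder.

HOURS_PER_YEAR = 24 * 365
YEARS = 3

ON_DEMAND_PER_HOUR = 0.80    # hypothetical hourly rate for the instance
RESERVED_DISCOUNT = 0.40     # hypothetical 40% discount for a committed term
PRIVATE_MONTHLY = 450.0      # hypothetical colocation + hardware amortisation

on_demand = ON_DEMAND_PER_HOUR * HOURS_PER_YEAR * YEARS
reserved = on_demand * (1 - RESERVED_DISCOUNT)
private = PRIVATE_MONTHLY * 12 * YEARS

for label, cost in (("on-demand", on_demand), ("reserved", reserved), ("private", private)):
    print(f"{label:>10}: €{cost:,.0f} over {YEARS} years")
# With placeholder rates the ranking can go either way; the point is to model the
# full commitment period (and egress costs, not shown here) rather than comparing
# hourly sticker prices.
```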

Own Company empowers customers to capture value from their data
Own Company, a SaaS data platform, has announced a new product, Own Discover, that reflects the company's commitment to empowering every company operating in the cloud to own its own data. With Own Discover, the company is expanding its product portfolio beyond its backup and recovery, data archiving, seeding, and security solutions to help customers activate their data and amplify their business. The new product will enable businesses to use their historical SaaS data to unlock insights, accelerate AI innovation, and more, in an easy and intuitive way.

Own Discover is part of the Own Data Platform, giving customers quick and easy access to all of their backed-up data in a time-series format so they can:

Analyse their historical SaaS data to identify trends and uncover hidden insights
Train machine learning models faster, enabling AI-driven decisions and actions
Integrate SaaS data with external systems while maintaining security and governance

“For the first time, customers can easily access all of their historical SaaS data to understand their businesses better, and I'm excited to see our customers unleash the potential of their backups and activate their data as a strategic asset,” says Adrian Kunzle, Chief Technology Officer at Own. “Own Discover goes beyond data protection to active data analysis and insights, and provides a secure, fast way for customers to learn from the past and inform new business strategies and growth.”


