Fujitsu has introduced a new storage solution that leverages software-defined storage technology to enable enterprises to master petabytes of data distributed across multiple data centres and the cloud. The disruptive file data platform from Qumulo is said to be the most advanced solution for managing and accessing file data, paving the way for the creation of new services and applications in large-scale enterprise storage solutions.
Enterprises are recognising the opportunities from analysing multiple sources of data to supercharge business operations. Processes as diverse as diagnostic imaging, modelling, simulations, LIDAR, GIS, genetic sequencing and video production all revolve around the creation and use of unstructured data. However, managing substantial amounts of file data is often challenging, especially as it can be distributed across the network edge (where IoT devices generate it), on-premises systems and the cloud.
Qumulo’s file system delivers real-time visibility, scale, and control of data across on-premises and cloud, even at a granular level, allowing easy system configuration and performance management.
Getting to grips with unstructured data
Specifically designed for hybrid environments spanning the data centre, private and public clouds, Qumulo makes it possible for users to share information, while supporting multiple storage protocols that enable data consolidation. This allows for easier management of data in distributed environments and provides the ability to absorb unpredictable, unstructured data growth and cope with data demands from increasing numbers of applications, both on and off the cloud.
“We’re seeing plenty of claims that data is the new gold,” says Olivier Delachapelle, Head of Category Management, Product Sales Europe at Fujitsu. “That’s certainly the case, but before the data is of any value, all those petabytes of unstructured data must be mined, managed and refined. Fujitsu’s new approach with the Qumulo solution is a custom-built, high-performance information repository. It scales without limits and enables customers to rapidly sift through vast amounts of data to find the nuggets of true value.”
Thore Rabe, Vice President and General Manager EMEA at Qumulo comments: “Organisations need a file data platform to meet the scale, performance, and cloud requirements for their unstructured data environments. Qumulo is excited to be partnering with Fujitsu to enable a seamless transition to modern infrastructure that will accelerate time-to-results and cost containment.”
Qumulo solution plays a key role in Fujitsu’s data-driven transformation strategy
The Qumulo solution fits into the second stage of Fujitsu’s four-layer data-driven transformation strategy: creating the target data architecture once the data transformation baseline has been defined. Fujitsu’s broad portfolio for hybrid IT landscapes enables successful digital transformation by controlling, protecting and securing data while also maximising its business value.
Fujitsu works together with customers to define target architectures, analyse hybrid landscapes in terms of platforms, storage, workloads and data management, and then to provide appropriate solutions. Once an organisation has established a transformation baseline, the next step is the creation of a target data architecture to enable full access and control of data across edge, core and cloud.
Scalable, cloud-native Qumulo instances
The Qumulo solution rapidly scales to support tens of billions of files, and the cloud-native file system allows workloads to be seamlessly transitioned to the public cloud. This means customers can extend short-term computing power beyond their on-premises or hosted capabilities to manage spikes in demand and to give remote collaborators instant access to data. Businesses can also easily run microservices that leverage cloud data, via APIs enabling application-level access.
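Application-level access of this kind typically comes down to authenticated HTTP calls against a REST API. As a minimal sketch using only the Python standard library (the base URL, path layout and token format below are invented for illustration and do not reflect Qumulo's actual API):

```python
from urllib.parse import quote
from urllib.request import Request

# Hypothetical service endpoint: Qumulo does publish a REST API, but this
# base URL and path scheme are placeholders invented for the example.
API_BASE = "https://files.example.com/v1"

def build_read_request(path: str, token: str) -> Request:
    """Build (but do not send) an authenticated GET for a file's attributes."""
    return Request(
        # Percent-encode the file path so it can live inside a URL segment.
        f"{API_BASE}/files/{quote(path, safe='')}/attributes",
        headers={
            "Authorization": f"Bearer {token}",  # token format is a placeholder
            "Accept": "application/json",
        },
        method="GET",
    )
```

A microservice would then dispatch the prepared request with `urllib.request.urlopen(req)` and parse the JSON response.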
Real-time visibility and intelligent optimisation
Qumulo provides real-time visibility into data and users, enabling effective data management. Potential performance issues, such as massive spikes in demand, and longer-term trends can be identified at a glance. Built-in machine learning enables more than 90% of read requests to be handled in under one millisecond from the built-in cache. This is achieved by anticipating user demands, pre-loading the data most likely to be requested next, and automatically tiering data and optimising file management and storage media based on user behaviour. A flash-first strategy means that 100% of write processes and more than 98% of read processes are handled by much faster flash-based drives, while data not immediately required is stored on inexpensive but slower hard disk drives.
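The flash-first, prediction-assisted behaviour described above can be sketched as a toy model: every write lands on the fast tier, reads are served from flash when possible, and a simple frequency count of which block tends to follow which is used to prefetch the likely next request. This is an illustration of the general technique only, not Qumulo's implementation; all class and method names are invented.

```python
from collections import defaultdict

class TieredStore:
    """Toy flash-first store with access-pattern-based prefetching."""

    def __init__(self):
        self.flash = {}    # fast tier: fresh writes, promotions, prefetches
        self.disk = {}     # slow tier: colder data
        # follows[a][b] counts how often block b was read right after block a
        self.follows = defaultdict(lambda: defaultdict(int))
        self.last_read = None
        self.flash_hits = 0
        self.reads = 0

    def write(self, key, value):
        # Flash-first: 100% of writes go to the fast tier.
        self.flash[key] = value

    def demote(self, key):
        # A background tiering job would move cold data down; here it's explicit.
        if key in self.flash:
            self.disk[key] = self.flash.pop(key)

    def read(self, key):
        self.reads += 1
        if self.last_read is not None:
            self.follows[self.last_read][key] += 1  # learn the access pattern
        if key in self.flash:
            self.flash_hits += 1
            value = self.flash[key]
        else:
            value = self.disk[key]   # slow-path read from the cold tier
            self.flash[key] = value  # promote on access
        self.last_read = key
        self._prefetch(key)
        return value

    def _prefetch(self, key):
        # Pre-load whichever block has most often followed `key` in the past.
        history = self.follows.get(key)
        if history:
            likely = max(history, key=history.get)
            if likely in self.disk:
                self.flash[likely] = self.disk[likely]
```

After the model has seen a block "b" read immediately after a block "a", a later read of "a" pulls "b" back into the fast tier before it is requested, so the follow-up read is a flash hit rather than a slow-path fetch.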