Looking back, there are many significant milestones in the history of enterprise storage. For instance, writes Wes van den Berg, VP & GM, Pure Storage UK&I, we can trace the development of block storage to the early generations of computers, file storage emerging alongside the personal computer, and more recently, object storage rocketing to prominence with the take-off of the web.
All of these storage types fulfil important roles. Applications that draw on
file storage have grown more demanding, database applications have become more
sophisticated, and the web, IoT, and demand for analytics have caused an
explosion in the need for object storage.
But as the way in which we use data has shifted, the technical silos typically
observed between these workloads have created challenges which can no longer be
ignored. As a result, we are seeing the emergence of a new category of storage
to address the needs of modern data: unified fast file and object (UFFO) storage.
The modern data experience, whereby data is easily accessible, actionable and
delivered where it is needed instantly, does not respect the technical silos
that exist between different environments. In the past, each data workload
would largely have resided and been used within its own data store. For instance,
database applications would have drawn on dedicated block resources; files
would have stayed in file stores; and web applications would have depended upon
object storage resources.
This is not to say that file and object data stores have been starved of
innovation. Fast file emerged to consistently deliver high performance within
traditional architectures for small or large
files, as well as sequential or random
file workloads. However, with modern data requiring all of the above at the
same time, the limitations of this approach are clear. Similarly, object
storage underwent its own transformation. Initially built to house large
amounts of non-mission-critical data, it evolved into fast object in direct
response to the rise of cloud-native applications, which used object as their
default storage and so required higher performance levels to process their
workloads.
But what if we could bring both together and unleash their benefits in tandem
for enterprises? End users and IT leaders have asked themselves the same
question over and over again: if we could have multi-purpose, high-performance,
low-latency storage at a manageable cost from the outset, would we even have
storage tiers and different storage types? The answer is obvious – who would want the
operational complexity? So we must consider that as technology and economics
move on, there will be architectural implications.
Enter UFFO storage
These operational challenges have led to the emergence of the UFFO storage platform.
Built to combine the capabilities of fast file and object stores under one
roof, UFFO platforms let enterprises directly address their modern data
requirements and power the modern applications that will enable them to
innovate quickly and move forwards in an otherwise turbulent business
environment.
So what trends have driven the need for UFFO? Let’s look at the five drivers:
- The growth of machine-generated data: As enterprises increasingly use data-intensive applications, they require storage platforms that can read, store and action large data sets, providing real-time strategic insights without the fear of high latency or downtime.
- The rise and popularity of fast object: For real-time analytics, machine learning or AI applications to deliver a return on investment, their performance needs to be consistently high. Fast object storage has rapidly grown in demand for its ability to cost-effectively serve both ML and software development workflows.
- Re-using data across applications: More than ever before, the high-performance, data-heavy applications that enterprises rely upon today, be it real-time analytics or AI, require calling on multiple data sets from multiple applications. The convergence of fast file and object is a key facilitator here, allowing for easier data re-use and limiting the performance hit to any one application.
- Desire for reliable and consistent data performance: Emerging technologies like machine and deep learning rest upon throughput-hungry applications in technical computing environments. With UFFO, enterprises can deploy a massively parallel architecture that addresses the speed, reliability and performance issues often found in data-intensive applications.
- The potential disruption caused by ransomware attacks: It’s no secret that ransomware poses a significant challenge in both the private and public sectors. By converging fast file and fast object under one platform, enterprises can restore information from an immutable backup copy at rates as high as 270TB per hour in the event of an attack. This means a faster return to operations and minimal disruption to the business.
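To put the quoted restore rate in perspective, the recovery window for a given dataset follows from simple arithmetic. A minimal sketch, using the 270TB/hour figure from the driver above; the dataset sizes are hypothetical examples, not figures from the text:

```python
# Rough restore-time estimate from an immutable backup copy.
# The 270 TB/hour throughput figure comes from the article;
# the dataset sizes below are purely illustrative assumptions.

RESTORE_RATE_TB_PER_HOUR = 270

def restore_hours(dataset_tb: float) -> float:
    """Hours needed to restore a dataset at the quoted rate."""
    return dataset_tb / RESTORE_RATE_TB_PER_HOUR

for size_tb in (27, 135, 540):  # hypothetical dataset sizes
    print(f"{size_tb} TB -> {restore_hours(size_tb):.1f} h")
```

At this rate even a half-petabyte restore completes in about two hours, which is the basis for the faster return-to-operations claim.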
These five drivers are shaping the future of modern data demands. Today, many
organisations require either fast file or fast object, while some already need
both. However, if one thing is certain, it’s that before long most
organisations will face a set of challenges that will collectively require
both.
A single platform that delivers fast file and object with the multi-dimensional
performance these workloads require, and is underpinned by a focus on
simplicity both in architecture and manageability, is what the industry is
crying out for.
The ability to consolidate diverse workloads onto a single storage platform
provides investment protection by addressing current and future challenges.
Eliminating silos through this convergence also delivers efficiency gains both
in the data centre and for the staff who currently manage separate data
environments and struggle with the complexity this brings. This is why the
emergence of UFFO is of fundamental importance, and why it will power the
innovation of modern enterprises.