Sponsored Feature The world's appetite for data seems to be infinite, but enterprises' willingness to put up with expensive, proprietary, and often fragile infrastructure for storing that data may not be.
Total enterprise storage capacity shipments were up 25.2 percent to 156.1 exabytes in the second quarter of 2021, according to IDC. But raw capacity is only half the story. Organizations face the challenge of managing that capacity, both in terms of how they deploy it to match their changing workloads and applications, and in ensuring resilience and security.
Adding to the challenge of remaining agile in how they manage and deploy storage, enterprises face the issue of vendor lock-in. Storage is a lucrative market, dominated by a few key players, who have a vested interest in keeping customers tied into their hardware and software stacks, and paying handsomely for that privilege. Moreover, if users need to use multiple storage types - file, block or object - this typically means separate systems, and often multiple vendors.
These are not exactly new problems. In fact, they're pretty much the same problems that faced Sage Weil when he was undertaking his PhD at the University of California, Santa Cruz more than 15 years ago. Weil was sponsored by organizations including Los Alamos National Laboratory, Sandia National Laboratories and Lawrence Livermore National Laboratory, with the aim of producing an open source, software-defined storage system that could run on commodity hardware.
"In retrospect especially, but even at the time, there was a glaring hole in the market," he explains. "Everybody needed storage, it needed to be scalable, and there was no open source option; you had to buy expensive proprietary solutions."
The result of Weil's research was Ceph, an open source, software-defined storage platform that provides object, block and file storage on the same system. Following his graduation in 2007, Weil continued building out the platform, forming Inktank to provide services and support.
Ceph was quickly adopted by users, many of whom were wary of being locked into the singular visions promoted by some of the dominant storage vendors. Then in 2010, the Ceph client was integrated into the Linux kernel.
Opening up the Inktank
A major milestone came with the 2014 acquisition of Inktank, then just 50 strong, by open source pioneer Red Hat for $175m in cash. This ensured stability for the core development team and meant that Inktank adopted Red Hat's "pure open source model", opening up some previously proprietary software the firm had developed.
Further stability came with the launch of the Ceph Foundation in 2018, under the auspices of the Linux Foundation. Founding members included Canonical, DigitalOcean, Intel, Red Hat, SoftIron, SUSE, and Western Digital.
The creation of the foundation formalized a way for those organizations to contribute funds that could be managed and spent to further Ceph's development and the community.
Of course, when it comes to ease of use, predictability, and the potential for vast amounts of storage, some might argue that the easiest option is to switch to the public cloud, whose overall capacity has ballooned since Ceph's creation. But as Canonical's storage product leader Philip Williams notes, "for people with quite significant amounts of data, public cloud - and those traditional proprietary storage options - typically aren't cost effective or feasible."
This makes Ceph's original vision of a robust storage service, supporting object, block and file storage on the same system, even more attractive, particularly as that system is optimized for commodity components, including those a company might already have.
"You take a bunch of individual hard drives that can fail, a bunch of networks that can fail, plus switches and servers that all individually are very fallible," Weil explains. "You put them all together with Ceph and the net result is something that's highly reliable and tolerates any single point of failure - or in many cases many points of failure. It's highly available and highly scalable as well."
He adds: "It doesn't matter which vendor you're buying your hardware from, whether you're using hard drives or SSDs, or what kind of switches are in your network; it's fully software defined."
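The deterministic, software-defined placement Weil describes can be sketched in miniature. The toy below is not Ceph's actual CRUSH algorithm - the cluster map, placement-group count and hashing scheme are all simplified assumptions for illustration - but it shows the core idea: every client hashes an object to a placement group and then deterministically picks replica OSDs on distinct hosts, so placement is computed rather than looked up, and a failed OSD's data simply re-maps to a surviving one.

```python
# Toy sketch of hash-based replica placement in the spirit of Ceph's
# CRUSH algorithm. Real Ceph walks a weighted hierarchy of failure
# domains; this simplified flat map is a hypothetical stand-in.
import hashlib

PG_COUNT = 64  # number of placement groups (illustrative)

# Hypothetical cluster map: OSD id -> host (the failure domain).
CLUSTER_MAP = {
    0: "host-a", 1: "host-a",
    2: "host-b", 3: "host-b",
    4: "host-c", 5: "host-c",
}

def pg_for_object(name: str) -> int:
    """Hash an object name to a stable placement group (PG) id."""
    digest = hashlib.md5(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % PG_COUNT

def osds_for_pg(pg: int, replicas: int = 3, down: frozenset = frozenset()) -> list:
    """Deterministically pick `replicas` OSDs for a PG, one per host,
    skipping OSDs that are down. Every client computes the same answer,
    so no central lookup table is needed."""
    # Rank OSDs by a hash of (pg, osd) so each PG gets its own ordering.
    ranked = sorted(
        CLUSTER_MAP,
        key=lambda osd: hashlib.md5(f"{pg}:{osd}".encode()).hexdigest(),
    )
    chosen, used_hosts = [], set()
    for osd in ranked:
        host = CLUSTER_MAP[osd]
        if osd in down or host in used_hosts:
            continue  # tolerate failures; spread replicas across hosts
        chosen.append(osd)
        used_hosts.add(host)
        if len(chosen) == replicas:
            break
    return chosen

pg = pg_for_object("my-object")
healthy = osds_for_pg(pg)
# If one replica's OSD fails, the same computation yields a new home on
# a surviving host - the data is re-replicated, not lost.
degraded = osds_for_pg(pg, down=frozenset({healthy[0]}))
```

Real Ceph replaces the flat host map with a weighted hierarchy of racks, rows and data centers, and uses a far more robust hash, but the principle is the same: placement is computed, not looked up, so there is no central metadata bottleneck and no dependency on any particular vendor's hardware.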
This means that users have more flexibility in how they architect and grow their systems, rather than guesstimating their capacity needs years in advance and matching them to a rigid capital investment cycle. Likewise, because a single system can support multiple storage types, they aren't forced to split their investment across multiple architectures and vendors - which also typically means juggling multiple management tools.
And it frees them from the roadmaps laid out by proprietary storage hardware vendors – roadmaps that can impose arbitrary architectural constraints, such as ceilings on the number of disks that can be added to a system.
"Because it's so flexible and built to scale, Ceph doesn't require a lot of foreknowledge about where your organization's going to be in a couple of years' time," says Weil. "You can just expand your hardware footprint in whatever direction you end up growing."
As SoftIron's VP of Product, Craig Chadwell, adds, "Because of the way Ceph works and because organizations that comply with Ceph's operating model can have products that work together seamlessly, it means you can swap out a particular vendor's hardware without having to swap out Ceph. That means everything above the Ceph layer from a service delivery perspective is unaffected by the lower level technology changes."
How big do you want to go?
Given Ceph's roots in the academic world, it's perhaps no surprise that the platform is found in supercomputing and HPC-type applications such as Australia's National Computational Infrastructure (NCI) and the USAF's Eglin Air Force Base, which specializes in flight testing.
But it is also used to underpin commercial systems that demand resilience and scalability - testing by the Evaluator Group in 2020 showed deterministic performance at scale for 10 billion objects.
For example, THG Plc's SoftIron Ceph deployment, spanning America, Great Britain and Germany, powers object storage use cases such as warehouse data replication and backup solutions built with SoftIron technology partners like Veeam. In addition, SoftIron HyperDrive underpins the ecommerce giant's OpenStack-as-a-Service APIs and 'vanilla' bare-metal offerings that power well-known gaming, streaming and hosting brands, explains Schalk Van Der Merwe, THG's CTO.
"We see strong interest in direct SoftIron Ceph integrations with Kubernetes workloads," Van Der Merwe says. This is relevant because contention around storage has traditionally been a hot-button issue for internal IT. "Being able to unleash API integrations such as Kubernetes' CSI (Container Storage Interface) with SoftIron has significantly sped up the pace and confidence - both for development and operations teams. As we further invest we will be exposing a wider range of powerful API endpoints which our customers and partners can use as integration weapons alongside our managed ecommerce platform."
Finance and data giant Bloomberg also uses Ceph to underpin its OpenStack-powered private cloud infrastructure and its private S3 object stores. As Red Hat Data Foundation Architect Kyle Bader explains, the company has several customers "supporting north of a hundred petabytes of data."
Michael St-Jean, Senior Principal for Red Hat Hybrid Platforms explains, "Ceph is not only massively scalable to deal with expanding global data footprints, but it is highly extensible. It provides an S3-compatible object store for cloud-native applications, is built on APIs and provides the underlying storage for private cloud deployments built on OpenStack. Plus, in Kubernetes environments it delivers block, file and object storage classes and Kubernetes data services."
This growth is not stopping, and managing it becomes challenging at larger scales. At Eglin Air Force Base in northwest Florida, for example, each aircraft tested generates an average of 5.4PB of audio, video and telemetry data per year. The United States Air Force uses high performance computing to analyze that data from test flights, and sought to consolidate it on geographically dispersed production and dev-ops pods. With power, space, and management issues specific to edge deployments, as well as sensitive, mission critical data on the line, SoftIron's HyperDrive Ceph appliance was, SoftIron says, "the clear choice to deliver an efficient, secure, and supremely reliable storage platform that outperformed traditional, legacy infrastructure solutions."
Ease of use
Whether large or small, enterprises arguably have a sharper focus on issues like ease of use and cost performance than some of their counterparts in the research world. At the same time, a small business may have only 100TB of storage, but that 100TB is essential to it, and it will want something with an accessible interface "that just works".
"Over the last three to four years, there's been a huge investment of time and resources in the Ceph community on the usability front," says Weil. "We've created a whole new, integrated GUI dashboard for Ceph for management. We've also developed an orchestrator layer for Ceph that can call out to whatever tools you use to deploy it, so that you can do just about anything you need to do from the new GUI."
Ceph's most recent user survey certainly showed that users are ramping up the performance and breadth of their systems. The average number of clusters reported operational was five, with the average raw capacity of the largest clusters coming in at just under 19PB. Almost 90 percent of respondents were using HDDs, and SSD usage stood at 80 percent, while NVMe usage was at 49 percent, compared to 32 percent in 2018.
Users flagged scalability and high availability as key reasons for using Ceph, at 80 percent and 76 percent respectively. Over 90 percent of respondents cited the fact that Ceph is open as a reason for using it, a figure that has climbed dramatically over the last few years.
That desire for scalability will be further sated with the next major release of the storage platform, Quincy, which will be highlighted at the upcoming Cephalocon user event in July. Quincy has been subjected to large-scale testing at a variety of organizations, including the Pawsey Supercomputing Centre's 4,000 OSD (object storage daemon) cluster, which allowed for hardening of cephadm and the Ceph dashboard. Subsequent logical scale testing went to over 8,000 OSDs in a cluster.
So, just over 15 years since its launch, and ten years on from the launch of Inktank, the Ceph community feels it is poised for a decade of meteoric growth, supporting ever larger installations and ensuring it evolves in step with innovations in underlying hardware.
Sponsored by Ceph.