What was RAID storage built for?
Storage architects for mainframe computers were feeling the heat from fast, inexpensive disk drives emerging from the PC market. A group of Berkeley researchers set out to explore a storage subsystem that could hide the intrinsic physics challenges of a single large HDD and turn a group of inexpensive smaller HDDs into a viable, reliable alternative to what was then mainframe storage. That effort was published in 1988 as "A Case for Redundant Arrays of Inexpensive Disks (RAID)" and became widely known as the Berkeley RAID paper.
1990s. RAID became the standard for protecting enterprise data. In the client/server era, drive counts and capacities exploded, as did the resources it took to manage them. Large enterprises dedicated teams to managing disk failures, performing hot swaps, data migrations and upgrades. It was quite the cat-herding process.
2000s. In the cloud era, where infrastructure elastically scales to meet demand, storing massive amounts of data on multi-terabyte commodity disks became cost-effective. But storage architects had bigger issues to solve.
How do you store data reliably, with minimal storage overhead and strong data integrity?
Creating replica copies to tolerate a certain number of failures is easy, but very expensive at cloud-scale. Conventional RAID has lower overhead, but it is limited in the number of failures it can tolerate (high risk of data loss). And then there’s the management. Herding all the cats (overseeing capacity, rebuilds, migrations, and degraded performance) would be off the charts.
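The trade-off can be made concrete with a back-of-envelope sketch. (This is purely illustrative; the functions and the 8+2 and 10+6 shard counts are hypothetical examples, not the parameters of any particular system.)

```python
def replication(copies):
    # n full copies: overhead = n times raw capacity, survives n - 1 losses
    return {"overhead": float(copies), "tolerates": copies - 1}

def parity_scheme(data_shards, parity_shards):
    # k data shards + m parity shards: overhead = (k + m) / k, survives
    # m simultaneous losses. Covers RAID-style parity and wider erasure
    # codes alike.
    total = data_shards + parity_shards
    return {"overhead": total / data_shards, "tolerates": parity_shards}

replication(3)        # 3.0x raw capacity to survive 2 drive losses
parity_scheme(8, 2)   # RAID 6-like: 1.25x capacity, survives 2 losses
parity_scheme(10, 6)  # wide erasure code: 1.6x capacity, survives 6 losses
```

The point the numbers make: triple replication pays 200% overhead to survive two losses, while a wide parity layout can survive six losses for only 60% overhead.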
A new approach was necessary.
One that supported the attributes of the cloud with the millions of applications and users depending on it for their business and personal lives.
Today. Object storage. It’s the millennial approach to cloud-based data storage, archival, retrieval and cost control. Why? Because it delivers significantly higher data reliability and virtually unlimited capacity expansion. Perfect for a hash-tagging, selfie-taking, always communicating and collaborating on-demand world.
Aren’t you happy with the millennial approach? Yeah, me too.
So, how’s this possible? Object storage uses a flat address space, creating a single massive pool of storage instead of files organized in a hierarchy. Each object gets a unique identifier, which makes it simple to retrieve without knowing its physical location.
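As a minimal sketch of that idea (the `ObjectStore` class and its method names are hypothetical, for illustration only, not any vendor’s API):

```python
import uuid

class ObjectStore:
    """Toy model of a flat namespace: one pool, no directory hierarchy."""

    def __init__(self):
        self._pool = {}  # unique ID -> object bytes

    def put(self, data: bytes) -> str:
        # The ID is opaque and location-independent; callers never need
        # to know which drive, node or rack actually holds the bytes.
        object_id = str(uuid.uuid4())
        self._pool[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._pool[object_id]  # retrieval needs only the ID

store = ObjectStore()
oid = store.put(b"holiday-selfie.jpg contents")
assert store.get(oid) == b"holiday-selfie.jpg contents"
```

Because nothing about the ID encodes a physical path, the system is free to move, replicate or re-encode the bytes behind the scenes, which is what makes the automation below possible.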
Basically, lots of automation and efficiency. Exactly what the cloud is all about.
Now add advanced erasure coding. You gain data protection and durability across the system, at the drive, node, rack and even data center level, that traditional architectures cannot match. Throw some juicy application software on top of the object stack and voila!
The risks, complexity, manageability and expansion headaches that traditional RAID-based storage would introduce at cloud-scale are avoided with object storage. Early adopters like Amazon (with S3™) and Facebook® helped pave the way, and now all major cloud storage service providers have implemented cloud-scale object storage architectures.
In the cloud, the idiom of herding cats does not apply with object storage.
Petabytes of capacity. Scale that is seamless and simple. Consistent performance. Automatic rebuilds. Are you backing up to the cloud? To object storage? If not, does this make you re-think your backup strategy?
I hope you will allow me one second to tell you about our object storage system. What if we told you that, along with all of the above, our Active Archive System delivers a total cost of ownership that rivals public cloud and, in many cases, archival tape? How about surviving an entire data center outage with fully protected assets?
Earth-shattering savings and data that’s always accessible? We take storage to the next level.
We, of course, are a little biased. So we invite you to try out our TCO calculator which showcases the value of object storage vs. private cloud, tape and D2D2D replication. Keeping tabs on your treasure is more economical than you think. We can prove it.
Here’s to no longer herding cats.