I’ve been in and around Enterprise IT for roughly 25 years, and for much of that time I’ve identified as a storage specialist. I’ve architected and implemented DAS with RAID arrays, NAS, SAN, unified, scale-out, block, file, object, HPC, HyperConverged, backup, archive, cloud, and data lake storage.
With so many applications moving to the public cloud, and modern software-defined technologies like all-flash HyperConverged Infrastructure commoditizing what’s left within the datacenter, one might get the impression that the days of storage evolution and innovation are behind us. This couldn’t be further from the truth; I’d argue that we’ve seen more evolution in the past 10 years than in the previous 40, with the last five having been extremely exciting. Storage media itself has come a long way, from magnetic tape to magnetic hard disk drives to NAND Flash, with interface technologies evolving from SCSI/IDE to Fibre Channel, SAS, and now NVMe. We have officially reached a point where storage media density and performance have exploded: a single modern 2.5” PCIe Gen5 datacenter SSD can deliver 15 GB/s (yes, gigabytes per second, not gigabits) while holding 15 TB of capacity, and these aren’t even the fastest or most dense SSDs available. In fact, building high-performance storage servers from these SSDs to support HPC environments has become a challenge in its own right, because in many cases the CPU and memory subsystem simply cannot keep up with the SSDs themselves.
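To make the “CPU can’t keep up” point concrete, here’s a rough back-of-envelope sketch. The lane width, drive count, and memory-channel figures below are my own illustrative assumptions for a generic dense NVMe server, not numbers from any specific product in this post:

```python
# Back-of-envelope: aggregate SSD bandwidth vs. a single socket's memory ceiling.
# All figures are illustrative assumptions, not vendor specs.

# PCIe Gen5 signals at 32 GT/s per lane with 128b/130b encoding,
# so usable bandwidth is roughly 3.94 GB/s per lane per direction.
PCIE5_LANE_GBPS = 32 * (128 / 130) / 8

ssd_gbps = PCIE5_LANE_GBPS * 4      # a typical x4 NVMe SSD: ~15.8 GB/s
drives = 24                         # a common 2U all-NVMe chassis
aggregate = ssd_gbps * drives       # ~378 GB/s of raw SSD bandwidth

# A single DDR5-4800 socket with 8 memory channels peaks at roughly:
mem_gbps = 8 * 4800e6 * 8 / 1e9     # ~307 GB/s

print(f"aggregate SSD: {aggregate:.0f} GB/s, memory ceiling: {mem_gbps:.0f} GB/s")
```

Even before accounting for protocol, checksum, or erasure-coding overhead, the drives alone can outrun the socket’s theoretical memory bandwidth, which is why these servers end up CPU- and memory-bound rather than disk-bound.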
The software-defined storage that enables HyperConverged platforms continues to evolve, and we’ve reached the point where 80%+ of typical corporate production datacenter workloads will likely perform fine on an all-flash HyperConverged platform, with the remaining workloads fitting the shrinking use cases of traditional three-tier shared storage arrays, albeit in their modern all-flash versions. New storage technologies are also entering the datacenter that have been designed differently from the ground up, built to align with modern cloud-consumption models and with NAND Flash and object storage in mind. Companies like Weka and VAST Data offer solutions on an entirely different plane of scale and performance compared with the more traditional datacenter players, while organizations like NetApp and Dell EMC work to rapidly evolve their storage platform capabilities to better align with where modern datacenter workloads are trending.
I also see continued consolidation of non-production, or secondary, storage across the enterprise. In addition to the primary production copy of data, organizations have commonly retained multiple copies of the same data across various access types and locations, fitting a variety of use cases across the data’s lifecycle: test, development, QA/UAT, disaster recovery, primary backup, secondary off-site backup, tertiary off-site backup, deep archive, and so on. In recent years, we’ve seen technologies and storage platforms emerge that can consolidate many of these secondary copies onto a single cost-efficient platform while still meeting the various RTO/RPO, retention, archive, redundancy, reliability, and immutability needs. While Cohesity is one of the most capable early players in this space, organizations like Rubrik are not far behind, with Dell EMC, NetApp, and IBM also seeing the light.
This is the first storage-centric post in our Modern Datacenter blog series, where we at SnowCap will explore evolving enterprise storage requirements, challenges, and business needs, along with the many modern storage solutions available to leverage in the Modern Datacenter as we continue the journey of Digital Transformation, both on-premises and in the cloud.
If your storage needs are evolving and you need help understanding how to navigate these scenarios, or if you’d like to challenge some of my ideas, I’d love to hear from you. Please feel free to comment in our social media forums.
Thanks!
-Sean