The rise of the VAST era has finally started to put the incumbent legacy guard on the defensive. For evidence of our rapid ascent, look no further than the 2021 storage users survey just published by Coldago Research, which shows our year-over-year doubling in end-user mindshare.
What’s less obvious is that this broad user study goes out to all file/object customers – not just the subset who buy petabyte-class systems, which is where VAST focuses. Mindshare is growing, but we are growing even faster within that subset. Earlier this year we announced that our business grew 4x year over year, and this year we’re (so far) on track to beat last year’s historic growth benchmark. The VAST ascent into the market has been both swift and jarring to legacy systems vendors, so it’s no surprise that they are starting to lean on old tricks in an attempt to slow us down.
We recently discovered that Dell has published a competitive storage comparison against VAST (Update: Dell keeps moving the URL; thanks to Chris Mellor for the screenshot). As the only privately-held company to receive such an honor, we take the fact that Dell is now publishing competitive positioning documents as a validation of our ascent. The whole situation is also a little awkward, given that Dell is a long-time investor in VAST, dating all the way back to our Series A.
The Dell comparison is an interesting one, as two things have become apparent:
- Isilon’s storage tiering is now being sold as a “value add feature” against VAST’s solution, whose primary value is to bring an end to the complexity and tradeoffs of tiering – a radical departure in philosophy driven by VAST’s innovation
- Almost none of the conversation is about the radical differences in storage architecture; instead it focuses on VAST not supporting a wide range of protocols (some of which are very old)
On point 1, Dell is right. We don’t offer an option for HDD-based storage or tiering to mechanical media… and we never will. Our Universal Storage offering is specifically engineered to deliver all-flash performance at archive economics, so that customers never need to compromise on I/O access for any of their data. Dell continues to sell hard drives and tiers because their architecture was not born in the age of hyperscale flash. At a technical level, these shared-nothing, born-in-the-age-of-HDD tiered storage systems are not designed to eliminate the cost penalty that customers have historically paid for flash. At a business level, the companies selling these systems have not really cared to optimize for the new generation of hyperscale flash, because they can always solve the cost problem by selling slower tiers of infrastructure and putting the burden of data management at the feet of application owners.
Additionally, legacy scale-out storage’s penchant for tiering is based on the assumption that applications enjoy a narrow, predefined view of their data. On the contrary, many of today’s game-changing AI and analytics applications derive the greatest value from being able to access and process all of their data in order to build the most accurate models. HDDs and tiering force organizations to devalue these data sets and compromise the modern application experience by subjecting applications to response times up to 100x slower than NVMe access.
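For context, the “100x” figure follows roughly from typical device latencies. A minimal back-of-the-envelope sketch, assuming ballpark values of ~5 ms for an HDD random read and ~50 µs for an NVMe read (these figures are typical published numbers, not measurements from this post):

```python
# Rough comparison of typical random-read latencies.
# Assumed ballpark values, not measurements from this post.
hdd_latency_s = 5e-3    # ~5 ms: average seek + rotation on a 7200 RPM hard drive
nvme_latency_s = 50e-6  # ~50 us: random read on an NVMe flash SSD

ratio = hdd_latency_s / nvme_latency_s
print(f"HDD random access is roughly {ratio:.0f}x slower than NVMe")
```

The exact multiple varies with drive model and queue depth, but the orders of magnitude (milliseconds vs. tens of microseconds) are what make tiering to mechanical media so painful for latency-sensitive workloads.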
On point 2, Dell is also right. We don’t support NDMP, FTP, or a proprietary REST API for data access. And we may never. Some features that were critical to storage systems 15 years ago are simply less relevant in the modern era. Features accumulate in storage products as they age… that doesn’t make one product better or worse than another. Customers vote with their wallets on the features that matter to them, and so far we’ve been doing pretty well on what matters.
So – when deciding on the future of your infrastructure – be certain to look beyond the battlecard and focus on what matters to your business, your users, and the future of your organization. Our modern architecture advantage and our fundamentally different data management philosophy are what have compelled leading enterprise customers such as the Broad Institute of MIT, Invitae, and Squarepoint Capital to make the leap from legacy scale-out solutions and propel the ascent of VAST Data.
Our success also helps us build out the features that matter.
Stay tuned for version 4.0 as we announce support for token ring and SMB1. No, we won’t.
In the meantime, do check out our upcoming webinar, where I speak with GigaOm analyst Enrico Signoretti about why modern workloads such as Artificial Intelligence, High-Performance Computing, and Big Data Analytics cannot be successful without access to ALL data.