It seems like every week another ransomware attack against a major company is in the news—and those are only the attacks announced publicly. Gartner expects at least 75% of IT organizations to face one or more ransomware attacks by 2025. And many of the IT leaders I meet operate with a “not if or when, but how many” mindset.
It used to be enough for organizations to build a hardened network perimeter that satisfied security concerns, but the interconnectedness of today's on-prem, hybrid cloud, and SaaS applications presents a massive opportunity for bad actors to infiltrate your network.
That means the primary DR consideration for almost any organization is how fast you can recover. Having backup data is only part of the equation. You must ensure you can quickly restore those backups – and this is where legacy backup targets cannot help you.
The Evolving Nature of Restores
We used to think about recovery time objectives (RTOs) in hours, but today's sophisticated attacks that lock up entire IT estates have stretched those intervals to weeks, if not months. And while the nature of ransomware attacks is well understood, the path to recovery is less clear. Some organizations pay the attackers' ransom to (presumably) regain access to their data, while others immediately begin restoring data to the last known non-infected state. If you can do the latter well, you save your organization a massive ransom payout and, more importantly, protect yourself from future attacks.
The path to fast restores is typically dependent on a few variables:
Your network topology, or the path your data takes to move from source to destination and the time it needs to get there. Organizations often optimize bandwidth through data reduction, load balancing, throttling, etc.
Your number of data movers (physical or virtual), i.e. the backup agents running on your servers. Increasing the number of data movers in a system enables multi-streamed restores.
Your landing zone, or a clean network destination for the recovery of your immutable data, or indestructible snapshots, in the event of an attack.
Your data protection target. In enterprise data centers this is most often the bottleneck, as traditional backup appliances read (restore) data at roughly 20% of the speed at which they write (back up) it. So even if you do everything else right (the three variables above), restores are only as fast as the read performance of the backup target.
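To see how these variables interact, here is a back-of-envelope restore-time sketch. The throughput and stream counts are illustrative assumptions, not measurements from any specific appliance, and it assumes restores scale linearly with the number of data movers until the target saturates.

```python
# Back-of-envelope restore-time estimate. All figures below are
# illustrative assumptions, not benchmarks of any particular product.

def restore_hours(dataset_tb: float, read_mbps: float, streams: int) -> float:
    """Hours to restore `dataset_tb` terabytes, given per-stream read
    throughput from the backup target (MB/s) and the number of parallel
    data movers, assuming linear scaling across streams."""
    total_mb = dataset_tb * 1_000_000            # decimal TB -> MB
    seconds = total_mb / (read_mbps * streams)
    return seconds / 3600

# A hypothetical 100 TB estate restored over 4 data movers:
balanced = restore_hours(100, 500, 4)      # target reads as fast as it writes
read_limited = restore_hours(100, 100, 4)  # target reads at ~20% of write speed
print(f"{balanced:.1f} h if reads match writes; {read_limited:.1f} h if not")
```

Everything upstream being tuned perfectly changes nothing if `read_mbps` is the limiting term: the same estate takes five times longer when the target restores at 20% of its ingest speed.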
Purpose-built backup appliances (PBBAs) simply cannot deliver strong read performance. While they can write data quickly to disk and deduplicate it to optimize for capacity, these same characteristics inherently limit them from restoring large-scale data sets quickly, which is the name of the game in the ransomware era. PBBAs are a bit like Hotel California in that way: easy to get data in for a backup, but hard to get it out for a restore.
In fact, the smaller the system block size (Dell EMC PowerProtect/Data Domain uses variable block sizes between 4 KB and 8 KB, for example), the more individual blocks must be rehydrated via random IO when restoring data. Disk doesn't do random IO well, thanks to time-consuming disk head movement. (Read more about the "rehydration tax" in conventional storage systems in our data reduction white paper.) And when you consider that a large dataset was 500 GB when PBBAs came to market in the early 2000s, it's no wonder that today's backups strain the limits of that architecture. The result is that large, full-system recoveries via legacy backup appliances can take weeks.
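The rehydration tax above can be roughed out with arithmetic: one random read per stored block, bounded by disk seek latency rather than bandwidth. The block size, seek time, and spindle count below are assumed round numbers for illustration, not figures from any vendor.

```python
# Rough sketch of the "rehydration tax": restoring deduplicated data from
# spinning disk costs roughly one random read per stored block.
# All constants are assumptions chosen for illustration.

DATASET_BYTES = 10 * 10**12   # 10 TB restore
BLOCK_BYTES = 8 * 1024        # 8 KB average deduplicated block
SEEK_MS = 4.0                 # assumed avg HDD seek + rotational latency
SPINDLES = 24                 # assumed disks servicing random reads in parallel

blocks = DATASET_BYTES // BLOCK_BYTES
hours = (blocks * SEEK_MS / 1000) / SPINDLES / 3600
print(f"{blocks:,} random reads -> roughly {hours:.0f} hours, seek-bound")
```

Even with these generous assumptions, a 10 TB restore means over a billion random reads, and the head movement alone adds up to days. Sequential write (backup) performance never exposes this cost, which is why the limitation surfaces only at restore time.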
Consider it a modern data protection truth: deduplication on hard drives will never deliver fast restores. It's for this reason that Data Domain and other legacy PBBAs never discuss restore performance in their spec sheets, leaving customers to discover their recovery limitations at the worst possible time. Backup throughput and fitting your backup jobs into tight windows are important, but as my colleague Howard Marks likes to say, you only back up to restore.
Therefore, it's imperative to ensure that your backup storage has the performance to meet your RTO. For many organizations, this means a backup target built on all-flash storage that leverages global data reduction algorithms to optimize cost without sacrificing performance.
All-Flash Restores. Archive Economics.
I joined VAST to lead our data protection division and help customers understand how they could use an all-flash storage system for all of their data – backups and archives included.
VAST has flipped the script on backup target restore performance: the fundamental differences of our DASE architecture allow our system to read data 8x faster than it writes. So whereas legacy PBBAs like Dell PowerProtect/Data Domain are almost always the restore bottleneck, the VAST Data Platform is the fastest part of the restore path.
But performance is just one part of the equation. With VAST, customers enjoy an exabyte-scale platform with a single global data reduction pool, rather than multiple dedupe pools each storing copies of the same data.
Moreover, our architecture and data algorithms enable unprecedented levels of flash affordability and system longevity that eliminate the complexity of buying, deploying, and managing backup infrastructure. No longer do customers have to balance cost versus restore performance or worry about capacity and scale.
Don’t wait to get protected from the scourge of ransomware. Learn more about how VAST Data can help you protect your environment and meet your goals by scheduling a demo, or reaching out to email@example.com to learn more.