Universal Storage comes to Media, and the Enterprise, with SMB
With version 3.0, we at VAST are taking a big step toward fulfilling the promise of Universal Storage: a single storage system that is fast enough for primary storage, scalable enough for huge datasets, and affordable enough to use for the full range of a customer’s data, eliminating the tyranny of tiers. We are adding SMB (Server Message Block), the primary file protocol for Windows and macOS, to the NFS and S3 protocols our customers have been using to access the data on their VAST systems. Windows and Macintosh workstations and servers can now access files in the VAST namespace using their OS’s native file protocols.
That support for Macs and PCs is especially important to users in the media and entertainment business, where many of their key applications, from compositing to editing, run on PC and Mac workstations. Media companies are finding VAST’s Universal Storage attractive for several reasons:
- As they make the transition to 4K and even 8K resolution, media customers quickly discover that only an all-flash storage system can provide the performance editors need to stream uncompressed 4K assets without stuttering.
- Increasing resolution doesn’t just demand all-flash performance, it also demands scalability, as file sizes grow with every new camera format. The uncompressed 1080p that was state of the art just a few years ago took only a gigabyte or two of storage per minute of video; today’s 4K video can take up to 15GB for each minute. One of our customers expects to consume as much space with content created in the next two years as they use to hold their archive of over 100 years of content.
- While many media applications run on Windows and Mac workstations and therefore access their data via SMB, other tasks, like VFX rendering, are assigned to farms of Linux servers that access their data over NFS. CGI has expanded from a way to make the spaceships look more realistic in sci-fi flicks to more mainstream tasks like de-aging Robert De Niro in The Irishman, becoming an integral element of the post-production workflow rather than a specialty task to be farmed out.
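A quick back-of-the-envelope calculation, using the rough per-minute figures above, shows why capacity demands escalate so quickly as resolution climbs (the ingest-hours figure below is an illustrative assumption, not customer data):

```python
def storage_per_hour_gb(gb_per_minute: float) -> float:
    """Storage consumed by one hour of footage at a given per-minute data rate."""
    return gb_per_minute * 60

# Rough per-minute figures from the text: ~2GB for uncompressed-era 1080p,
# up to ~15GB for today's 4K formats.
hd_per_hour = storage_per_hour_gb(2)    # 120 GB per hour of 1080p
uhd_per_hour = storage_per_hour_gb(15)  # 900 GB per hour of 4K

# A facility ingesting 10 hours of 4K a day consumes roughly 9TB/day,
# on the order of 3PB a year -- capacity demands add up quickly.
daily_tb = uhd_per_hour * 10 / 1000
yearly_pb = daily_tb * 365 / 1000
```

At these rates, a 4K archive outgrows its 1080p predecessor by nearly an order of magnitude for the same number of hours of content.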
Media organizations have traditionally addressed the varying storage requirements of their applications with dedicated islands of storage for each stage in their workflows. A broadcaster, for example, may have 3-4 storage systems dedicated to editing, one for ingest of content from suppliers, another for transcoding, and so on, creating many systems to update, maintain, and manage space on. The Media Asset Management (MAM) system may automate moving data from place to place, but managing 20 or more storage systems is still a lot of work.
VAST’s true multi-protocol storage, which allows SMB, NFS, and S3 applications to access the same data in place, eliminates those islands of storage, allowing media customers to use a single system for everything from rendering special effects to feeding their internet streaming service and holding the most valuable of all a media company’s assets: their archive.
Broadcasters and studios also create islands of storage to prevent some users or applications from becoming noisy neighbors whose demands on a shared storage system interfere with other applications. To keep potential noisy neighbors from becoming a problem, VAST users can create server pools, allocating stateless VAST Servers, and therefore the performance they provide, to users or applications. Users can create multiple server pools to manage application performance while maintaining the simplicity of a single, multiprotocol namespace.
Even before the current production shutdown, the rapid expansion of internet OTT (Over The Top) networks like Disney+ and HBO Max had media companies turning to their archives like never before, only to discover that the spinning disks, or even worse the tape libraries, behind those archives just weren’t fast enough for them to properly monetize their content. As a professional sports league customer of VAST discovered, having multiple petabytes of flash capacity vastly accelerates the re-use of their library.
So why VAST for media?
- All-flash performance for 4K, 8K and beyond
- Scalability from petabytes to exabytes
- Scale performance independently of capacity to match applications
- With application performance isolation through server pools
- At the cost of archive storage
- Industry-leading resilience from VAST’s DASE (Disaggregated Shared Everything) architecture, which separates the storage media from the CPUs that manage that media and provide storage services, sharing that storage, including all the system metadata, among all the VAST Servers in the cluster
- Multiprotocol (SMB, NFS, S3) to any or all data
The smallest Universal Storage system, a single 675TB VAST Enclosure (a highly available NVMe-over-Fabrics JBOF that makes its Optane and QLC flash SSDs available to all the VAST Servers in a cluster) and four VAST Servers, delivers enough performance to edit or play back low-compression 4K UHD formats like ProRes 4444 or DNxHR 444 smoothly. Even more than with some other scale-out storage systems, VAST system performance is a function of scale. Bigger systems, with more VAST Servers, provide more bandwidth to support tens or hundreds of users accessing many petabytes of data.
True Multi-protocol Storage
Unlike some other systems that may support multiple protocols by slicing their capacity into multiple namespaces, VAST Universal Storage Systems present a single namespace that users can access as files over NFS and SMB, or as objects via S3. The VAST Element Store, which defines how VAST Universal Storage Systems store files and objects and the metadata that describes them, is neither a traditional file system nor an object store; it uses a metadata structure that abstracts the data layout and includes all the metadata objects each protocol requires, serving the hierarchical presentation of a file system as well as object access.
The result is a fully multi-protocol storage system that allows users to select the best tools for their job rather than settling for the best tools that work on their storage. Organizations that want to adopt cloud-style application architectures can develop new applications using S3 and have those applications share data with their existing NFS- or SMB-based apps, eliminating the need to synchronize multiple datasets.
Multi-protocol access will also simplify workflows for our media and entertainment customers. Animators can perform their 3D modeling in Houdini or Maya on PCs and save their data to an SMB share, while the render farm of Linux servers mounts the animator’s output folder via NFS.
Broadcasters can edit on PCs and Macs, have their transcoding servers read the results, and feed the CDN behind their video-on-demand OTT (Over The Top) network without the delays, and duplicates, created by moving assets between dedicated storage systems for each stage in the workflow.
To be truly universal, storage has to deliver all-flash performance. So we made sure that our SMB implementation delivered all the performance our users would ask for. Performance is, after all, a key part of the Universal Storage vision. Here, version 3.0 delivers single-stream performance fast enough to play back and edit the biggest 4K UHD files (500MB/s for a single connection), scaling to aggregate bandwidth in the hundreds of GB/s, all for less than other vendors charge for scale-out NAS using HDDs.
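As a rough sanity check on that 500MB/s figure, a codec’s bitrate can be converted into the storage bandwidth one stream needs. The bitrate below is an illustrative ballpark for a high-quality UHD mezzanine codec, not a published VAST or Apple specification:

```python
def required_mb_per_s(bitrate_mbps: float) -> float:
    """Convert a video bitrate in megabits/s to storage bandwidth in megabytes/s."""
    return bitrate_mbps / 8

# Assumed ballpark: ProRes 4444 at UHD frame rates runs on the order of
# 1,500-2,000 Mb/s, i.e. roughly 200-250 MB/s of storage bandwidth.
stream_mb_s = required_mb_per_s(2000)

# 500MB/s per connection leaves comfortable headroom for one such stream,
# and aggregate bandwidth in the hundreds of GB/s supports hundreds of them.
assert stream_mb_s < 500
```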
VAST Builds Protocols In-House
SMB is a lot more complex than NFS or S3 because, unlike those protocols, SMB is stateful. Each NFS or S3 HTTP request stands alone, but an SMB server has to maintain a lot of state information about every file any user has open, including the file and byte-range locks those users hold on files, and more.
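To make that contrast concrete, here is a minimal, purely illustrative sketch (not VAST’s implementation; the class and field names are our own) of the kind of per-session state an SMB server must keep alive between requests, which a stateless NFSv3 or S3 request never requires:

```python
from dataclasses import dataclass, field

@dataclass
class ByteRangeLock:
    offset: int
    length: int
    exclusive: bool

@dataclass
class OpenHandle:
    path: str
    locks: list = field(default_factory=list)

@dataclass
class SmbSession:
    """State the server must hold for as long as the session lives."""
    user: str
    handles: dict = field(default_factory=dict)  # handle id -> OpenHandle

    def open_file(self, handle_id: int, path: str) -> None:
        self.handles[handle_id] = OpenHandle(path)

    def lock(self, handle_id: int, offset: int, length: int) -> None:
        self.handles[handle_id].locks.append(ByteRangeLock(offset, length, True))

# One editor's session: an open file plus a byte-range lock the server
# must remember across requests. Each NFSv3 or S3 request, by contrast,
# carries everything the server needs, so there is no session to lose.
session = SmbSession(user="editor01")
session.open_file(1, "/projects/show/ep01.mov")
session.lock(1, offset=0, length=4096)
```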
Many storage vendors are intimidated by SMB’s complexity, and rather than build their own SMB stack, they bolt on third-party solutions like Samba, the open-source SMB stack for Linux. Those solutions, and the products that end up using them, have limitations that fall far short of universal storage. One scale-out, all-flash NAS vendor learned this lesson when they expanded their systems beyond Samba’s cluster scaling limit; their system can only scale SMB access to 1/5th as many nodes as NFS.
We knew we could do a better job on SMB if we built our own from scratch, so that’s what we did. Building SMB from scratch gave us two significant advantages.
The first advantage is that it allowed us to integrate the SMB services into the VAST Element Store, where an outside SMB server would have to treat the Element Store as a black-box POSIX file system. The VAST Element Store integrates NFS-, S3-, and SMB-specific metadata objects and concepts into a single metadatabase, simplifying, and therefore accelerating, access.
The second big advantage of creating our own SMB stack is the deep integration of the SMB stack into the DASE architecture and cluster management methods. The third-party SMB stacks that support scale-out at all include their own clustering that replicates state data from node to node, and if the reports we hear from customers are to be believed, still drop SMB connections when a cluster node fails.
VAST’s SMB service stores all the SMB state information in a VAST Enclosure’s 3D XPoint, the low-latency, high-endurance non-volatile memory behind Intel’s Optane SSDs, alongside the element store metadata. Storing SMB server state in shared 3D XPoint allows any of the stateless VAST Servers, which provide all the storage protocol and management services in a VAST Cluster and can directly access all the Optane and QLC SSDs in every VAST Enclosure, to check or set a lock or a lease in just a few microseconds. Since all the VAST Servers in the cluster access the same 3D XPoint directly, there’s no need to replicate state changes between nodes; just as with metadata, the shared 3D XPoint provides a single source of truth for SMB state while the VAST Servers remain stateless.
That means there’s no need to rebuild state when a VAST Server goes offline: another VAST Server assumes the offline server’s VIP (Virtual IP address) and, since it can access the state in 3D XPoint, picks up right where the failed server left off.
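A toy model shows the idea behind that failover: because session state lives in a store every server can reach, any surviving server can resume a failed server’s clients without replication or a rebuild. This is an illustration of the concept only, not VAST’s code, and all names are invented:

```python
class SharedStateStore:
    """Stands in for the shared 3D XPoint every VAST Server can reach."""
    def __init__(self):
        self.sessions = {}  # VIP -> session state, visible to all servers

class StatelessServer:
    def __init__(self, name: str, store: SharedStateStore):
        self.name = name
        self.store = store   # servers hold no session state of their own
        self.vips = set()

    def serve(self, vip: str, state: dict) -> None:
        self.vips.add(vip)
        self.store.sessions[vip] = state  # state goes to the shared store

    def take_over(self, vip: str) -> dict:
        # No rebuild, no replication: the state is already in the shared
        # store, so the new owner of the VIP resumes immediately.
        self.vips.add(vip)
        return self.store.sessions[vip]

store = SharedStateStore()
server_a = StatelessServer("server-a", store)
server_b = StatelessServer("server-b", store)
server_a.serve("10.0.0.5", {"open_files": ["ep01.mov"]})
# server-a goes offline; server-b assumes its VIP and picks up its sessions.
resumed = server_b.take_over("10.0.0.5")
```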
Direct access to the SMB server state, and metadata, eliminates the need for east-west network traffic that limits the maximum practical cluster size for other scale-out systems. Users can serve a single namespace through 100s or 1000s of VAST servers.
With SMB, Universal Storage becomes genuinely universal, providing a single namespace that supports the native protocols for 99 44/100% of the world’s computers. Our universe has now expanded from Linux servers using NFS and “modern” applications speaking S3 to include the billions of humans sitting in front of Macs and PCs who need their SMB access.
It’s going to be an exciting ride, and there is, of course, more to come we can’t tell you about yet, so climb aboard and see how Universal Storage can simplify your life.