Designed for AI Workloads with the power of 3,000 HDDs in 2U
Not just a product. A machine learning storage philosophy.
As organizations race to develop new AI-based products and services, this new class of computing interacts with data in ways never before encountered, raising critical infrastructure-planning questions that challenge how storage has been built for decades.
VAST combines a new hardware offering, blueprints for scalability & forthcoming support for accelerated GPU computing to establish Universal Storage as the platform of choice for intelligent applications.
The light-touch simplicity of NAS + The speed to support any AI ambition
Coupled With The Scale & Economics Of VAST’s Award-Winning DASE™ Architecture
The Storage Platform for the Next Decade of Machine Intelligence
Ideal Performance for AI
Unrivaled Flash Capacity
Over 1PB of capacity for training and inference data in every 2U device
Scale to Exabytes
Balance your GPU cluster scale with capacity & performance-rich storage
Unrivaled Simplicity and Speed
Most enterprise storage systems adhere to the standard NFS/TCP model that has been the core offering of scale-out NAS providers for decades. New AI machines need more than the roughly 2GB/s of read throughput that a standard TCP NFS client can provide. This single-stream, non-RDMA limitation is why organizations abandon NFS in favor of parallel file systems, choosing performance over simplicity.
In 2020, VAST combined support for several NFS accelerations available in the Linux kernel, making it possible to achieve best-in-class throughput for AI applications and GPU computers without requiring customers to adopt complex parallel file systems.
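As a sketch of what those kernel-level accelerations look like in practice, a client can mount an export over NFS/RDMA with multiple connections per mount via the standard `nconnect` option. The server name and export path below are hypothetical placeholders, and exact option support depends on your kernel, RDMA NIC, and driver versions:

```shell
# Sketch: mount an NFSv3 export over RDMA (standard NFS/RDMA port 20049)
# with multiple connections per mount (nconnect). Hostname and export
# path are placeholders; verify option support on your own kernel.
sudo mount -t nfs \
  -o vers=3,proto=rdma,port=20049,nconnect=8 \
  vast-vip:/export /mnt/vast
```

Compared with a single TCP stream, spreading I/O across RDMA connections is what lifts a single client past the traditional NFS throughput ceiling.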
NVIDIA® GPUDirect® enables VAST servers to directly place data in GPU memory, bypassing CPU memory bottlenecks to increase throughput, eliminate CPU overhead and lower latency for GPU I/O.
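To check whether a client is prepared for GPUDirect Storage, NVIDIA ships a diagnostic utility with its GDS package; the install path below is an assumption that varies by CUDA version:

```shell
# Sketch: report GPUDirect Storage platform support (drivers, supported
# filesystems, NIC capabilities). Path varies by CUDA installation.
/usr/local/cuda/gds/tools/gdscheck -p
```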
Up to 40GB/s Per LightSpeed Enclosure
More Than 88GB/s via NVIDIA DGX-2™ Systems
Born in the Era of Scalable AI Clustering
Universal Storage clusters can scale to meet the needs of any AI ambition. These example cluster configurations help build balanced architectures that combine the speed to keep AI processors busy with the all-flash capacity that deep learning demands.
VAST delivered an all-flash solution at a cost that not only allowed us to upgrade to all-flash and eliminate our storage tiers, but also saved us enough to pay for more GPUs to accelerate our research. This combination has enabled us to explore new deep-learning techniques that have unlocked invaluable insights in image reconstruction, image analysis, and image parcellation both today and for years to come.