VAST Data Platform for AI

  • Born in the Era of Scalable AI

  • Unrivaled Simplicity and Speed

LightSpeed Philosophy

Not just a product. A machine learning storage philosophy.

As organizations race to develop new AI-based products and services, this new class of computing interacts with data in ways never encountered before. This raises critical infrastructure planning questions that challenge how storage has been built for decades.

VAST combines a new hardware offering, blueprints for scalability, and forthcoming support for accelerated GPU computing to establish the VAST Data Platform as the platform of choice for intelligent applications.

  • The light-touch simplicity of NAS + the speed to support any AI ambition

  • Coupled with the scale & economics of VAST’s award-winning DASE™ architecture

LightSpeed

The storage platform for the next decade of machine intelligence.

New Storage Hardware for AI Workloads

Ideal Performance for AI

Designed for AI Workloads with the power of 3,000 HDDs in 2U

Unrivaled Flash Capacity

Over 1PB of capacity for training and inference data in every 2U device

Scale to Exabytes

Balance your GPU cluster scale with capacity- and performance-rich storage

Accelerated NFS for GPUs

Unrivaled Simplicity and Speed

Most enterprise storage systems adhere to the standard NFS/TCP model that has been the core offering of scale-out NAS providers for decades. New AI machines need more read throughput than the roughly 2GB/s a standard TCP NFS client can provide. This single-stream, non-RDMA limitation is why organizations abandon NFS in favor of parallel file systems, choosing performance over simplicity.

In 2020, VAST combined support for a number of NFS accelerations in the Linux kernel, making it possible to get best-in-class throughput for AI applications and GPU computing without requiring customers to adopt complex parallel file systems.

NVIDIA® GPUDirect® enables VAST servers to place data directly in GPU memory, bypassing CPU memory bottlenecks to increase throughput, eliminate CPU overhead, and lower latency for GPU I/O.
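
To make that data path concrete, below is a minimal sketch of a GPUDirect Storage-style read using NVIDIA's cuFile API, which places file data straight into GPU memory without staging it in host RAM. The mount path, transfer size, and minimal error handling are illustrative assumptions; this is not VAST-specific code.

```cpp
// Minimal sketch (not VAST-specific): read a file directly into GPU memory
// with NVIDIA's cuFile (GPUDirect Storage) API, bypassing the CPU bounce buffer.
// The mount path and transfer size below are hypothetical examples.
#include <cuda_runtime.h>
#include <cufile.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char  *path = "/mnt/vast/dataset.bin";  // hypothetical NFS mount path
    const size_t size = 1 << 20;                  // 1 MiB example transfer

    cuFileDriverOpen();                           // initialize the GDS driver

    int fd = open(path, O_RDONLY | O_DIRECT);     // cuFile requires O_DIRECT
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);            // register the file with cuFile

    void *devPtr = nullptr;
    cudaMalloc(&devPtr, size);                    // destination buffer in GPU memory
    cuFileBufRegister(devPtr, size, 0);           // register the buffer for direct DMA

    // The read lands in GPU memory without passing through host memory.
    ssize_t n = cuFileRead(fh, devPtr, size, /*file_offset=*/0, /*devPtr_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Build with nvcc and link against libcufile (-lcufile); frameworks that support GPUDirect Storage typically follow this same register-then-read pattern under the hood.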

Up to 40GB/s Per LightSpeed Enclosure

More Than 88GB/s to NVIDIA DGX-2™ Systems

Scalable Reference Architectures

Born in the Era of Scalable AI Clustering

VAST Data Platform clusters can scale to meet the needs of any AI ambition. These example cluster configurations help build balanced architectures that combine the speed needed to keep AI processors busy with the all-flash capacity needed for deep learning.

VAST delivered an all-flash solution at a cost that not only allowed us to upgrade to all-flash and eliminate our storage tiers, but also saved us enough to pay for more GPUs to accelerate our research. This combination has enabled us to explore new deep-learning techniques that have unlocked invaluable insights in image reconstruction, image analysis, and image parcellation both today and for years to come.

Bruce Rosen
Executive Director, Martinos Center for Biomedical Imaging