
LightSpeed Philosophy

Not just a product. A machine learning storage philosophy.

As organizations race to develop new AI-based products and services, this new class of computing interacts with data in ways never before encountered, raising critical infrastructure planning questions that challenge how storage has been built for decades.

VAST combines a new hardware offering, blueprints for scalability & forthcoming support for accelerated GPU computing to establish Universal Storage as the platform of choice for intelligent applications.

The light-touch simplicity of NAS

The speed to support any AI ambition

Coupled with the scale & economics of VAST’s award-winning DASE™ architecture


New Storage Hardware for AI Workloads

The storage platform for the next decade of machine intelligence.

Accelerated NFS for GPUs

Unrivaled simplicity and speed

Most enterprise storage systems adhere to the standard NFS-over-TCP model that has been the core offering of scale-out NAS providers for decades. New AI machines need more than the 2 GB/s of read throughput that a standard TCP NFS client can provide. This single-stream, non-RDMA limitation is why organizations abandon NFS in favor of parallel file systems, choosing performance over simplicity.

In 2020, VAST combined support for a number of NFS accelerations already in the Linux kernel, making it possible to get best-in-class throughput for AI applications and GPU computing without requiring customers to adopt complex parallel file systems.

NVIDIA® GPUDirect® enables VAST servers to directly place data in GPU memory, bypassing CPU memory bottlenecks to increase throughput, eliminate CPU overhead and lower latency for GPU I/O.
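To make that data path concrete, the sketch below shows how a client application might read a file directly into GPU memory through NVIDIA's cuFile (GPUDirect Storage) API instead of staging it in a host bounce buffer. This is an illustrative assumption on our part, not VAST-specific code: the mount path is hypothetical and error handling is abbreviated.

```c
/* Minimal GPUDirect Storage read sketch using the cuFile API.
 * Illustrative only: the mount path is hypothetical and most error
 * handling is omitted. Build roughly as:
 *   gcc gds_read.c -o gds_read -lcufile -lcudart   (with CUDA include/lib paths set)
 */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <cuda_runtime.h>
#include <cufile.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *path = "/mnt/vast/dataset.bin";  /* hypothetical NFS-mounted file */
    const size_t size = 1UL << 30;               /* read 1 GiB */

    /* Open with O_DIRECT so reads bypass the host page cache. */
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Bring up the cuFile driver and register the file descriptor. */
    cuFileDriverOpen();
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    /* Allocate GPU memory and register it so DMA can land data in it
     * directly, with no intermediate copy through CPU memory. */
    void *dev_buf = NULL;
    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0);

    /* Read straight from storage into GPU memory. */
    ssize_t n = cuFileRead(handle, dev_buf, size, 0 /* file offset */, 0 /* buffer offset */);
    printf("read %zd bytes into GPU memory\n", n);

    /* Teardown. */
    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```

The conventional path would read() into a host buffer and then cudaMemcpy() it to the device; GPUDirect removes that extra hop, which is where the CPU-overhead and latency savings come from.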

Up to 40 GB/s per LightSpeed enclosure

More than 88 GB/s via NVIDIA DGX-2™ systems

Scalable Reference Architectures

Born in the era of scalable AI clustering.

Universal Storage clusters can scale to meet the needs of any AI ambition. These example cluster configurations can help build balanced architectures that combine the speed needed to keep AI processors working with the all-flash capacity needed for deep learning.

Example GPU Cluster Architectures
  • 1x LightSpeed Enclosure + 1x VAST Server Chassis: 40 GB/s, 1 PB*, 400K IOPS. Right-sized to provide balanced I/O for up to 16 GPU clients.
  • 5x LightSpeed Enclosures + 5x VAST Server Chassis: 200 GB/s, 5 PB*, 2M IOPS. Right-sized to provide balanced I/O for up to 80 GPU clients.
  • 10x LightSpeed Enclosures + 10x VAST Server Chassis: 400 GB/s, 10 PB*, 4M IOPS. Right-sized to provide balanced I/O for up to 160 GPU clients.

*Assumes 2:1 Data Reduction
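A back-of-the-envelope reading of these configurations (our own arithmetic, not a vendor sizing rule): each holds the same ratio of bandwidth to clients, so every GPU client can draw about 2.5 GB/s of read throughput, more than a single standard TCP NFS stream delivers and consistent with relying on the accelerated NFS client described above.

```latex
% Implied per-client bandwidth, identical across the three configurations
\frac{40\ \text{GB/s}}{16\ \text{clients}}
  = \frac{200\ \text{GB/s}}{80\ \text{clients}}
  = \frac{400\ \text{GB/s}}{160\ \text{clients}}
  = 2.5\ \text{GB/s per GPU client}
```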

“VAST delivered an all-flash solution at a cost that not only allowed us to upgrade to all-flash and eliminate our storage tiers, but also saved us enough to pay for more GPUs to accelerate our research. This combination has enabled us to explore new deep-learning techniques that have unlocked invaluable insights in image reconstruction, image analysis, and image parcellation both today and for years to come.”

Bruce Rosen, Executive Director

Martinos Center for Biomedical Imaging

Simplicity, performance, and scale to power the next decade of machine intelligence.