Feb 13, 2024

Closing AI’s Operational Gaps: VAST Partners with Run:ai

Posted by Neeloy Bhattacharyya, Director of Solutions Engineering for HPC/AI

AI is slowly but surely making its way into every aspect of our lives, both at home and at work. According to a recent Deloitte survey* of 2,800 organizations, most companies are focused on using AI for efficiency gains and cost savings; only 29% say they are currently working on AI that will help drive innovation and growth.

The same survey also highlights that, given the current focus on efficiency, most organizations rely on off-the-shelf generative AI solutions, such as productivity applications with integrated gen AI (71%) and publicly available LLMs (56%). From our conversations with customers at VAST, the reason is twofold: a) those approaches deliver plenty of easy-to-realize value, especially for generic tasks that cut across industries, and b) significant technology gaps must still be closed before enterprises can tackle complex, high-value use cases.

To help close some of those gaps, VAST Data has partnered with several industry leaders focused on helping enterprises operationalize hybrid AI pipelines. This week we announced a partnership with Run:ai. 

Until recently, most AI environments, with their heritage in HPC, relied on a combination of SLURM and other open-source technologies to orchestrate most aspects of the AI pipeline. The challenge is that these technologies derived their features and capabilities from environments where activities take place in a mostly linear fashion: a user gets access to a system; they load their data and run a set of jobs or experiments, each of which consumes 100% of the available GPUs for a period of time; and then the system is turned over to the next user.
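As a sketch of that linear model, a classic batch submission might look like the following. The script name, paths, and resource counts here are illustrative, not from any specific environment:

```shell
#!/bin/bash
# Illustrative SLURM batch script: one experiment claims all of a node's
# GPUs, runs to completion, and only then does the scheduler hand the
# hardware to the next user's job.
#SBATCH --job-name=train-model
#SBATCH --gres=gpu:8          # request all 8 GPUs on the node
#SBATCH --exclusive           # no sharing with other jobs
#SBATCH --time=48:00:00       # wall-clock limit for the run

srun python train.py --data /scratch/$USER/dataset
```

This works well when a handful of users take turns on a cluster, but it assumes each job can profitably consume every GPU it is handed.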


While VAST integrates well with those open-source tools, the orchestration needs of enterprises developing high-value AI use cases are very different. In an enterprise, thousands of AI pipelines need to run in parallel. Depending on its phase, a given pipeline may be able to leverage all the available GPUs or only a smaller subset. Different jobs will have different priorities and SLOs based on business goals. And most importantly, enterprises have strict data access, encryption, provenance, and lineage requirements.

If AI is going to be used for innovation, your general counsel had better be able to stand before a judge and state unequivocally: what data was used to train a model, what code was used for that training run, which prompts were used to arrive at the new idea and by whom, which responses the AI system provided, and what data was retrieved and incorporated into those responses.

Fortunately, we are not starting from scratch. Over the last decade, Kubernetes (K8s) has emerged as the de facto standard for enterprise software development, striking a good balance between enabling environment isolation (which was originally provided by virtualization) and providing common services such as networking and logging that all applications can benefit from. Run:ai is a market leader in showing organizations how K8s and containers can be used to orchestrate available GPU-based server resources to meet the needs of enterprise customers. 
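To make the contrast concrete, here is a minimal sketch of how a single pipeline stage might be expressed in Kubernetes, using only standard conventions (the NVIDIA device plugin's `nvidia.com/gpu` resource and Kubernetes priority classes). The names, image, and priority class are hypothetical; Run:ai layers its own GPU scheduling on top of primitives like these rather than replacing them:

```yaml
# Illustrative pod spec: a fine-tuning job that requests only the GPUs
# its current phase can use and carries a business-driven priority.
apiVersion: v1
kind: Pod
metadata:
  name: finetune-support-bot        # hypothetical pipeline stage
spec:
  priorityClassName: business-critical   # hypothetical PriorityClass
  containers:
    - name: trainer
      image: registry.example.com/ai/trainer:latest
      resources:
        limits:
          nvidia.com/gpu: 2          # a subset, not the whole cluster
```

Because each stage declares its own resource needs and priority, the scheduler can pack thousands of such pods onto shared GPU infrastructure instead of serializing whole-cluster reservations.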


VAST is at the center of today’s AI action with a data platform for the age of deep learning. The capabilities of the VAST Data Platform are ideal to support the entire AI pipeline, providing the comprehensive software infrastructure required to capture, catalog, refine, enrich, store, and secure unstructured data with real-time analytics for deep learning.  


Integrating technologies is never as easy as it appears on a slide, even with ChatGPT helping. That is why Run:ai and VAST have joined forces to create blueprints that enable more efficient, effective, and innovative AI operations at scale.  

Enterprises will learn how to create an internal developer platform that exposes on-premises/customer-managed GPU and data services alongside normalized versions of services offered by both traditional and AI-focused clouds. A service provider-focused blueprint will also be available to help CSPs create service offerings that are easier for enterprises to consume.

These blueprints will first be made available at GTC 2024.

Interested in learning more or discussing this further? Please contact us and we’ll be happy to help you unlock the full potential of your data. Or book some time with us at GTC! 
