Vultr Launches GPU Stack and Container Registry
September 27, 2023

Vultr announced the launch of the Vultr GPU Stack and Container Registry to enable global enterprises and digital startups alike to build, test and operationalize artificial intelligence (AI) models at scale — across any region on the globe.

The GPU Stack supports instant provisioning of the full array of NVIDIA GPUs, while the new Vultr Container Registry makes AI pre-trained NVIDIA NGC models globally available for on-demand provisioning, development, training, tuning and inference.

Available across Vultr’s 32 cloud data center locations on all six continents, the new Vultr GPU Stack and Container Registry accelerate collaboration and the development and deployment of AI and machine learning (ML) models.

“Vultr is committed to enabling innovation ecosystems around the world – from Silicon Valley and Miami to São Paulo, Tel Aviv, Tokyo, Singapore, London, Amsterdam and beyond – providing instant access to high-performance cloud GPU and cloud computing resources to accelerate AI and cloud-native innovation,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “By working closely with NVIDIA and our growing ecosystem of technology partners, we are removing access barriers to the latest technologies, and offering enterprises the first composable, full-stack solution for end-to-end AI application lifecycle management. This enables data science, MLOps and engineering teams to build on a globally-distributed basis, without worrying about security, latency, local compliance, or data sovereignty requirements.”

Vultr GPU Stack is a finely tuned and integrated operating system and software environment that instantly provisions the full array of NVIDIA GPUs, pre-configured with the NVIDIA CUDA Toolkit, NVIDIA cuDNN and NVIDIA drivers, for immediate deployment. This solution removes the complexity of configuring GPUs, calibrating them to the specific model requirements of each application and integrating them with the AI model accelerators of choice. Models and frameworks can be brought in from the NVIDIA NGC catalog, Hugging Face or Meta Llama 2, and include PyTorch and TensorFlow. With these resources easily provisioned, data science and engineering teams across the globe can get started on model development and training at the click of a button.
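Because the GPU Stack ships with drivers, CUDA and (via NGC images) PyTorch preinstalled, a common first step on a new instance is simply confirming that the framework can see the GPU. A minimal, hedged sketch of such a check — assuming PyTorch is the framework in use, and written to degrade gracefully if it is not installed:

```python
# Sketch of a post-provisioning sanity check on a Vultr GPU Stack instance.
# Assumes PyTorch (e.g. from an NVIDIA NGC image) may or may not be present.
import importlib.util


def cuda_status() -> str:
    """Return a short status string describing GPU/CUDA availability."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check also runs on CPU-only hosts
    if torch.cuda.is_available():
        # Report the first visible device, e.g. an NVIDIA GPU on the instance.
        return f"cuda ok: {torch.cuda.get_device_name(0)}"
    return "torch installed, no cuda device visible"


if __name__ == "__main__":
    print(cuda_status())
```

On a correctly provisioned GPU Stack instance this should print the device name; anything else points at a driver or environment problem.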

Vultr also launched its new Vultr Kubernetes-based Container Registry, fully integrated with Vultr’s GPU stack. Comprising both a public and private registry, the Vultr Container Registry enables organizations to source NVIDIA ML models from the NVIDIA NGC catalog and provision them to Kubernetes clusters via Vultr’s 32 cloud data center locations. This empowers data science, MLOps and engineering teams to leverage pre-trained AI models from anywhere in the world — regardless of the team’s physical location. Meanwhile, the private registry combines public models with an organization’s private datasets so developers can train and tune models based on proprietary data and then create their own instance of the model for inference. That trained and tuned model is then accessible in each company’s private container registry, accessible only to authorized users. This in turn speeds up global instantiation and tuning of AI models, synchronized across private registries in each region.
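The public-to-private flow described above maps onto the standard container workflow: pull a pre-trained model image from the NGC catalog, re-tag it for the organization's private registry, and push it there for authorized users. A hedged sketch of that flow — the private registry hostname and repository path below are hypothetical placeholders, not actual Vultr endpoints:

```python
# Sketch of promoting a public NGC image into a private container registry.
# The image tag is a real NGC-style reference; the private registry URL is
# a hypothetical example, not a documented Vultr endpoint.
NGC_IMAGE = "nvcr.io/nvidia/pytorch:23.08-py3"
PRIVATE_IMAGE = "registry.example.com/ml-team/pytorch:23.08"  # hypothetical


def promote_commands(src: str, dst: str) -> list[list[str]]:
    """Build the docker CLI invocations that copy an image into a private registry."""
    return [
        ["docker", "pull", src],       # fetch the public NGC image
        ["docker", "tag", src, dst],   # re-tag it under the private registry
        ["docker", "push", dst],       # push to the private registry
    ]


if __name__ == "__main__":
    for cmd in promote_commands(NGC_IMAGE, PRIVATE_IMAGE):
        print(" ".join(cmd))
```

The same three steps underpin the synchronized regional registries the article describes: once a tuned model image is pushed once, each region's cluster can pull it from the nearest copy.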
