Run NIM Anywhere
Using NVIDIA NIM
NVIDIA NIM™ is a set of easy-to-use inference microservices for accelerating the deployment of foundation models on any cloud or data center and helping to keep your data secure.
There are many ways to use and interact with NVIDIA NIM. Here we show you how to get started, whether you're building a prototype or deploying to production.
Prototyping and learning with NIM
To prototype and start building with NIM, you have several options:
- For development with NVIDIA-hosted endpoints, NVIDIA provides preview APIs in its API catalog. By creating an account, you can use NVIDIA-hosted endpoints. See the API Catalog Quickstart Guide for more information and instructions on getting started.
- With a Hugging Face account, you can interact with selected Serverless NIM endpoints directly through the Hugging Face API.
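The hosted endpoints in the API catalog speak an OpenAI-compatible chat API. The sketch below shows the general request shape, assuming the `integrate.api.nvidia.com` base URL and an example model id from the catalog; check the model's page in the API catalog for the exact values, and note that actually sending the request requires an API key from your NVIDIA account.

```python
import json
import os
import urllib.request

# Illustrative values: confirm the base URL and model id on the model's
# page at build.nvidia.com before use.
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model id from the catalog
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request; sending it requires a valid API key."""
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Only call the hosted endpoint if a key is configured in the environment.
if os.environ.get("NVIDIA_API_KEY"):
    with urllib.request.urlopen(build_request(os.environ["NVIDIA_API_KEY"])) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions convention, the same payload works with any OpenAI-compatible client library by pointing it at the catalog base URL.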
Hosting NIM Locally
If you are considering a local deployment for your workstation or data center, NVIDIA AI Workbench is a client application that can run NIM in local and hybrid configurations.
You can see the full range of NIM microservices at build.nvidia.com/nim and follow the instructions to deploy on your own infrastructure.
Hosting NIM on other infrastructure
After initial prototyping, you may want to move AI models into your own compute environment to reduce the risk of data and IP leakage, or to fine-tune a model. NIM can be downloaded for self-hosting as NGC or Docker containers, giving enterprise developers ownership of their customizations, their choice of infrastructure, and full control of their IP and AI application.
If you’re considering a dedicated cloud deployment, you can:
- Download NIM at build.nvidia.com/nim and deploy it on your cloud provider of choice.
- Deploy NIM through an NVIDIA cloud service provider (CSP) partner that offers NIM in its model garden (Azure, AWS, GCP, OCI, and more).
- Launch NIM on a dedicated single-node preconfigured GPU instance on NVIDIA Brev.
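However you deploy it, a self-hosted NIM container exposes the same OpenAI-compatible API as the hosted endpoints, by default on port 8000 of the host running the container. The sketch below queries such a local deployment; the model id is illustrative and must match the NIM you actually deployed.

```python
import json
import urllib.error
import urllib.request

# Default local endpoint for a self-hosted NIM container; adjust host/port
# to match your deployment.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "meta/llama-3.1-8b-instruct",  # replace with your deployed model
    "messages": [{"role": "user", "content": "Summarize NIM in one sentence."}],
    "max_tokens": 64,
}

def query_local_nim(url: str = LOCAL_URL, timeout: float = 5.0):
    """Send a chat request to a locally running NIM; returns None if unreachable."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError):
        return None  # no NIM reachable at this address

print(query_local_nim() or "No local NIM reachable on port 8000.")
```

Because the request shape is identical to the hosted API, application code written against the preview endpoints can be pointed at a self-hosted NIM by changing only the base URL and credentials.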
After fine tuning and customizing a model, enterprises can deploy private AI model endpoints on a VPC with NVIDIA Cloud Functions (NVCF).
Please refer to the respective quickstart guides in the left-hand menu for detailed instructions.
Pricing
Members of the NVIDIA Developer program have free access to NIM API endpoints for prototyping, and to downloadable NIM microservices for research, application development, and experimentation on up to 16 GPUs on any infrastructure—cloud, data center, or personal workstation. Access to NIM microservices is available through the duration of program membership.
Downloadable NIM access is free for research, development, and testing via the NVIDIA Developer program. Once you are ready to move to production, you'll need an NVIDIA AI Enterprise license, which can be acquired through any of our partners. Pricing for NVIDIA AI Enterprise begins at $4,500 per GPU per year. You can register and apply for an NVIDIA AI Enterprise license here.
If you have questions regarding NIM access via the Developer program, please check the FAQ.
Need help?
If you run into errors in the console when trying to use hosted endpoints or to pull a NIM, remember to check if you have the right user role for consuming the API or the entitlement to pull the container. Check out the FAQ for account access questions. If you're still having issues, ask your question on our NIM Developer forum.