Microsoft and Nvidia have announced that they are integrating Nvidia’s AI Enterprise Suite with Microsoft’s Azure Machine Learning service to help enterprise developers build, deploy, and manage applications based on large language models. The integration gives users access to more than 100 development tools, frameworks, and pretrained large language models. It is currently available on an invitation-only basis via the Nvidia community registry.
The AI Enterprise Suite is designed to accelerate data science work and includes key software such as Nvidia RAPIDS for faster data science workloads, Nvidia Metropolis for vision AI model development, Nvidia Triton Inference Server for standardized model deployment, and NeMo Guardrails for adding safety and security features to AI chatbots. Microsoft is also making Nvidia’s AI Enterprise Suite available in the Azure Marketplace so that enterprise developers can access it easily.
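To give a rough sense of what the RAPIDS component offers (this sketch is illustrative, not part of the announcement, and the file name is a hypothetical example), its cuDF library mirrors the familiar pandas API while running the work on the GPU:

    import cudf  # GPU DataFrame library from Nvidia RAPIDS

    # Load a CSV into GPU memory; "transactions.csv" is a hypothetical example file.
    df = cudf.read_csv("transactions.csv")

    # Group and aggregate on the GPU using pandas-style syntax.
    summary = df.groupby("customer_id")["amount"].sum()
    print(summary.head())

Because the API closely tracks pandas, existing data science code can often be moved to the GPU with minimal changes, which is the kind of workload acceleration the suite is aimed at.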
Nvidia Omniverse Cloud, a platform-as-a-service offering, is also available on Microsoft Azure as a private offer. The platform enables enterprises to design, develop, deploy, and manage large-scale metaverse applications. Nvidia continues to partner with other technology companies, including Oracle, Google Cloud, ServiceNow, and Dell, to provide services for building AI and generative AI applications. The partnership with Microsoft is an important part of the chip maker’s mission to make AI accessible for enterprise development.