At GTC 2024, NVIDIA announced NIM, a microservices platform that combines artificial intelligence models with accelerated computing resources. NVIDIA recently announced it would pair Meta's newly released Llama 3.1 open-source model with the NVIDIA AI Foundry service, enabling developers and enterprises to build customized AI applications more quickly. At SIGGRAPH 2024 in Denver, NVIDIA also announced a partnership with Hugging Face to launch an Inference-as-a-Service (IaaS) offering that runs NVIDIA NIM microservices on the NVIDIA DGX Cloud platform.
NVIDIA NIM microservices are GPU-accelerated through the CUDA computing architecture and package pre-trained AI models so that, once optimized, the resulting AI application services can be deployed in the cloud, in data centers, and on workstations and PCs.
Because NIM uses a containerized design, it can draw on NVIDIA GPU acceleration to quickly integrate AI models, APIs, and other resources, letting developers and enterprises build, deploy, and use AI application services in a short time and in a simpler way.
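NIM containers expose an OpenAI-compatible HTTP API, so calling a deployed microservice amounts to sending a standard chat-completions request. The sketch below shows how such a request body could be constructed; the base URL, port, and model identifier are illustrative assumptions rather than values taken from the article, and should be replaced with those of an actual deployment.

```python
import json

# Illustrative values, not from the article: a locally running NIM
# container is assumed to listen on port 8000 and serve this model.
NIM_BASE_URL = "http://localhost:8000/v1"
MODEL = "meta/llama-3.1-8b-instruct"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build the JSON body for a POST to {NIM_BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

body = build_chat_request("Summarize what NIM microservices do.")
print(json.dumps(body, indent=2))
```

Because the interface mirrors the OpenAI API, existing client code can typically be pointed at the NIM endpoint by changing only the base URL and model name.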
In its recently announced partnership with Meta, NVIDIA combines the Llama 3.1 open-source model with NVIDIA AI Foundry services and NIM microservices so that customized AI application functions can be created more quickly, lowering the barrier for enterprises to adopt AI application services.
The announced collaboration with Hugging Face builds on NVIDIA NIM microservices combined with the DGX Cloud platform, which is composed of NVIDIA H100 GPUs. This is expected to increase the data throughput of NIM microservices deployed on the Hugging Face platform by up to 5 times. It will also allow microservices to be built with the latest models such as Llama 3.1, while ensuring the stability and security of the related APIs and providing technical support for enterprise applications.
NVIDIA also announced an upgrade to its NIM microservices capabilities, adding new resources like the Llama 3.1 open-source model to address more application possibilities. Currently, over 100 AI application microservices templates are available on ai.nvidia.com.
In the collaboration with Getty Images, the previously launched NVIDIA Edify NIM microservice for automatic image generation has been upgraded to better interpret input prompts and to improve the quality of generated images. It can now render focal-length and depth-of-field effects for different lenses based on the prompt, as well as produce detailed 4K-quality images, allowing users to generate more commercially viable image content through Getty Images' iStock service.
As for the collaboration with Shutterstock, the upgraded NVIDIA Edify NIM capabilities will enable Shutterstock's Generative 3D tool to generate high-quality, directly editable 3D models, along with 16K 360° HDRi backgrounds and 3D light sources. The feature is currently available for testing in beta.