Nvidia today announced foundation models that run locally on Nvidia RTX AI PCs, supercharging digital humans, content creation, productivity, and development.
GeForce has long been a vital platform for AI developers. AlexNet, the first GPU-accelerated deep learning network, was trained on the GeForce GTX 580 in 2012 – and last year, more than 30% of published AI research papers cited the use of GeForce RTX, Nvidia CEO Jensen Huang said during his CES 2025 opening keynote.
Now, with RTX AI PCs, anyone can be a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow, and LM Studio, lets enthusiasts use AI models in complex workflows via simple graphical user interfaces.
NIM microservices connected to these GUIs will make it easier to access and deploy state-of-the-art generative AI models. Built on NIM microservices, Nvidia AI Blueprints provide easy-to-use, pre-configured reference workflows for digital humans, content creation, and more.
To meet the growing demand from AI developers and enthusiasts, all PC manufacturers and system builders are launching NIM-ready RTX AI PCs.
“AI is advancing at the speed of light, from perception AI to generative AI and now agentic AI,” Huang said. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”
NIM microservices will also be available with Nvidia Project Digits, a personal AI supercomputer that gives AI researchers, data scientists, and students worldwide access to the power of Nvidia Grace Blackwell. Project Digits features the new Nvidia GB10 Grace Blackwell Superchip, which delivers a petaflop of AI computing performance for prototyping, fine-tuning, and running large AI models.
Making AI NIMble

Foundation models – neural networks trained on immense amounts of raw data – are the building blocks of generative AI.
Nvidia will release a set of NIM microservices for RTX AI PCs from top model developers like Black Forest Labs, Meta, Mistral, and Stability AI. Use cases span large language models (LLMs), vision language models, image generation, speech, embedding models for retrieval-augmented generation (RAG), PDF extraction, and computer vision.
“Making FLUX an Nvidia NIM microservice increases the rate at which AI is deployed and experienced by more users, while delivering incredible performance,” Robin Rombach, CEO of Black Forest Labs, said in a statement.
Nvidia today also announced the Llama Nemotron family of open models that deliver high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model will be offered as a NIM microservice for RTX AI PCs and workstations, and excels at agentic AI tasks such as instruction following, function calling, chat, coding, and math. NIM microservices include the key components for running AI on PCs and are optimized for deployment across Nvidia GPUs – whether in RTX PCs and workstations or in the cloud.
Developers and enthusiasts will be able to quickly download, set up, and run these NIM microservices on Windows 11 PCs with Windows Subsystem for Linux (WSL).
“AI is driving innovation in Windows 11 PCs at a rapid rate, and the Windows Subsystem for Linux (WSL) provides a great cross-platform environment for AI development on Windows 11 alongside the Windows Copilot Runtime,” said Pavan Davuluri, corporate vice president of Windows at Microsoft, in a statement. “Optimized for Windows PCs, Nvidia NIM microservices give developers and enthusiasts ready-to-integrate AI models for their Windows apps, further accelerating the deployment of AI capabilities to Windows users.”
NIM microservices, running on RTX AI PCs, will be compatible with top AI development frameworks and agent tools, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow, and LM Studio. Developers can connect applications and workflows built on these frameworks to AI models running as NIM microservices through industry-standard endpoints, letting them use the latest technology with a unified interface across the cloud, data centers, workstations, and PCs.
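Those industry-standard endpoints follow the familiar OpenAI chat-completions schema, so an application can target a locally running microservice the same way it would target a cloud API. The sketch below builds such a request with only the Python standard library; the localhost URL, port, and model name are illustrative assumptions, not confirmed defaults.

```python
import json
import urllib.request

# Assumed local NIM endpoint -- the host, port, and model name below
# are hypothetical placeholders for illustration.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request(
    "meta/llama-3.1-8b-instruct",
    "Summarize the attached meeting notes.",
)
request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Sending the request requires a running NIM container:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the same schema is served whether the model runs on a local RTX GPU or in a data center, swapping deployment targets is a one-line URL change rather than a rewrite.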
Enthusiasts will also be able to try out a range of NIM microservices using the upcoming release of the Nvidia ChatRTX technical demo.
Put a face to the AI agent

To illustrate how enthusiasts and developers can use NIM to build AI agents and assistants, Nvidia today previewed Project R2X, a vision-enabled PC avatar that can put information at a user’s fingertips, assist with desktop applications and video conference calls, read and summarize documents, and more.
Avatar rendering uses Nvidia RTX Neural Faces, a new generative AI algorithm that augments traditional rasterization with entirely generated pixels. The face is then animated by the new diffusion-based Nvidia Audio2Face-3D model, which improves lip and tongue movement. R2X can connect to cloud AI services such as OpenAI’s GPT-4o and xAI’s Grok, and to NIM microservices and AI Blueprints – such as PDF retrievers or alternative LLMs – via developer frameworks such as CrewAI, Flowise AI, and Langflow.
AI Blueprints come to PCs

NIM microservices are also available to PC users through AI Blueprints – reference AI workflows that can run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, generate stunning images guided by 3D scenes, and more.
The PDF-to-podcast blueprint extracts text, images, and tables from a PDF to create a podcast script that users can edit. It can also generate a full audio recording of the script using voices provided in the blueprint or based on a sample of the user’s voice. Additionally, users can have a real-time conversation with the AI podcast host to learn more.
The blueprint uses NIM microservices such as Mistral-Nemo-12B-Instruct for language, Nvidia Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.
The 3D-guided generative AI blueprint gives artists finer control over image generation. While AI can produce stunning images from simple text prompts, controlling image composition with words alone can be difficult. With this blueprint, creators can use simple 3D objects laid out in a 3D application such as Blender to guide the AI image-generation process.
An artist can create 3D assets by hand or generate them with AI, place them in the scene, and set the 3D viewport camera. Then, a prepackaged workflow powered by the FLUX NIM microservice uses the current composition to generate high-quality images that match the 3D scene.
Nvidia NIM microservices and AI Blueprints will be available starting in February. NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer, and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, MIFCOM, Origin PC, PCS, and Scan.