Nvidia and Microsoft accelerate AI processing on PCs



Nvidia and Microsoft announced work to accelerate the performance of AI processing on NVIDIA RTX-based AI PCs.

Generative AI is transforming PC software into breakthrough experiences: from digital humans to writing assistants, intelligent agents and creative tools.

NVIDIA RTX AI PCs are powering this transformation with technology that makes it simpler to get started experimenting with generative AI and unlocks greater performance on Windows 11.

TensorRT for RTX AI PCs

TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for fast AI deployment to the more than 100 million RTX AI PCs.

Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML, a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance.

Gerardo Delgado, director of product for AI PC at NVIDIA, said in a press briefing that AI PCs start with NVIDIA's RTX hardware, CUDA programming and a set of AI models. He noted that, at a high level, an AI model is basically a set of mathematical operations along with a way to run them. That combination of operations and how to run them is what is commonly known as a graph in machine learning.

He added: "Our GPUs are going to execute these operations with Tensor Cores. But Tensor Cores change from generation to generation.

First, NVIDIA has to optimize the AI model. It has to quantize the model, reducing the precision of some parts of the model or some of its layers. Once the model is optimized, TensorRT consumes that optimized model, and then NVIDIA basically prepares a plan with a pre-selection of kernels."
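The quantization step Delgado describes can be illustrated with a minimal sketch (plain Python for illustration, not NVIDIA's actual tooling): symmetric int8 quantization maps float weights onto 8-bit integers plus a scale factor, trading a small amount of precision for smaller storage and faster math.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.33]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Real-world quantizers work per-layer and calibrate on sample data, but the core idea is the same: store less precise numbers, keep the reconstruction error bounded.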

Compared to the standard way of running AI on Windows, NVIDIA can achieve about 1.6 times higher performance on average.

Now a new version of TensorRT for RTX improves on this experience. It is designed specifically for RTX AI PCs and delivers the same TensorRT performance, but instead of having to pre-generate TensorRT engines for each GPU, it focuses on optimizing the model and ships a generic TensorRT engine that is then specialized on the user's device.

He said: "Once you install the application, TensorRT for RTX will generate the right TensorRT engine for your specific GPU in just seconds. This greatly simplifies the developer workflow."

Delgado said the results include a smaller library size, better performance for video generation, and better-quality livestreaming.

NVIDIA SDKs make it easier for app developers to integrate AI features and accelerate their apps on GeForce RTX GPUs. This month, top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz are releasing updates to unlock RTX AI features and acceleration.

AI enthusiasts and developers can easily get started with AI using NVIDIA NIM: pre-packaged, optimized AI models that run in popular apps like AnythingLLM, Microsoft VS Code and ComfyUI. The FLUX.1-schnell image-generation model is now available as a NIM, and the FLUX.1-dev NIM has been updated to support more RTX GPUs.

For those who prefer not to dive into AI development, Project G-Assist, the RTX PC AI assistant in the NVIDIA app, enables a simple way to build plug-ins and create assistant workflows. New community plug-ins are now available, including Google Gemini web search, Spotify, Twitch, IFTTT and SignalRGB.

Accelerated AI inference with TensorRT for RTX

Today's AI PC software stack requires developers to choose between frameworks that have broad hardware support but lower performance, or optimized paths that cover only certain hardware or model types and require the developer to maintain multiple paths.

The new Windows ML inference framework was built to solve these challenges. Windows ML is built on top of ONNX Runtime and connects seamlessly to an optimized AI execution layer provided and maintained by each hardware vendor. For GeForce RTX GPUs, Windows ML automatically uses TensorRT for RTX, an inference library optimized for high performance and rapid deployment. Compared to DirectML, TensorRT delivers over 50% faster performance for AI workloads on PCs.

Windows ML also delivers quality-of-life benefits for developers. It can automatically select the right hardware to run each AI feature and download the execution provider for that hardware, removing the need to package those files into the app. This allows NVIDIA to deliver the latest TensorRT performance optimizations to users as soon as they are ready. And because it is built on ONNX Runtime, Windows ML works with any ONNX model.
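That automatic selection can be pictured as walking an ordered preference list of execution providers and taking the first one the machine supports. The sketch below is purely illustrative (the provider names mirror ONNX Runtime conventions but are assumptions here, not the actual Windows ML API):

```python
# Hypothetical provider-selection sketch; the names echo ONNX Runtime
# execution providers, but this logic is illustrative only.
PREFERENCE = [
    "TensorRTRTXExecutionProvider",  # fastest path on GeForce RTX GPUs
    "DmlExecutionProvider",          # DirectML fallback
    "CPUExecutionProvider",          # always available
]

def pick_provider(available):
    """Return the first preferred execution provider this device supports."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise RuntimeError("no supported execution provider")

# On an RTX system the TensorRT path wins; otherwise fall back gracefully.
assert pick_provider({"TensorRTRTXExecutionProvider",
                      "CPUExecutionProvider"}) == "TensorRTRTXExecutionProvider"
assert pick_provider({"CPUExecutionProvider"}) == "CPUExecutionProvider"
```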

To further enhance the developer experience, TensorRT has been reimagined for RTX. Instead of having to pre-generate TensorRT engines and package them with the app, TensorRT for RTX uses just-in-time engine building to optimize how the AI model runs on the user's specific GPU in mere seconds. The library has also been streamlined, reducing its file size by eight times. TensorRT for RTX is available to developers through the Windows ML preview today, and will be available directly as a standalone SDK at NVIDIA Developer, targeting a June release.
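The just-in-time flow described above can be sketched as a per-GPU engine cache: build once for the GPU actually present, then reuse the result. Function and key names here are invented for illustration; the real SDK's API will differ.

```python
# Illustrative sketch of JIT engine building with caching (assumed names).
_engine_cache = {}

def compile_engine(model, gpu_arch):
    """Stand-in for TensorRT's on-device engine build step."""
    return f"engine({model}@{gpu_arch})"

def get_engine(model, gpu_arch):
    """Build the engine for this model/GPU pair once, then serve from cache."""
    key = (model, gpu_arch)
    if key not in _engine_cache:
        _engine_cache[key] = compile_engine(model, gpu_arch)
    return _engine_cache[key]

first = get_engine("flux.1-schnell", "blackwell")
second = get_engine("flux.1-schnell", "blackwell")
assert first is second  # the second call skips the build entirely
```

The design choice this illustrates: apps ship one generic artifact instead of one pre-built engine per GPU generation, and the per-device specialization happens once at install or first run.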

Developers can learn more in NVIDIA's Microsoft Build developer blog, the TensorRT for RTX launch blog, and Microsoft's Windows ML blog.

Expanding the AI ecosystem on Windows PCs

Developers looking to add AI features or boost app performance can tap into a broad range of NVIDIA SDKs. These include CUDA and TensorRT for GPU acceleration; DLSS and OptiX for 3D graphics; RTX Video and Maxine for multimedia; and Riva, Nemotron or ACE for generative AI.

Top applications are releasing updates this month to enable unique NVIDIA features using these SDKs. Topaz is launching a generative AI video model to enhance video quality, accelerated by CUDA. Chaos Enscape and Autodesk VRED are adding DLSS 4 for faster performance and better image quality. Bilibili is integrating NVIDIA Broadcast features, enabling streamers to activate NVIDIA Virtual Background inside Bilibili Livehime to enhance livestream quality.

Getting started with AI made easy with NIM microservices and AI Blueprints

Getting started with AI development on PCs can be daunting. AI developers and enthusiasts have to choose from more than 1.2 million AI models on Hugging Face, quantize them into a format that runs well on PC, find and install all the dependencies needed to run them, and more. NVIDIA NIM makes it easy to get started by providing a curated list of AI models, pre-packaged with all the files needed to run them and optimized for full performance on RTX GPUs. And as containerized microservices, the same NIM can run seamlessly across PC or cloud.

A NIM is a package: a pre-packaged AI model with everything needed to run it.

NIMs are already optimized with TensorRT for RTX GPUs and come with an easy-to-use, OpenAI-compatible API, making them compatible with the top AI applications users are using today.
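OpenAI compatibility means calling a local NIM looks like a standard chat-completions request. The sketch below only builds and inspects the request payload; the endpoint path and model name shown in the comment are assumptions for illustration, not values documented here.

```python
import json

def chat_request(model, prompt):
    """Build an OpenAI-style chat-completions payload for a local NIM.

    A client would POST this as JSON to the microservice, e.g. to an
    assumed endpoint like http://localhost:8000/v1/chat/completions.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = chat_request("example/llm-model", "Hello from an RTX PC")
body = json.dumps(payload)          # serialized request body
assert payload["messages"][0]["role"] == "user"
assert "Hello from an RTX PC" in body
```

Because the shape matches the OpenAI API, existing AI apps can point their base URL at the local microservice instead of the cloud without changing their request code.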

At Computex, NVIDIA is launching the FLUX.1-schnell NIM, built on Black Forest Labs' fast image-generation model, and updating the FLUX.1-dev NIM to add compatibility with a wide range of GeForce RTX 50 and 40 Series GPUs. These NIMs deliver faster performance with TensorRT, plus additional performance thanks to quantized models. On Blackwell GPUs, they run over twice as fast as running natively, thanks to FP4 and RTX optimizations.

AI developers can also jump-start their work with NVIDIA AI Blueprints: sample workflows and projects using NIM.

Last month, NVIDIA released the 3D Guided Generative AI Blueprint, a powerful way to control the composition and camera angles of generated images by using a 3D scene as a reference. Developers can modify the open-source blueprint for their needs or extend it with additional functionality.

New Project G-Assist plug-ins and sample projects now available

NVIDIA recently released Project G-Assist as an experimental AI assistant integrated into the NVIDIA app. G-Assist enables users to control their GeForce RTX system using simple voice and text commands, offering a more convenient interface compared to manual controls spread across numerous legacy control panels.

Developers can also use Project G-Assist to easily build plug-ins, test assistant use cases and publish them through NVIDIA's Discord and GitHub.

To make it easier to start creating plug-ins, NVIDIA has made available the easy-to-use Plug-in Builder, a ChatGPT-based app that allows no-code and low-code development with natural-language commands. These lightweight, community-driven add-ons use straightforward JSON definitions and Python logic.
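A plug-in of this kind pairs a JSON definition of a command with the Python logic that executes it. The shapes below are hypothetical (the real G-Assist manifest schema lives in NVIDIA's GitHub samples), but they show the division of labor between the two parts:

```python
import json

# Hypothetical manifest: declares a command the assistant can invoke.
MANIFEST = json.loads("""
{
  "name": "room_lights",
  "description": "Set the lighting color of the PC setup",
  "parameters": {"color": {"type": "string"}}
}
""")

def handle(command, params):
    """Python logic behind the manifest's declared command."""
    if command == MANIFEST["name"]:
        return f"lights set to {params['color']}"
    return "unknown command"

assert handle("room_lights", {"color": "teal"}) == "lights set to teal"
```

The assistant routes a recognized voice or text command to the matching handler; the JSON tells it what the plug-in can do, and the Python does it.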

New open-source plug-in samples are now available on GitHub, showcasing diverse ways on-device AI can enhance PC and gaming workflows.

● Gemini: The existing Gemini plug-in, which uses Google's free-to-use LLM, has been updated to include real-time web search capabilities.

● IFTTT: Enables automation across hundreds of compatible endpoints, such as IoT and home automation systems, enabling routines that span digital setups and physical surroundings.

● Discord: Easily share game highlights or messages directly to Discord servers without disrupting gameplay.

Explore the GitHub repository for additional examples, including hands-free music control via Spotify, livestream status checks with Twitch, and more.

Project G-Assist: an AI assistant for your RTX PC

Companies are also adopting AI as the new PC interface. For example, SignalRGB is developing a G-Assist plug-in that enables unified lighting control across multiple manufacturers. SignalRGB users will soon be able to install the plug-in directly from the SignalRGB app.

Enthusiasts interested in developing and experimenting with Project G-Assist plug-ins are invited to join the NVIDIA Developer Discord channel to collaborate, share creations and receive support during development.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.


