Nvidia Builds A Server To Run X86 Workloads Alongside Agentic AI

Computex Nvidia has delivered a server design that includes x86 processors and eight GPUs connected by a dedicated switch to run agentic AI alongside mainstream enterprise workloads.

Announced today at the GPU Technology Conference that Nvidia is running alongside the Computex event in Taiwan, the design is called the RTX Pro and is the GPU giant's play to find a place in mainstream enterprise datacenters.

CEO Jen-Hsun Huang, as Nvidia's founder is referred to in Taiwan, used his GTC keynote speech to revisit his argument that general-purpose enterprise IT is unfit for the AI age, but also acknowledged that users have plenty of workloads they need to run on familiar infrastructure and aren't going to migrate.

The RTX Pro will therefore be entirely happy running Kubernetes or hypervisors from Broadcom, Nutanix, or Red Hat. It will also pack eight GPUs – the new Nvidia RTX PRO 6000 Blackwell Server Edition, each with 24,064 CUDA cores, 752 fifth-gen Tensor cores, and 188 fourth-gen RT cores, humming along at 120 TFLOPS of single-precision FP32. Huang mentioned an internal switch to allow GPU-to-GPU chat, but didn't divulge details other than to say that 800 Gbps connections link the GPUs.
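Nvidia hasn't said how that switch presents itself to software, but on any multi-GPU box the CUDA runtime's peer-access queries are the standard way to see which GPUs can reach each other's memory directly. Below is a minimal sketch – assuming only a stock CUDA toolkit, and nothing specific to the RTX Pro design – that prints the peer-access matrix for whatever GPUs are installed.

// peer_topo.cu – print which GPUs in the box can access each other's memory directly.
// Build with: nvcc peer_topo.cu -o peer_topo
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) {
        printf("Found %d GPU(s); need at least two to probe peer access.\n", count);
        return 0;
    }

    // An N x N matrix: 1 means device i can read/write device j's memory over a direct link.
    printf("Peer-access matrix for %d GPUs:\n", count);
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            int can = 0;
            if (i != j) cudaDeviceCanAccessPeer(&can, i, j);
            printf(" %d", i == j ? 1 : can);
        }
        printf("\n");
    }

    // Once enabled, peer access lets cudaMemcpyPeer and kernels move data GPU-to-GPU
    // without a round trip through host memory.
    int can01 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    if (can01) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
    }
    return 0;
}

The same picture is available from the command line via nvidia-smi topo -m, which also labels how each GPU pair is connected.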

That switch, plus the machines' ability to run Nvidia's AI data platform that pumps data from external storage into GPUs, is why Huang believes his company has reformed compute, storage, and networking for the AI age.
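Nvidia didn't spell out the plumbing behind that data path, but its existing GPUDirect Storage technology gives a flavor of the idea: the cuFile API pulls a file straight from NVMe or a remote filesystem into GPU memory, skipping the usual bounce through a host buffer. A minimal sketch follows – the path /data/shard.bin is made up for illustration, and nothing here is specific to the AI data platform Huang described.

// gds_read.cu – read a file straight into GPU memory with GPUDirect Storage's cuFile API.
// Build with: nvcc gds_read.cu -o gds_read -lcufile
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main() {
    const char *path = "/data/shard.bin";      // hypothetical file, purely for illustration
    const size_t size = 64 << 20;              // pull in 64 MiB

    cuFileDriverOpen();                        // bring up the GPUDirect Storage driver

    int fd = open(path, O_RDONLY | O_DIRECT);  // O_DIRECT keeps the page cache out of the path
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *dev_buf = nullptr;
    cudaMalloc(&dev_buf, size);                // destination buffer lives in GPU memory
    cuFileBufRegister(dev_buf, size, 0);       // optional: register the buffer for repeated I/O

    // The read DMAs from storage into GPU memory without staging through a host buffer.
    ssize_t got = cuFileRead(fh, dev_buf, size, 0 /* file offset */, 0 /* buffer offset */);
    printf("Read %zd bytes into GPU memory\n", got);

    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}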

But he also promised IT departments won't need to adopt new tools or techniques to manage the servers, which he said are fit to run ERP or desktop virtualization workloads.

Nvidia's partners plan to deliver the servers starting in July.

Huang was vastly enthusiastic about agentic AI – he said all of Nvidia's software engineers use agents – but admitted a proliferation of the bots could be problematic, and promised his company's tools would become the equivalent of an HR department for AI agents.

Huang also mentioned that the Blackwell 300 series accelerators, for desktops and datacenters, will debut in Q3. Which means 20-petaFLOPS desktops are getting nearer – and Huang showed off one of Nvidia's own DGX mini-AI-PCs.

Robotics was another topic of interest. Huang opined that robots aren't taking off because today's models are built for niche uses: they remain expensive, and sales stay small because manufacturers can't achieve economies of scale.

He thinks humanoid robots will change that, because they will be designed to work alongside humans and therefore need common elements – a robot built for a factory will have many of the qualities needed to operate in a home. Nvidia is trying to accelerate development of humanoid bots – and give itself a shot at what Huang thinks is a multi-trillion-dollar opportunity. ®
