Hyperconverged Infrastructure Is So Hot Right Now It Needs Liquid Cooling
Hyperconverged infrastructure most often involves a collection of modest 2U servers powered by mid-range processors that aren’t particularly challenging to operate. But Lenovo’s new models packing Xeon 6 processors may need liquid cooling.
The Chinese hardware giant yesterday launched “ThinkAgile HX Series GPT-in-a-Box solutions”, hyperconverged infrastructure (HCI) products that have the option to use its Neptune Core Module, a direct liquid cooling device that pipes cold water to a cold plate that sits atop a CPU.
The hardware comes in three flavors, each tuned to run HCI stacks from Nutanix, VMware, or Microsoft’s Azure Stack HCI.
All can run a variety of 6th-generation Intel Xeon CPUs (aka Granite Rapids), including 350-watt parts like the 86-core, 172-thread model 6787P.
Interestingly, Lenovo’s fact sheet for the ThinkAgile MX650 V4 Hyperconverged System it built for Azure Stack lists several single-socket models that it will only build to order.
The servers can also run GPUs – up to ten of them, although the sheer size of parts like Nvidia’s H100 means only a pair of those can be installed.
HCI is often touted as a fine candidate for installation in branch offices or at the network edge, because its combination of software-defined storage, homogeneous appliances, and central management means it’s less complex to deploy than some other hardware options.
News that Lenovo feels the need to equip HCI boxes with liquid cooling for their CPUs could be seen to dilute the HCI value proposition by adding complexity.
However, Lenovo’s bundling of these devices into GPT-in-a-Box configs suggests most will be racked and stacked in formal datacenter settings, probably at orgs that already run plenty of HCI and want to keep doing so as they implement on-prem AI workloads rather than creating new hardware silos. Such orgs will still need to invest in the extra hardware liquid cooling requires, but at least they’ll be doing so with a familiar software stack and operating environment.
And maybe a few of these boxes will make it out to the edge, too. A half-height liquid-cooled rack is easier to deploy than a small immersion tank. ®