Micron, SK Hynix Shipping Bandwidth-Boosting LPDDR5X For On-Device AI

Memory vendors Micron and SK Hynix this week began shipping their first LPDDR5X memory modules capable of hitting speeds up to 9,600 MT/s.

For reference, that's technically 12 percent faster than the LPDDR5X spec's 8,533 MT/s ceiling, and 30 to 50 percent faster than the memory found in most thin-and-light notebooks.
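For the curious, here's a quick back-of-the-envelope check of those percentages in Python. The 8,533 MT/s figure is JEDEC's top LPDDR5X speed grade; the 6,400 to 7,500 MT/s laptop range is an assumption about what thin-and-lights typically ship with:

```python
# Rough speed-up math for the new 9,600 MT/s parts.
new_rate = 9_600  # MT/s

# Versus the fastest LPDDR5X speed grade in the JEDEC spec.
spec_ceiling = 8_533  # MT/s
print(f"vs LPDDR5X spec: {new_rate / spec_ceiling - 1:.1%}")  # ~12.5%

# Versus the LPDDR5/X commonly found in thin-and-light laptops
# (assumed range; actual speeds vary by model).
for laptop_rate in (6_400, 7_500):
    print(f"vs {laptop_rate} MT/s: {new_rate / laptop_rate - 1:.0%}")
# prints ~50% and ~28% respectively
```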

That speed translates into higher memory bandwidth, something that's become increasingly important as chipmakers have boosted core counts and embedded ever-faster GPUs, neural processing units, and other co-processors into their systems-on-chip (SoCs).

For instance, with the announcement of its Snapdragon 8 Gen 3 SoC this week, silicon slinger Qualcomm is betting on a future where customers run generative AI models, like Meta's Llama 2 or Stable Diffusion, entirely on their personal devices.

Most GPUs and accelerators used to run AI workloads rely on speedy GDDR or high-bandwidth memory (HBM) modules. However, in a slim laptop, tablet, or smartphone this isn't always practical, and the CPU, GPU, and other co-processors must often share a common pool of LPDDR memory.

One technique to prevent bandwidth from becoming a bottleneck is co-packaging the memory alongside the compute dies. Apple's M-series processors are a prime example of that approach, with the memory modules sitting on the same package as the CPU and co-processors.

Apple's M2 Max, for the moment its most powerful notebook SoC, can deliver 400GB/s of memory bandwidth to the CPU and GPU. To put that in perspective, that's just shy of the 460GB/s of bandwidth AMD's fourth-gen Epyc datacenter chips can manage when all 12 of their memory channels are populated.

If Apple were to move to Micron or SK Hynix's latest 9,600 MT/s memory, the company might just be able to eke out another 200GB/s of bandwidth.
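As a sanity check on those bandwidth figures, here's a minimal sketch in Python. It assumes the M2 Max pairs LPDDR5-6400 with a 512-bit memory interface, and that a fully populated fourth-gen Epyc runs 12 channels of 64-bit DDR5-4800; these are peak theoretical numbers, not measured throughput:

```python
# Peak theoretical bandwidth = transfer rate x bus width in bytes.
def peak_bw_gbs(rate_mts: int, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return rate_mts * (bus_bits / 8) / 1_000

m2_max_today = peak_bw_gbs(6_400, 512)      # ~409.6 GB/s, the "400GB/s" above
m2_max_9600  = peak_bw_gbs(9_600, 512)      # ~614.4 GB/s
epyc_genoa   = peak_bw_gbs(4_800, 12 * 64)  # ~460.8 GB/s across 12 channels

print(f"M2 Max (LPDDR5-6400):  {m2_max_today:.1f} GB/s")
print(f"M2 Max at 9,600 MT/s:  {m2_max_9600:.1f} GB/s "
      f"(+{m2_max_9600 - m2_max_today:.0f} GB/s)")
print(f"12-channel Epyc:       {epyc_genoa:.1f} GB/s")
```

The delta works out to roughly 205GB/s, which is where the "another 200GB/s" figure comes from.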

Intel is also rumored to be working on a version of its Meteor Lake processors with on-package LPDDR memory. However, it's not only in space-constrained mobile devices that we're seeing chipmakers go this route: Nvidia's 144-core Grace CPU Superchip uses LPDDR5X memory to keep the processors fed with 1TB/s of bandwidth.

One of the downsides to LPDDR memory is that you can't really upgrade the device by tossing in a higher-capacity SODIMM. This isn't really a problem for smartphones and tablets, but it may be a turn-off for prospective laptop buyers: LPDDR modules are designed to be soldered to the motherboard or co-packaged alongside the SoC, so taking advantage of LPDDR5X's higher operating frequencies means forgoing upgradability.

Having said that, we won't have to wait long for SK Hynix and Micron's latest memory modules to hit the market. The companies claim Qualcomm's Snapdragon 8 Gen 3 will be among the first platforms to support their 9,600 MT/s memory. ®
