Sam Altman's Chip Ambitions May Be Loonier Than Feared

Opinion OpenAI CEO Sam Altman's dream of establishing a network of chip factories to fuel the growth of AI may be much, much wilder than feared.

As reported last month, Altman is supposedly seeking billions of dollars in funding from partners including Abu Dhabi-based G42, Japan's SoftBank, and Microsoft, to build out all those neural-network accelerator fabs.

Now, a Wall Street Journal report, citing yet more anonymous sources, claims the ambitious project could involve raising up to $7 trillion.

That's an eye-watering sum that, in this vulture's view, defies logic.

To put the figure in perspective, that's roughly 13 times the total revenue of the entire semiconductor market last year. According to Gartner, worldwide semi revenues topped $533 billion in 2023. And in spite of all the hype around generative AI, analysts expect that sales figure to grow 17 percent to $624 billion this year.
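
If you want to check our sums, here's the back-of-the-envelope arithmetic as a quick Python sketch, using only the Gartner figures above:

```python
# Rough comparison of the rumored $7tn raise against Gartner's
# worldwide semiconductor revenue figures cited above.
ALTMAN_RAISE = 7_000_000_000_000     # $7 trillion, per the WSJ report
SEMI_REVENUE_2023 = 533_000_000_000  # $533 billion, Gartner, 2023
SEMI_REVENUE_2024 = 624_000_000_000  # $624 billion forecast for 2024

print(f"vs 2023 revenue:  {ALTMAN_RAISE / SEMI_REVENUE_2023:.1f}x")  # ~13.1x
print(f"vs 2024 forecast: {ALTMAN_RAISE / SEMI_REVENUE_2024:.1f}x")  # ~11.2x
```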

But let's say, for the sake of argument, Altman and his partners really are this courageous, and can somehow wrangle a quarter of the United States' gross domestic product for 2023 to finance the endeavor. What does $7 trillion buy you?

It's enough cash to gobble up Nvidia, TSMC, Broadcom, ASML, Samsung, AMD, Intel, Qualcomm, and every other chipmaker, designer, intellectual property holder, and equipment vendor of consequence in their entirety – and still have trillions left over.

Although it would be fun to watch Sam burn a prodigious amount of dosh kicking off the biggest antitrust battle of the century, what he more likely has in mind is plowing that money into factories and processor packaging to boost chip production. Actually, we can think of plenty of better ways to spend that kind of cash, but let's stick to chips for a bit.

Now that's a lot of fabs

No matter how you slice it, $7 trillion is still an enormous sum to spend on fabs, even a network of them.

The cost of a cutting-edge chip factory today falls somewhere between $10 billion and $30 billion, depending on the size of the site and its location. Let's say Altman's envisioned facilities end up costing about $20 billion on average. At that rate, $7 trillion nets you about 350 foundry sites.
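
For anyone following along at home, the fab math is simple enough, assuming that $20 billion average holds (a big if, given how widely build costs vary):

```python
# Back-of-the-envelope fab count, assuming a $20bn average build cost.
TOTAL_BUDGET = 7_000_000_000_000  # the rumored $7 trillion
AVG_FAB_COST = 20_000_000_000     # assumed average per leading-edge fab

fab_count = TOTAL_BUDGET // AVG_FAB_COST
print(f"{fab_count} fabs")        # 350
```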

The issue then becomes, who's going to build them? These facilities are among the largest, most complex operations in the manufacturing world, requiring components and materials from countless suppliers and personnel specially trained to install, maintain, and operate them.

Because of this, it's not uncommon for these facilities to take four or more years just to bring online and potentially much longer to bring yields to acceptable levels. There's nothing fast about building fabs properly.

In the US, we've seen a flurry of investment in domestic semiconductor manufacturing and research and development, driven in no small part by a $53 billion government subsidy pot made available thanks to the CHIPS funding bill. Yet foundry operators have already run into serious issues.

As we've previously reported, a shortage of skilled workers has already delayed development of TSMC's Arizona fab. TSMC has gone so far as to send technicians from Taiwan to America in an attempt to get the facility back on track.

Last summer, the Semiconductor Industry Association (SIA) and UK-based Oxford Economics warned that the US semiconductor industry faced a shortfall of 67,000 technicians, engineers, and computer scientists by 2030. Intel, which is spearheading one of the largest build-outs of American fabs, puts that number at closer to 70,000 to 90,000 over the next few years.

And that's for just a handful of fabs under development in the US. It doesn't take much imagination to see how 350 additional sites would be problematic at a global scale.

Flooding the market

If that weren't enough, semiconductor demand tends to ebb and flow in a cyclical fashion. Buying sprees are usually followed by lengthy digestion cycles, and ramps in PC sales tend to coincide with operating system or software releases.

We're assuming, for the moment, that those hundreds of fabs would serve not just OpenAI or the AI world in general but everything adjacent to it, too, though it may be that Altman really wants nothing more than an endless stream of machine-learning accelerators and related compute.

The memory market is only just now recovering from an inventory glut that drove average selling prices to record lows. Meanwhile, Intel has reportedly postponed the completion date for its Ohio fabs to late 2026, blaming current weakness in the semiconductor market and delays in getting CHIPS Act funding.

Of course, industry gossips have yet to detail a timeline over which Altman's supposed $7 trillion semiconductor venture would play out. It's safe to assume it's not going to happen overnight. These kinds of build-outs need to be paced carefully to avoid expanding too aggressively and flooding the market with chips.

Even spread out across the next 25 years, we're still talking about a huge amount of money, enough for 14 fabs a year at a cost of $280 billion annually. To hit that mark, TSMC, Samsung, and Intel would need to roughly triple their capex spending and direct all of it into chip plants.
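
Again, rough numbers, using the same assumed $20 billion-per-fab average:

```python
# Amortizing the $7tn over 25 years at an assumed $20bn per fab.
TOTAL_BUDGET = 7_000_000_000_000
AVG_FAB_COST = 20_000_000_000
YEARS = 25

annual_spend = TOTAL_BUDGET / YEARS          # $280bn a year
fabs_per_year = annual_spend / AVG_FAB_COST  # 14 fabs a year
print(f"${annual_spend / 1e9:.0f}bn a year, {fabs_per_year:.0f} fabs a year")
```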

That, admittedly, sounds less crazy, but given that theoretical timeline, why would Altman need to raise $7 trillion now? Usually, when companies like Intel talk about their foundry roadmaps, they tend to fund only what's immediately in the pipeline.

For example, when the x86 giant announced its plan to invest $100 billion over the next decade on an Ohio megafab, it only actually committed to building two sites at an estimated cost of $10 billion apiece. And, as we mentioned before, even that's been delayed.

Part of a larger plan?

So, perhaps this $7 trillion project is really part of a larger plan to fuel OpenAI's ambitions. All of those chips are going to have to go somewhere. That means Altman will need not only fabs to make the chips, but datacenters to use them and (hopefully) clean energy to keep everything running, and that'll cost top dollar, too.

The chips used to power AI models are notoriously power hungry. A single eight-GPU Nvidia H100 node is rated at 10.2 kilowatts. Scale that up to 350,000 GPUs — that's how many Meta claims it's going to deploy this year — and you're looking at a huge amount of power.

Budget $100 billion for GPUs, just 1.4 percent of that $7 trillion, and you could buy five million H100s at the volume rate of $20,000 apiece. For the record, that's more than twice the number Nvidia is predicted to ship in the entirety of 2024.
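
A crude Python estimate, using only the figures cited above (the 10.2 kW node rating, the $20,000 volume price, and Meta's 350,000-GPU plan) and ignoring networking, storage, cooling, and everything else that makes a datacenter a datacenter, looks like this:

```python
# Crude GPU count and power estimate from the figures cited above.
GPU_BUDGET = 100_000_000_000  # $100bn carve-out, 1.4% of $7tn
H100_PRICE = 20_000           # assumed volume price per H100
NODE_POWER_KW = 10.2          # eight-GPU Nvidia H100 node rating
GPUS_PER_NODE = 8

gpus = GPU_BUDGET // H100_PRICE                   # 5,000,000 H100s
nodes = gpus // GPUS_PER_NODE                     # 625,000 nodes
power_gw = nodes * NODE_POWER_KW / 1e6            # kW -> GW, ~6.4 GW

meta_nodes = 350_000 // GPUS_PER_NODE             # Meta's planned deployment
meta_power_mw = meta_nodes * NODE_POWER_KW / 1e3  # kW -> MW, ~446 MW

print(f"{gpus:,} GPUs in {nodes:,} nodes, roughly {power_gw:.1f} GW")
print(f"Meta's 350,000 GPUs alone: roughly {meta_power_mw:.0f} MW")
```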

Needless to say, power is gonna be a problem. So carving off some cash to address that challenge would make sense.

The good news here is Altman has a long history of backing energy startups. Last year, Oklo, a nuclear fission startup supported by the OpenAI CEO, announced its plans to go public.

Meanwhile, on the more experimental side of things, Altman has thrown his weight behind Helion Energy, which is working to commercialize a modular helium-3 fusion power plant. Despite the fact that Helion has yet to prove its reactor actually works, Altman's involvement appears to have been enough for Microsoft to sign a power purchase agreement with the startup. The tech isn't expected to see deployments until at least 2028, assuming the biz ever manages to make it work.

In any case, this leads your humble hack to the conclusion that the $7 trillion figure used to describe the scope of Altman's ambitions is either gross hyperbole, or part of some larger, more holistic plan. ®
