Micron Is Fashionably Late To The HBM Party, But Not Too Late

By Timothy Prickett Morgan

Here is what memory bandwidth and a certain amount of capacity are worth in the GenAI revolution.

Imagine that you have a bunch of DDR5 memory chips that cost $70 per gigabyte, and you lay them out side by side on a DIMM to create a 256 GB memory stick. Crank their transfer rate up to 4.8 GT/sec and you can charge around $18,000 for that memory stick. And very few server customers will buy it because it is so expensive. Take the same DDR5 memory, but with slightly less capacious chips, and stack it up vertically into eight-high HBM3E stacks and you can sell that same 256 GB of capacity as fast as you can make it, perhaps for a 60 percent premium per unit of capacity but perhaps with slightly smaller profits because making HBM is hard.
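To put rough numbers on that comparison, here is a back-of-the-envelope sketch in Python. The $70 per gigabyte cost, the 256 GB capacity, and the 60 percent HBM premium come from the scenario above; the multiplication is ours.

```python
# Back-of-the-envelope pricing for the DIMM-versus-HBM scenario above.
# The $70/GB price, 256 GB capacity, and 60 percent premium are from the
# scenario in the text; this just does the arithmetic.

price_per_gb = 70          # dollars per GB of DDR5, per the scenario
capacity_gb = 256          # GB of capacity on the DIMM or in the HBM stacks
hbm_premium = 0.60         # HBM3E premium per unit of capacity

dimm_price = price_per_gb * capacity_gb
hbm_price = dimm_price * (1 + hbm_premium)

print(f"256 GB DDR5 DIMM: ${dimm_price:,}")      # $17,920, call it $18,000
print(f"256 GB as HBM3E:  ${hbm_price:,.0f}")    # roughly $28,700
```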

If you were in the memory business, with its crazy boom-bust cycles, wouldn't you try to figure out how to make HBM - and sell as much of it as you could?

This is precisely the plan at Micron Technology, which is a latecomer to the HBM memory racket alongside pioneers SK Hynix and Samsung, but which is bringing its substantial engineering talents to bear to take a run at the Nvidia money as well as that of every other AI chip maker that wants HBM to boost the performance of its accelerators.

It is still early days, but the indications are that Micron's entry into the HBM market will be a financial watershed event for the company when we look back a few years from now.

Micron posted its financial results for the first quarter of its fiscal 2025 this week, and Wall Street predictably freaked out because the recovery in the PC and smartphone businesses has not panned out as expected. It seems to us that lots of people are drinking the AI Kool-Aid about how we all want AI embedded in our PCs and smartphones, but thus far this has not spawned a massive upgrade cycle, and there is no indication that anyone wants to pay an AI premium for such products. (We are getting a new smartphone soon because our iPhone X is looking a little ghetto and has a battery that is crankier than a teething toddler - not because we want AI.)

The ramp of HBM memory at Micron is dramatic, and is refreshing after the fiasco that was its partnership with Intel to make 3D XPoint persistent memory. Micron played around in the HBM2E generation, skipped over the HBM3 generation, and went straight to the HBM3E generation, where it booked over $100 million in sales in Q3 F2024. At the time, back at the end of June, Sanjay Mehrotra, Micron's president and chief executive officer, put a few stakes in the ground for fiscal 2024 and fiscal 2025.

"We expect to generate several hundred million dollars of revenue from HBM in fiscal 2024 and multiple billions of dollars in revenue from HBM in fiscal 2025," Mehrotra told Wall Street. "We expect to achieve HBM market share commensurate with our overall DRAM market share sometime in calendar 2025."

Micron's DRAM market share is somewhere between 20 percent and 25 percent, depending on who you ask and how you bracket the DRAM space. Two quarters ago, Micron put the HBM TAM for 2025 at around $25 billion, and on this week's call Mehrotra raised that TAM to more than $30 billion, based on what Micron is hearing from compute engine manufacturers chasing the AI opportunity. (There is also some enthusiasm for HBM memory for HPC workloads, but this is noise in the data.) You might do the math on that by taking more than 20 percent of more than $30 billion and getting more than $6 billion in HBM memory sales in fiscal 2025 for Micron. And that would be pretty good for a product line that was doing maybe 1/100th of that revenue in Q2 F2024. But the math is not that simple, as we will explain.
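Here is that simple version of the math in code form, using the low end of the share range and the TAM figure above; the arithmetic is ours, not Micron's.

```python
# Rough HBM revenue implied by Micron's share target, per the figures above.
hbm_tam_2025 = 30e9        # calendar 2025 HBM TAM, "more than $30 billion"
dram_share = 0.20          # low end of Micron's 20 to 25 percent DRAM share

implied_hbm_revenue = dram_share * hbm_tam_2025
print(f"Implied HBM sales: ${implied_hbm_revenue / 1e9:.1f} billion")  # $6.0 billion

# And the Q2 F2024 comparison: maybe 1/100th of that figure
print(f"Q2 F2024 ballpark: ${implied_hbm_revenue / 100 / 1e6:.0f} million")  # $60 million
```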

Mehrotra & Co are being cagey about just how much HBM memory Micron sold in fiscal 2024, but our best guess is around $530 million. We think that Q4 F2024 was around $350 million and Q1 F2025 was around $750 million, with the eight-high stacks of HBM3E memory used in the "Hopper" H200 GPU accelerators from Nvidia driving most of that revenue. The same eight-high HBM3E from Micron is being put into the "Blackwell" B200 GPUs from Nvidia as well, and Micron has just started "high volume shipments" to its second HBM customer and will start shipments to a third large customer in calendar Q1 2025.

Everyone is excited by Micron's twelve-high HBM3E stacks, which the company says will consume 20 percent less power than the competition's eight-high HBM3E even as they deliver 50 percent higher capacity. HBM capacity is sold out for calendar 2024 and calendar 2025, so Micron has a very good idea of what its revenue streams will be, provided it can keep cranking out the HBM stacks.

Mehrotra gave some clarification about the HBM ramp that helps us all build a better model, saying that Micron expects "to achieve HBM market share commensurate with our overall DRAM market share sometime in the second half of calendar 2025."

Assuming sequential growth through all of fiscal 2025 for HBM sales, and a rise from about 10 percent market share in Q1 to 20 percent by Q4, we reckon Micron will sell at least $4.7 billion in HBM memory this fiscal year, and we reckon further that this memory will be used in more than 1 million accelerators. That is an 8.8X increase in revenues for the HBM line over fiscal 2024. Other DRAM drove around $19.3 billion in sales for Micron in fiscal 2024 in our model, and we have no reason to believe that DRAM will do better with PCs and smartphones in a slump. LPDDR5 memory and other DDR5 memory used in AI servers helps, to be sure, but assume that it all balances out. In our model, HBM will add an incremental $4.7 billion, or about 25 percent, to the DRAM business in fiscal 2025.
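Here is a minimal sketch of the kind of model we are describing. The $750 million Q1 F2025 figure and the $530 million fiscal 2024 figure are our estimates from above; the constant sequential growth rate is our simplifying assumption, not Micron guidance.

```python
# A toy version of our fiscal 2025 HBM ramp model. We assume a constant
# sequential growth rate per quarter (a simplification) starting from our
# estimated $750 million in Q1 F2025, and check the result against the
# $4.7 billion annual total and 8.8X multiple cited above.

q1_hbm = 750e6            # our Q1 F2025 HBM revenue estimate
growth = 0.31             # assumed sequential growth rate per quarter

quarters = [q1_hbm * (1 + growth) ** n for n in range(4)]
total_f2025 = sum(quarters)

print([f"${q / 1e6:.0f}M" for q in quarters])
print(f"Fiscal 2025 HBM total: ${total_f2025 / 1e9:.2f} billion")  # about $4.7 billion
print(f"Multiple over $530M F2024: {total_f2025 / 530e6:.1f}X")    # about 8.9X, in line
                                                                   # with the 8.8X above
```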

Eventually, we think PCs, smartphones, and general purpose servers will recover and there could be more DRAM upside for Micron - just because old stuff needs to be replaced, not because these machines are sprinkled with AI pixie dust.

The other interesting tidbit from Micron was that it had one greater than 10 percent customer in Q1 F2025, and thanks to the US Securities and Exchange Commission, it has to tell us how many such customers it has and how much revenue they drove. In this case, that one company drove 13 percent of total sales at Micron, or about $1.13 billion. We are sure that this is Nvidia, with that $750 million going for HBM3E memory and the rest going for the LPDDR5 used with the "Grace" G100 processor and perhaps DDR5 memory and NAND flash drives used in DGX systems.
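For completeness, that disclosure also lets you back out Micron's total revenue for the quarter; the 13 percent share and $1.13 billion figure are from the filing as cited above, and the division is our arithmetic.

```python
# Backing out Micron's total Q1 F2025 revenue from the customer disclosure.
big_customer_revenue = 1.13e9   # the greater-than-10-percent customer's revenue
big_customer_share = 0.13       # its share of total sales, per the filing

total_q1_f2025 = big_customer_revenue / big_customer_share
print(f"Implied total revenue: ${total_q1_f2025 / 1e9:.2f} billion")  # about $8.7 billion
```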

In the quarter, the Compute and Networking Business Unit, which is the closest thing to a datacenter proxy at Micron, posted sales of $4.36 billion, up 2.5X year on year and up 45.6 percent sequentially. This unit posted an operating loss of $397 million in the year-ago period and booked a $1.71 billion gain this time around, which is a very nice swing indeed.

As you can see from the chart above, Micron has been suffering, along with other chip suppliers, from the server recession and the slump in PC and smartphone sales, and its operating income has also been hurt by heavy investments in HBM technology that are now going to start bearing fruit. Fiscal 2023 and the first half of fiscal 2024 were not a lot of fun for the middle and bottom lines at Micron, but it sure looks like Micron turned a corner earlier this calendar year and is hitting on all the cylinders that it can right now.

In fact, with all of its HBM capacity already sold for fiscal 2025, Micron has a pretty good idea of how the year might turn out, even with three quarters still to go. It all comes down to execution now.
