On January 29, 2026, Apple did what Apple does best: it made stellar results look routine.

The Cupertino giant – now the world's second most valuable company by market capitalization – posted first-quarter results that were, by any measure, extraordinary.

On the earnings call, CEO Tim Cook called it a record-breaking quarter. "With revenue of $143.8 billion, up 16 percent from a year ago, we are well above our expectations," he said.

However, amid the record revenue, Cook acknowledged a quietly brewing supply chain problem across the tech industry: a shortage of memory chips.

Cook warned that Apple will not be immune to the crisis. Rising memory chip prices, he said, are expected to pressure the company's margins in the quarters ahead.

These are the same chips that go into every iPhone, Mac, and iPad – and, for that matter, every Android phone, gaming console, and modern automobile.

Tim Cook is not alone. Across Silicon Valley and beyond, tech leaders are calling it RAMageddon: an industry-wide shortage of memory chips. Experts say the shortage is severe enough to rattle the supply chains of some of the world's most powerful tech companies, derail corporate plans, and inflate prices for end users.

So what is causing this shortage? The answer points to the AI boom and rampant data center buildouts.

But how is a smartphone that fits in your pocket competing for the same chips as a data center the size of a city block?

To understand this, we first need to know what memory chips are, how their supply chain works, and why they are the most fought-over component in tech today.

History of Memory Chips

The story of memory chips begins not in Silicon Valley, but in the Second World War. At Bletchley Park, the British government brought together mathematicians, engineers, linguists, and cryptanalysts for a single mission that could change the outcome of the war: to decipher encrypted Nazi messages at machine speed. Codebreaking, by that time, had moved beyond pen and paper. It required machines that could repeatedly and reliably process, store, and retrieve information.

Enter Aquarius, a cryptanalytic machine built by the British to crack secret wartime messages. But for the machine to break codes efficiently, it needed a way to briefly store and reuse information as it worked.

This is where engineers turned to capacitors, which could temporarily hold an electric signal. This simple method of storing information became the early foundation of modern computer memory.

Two decades later, IBM engineer Robert Dennard turned that idea into a product. In 1966, he designed DRAM – Dynamic Random Access Memory – a chip that stored data using just one transistor and one capacitor.

Intel commercialized it in 1970, and within two years, Intel's 1103 chip became the world's best-selling semiconductor memory. The PC boom of the 1980s accelerated things. Memory chips were now inside every personal computer, in homes and offices around the world.

Over the next few decades, as technology advanced, the chips kept pace β€” shrinking in size, growing in capacity, and falling in cost with almost every passing year.

Then came the smartphone, and with it, a new chapter entirely.

Today, every smartphone runs on two types of memory: DRAM and NAND. DRAM is the phone's short-term memory. It handles everything the device does while it is running, such as opening an app, using the camera, or loading a page. NAND, on the other hand, is the phone's long-term memory. It holds everything that needs to stay – your photos, messages, apps, and files.

And it is not just smartphones. Gaming consoles like the PlayStation, the navigation system in your car, the infotainment screen on your dashboard, the collision-avoidance sensors in modern SUVs – almost every electronic device built in the last two decades has memory chips inside.

Interestingly, even though memory chips power almost every modern device, their production is concentrated in the hands of just three companies.

Samsung and SK Hynix from South Korea, and Micron from the United States, produce more than 90% of the world's memory chips. In fact, South Korea alone accounts for over 73% of global DRAM output and 51% of NAND flash – making it, effectively, the memory capital of the world.

The scale of the industry is also staggering. According to TrendForce, the DRAM market generated $165.7 billion in 2025 – a 73% jump from the year before. NAND flash added another $69.7 billion. Combined, the global memory market crossed $235 billion in a single year.

And for decades, the memory chip industry ran on a predictable boom-and-bust cycle. Everyone in the business knew the pattern. But what no one in the industry anticipated was the arrival of generative AI and the unprecedented gold rush for memory chips it has unleashed.

The Super Cycle

Let's start with how and why AI models need memory. Think of an AI model as a musician performing live. In the middle of a performance, the musician cannot stop to check the sheet music. Every note, every chord, every cue must already be memorized and ready to play the instant it is needed.

AI models work the same way.

Every time a user enters a prompt, the model must hold its entire knowledge – billions of learned parameters – in active memory at once, and process them at speed. The larger the model, the more memory it needs. And since 2023, AI models have grown larger with each generation. Memory requirements have grown with them.
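The back-of-envelope math here is simple: memory needed is roughly the parameter count times the bytes used per parameter. The sketch below is illustrative only – the 70-billion-parameter figure is an assumption, not a reference to any specific model:

```python
# Rough memory footprint for holding an AI model's weights in memory.
# The parameter count below is an illustrative assumption.

def model_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Gigabytes needed just to hold the weights (ignores activations, caches)."""
    return num_params * bytes_per_param / 1e9

params = 70e9  # a hypothetical 70-billion-parameter model

fp16 = model_memory_gb(params, 2)  # 16-bit weights
int8 = model_memory_gb(params, 1)  # 8-bit quantized weights

print(f"FP16 weights: {fp16:.0f} GB")  # 140 GB
print(f"INT8 weights: {int8:.0f} GB")  # 70 GB
```

Even heavily compressed, a model of that size dwarfs the 8 to 16 GB of DRAM in a typical laptop – which is why it runs on specialized hardware instead.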

This has created demand for an entirely new kind of AI memory chip called High Bandwidth Memory, or HBM.

And HBM is not like the memory in our phones or laptops. It is a premium, vertically stacked chip designed to sit on the same package as a GPU – the processor that runs AI models – and feed it data at extraordinary speed.

And GPUs need enormous amounts of memory to work efficiently.


For example, every NVIDIA H100 GPU – the workhorse of today's AI data centers – is stacked with multiple HBM chips. A modern AI data center runs thousands of these GPUs, side by side, around the clock. And a single top-of-the-line GPU, like NVIDIA's Blackwell, can consume six to ten times more memory than the H100.
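To put rough numbers on that scale: an H100-class accelerator carries 80 GB of HBM, per its published spec. The cluster size and phone DRAM figure below are assumptions for illustration, and the comparison is capacity only – HBM and phone DRAM are different products competing for the same fab capacity:

```python
# Rough scale of one AI cluster's memory versus consumer devices.
# GPU count and phone DRAM size are illustrative assumptions.

HBM_PER_GPU_GB = 80        # published HBM capacity of an H100-class GPU
GPUS_IN_CLUSTER = 10_000   # assumed size of a large training cluster
PHONE_DRAM_GB = 8          # a typical flagship smartphone

cluster_hbm_tb = HBM_PER_GPU_GB * GPUS_IN_CLUSTER / 1_000
phones_equivalent = (HBM_PER_GPU_GB * GPUS_IN_CLUSTER) // PHONE_DRAM_GB

print(f"Cluster memory: {cluster_hbm_tb:,.0f} TB")
print(f"Equivalent phones' worth of DRAM: {phones_equivalent:,}")
```

By this rough count, a single large cluster ties up as much memory capacity as a hundred thousand smartphones.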

With tens of billions of dollars being spent on data center infrastructure every quarter, the demand for HBM chips has reached a scale the industry has never seen before.

But why is this affecting memory chips like DRAM and NAND?

Here is the problem. HBM is made in the same factories, by the same three companies – Samsung, SK Hynix, and Micron – that make DRAM and NAND.

With Big Tech paying top dollar and signing long-term contracts to guarantee supply, the math for chipmakers is simple: HBM is far more profitable than the commodity memory used in phones and laptops.

So Samsung, SK Hynix, and Micron have done what any rational business would do – they have redirected their factories toward it. Wafer capacity that once churned out DRAM for smartphones and NAND for laptops is being retooled, line by line, to produce HBM for AI data centers.

This shift has created the supply crunch. And nowhere is that more visible than in the prices of memory chips.

Winners and Losers

Memory chip prices have risen so sharply that they have outpaced gold. According to data from the Bloomsbury Intelligence and Security Institute, DRAM prices are up 171% year-over-year, with spot prices of DDR5 – the latest memory standard for modern devices – more than quadrupling since September 2025.

And the surge shows no signs of stopping.

According to Counterpoint Research, DRAM prices have risen 80-90 percent so far this quarter. NAND flash prices are surging, too. TrendForce, which has revised its forecasts upward twice already this year, now expects NAND contract prices to jump 55–60% quarter-on-quarter in Q1 2026 alone.

This memory shortage, and the historic price rise that followed, has created a clear group of winners and losers. And right now, the winners are the three companies that control the world's memory supply.

Let's first take a look at SK Hynix.

The South Korean chipmaker bet heavily on HBM years before the AI boom, and that bet has paid off. In its January 2026 earnings call, SK Hynix reported record full-year profits for 2025, with operating earnings more than doubling year-on-year.

Revenue rose about 66% in the December quarter from a year earlier, while operating profit surged 137% over the same period. SK Hynix said its HBM revenue more than doubled in 2025, helping it reach a record 97.147 trillion won ($66.5 billion) in revenue for the year – up nearly 50% from 2024. Its annual operating profit reached 47 trillion won ($32.3 billion), more than double the previous year's.

SK Hynix's stock tells its own story. Shares hit an all-time high on the Korea Exchange in October 2025 and have continued to climb since then. The company is even considering a U.S. stock listing to meet growing investor appetite. And in 2025, it achieved something no one in the industry had seen before: SK Hynix overtook Samsung in operating profit and now holds the largest share of the global HBM market.

Micron's growth is no less dramatic. As recently as 2023, the company was deep in losses – one of the memory industry's worst downturns in decades. By the end of 2025, it had posted record full-year revenue of $37.4 billion, up nearly 50%, with data center products – led by HBM – accounting for 56% of its total revenue. Net income nearly tripled in its most recent quarter.

However, for Samsung, the world's biggest memory chipmaker, the story has both ups and downs. It posted record quarterly revenue of 93.8 trillion won ($65 billion) in Q4 2025, with operating profit more than tripling to 20 trillion won ($14 billion). The memory division alone posted an operating profit of 16.4 trillion won ($11.5 billion).

But unlike SK Hynix and Micron, Samsung is not just a memory company. It is also one of the world's largest makers of smartphones, tablets, and consumer electronics – products that depend on the very memory chips its semiconductor division is now prioritizing for AI.

As memory prices surge, Samsung's electronics business faces rising input costs on every device it ships. In Q4 2025, its mobile and networks division saw operating profit fall nearly 9.5% year-on-year and more than 45% from the prior quarter.

While the memory makers are witnessing a purple patch, the companies that depend on these chips are now counting their losses.

In February 2026, Qualcomm – one of the world's largest smartphone chip designers – saw its stock fall 8% in a single session after forecasting quarterly revenue well below Wall Street expectations. The reason was memory. CEO Cristiano Amon put it plainly: memory suppliers have effectively committed their production capacity to data centers, leaving device makers scrambling for supply.

The ripple effects are also spreading across the electronics ecosystem.

Lenovo reported a 21% year-on-year drop in net income for the December 2025 quarter, even as revenue grew, with management pointing directly to rising memory costs as the main driver of the decline. HP delivered a similar warning – forecasting double-digit declines in PC shipments through 2026 and trimming its full-year profit outlook to the low end of guidance, a move that sent its shares down 6%.

Most of these companies operate on razor-thin margins. When a key component like memory doubles or triples in price, there is only so much of that cost a manufacturer can absorb before it starts flowing downstream. And, experts say, that will begin soon.

Price War

The list of companies that have already raised prices – or announced they will – spans every corner of the electronics industry. Lenovo, Dell, HP, Acer, and ASUS have all flagged price increases on laptops and PCs. In November, Dell said it expected the cost basis for all its products to rise due to the memory shortage. According to Counterpoint Research, the average selling price of a smartphone globally is expected to rise 6.9% in 2026 – the steepest single-year increase the industry has seen in over a decade.

But it is not just prices. Companies are also cutting production and delaying product launches.


For example, Chinese smartphone makers including Xiaomi, Oppo, and Transsion are trimming their shipment targets for 2026, with Oppo cutting its forecast by as much as 20%. In gaming, Nintendo has flagged potential price increases for the Switch 2, the most anticipated console launch in years. Sony, meanwhile, is weighing a delay of the PlayStation 6 to 2028 or even 2029.

The electronics industry is not the only one bracing for impact. The automotive sector, which already lived through the trauma of the 2021 chip shortage that prevented more than 10 million vehicles from being built, is staring down a new crisis.

According to S&P Global Mobility, automotive DRAM prices are expected to rise 70–100% in 2026 compared to 2025. Wells Fargo analysts estimate that current DRAM content per vehicle ranges from $50 to $110 – and that the recent surge in prices will translate into a meaningful cost headwind for automakers across the board.
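Combining those two cited figures gives a rough range for the extra cost per vehicle – a sketch only, since actual content and contract prices vary by model:

```python
# Estimated extra DRAM cost per vehicle, combining the cited figures:
# $50-$110 of DRAM content per vehicle, prices rising 70-100%.

def added_cost(base_usd: float, rise_pct: float) -> float:
    """Additional cost if a component's price rises by rise_pct percent."""
    return base_usd * rise_pct / 100

low = added_cost(50, 70)     # least DRAM content, smallest price rise
high = added_cost(110, 100)  # most DRAM content, largest price rise

print(f"Added DRAM cost per vehicle: ${low:.0f} to ${high:.0f}")  # $35 to $110
```

Per car, that looks small – but multiplied across millions of vehicles a year, it becomes the "meaningful cost headwind" the analysts describe.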

Tesla is already sounding the alarm. Elon Musk has warned that rising memory costs and tighter chip availability are beginning to ripple through Tesla's own ecosystem. During the Q4 2025 earnings call, Musk floated the possibility of Tesla building its own large-scale semiconductor manufacturing capacity.

So, as Elon Musk suggests, could building new memory fabs ease the supply crunch? The answer is not straightforward.

Memory Loss

Building a memory fab is not like building an ordinary factory. It is among the most complex, expensive, and time-consuming construction projects in the world – costing between $10 billion and $20 billion per facility and taking three to five years to complete, according to experts.

And even if new memory fabs come online, the demand that triggered this crisis is not going anywhere. Smartphone sales will not fall permanently. Laptop demand will not disappear. And AI data center buildouts are accelerating, not slowing.

And the chipmakers themselves have made their priorities clear. Micron formally exited the consumer memory business in late 2025 – stopping sales to consumer PC makers entirely by February 2026 and redirecting all available capacity to enterprise and AI data center customers.

Its recent investments underscore where its future lies. Micron has committed $200 billion to U.S. expansion – with its greenfield Idaho fab targeting advanced DRAM and HBM capacity – and separately announced a major investment in Singapore to expand its advanced packaging capabilities, also aimed squarely at AI server memory. Samsung and SK Hynix have both announced massive capital expenditure programs directed toward HBM.

With supply already constrained and future production already spoken for, the memory chip shortage is only going to get more severe from here.

This newsletter was written by Shyam Gowtham
