by Suraj Malik
For years, AI hardware headlines have obsessed over GPUs. But the real choke point sits one layer closer to the silicon: memory. This week, SoftBank and Intel moved directly at that bottleneck by announcing a partnership to commercialize a next-gen stacked DRAM project known as “Z-Angle Memory” (ZAM) through SoftBank’s subsidiary SaiMemory.
The pitch is simple and dangerous to incumbents: HBM-like performance with dramatically better power efficiency, aimed at the same AI infrastructure buyers currently trapped in expensive, supply-constrained High-Bandwidth Memory (HBM) contracts.
HBM has become a strategic dependency in modern AI servers because high-end accelerators need enormous bandwidth to keep compute units fed. That has concentrated leverage in the hands of a few producers, most notably SK hynix and Samsung Electronics, with Micron Technology as a smaller third player.
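Why bandwidth, not raw compute, is the binding constraint can be seen with a quick roofline-style calculation. The sketch below is illustrative only: the peak-compute and bandwidth numbers are assumptions roughly in the range of a current high-end accelerator, not figures from this announcement or any ZAM spec.

```python
# Back-of-the-envelope: is an AI workload compute-bound or memory-bound?
# All hardware numbers below are assumptions for illustration, roughly in
# the range of a current high-end accelerator -- NOT ZAM or Intel specs.

PEAK_FLOPS = 1.0e15      # assumed peak compute: ~1000 TFLOPS (low precision)
HBM_BANDWIDTH = 3.3e12   # assumed HBM bandwidth: ~3.3 TB/s

# The hardware "balance point": FLOPs the chip can perform per byte fetched.
balance = PEAK_FLOPS / HBM_BANDWIDTH  # ~300 FLOPs/byte

def arithmetic_intensity_matmul(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for an (m x k) @ (k x n) matmul in fp16/bf16."""
    flops = 2 * m * n * k                                    # multiply-adds
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A, B; write C
    return flops / bytes_moved

# Large training-style matmul: lots of data reuse -> compute-bound.
print(arithmetic_intensity_matmul(8192, 8192, 8192))  # ~2730 FLOPs/byte

# Single-token inference against a big weight matrix: almost no reuse,
# so the chip mostly waits on memory, not math.
print(arithmetic_intensity_matmul(1, 8192, 8192))     # ~1 FLOP/byte

# Any workload whose intensity falls below `balance` (~300 here) is
# memory-bandwidth-bound -- which is why HBM supply is a choke point.
```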
What makes the moment especially tense is that AI models keep scaling, and the industry’s answer has been: “add more memory bandwidth, stack more memory, pay more for it.” A credible alternative doesn’t need to “beat HBM forever” to matter — it only needs to exist at scale.
The partnership centers on commercializing ZAM, a stacked DRAM approach intended to compete in the same performance neighborhood as HBM, but with a major emphasis on power reduction and manufacturability.
According to reporting, the collaboration is positioned as a route to bring the technology to market by combining Intel's strengths in advanced packaging and die stacking with SaiMemory's program structure and its ties to Japanese research institutions and industry.
Important reality check: these are program targets, not shipping product specs. The credibility test will come later: prototypes, yields, and qualification in real AI platforms.
One reason “lower power memory” is such a loaded promise: in AI data centers, heat removal becomes a scaling wall. Cutting memory power does not just reduce the electricity bill — it can reduce thermal density, rack constraints, and cooling overhead.
That’s why an efficiency-focused alternative can be strategically attractive even if it’s only “close enough” on performance. It doesn’t have to win a benchmark war; it has to win an operations war.
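To put rough numbers on that operations argument, here is a hypothetical rack-level sketch. Every input (per-accelerator memory power, the size of the efficiency gain, cooling overhead) is an illustrative assumption, not a ZAM figure.

```python
# Rough sketch: what a more efficient memory stack could mean at rack scale.
# Every number here is an illustrative assumption -- not a ZAM spec.

SERVERS_PER_RACK = 8
ACCELERATORS_PER_SERVER = 8
MEMORY_WATTS_PER_ACCELERATOR = 100   # assumed HBM stack power per accelerator
MEMORY_POWER_REDUCTION = 0.40        # assumed efficiency gain of alternative
COOLING_OVERHEAD = 0.40              # assumed ~0.4 W of cooling per W of IT load

accelerators = SERVERS_PER_RACK * ACCELERATORS_PER_SERVER
memory_watts = accelerators * MEMORY_WATTS_PER_ACCELERATOR

saved_it_watts = memory_watts * MEMORY_POWER_REDUCTION
saved_total_watts = saved_it_watts * (1 + COOLING_OVERHEAD)

print(f"Memory power per rack: {memory_watts:,.0f} W")
print(f"IT power saved:        {saved_it_watts:,.0f} W")
print(f"Saved incl. cooling:   {saved_total_watts:,.0f} W per rack")

# With these assumptions: ~6.4 kW of memory power per rack, ~2.6 kW of IT
# load saved, ~3.6 kW saved once cooling overhead is included. Across
# thousands of racks, that shows up as lower power bills or as thermal
# headroom for more compute per rack.
```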
If ZAM reaches production viability, the most immediate impact is not “HBM dies.” It’s leverage: a credible second source gives AI infrastructure buyers negotiating power against supply-constrained HBM contracts, and forces incumbents to compete on price and power efficiency rather than allocation. That’s exactly why the market treats an alternative memory stack as more than a technical curiosity.
SoftBank’s strategy has increasingly leaned toward “owning critical AI infrastructure choke points,” not just investing in apps. A memory play fits that logic: if AI compute scales into the next decade, memory and interconnect become recurring constraints.
It also fits the broader theme of Japan working its way back into strategic semiconductors, a push visible in multiple initiatives built on academia-industry partnerships.
Three scenarios from here

Scenario 1: ZAM becomes a real second source.
Signs: working prototypes, credible bandwidth/power figures, and a manufacturing partner story that doesn’t implode on yields.
Result: meaningful share capture in specific AI server segments; pricing pressure across the board.

Scenario 2: ZAM wins a niche.
Signs: adoption inside a limited ecosystem first, slower qualification elsewhere.
Result: still important, because it gives buyers leverage, but it doesn’t dethrone incumbents.

Scenario 3: ZAM stalls.
Signs: delays, shifting goalposts, vague performance updates.
Result: incumbents keep control; the HBM squeeze continues.
Next proof points to track
Watch for working prototype demonstrations with measured (not projected) bandwidth and power figures, yield and manufacturing updates from the Intel/SaiMemory side, and qualification in real AI platforms, which is where the “HBM-like performance” claim gets tested.