Technology

SoftBank and Intel Just Fired a Shot at the HBM Bottleneck, and AI Data Centers Heard It

by Suraj Malik - 4 days ago - 4 min read

For years, AI hardware headlines have obsessed over GPUs. But the real choke point sits one layer closer to the silicon: memory. This week, SoftBank and Intel took direct aim at that bottleneck, announcing a partnership to commercialize a next-gen stacked DRAM project known as “Z-Angle Memory” (ZAM) through SoftBank’s subsidiary SaiMemory.

The pitch is simple and dangerous to incumbents: performance in the class of High-Bandwidth Memory (HBM) with dramatically better power efficiency, aimed at the same AI infrastructure buyers currently trapped in expensive, supply-constrained HBM contracts.

Why this announcement matters: HBM isn’t “a component” anymore - it’s the limiter

HBM has become a strategic dependency in modern AI servers because high-end accelerators need enormous bandwidth to keep compute units fed. That has concentrated leverage in the hands of a few producers, most notably SK hynix and Samsung Electronics, with Micron Technology as a smaller third player.
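The arithmetic behind that dependency is the standard roofline model: attainable throughput is the lesser of peak compute and arithmetic intensity times memory bandwidth. A minimal sketch in Python, with all hardware numbers assumed purely for illustration (they are in the neighborhood of current HBM-equipped accelerators, not figures from this announcement):

```python
# Roofline model: attainable FLOP/s = min(peak compute, intensity * bandwidth).
# Both hardware constants are illustrative assumptions, not published specs.

PEAK_FLOPS = 1.0e15  # assumed accelerator peak compute: 1,000 TFLOP/s
MEM_BW = 3.3e12      # assumed HBM bandwidth: 3.3 TB/s

def attainable_flops(intensity: float) -> float:
    """Throughput is capped by compute or by memory traffic, whichever binds first."""
    return min(PEAK_FLOPS, intensity * MEM_BW)

for intensity in (2, 30, 300, 3000):  # FLOPs performed per byte moved
    share = attainable_flops(intensity) / PEAK_FLOPS
    print(f"{intensity:>4} FLOP/byte -> {share:6.1%} of peak compute usable")
```

At the low arithmetic intensities typical of memory-bound inference steps, most of the compute sits idle waiting on DRAM. That is why buyers pay the HBM premium, and why whoever supplies the bandwidth holds the leverage.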

What makes the moment especially tense is that AI models keep scaling, and the industry’s answer has been: “add more memory bandwidth, stack more memory, pay more for it.” A credible alternative doesn’t need to “beat HBM forever” to matter — it only needs to exist at scale.

What SoftBank + SaiMemory + Intel are actually building

The partnership centers on commercializing Z-Angle Memory (ZAM), a stacked DRAM approach intended to compete in the same performance neighborhood as HBM, but with a major emphasis on power reduction and manufacturability.

According to early reporting, the collaboration pairs Intel’s advanced packaging and die-stacking strengths with SaiMemory’s program structure as the route to market, tied to Japanese research and industry participation.

The claims being attached to ZAM 

  • Power draw: targets described as materially lower than conventional HBM, often framed around “half” in program discussions (a rough sense of the watts involved is sketched just after this list).
  • Cost: positioned as cheaper than HBM if it can avoid the same yield/capex traps.
  • Timeline: prototypes and commercialization have been discussed in multi-year steps (late-decade availability is the typical memory-hardware cadence).
  • Ecosystem: efforts have been linked to Japanese institutions like The University of Tokyo and RIKEN in broader reporting around the initiative.
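To give the “half the power” framing a rough physical shape: memory interface power is approximately energy per bit moved times bits moved per second. Both constants in this sketch are assumptions (HBM-class access energy is commonly discussed on the order of a few picojoules per bit; nothing here is a published ZAM figure):

```python
# Memory interface power ~= (energy per bit) * (bits moved per second).
# Both inputs are assumed round numbers for illustration only.

def memory_watts(energy_pj_per_bit: float, bandwidth_tb_s: float) -> float:
    bits_per_second = bandwidth_tb_s * 1e12 * 8  # TB/s -> bits/s
    return energy_pj_per_bit * 1e-12 * bits_per_second

hbm_w = memory_watts(3.5, 3.3)       # assumed HBM-class energy at 3.3 TB/s
half_w = memory_watts(3.5 / 2, 3.3)  # the "half the power" program framing
print(f"HBM-class: ~{hbm_w:.0f} W | half-energy target: ~{half_w:.0f} W per accelerator")
```

Roughly 90 W versus 45 W per accelerator on these assumptions. That sounds modest until it is multiplied across a fleet and grossed up by cooling overhead, which is where the cooling discussion below picks up.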

Important reality check: these are program targets, not shipping product specs. The credibility test will come later: prototypes, yields, and qualification in real AI platforms.

The real battleground is not watts - it’s cooling

One reason “lower-power memory” is such a loaded promise: in AI data centers, heat removal becomes a scaling wall. Cutting memory power does not just reduce the electricity bill; it reduces thermal density, rack-level power constraints, and cooling overhead, as the back-of-envelope sketch below shows.
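Here is that operations math as a sketch. Every constant is an assumption chosen to show the shape of the effect, not a measured figure; fleet sizes, per-accelerator memory power, and facility overhead (PUE) vary widely in practice:

```python
# Halving memory power compounds through facility overhead (cooling, power delivery).
# Every constant here is an assumption for illustration, not a measured figure.

ACCELERATORS = 10_000  # assumed fleet size
MEM_W_TODAY = 100.0    # assumed memory power per accelerator (W), rounding the sketch above
MEM_W_HALVED = 50.0    # the "half the power" target framing
PUE = 1.3              # assumed power usage effectiveness (facility overhead)

def facility_mw(mem_watts_each: float) -> float:
    """Fleet-wide memory power, grossed up by facility overhead, in megawatts."""
    return ACCELERATORS * mem_watts_each * PUE / 1e6

saving = facility_mw(MEM_W_TODAY) - facility_mw(MEM_W_HALVED)
print(f"facility-level saving: ~{saving:.2f} MW")  # ~0.65 MW on these assumptions
```

Every watt of memory power avoided also avoids its share of cooling and power-delivery overhead, so the saving reads larger at the facility meter than at the memory stack.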

That’s why an efficiency-focused alternative can be strategically attractive even if it’s only “close enough” on performance. It doesn’t have to win a benchmark war; it has to win an operations war.

Who gets hurt if ZAM works

If ZAM reaches production viability, the most immediate impact is not “HBM dies.” It’s this:

  • Pricing power gets challenged. Even a partial substitute changes negotiation dynamics.
  • Supply leverage weakens. Buyers gain a second credible option.
  • Margins face pressure. The HBM premium becomes harder to defend when customers can threaten a switch.

That’s exactly why the market treats an alternative memory stack as more than a technical curiosity.

Why SoftBank is in this fight at all

SoftBank’s strategy has increasingly leaned toward “owning critical AI infrastructure choke points,” not just investing in apps. A memory play fits that logic: if AI compute scales into the next decade, memory and interconnect become recurring constraints.

It also fits the broader “Japan wants back into strategic semiconductors” theme that has shown up in multiple initiatives tied to academia + industry partnerships.

The three outcomes that matter (and what to watch next)

1) ZAM becomes a real HBM alternative 

Signs: working prototypes, credible bandwidth/power figures, and a manufacturing partner story that doesn’t implode on yields.

Result: meaningful share capture in specific AI server segments; pricing pressure across the board.

2) ZAM ships, but only in niche deployments 

Signs: adoption inside a limited ecosystem first, slower qualification elsewhere.

Result: still important — it gives buyers leverage — but doesn’t dethrone incumbents.

3) ZAM slips on timeline or qualification 

Signs: delays, shifting goalposts, vague performance updates.

Result: incumbents keep control; the HBM squeeze continues.

Next proof points to track

  • Prototype milestone dates and whether they slip
  • Any named manufacturing/packaging partners and capacity commitments
  • Whether major AI platform players validate compatibility (even quietly)
  • Independent measurements (power, bandwidth, thermals) rather than roadmap claims