
Google and Intel deepen AI infrastructure partnership as CPU demand surges

by Suraj Malik

Google and Intel have expanded their long-running collaboration into a broader, multi-year push to power the next phase of artificial intelligence infrastructure, signaling a shift in how AI systems are being built and scaled globally.

What happened

Google and Intel announced a deeper partnership focused on deploying Intel’s latest server chips across Google Cloud while jointly developing new custom infrastructure processors tailored for AI workloads.

At the center of the deal:

  • Continued deployment of Intel Xeon processors, including Xeon 6, across Google Cloud infrastructure
  • Expansion of custom chip co-development, particularly infrastructure processing units (IPUs)
  • A multi-generation roadmap aligning Google’s data center growth with Intel’s CPU evolution

This builds on a collaboration that began earlier in the decade, but now scales it to meet rapidly increasing AI compute demand.

The bigger shift: AI is moving beyond GPUs

For the past few years, AI infrastructure has been dominated by GPUs used for training large models. That is now changing.

The new partnership reflects a growing industry reality:

  • Training is no longer the only bottleneck
  • Inference and deployment workloads are exploding
  • CPUs are becoming critical again for AI systems

As companies deploy AI models at scale, CPUs handle orchestration, data movement, and real-time inference workloads, making them essential alongside accelerators.
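To make that division of labor concrete, here is a minimal, illustrative sketch of a serving loop (not code from either company; the accelerator call and helper names such as run_on_accelerator are hypothetical stand-ins). The CPU handles batching, data movement, and post-processing around a stubbed accelerator step:

```python
# Illustrative only: a toy serving loop showing the CPU-side work that
# surrounds an accelerator call (batching, data movement, post-processing).
# run_on_accelerator is a hypothetical stub, not a Google or Intel API.
from queue import Queue
from typing import List


def run_on_accelerator(batch: List[List[int]]) -> List[float]:
    # Stand-in for a GPU/TPU inference call; the CPU just waits on it.
    return [sum(tokens) / max(len(tokens), 1) for tokens in batch]


def tokenize(text: str) -> List[int]:
    # CPU-bound preprocessing: turn text into token ids (toy version).
    return [ord(c) % 256 for c in text]


def serve(requests: Queue, batch_size: int = 4) -> List[str]:
    responses = []
    while not requests.empty():
        # 1. CPU: assemble a batch and lay the data out for the accelerator.
        batch = []
        while len(batch) < batch_size and not requests.empty():
            batch.append(tokenize(requests.get()))
        # 2. Accelerator: the heavy numeric work.
        scores = run_on_accelerator(batch)
        # 3. CPU: post-process and format results for the caller.
        responses.extend(f"score={s:.2f}" for s in scores)
    return responses


if __name__ == "__main__":
    q = Queue()
    for text in ["hello", "ai infrastructure", "cpus still matter"]:
        q.put(text)
    print(serve(q))
```

Even in this toy version, most of the steps run on the CPU; only step 2 touches the accelerator, which is why CPU capacity becomes a bottleneck as inference traffic grows.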

Intel CEO Lip-Bu Tan emphasized this shift, noting that scaling AI requires “balanced systems” where CPUs and specialized chips work together rather than relying solely on GPUs.

What IPUs are and why they matter

A key component of the deal is the expansion of Infrastructure Processing Units (IPUs).

These chips are designed to:

  • Offload networking, storage, and security tasks from CPUs
  • Improve data center efficiency and predictability
  • Enable better utilization of expensive AI hardware

In practical terms, IPUs act as specialized coordinators inside data centers, freeing CPUs to focus on compute-heavy tasks while improving overall system performance.
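As a rough software analogy (a toy sketch, not vendor code; in real systems the IPU is dedicated hardware, not a thread), the pattern looks like handing I/O housekeeping to a side worker so the main loop can stay on compute:

```python
# Toy analogy only: the "io_worker" plays the role of an IPU, draining
# networking/storage chores off to the side while the main loop keeps
# doing compute-heavy work. All names here are illustrative.
import queue
import threading
import time

io_tasks: "queue.Queue" = queue.Queue()


def io_worker() -> None:
    # Stand-in for the IPU: continuously handles offloaded I/O chores.
    while True:
        task = io_tasks.get()
        if task is None:
            break
        time.sleep(0.001)  # pretend to move bytes / update metadata
        io_tasks.task_done()


def compute(step: int) -> int:
    # Stand-in for the CPU's compute-heavy work.
    return sum(i * i for i in range(10_000)) + step


def main() -> None:
    worker = threading.Thread(target=io_worker, daemon=True)
    worker.start()
    for step in range(100):
        io_tasks.put(f"flush-telemetry-{step}")  # offloaded, non-blocking
        compute(step)                            # CPU stays on real work
    io_tasks.join()      # wait for the offloaded chores to finish
    io_tasks.put(None)   # tell the worker to stop
    worker.join()


if __name__ == "__main__":
    main()
```

The economic argument is the same in hardware: every cycle the CPU does not spend on networking, storage, or security bookkeeping is a cycle available to keep expensive accelerators fed.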

Why this partnership matters now

1. CPU demand is rising again

The industry is facing a growing shortage of CPUs, driven by the expansion of AI services and real-time applications.

2. AI workloads are becoming more complex

New “agentic AI” systems, capable of multi-step reasoning and actions, require significantly more backend orchestration and compute coordination.

3. Cloud competition is intensifying

Google Cloud is competing with AWS and Microsoft Azure, both of which are investing heavily in custom silicon and AI infrastructure.

This deal helps Google:

  • Maintain supply stability
  • Optimize infrastructure costs
  • Compete with vertically integrated rivals

Strategic implications for Intel

For Intel, the partnership represents more than just a supply agreement.

  • It reinforces Intel’s position in AI data center CPUs, a segment where it had been losing ground
  • It aligns with a broader comeback strategy involving new chip designs and partnerships
  • It comes amid a wave of deals aimed at restoring competitiveness in the AI era

Investor response has been positive, with Intel’s stock rising after the announcement and broader momentum building through 2026.

The competitive backdrop

Even as Google deepens ties with Intel, it continues to diversify its chip strategy:

  • Developing in-house Arm-based CPUs (Axion)
  • Expanding use of TPUs for AI training
  • Leveraging GPUs from external vendors

This reflects a broader industry trend where hyperscalers avoid dependence on a single chip provider, instead building hybrid compute stacks.

Why it matters

This partnership highlights a critical evolution in AI infrastructure:

  • The focus is shifting from raw model training power to full-stack deployment efficiency
  • CPUs are re-emerging as a core layer of AI systems, not just supporting components
  • Custom silicon, like IPUs, is becoming essential for scaling AI economically

In short, the future of AI will not be defined by a single type of chip, but by how well different compute layers are integrated.

The bottom line

The expanded Google-Intel partnership signals a structural shift in the AI ecosystem.

As AI moves from experimentation to large-scale deployment, the winners will not just be those with the fastest GPUs, but those who can build balanced, efficient, and scalable infrastructure stacks.

This deal positions both companies to compete in that next phase, where performance is measured not just in model size, but in how effectively AI can run in the real world.