by Suraj Malik
AI data centers are growing fast. But behind the scenes, a much bigger problem is emerging. Power, not compute, is becoming the real bottleneck.
A new India-based startup, C2i Semiconductors, believes it has a solution. The Bengaluru company is building a “grid-to-GPU” power platform designed to reduce massive energy losses inside AI data centers and improve overall infrastructure economics.
Here is what is happening and why the industry is paying attention.
Artificial intelligence workloads are pushing data centers to consume unprecedented amounts of electricity. Industry projections suggest global data center power demand could nearly triple by 2035.
According to Goldman Sachs, data center electricity usage may rise about 175 percent by 2030 compared with 2023 levels. That increase is roughly equal to adding another top-10 power-consuming country.
But the real issue is not just generating electricity. It is delivering power efficiently to GPUs.
Inside modern data centers, electricity passes through multiple conversion stages before reaching AI chips. Each step wastes energy.
This inefficiency directly increases operating costs and cooling requirements.
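To see why those stages matter, here is a minimal sketch of how per-stage losses compound; the stage names and efficiency figures are illustrative assumptions, not measurements from any real facility or from C2i:

```python
# How conversion losses compound along the power path.
# Stage names and efficiencies below are illustrative assumptions.

stages = {
    "UPS / rectification": 0.97,
    "Bus distribution":    0.99,
    "Rack-level DC-DC":    0.96,
    "Board-level VRM":     0.92,
}

cumulative = 1.0
for name, efficiency in stages.items():
    cumulative *= efficiency
    print(f"{name:20s} {efficiency:.0%} -> cumulative {cumulative:.1%}")

print(f"End-to-end losses: {1 - cumulative:.1%}")
```

Even though every stage looks efficient on its own, the chain above delivers only about 85 percent of the input power, losing roughly 15 percent before it ever reaches the chip.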
C2i, short for Control, Conversion and Intelligence, is trying to redesign how power flows inside AI data centers.
Instead of using separate components for each power conversion stage, the startup is building a unified, plug-and-play system that runs from the data center power bus all the way into the processor package.
In simple terms, C2i wants to treat power delivery as one integrated system rather than many disconnected parts.
The company is focusing on the three pillars in its name: control, conversion, and intelligence, applied across the entire power path. By reducing the number of inefficient handoffs between components, C2i believes it can significantly cut energy waste.
C2i estimates its architecture could reduce end-to-end power losses by roughly 10 percentage points.
What that means in real numbers: at hyperscale, even single-digit efficiency gains can translate into massive financial savings, as the rough estimate below shows.
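A back-of-envelope sketch in Python, assuming a hypothetical 100 MW facility, an illustrative industrial electricity price of $0.08 per kWh, and assumed before/after delivery efficiencies; only the roughly 10-percentage-point improvement comes from C2i's own estimate:

```python
# Rough annual-savings estimate for a ~10 percentage-point efficiency gain.
# Facility size, baseline efficiency, and electricity price are assumptions.

it_load_mw = 100        # assumed power actually delivered to GPUs
eff_before = 0.85       # assumed end-to-end delivery efficiency today
eff_after = 0.95        # ~10 percentage points better, per C2i's estimate
price_per_kwh = 0.08    # assumed industrial electricity price, USD
hours_per_year = 8760

grid_draw_before = it_load_mw / eff_before   # MW pulled from the grid
grid_draw_after = it_load_mw / eff_after
saved_mw = grid_draw_before - grid_draw_after

annual_savings_usd = saved_mw * 1_000 * hours_per_year * price_per_kwh
print(f"Grid draw: {grid_draw_before:.1f} MW -> {grid_draw_after:.1f} MW")
print(f"Annual savings: ${annual_savings_usd / 1e6:.1f}M")
```

Under these assumptions, a single 100 MW site saves roughly $8.7 million a year; multiplied across hundreds of hyperscale facilities, the totals grow quickly.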
Peak XV’s Rajan Anandan has noted that reducing energy costs by 10 to 30 percent across the industry could unlock tens of billions of dollars in value.
Investors are already backing the bet.
C2i was founded in 2024 by former Texas Instruments power executives.
The company now has about 65 engineers and is based in Bengaluru, with customer-facing operations being set up in the United States and Taiwan.
The startup’s first two silicon designs are expected back from fabrication between April and June.
After that, early validation with hyperscalers and major data center operators will begin.
This phase will be critical. In the data center world, power infrastructure changes slowly, and new hardware must prove long-term reliability before large-scale adoption.
Once AI data centers are built, electricity becomes the dominant ongoing cost. That makes power efficiency one of the most important levers in AI economics.
If C2i’s technology works as claimed, the impact could include lower operating costs and reduced cooling loads.
Perhaps most importantly, improved efficiency could free up additional compute capacity without requiring new grid connections, which are becoming increasingly difficult to secure globally.
Power delivery is one of the most entrenched parts of the data center stack. Large incumbents dominate the space, and qualification cycles are long and demanding.
Unlike startups that optimize single components, C2i is attempting a full-stack redesign. That increases both the potential upside and the execution risk.
The next six months, especially the first silicon results and customer feedback, will likely determine whether the company’s approach gains real traction.
C2i’s emergence also highlights a broader shift. India’s chip design ecosystem is maturing, supported by strong engineering talent and government design-linked incentives.
This environment is making it more realistic for startups to build globally competitive semiconductor products directly from India.
AI’s biggest constraint is quietly shifting from compute to power. C2i Semiconductors is betting that fixing the energy delivery chain inside data centers could unlock massive efficiency gains.
If its grid-to-GPU platform performs as promised, the startup could become an important player in the next phase of AI infrastructure. If not, it will face one of the toughest qualification gauntlets in the hardware industry.
Either way, power is now the battleground for AI’s future.