- C2i Semiconductors raised $15M for grid-to-GPU power efficiency solutions, backed by Peak XV Partners and TDK Ventures. The round validates power delivery as an emerging infrastructure category.
- Power constraints now drive AI data center ROI calculations. The shift moves power efficiency from an optimization detail to an architectural constraint shaping deployment decisions on six-month horizons.
- Enterprise decision-makers planning data center expansion must assess grid capacity and power delivery architecture before committing to GPU clusters. The window for planning closes in Q2 2026.
- Watch for grid capacity announcements in major AI hubs: California, Virginia, Texas. When utilities begin rejecting data center power requests, the market has hit its true constraint.
AI infrastructure's most overlooked constraint just became its sharpest competitive edge: C2i Semiconductors has closed a $15 million Series A led by Peak XV Partners. The inflection point is specific. Grid-to-GPU power loss has moved from a data center optimization problem to the limiting variable that determines whether enterprises can scale AI deployment. Peak XV's backing signals that the market's question has shifted from "can we afford this?" to "can we power this?", and that distinction reshapes AI infrastructure expansion timelines.
The story of AI infrastructure has been told in terms of chips: how many GPUs you can buy, how fast they compute, which generation you deploy. But the real constraint grinding against expansion right now isn't silicon. It's electricity.
This realization crystallized this morning when C2i Semiconductors, an Indian startup working to reduce grid-to-GPU power loss, closed $15 million in Series A funding. Peak XV Partners led the round, with TDK Ventures participating. On the surface, it reads as another infrastructure startup solving a technical problem. Below the surface, it's a market-validation moment: investors are now pricing power delivery as a distinct, investable category within AI infrastructure. That's the inflection.
Here's why the timing matters: data center operators have spent eighteen months trying to answer a deceptively simple question. You have a next-generation GPU that pulls 700 watts. You want to cluster 10,000 of them together. The math seems straightforward until you try to deliver power at that density. The losses between grid connection and GPU die aren't marginal engineering concerns anymore. On a 10,000-GPU cluster, they're the difference between "we can site this data center here" and "we cannot afford the power delivery infrastructure."
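To make that math concrete, here is a minimal back-of-envelope sketch. The 700 W per GPU and the 10,000-GPU cluster are the figures above; the two end-to-end delivery efficiencies are illustrative assumptions, not C2i or vendor data.

```python
# Back-of-envelope sketch of the cluster power math described above.
# 700 W per GPU and 10,000 GPUs come from the article; the delivery
# efficiencies (95% vs 85%) are illustrative assumptions.

GPU_COUNT = 10_000
WATTS_PER_GPU = 700

die_load_mw = GPU_COUNT * WATTS_PER_GPU / 1e6  # power that must reach the GPU dies

for efficiency in (0.95, 0.85):
    grid_draw_mw = die_load_mw / efficiency        # what you must pull from the grid
    delivery_loss_mw = grid_draw_mw - die_load_mw  # lost between grid and die
    print(f"delivery efficiency {efficiency:.0%}: "
          f"grid draw {grid_draw_mw:.2f} MW, loss {delivery_loss_mw:.2f} MW")
```

Under these assumptions the dies need 7 MW, and the gap between an efficient and an inefficient delivery stack is close to a megawatt of continuous grid capacity before a single token is generated.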
TDK's involvement signals something deeper: this isn't just software optimization or better cooling. TDK Ventures backs hardware-layer solutions, which suggests C2i is working on the physical power delivery architecture itself, things like transformer efficiency, distribution losses, and rectification stages. That is the mundane materials science that determines whether a megawatt-scale power infrastructure wastes 5% of its input or 15%.
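Those per-stage losses compound multiplicatively, which is how a stack ends up in the 5% or the 15% range. A minimal sketch, with stage names taken from the paragraph above and purely illustrative efficiency numbers:

```python
# How per-stage losses compound into the 5%-vs-15% range mentioned above.
# The stage names follow the article; the efficiency values are illustrative
# assumptions, not measured figures.

def end_to_end_loss(stage_efficiencies):
    """Fraction of grid input lost before it reaches the GPU die."""
    delivered = 1.0
    for eff in stage_efficiencies:
        delivered *= eff
    return 1.0 - delivered

tight_stack = {"transformer": 0.99, "distribution": 0.98, "rectification": 0.98}
loose_stack = {"transformer": 0.97, "distribution": 0.95, "rectification": 0.93}

for name, stack in (("tight", tight_stack), ("loose", loose_stack)):
    loss = end_to_end_loss(stack.values())
    print(f"{name} stack: {loss:.1%} of grid input lost before the die")
```

With these assumed numbers the tight stack loses about 5% and the loose stack about 14%, even though no single stage looks dramatically bad on its own.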
Remember when cloud computing hit its infrastructure constraint? It wasn't CPU cores. It was fiber optic connectivity. Companies that solved the "how do we connect global data centers efficiently" problem (think Equinix, Global Signal) became foundational infrastructure-layer winners. Power delivery in AI is that moment compressed into eighteen months instead of eighteen years, because the scale of expansion is more aggressive.
The numbers illustrate the constraint. A single hyperscaler training facility now consumes as much power as a city of 50,000 people. Nvidia's H100s pull 700 watts under load. New architectures coming to market pull more. When you're deploying 50,000 GPUs across multiple sites, power becomes a first-order planning variable, not an afterthought. Grid capacity matters. Transformer efficiency matters. Cable gauge matters. And if your data center loses 8% of input power through inefficiency in the delivery stack, you're funding a power plant somewhere that produces no AI value.
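A rough sketch of what that 8% loss means at the 50,000-GPU scale cited above; the utilization factor and electricity price are illustrative assumptions, not reported numbers.

```python
# What an 8% delivery-stack loss means for a 50,000-GPU fleet.
# 700 W per GPU, 50,000 GPUs, and the 8% loss come from the article;
# the utilization and $/MWh rate are illustrative assumptions.

GPU_COUNT = 50_000
WATTS_PER_GPU = 700
LOSS_FRACTION = 0.08         # share of grid input lost in the delivery stack
HOURS_PER_YEAR = 8_760
UTILIZATION = 0.7            # assumed average load factor
PRICE_PER_MWH = 80.0         # assumed industrial electricity rate, $/MWh

die_load_mw = GPU_COUNT * WATTS_PER_GPU / 1e6      # 35 MW at the dies
grid_draw_mw = die_load_mw / (1 - LOSS_FRACTION)   # ~38 MW pulled from the grid
wasted_mw = grid_draw_mw - die_load_mw             # ~3 MW of continuous loss

wasted_mwh_per_year = wasted_mw * HOURS_PER_YEAR * UTILIZATION
annual_cost = wasted_mwh_per_year * PRICE_PER_MWH

print(f"continuous delivery loss: {wasted_mw:.1f} MW")
print(f"annual wasted energy: {wasted_mwh_per_year:,.0f} MWh (~${annual_cost:,.0f})")
```

Even under these conservative assumptions, the waste runs to thousands of megawatt-hours and seven figures a year, which is the "power plant that produces no AI value" in concrete terms.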
What Peak XV's backing actually signals is that they are seeing enterprises make data center expansion decisions with power constraints as the limiting factor. This isn't hypothetical anymore. When Databricks, Anthropic, or a financial services firm training internal models plans its infrastructure, it now hits grid capacity constraints in certain regions before it hits chip constraints.
The timing creates a compressed window. Enterprises with more than 10,000 employees planning AI infrastructure buildout in 2026 need to answer three questions simultaneously: What compute do you need (GPU selection)? Where can you site it (grid capacity)? How do you deliver power efficiently (power architecture)? If you're asking these questions sequentially instead of in parallel, you've already lost six months. If your infrastructure team doesn't have power delivery expertise, that gap just became a hiring priority.
For smaller organizations and startups, the constraint manifests differently but is no less real. Cloud provider capacity for GPU instances is increasingly power-constrained, not hardware-constrained. When Azure or AWS throttles your ability to provision more GPUs, it's not because they ran out of chips. It's because grid capacity in that region can't support additional power draw.
C2i's specific angle, grid-to-GPU efficiency, addresses the point where power delivery and AI deployment collide. They're not building chips. They're fixing the infrastructure layer between the grid and the processor. That's infrastructure-layer thinking, which is why TDK, a components manufacturer, is backing them alongside Peak XV, one of the largest venture firms in India and Southeast Asia.
The market response will follow a predictable pattern. Other startups will emerge with competing power delivery solutions. Established infrastructure companies (Schneider Electric, Eaton, Siemens) will suddenly care deeply about GPU-scale efficiency specifications. Utilities will begin planning new generation capacity specifically for AI data center clusters. And data center operators will shift from "where can we build?" to "where can we build given grid constraints?"
The constraint is real, immediate, and reshaping capital allocation. Power efficiency has moved from a data sheet specification to a competitive moat.
Power delivery is now being priced into AI infrastructure economics as a distinct, investable category. C2i's $15M Series A validates what enterprise infrastructure teams are already discovering: grid capacity constraints, not GPU availability, are now the binding variable in data center expansion decisions. Enterprise decision-makers planning 2026 infrastructure buildout should be engaging power delivery architects alongside chip architects. For infrastructure investors, the window to back grid-to-GPU efficiency solutions is 6-8 months wide; after that, the category becomes crowded and commoditized. Watch grid capacity announcements in California, Virginia, and Texas. When utilities begin capacity rationing for data center customers, the market has found its true constraint.