
AI is redefining data center infrastructure: Insights from Data Center Vision 2030

NVIDIA GTC 2025: Research from analyst firm IDC examines how AI is driving a dramatic overhaul of data center design and capacity.

The adoption of AI and accelerated compute is shaping the future of data center technology development, including the key power, cooling, and services to support AI workloads and infrastructure.

This shift is explored in an InfoBrief research report by analyst firm IDC titled “Data Center Vision 2030: How Data Center Infrastructure Will Evolve to Support AI and Accelerated Compute.” The research was discussed in more depth during a dedicated session at NVIDIA GTC 2025 in San Jose, California, between Sean Graham, Research Director, Cloud to Edge Datacenter Trends at IDC, who authored the research, and Vertiv’s Martin Olsen, Senior Vice President, Products and Solutions.

Pivot point for AI strategy

IDC’s Graham opened the session with a clear declaration. “We’re at a pivot,” he stated, noting that after two years of AI experimentation in 2023 and 2024—sparked by ChatGPT and corporate mandates—businesses are now shifting gears. “What does this look like for our business going forward? What’s our AI strategy?” he asked, framing the question now dominating boardrooms. IDC’s research backs this shift: 84% of senior tech leaders see AI as a “new corporate workload,” akin to email or SAP, with dedicated budgets and processes.

Spending is ramping up accordingly, according to IDC. “Budget increases from CIOs [show] two of the top four things that are going to have the most increased spending in 2025 [are] around IT infrastructure to support all of these AI use cases,” Graham said. IDC projects AI infrastructure investment growing at 28.1% annually, but there’s a glaring problem: “The data center facilities that we have today aren’t adequate to support it,” he warned, citing a survey where 80% of leaders flagged power and cooling as critical or important to their AI plans.


“Data center energy is going to grow 23.5% year over year from 397 terawatt-hours (TWh) in 2024, with 108 gigawatts of new power capacity needed by 2028.”

-Sean Graham 
Research Director, Cloud to Edge Datacenter Trends, IDC


Power has emerged as the central bottleneck. “Data center energy is going to grow 23.5% a year between 2024 and 2028,” Graham stated, with consumption set to double by decade’s end and 108 gigawatts of new power capacity needed globally. This surge comes with a cost. “That’s going to make them a lot more expensive to build and operate as we compete for power and critical infrastructure,” he explained, pointing to growing community pushback—“not in my backyard”—and competition with sectors like electric vehicles for grid resources.
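As a quick sanity check on these figures, a compound-growth calculation (assuming the 23.5% annual rate applies from the 397 TWh baseline cited in the pull quote; this sketch is illustrative, not part of IDC's methodology) shows consumption a little more than doubling over the four years from 2024 to 2028:

```python
# Compound-growth check on IDC's data center energy projection.
# Assumptions: 397 TWh consumed in 2024, 23.5% year-over-year growth.
BASELINE_TWH = 397.0
GROWTH_RATE = 0.235  # 23.5% per year


def projected_energy(years: int) -> float:
    """Projected consumption in TWh after `years` of compound growth."""
    return BASELINE_TWH * (1 + GROWTH_RATE) ** years


energy_2028 = projected_energy(4)  # 2024 -> 2028 is four growth steps
multiple = energy_2028 / BASELINE_TWH

print(f"2028 projection: {energy_2028:.1f} TWh ({multiple:.2f}x the 2024 level)")
```

The ~2.3x multiple is consistent with the article's statement that consumption is "set to double by decade's end."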

Accelerated computing redefines design

Martin Olsen from Vertiv drilled into the technical implications, emphasizing AI’s role in breaking current molds. “AI is driving [data centers] to where power, cooling, and compute operate as one holistic unit,” he said, describing it as a “paradigm shift” for an industry once siloed by specialty. Vertiv’s five-year partnership with NVIDIA has tracked this evolution, particularly with platforms like Blackwell, which boasts 35 times the power of its predecessor. “It wasn’t long ago we were talking 25 kilowatts, 40, 80. Now it’s 130 kilowatts at a rack,” Olsen noted, underscoring the rapid density increase.

This shift demands new approaches. “We’ve introduced liquid cooling as part of this,” said Olsen, hinting at a future where racks could hit “multiple hundreds of kilowatts”—or even a megawatt if unchecked. The vision? A “unit of compute” that’s plug-and-play. “I met with a fairly large AI provider yesterday. To them, it’s about a chunk of compute that’s going to come online,” Olsen shared, illustrating the mindset of new entrants like sovereign AI providers.


“AI is driving data centers to where power, cooling, and compute operate as one holistic unit.”

-Martin Olsen 
Senior Vice President, Products and Solutions, Vertiv


Solutions on the horizon

The obstacles are formidable. “Power permitting and availability [are] a really significant problem,” Olsen stressed, citing a 3.5-gigawatt demand in Silicon Valley by 2028 and an 800-megawatt project in Hayward—enough to power a Miami-sized city. AI’s dynamic workloads add complexity. “It becomes extremely important that the power and cooling is tightly integrated with the compute,” he explained, advocating for predictive systems to manage GPU-driven spikes.

Despite the challenges, solutions are taking shape. Vertiv’s seven-megawatt reference architecture for NVIDIA’s GB200 NVL72 promises “20% more energy efficient, 30% more space efficient” designs, cutting deployment times in half and reducing total cost of ownership by 25%. “We continue to evolve that,” Olsen assured, linking it to predictive infrastructure that tracks dynamic workloads.
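To put the two figures quoted above side by side: at the 130 kW-per-rack density Olsen describes, a seven-megawatt envelope corresponds to only a few dozen racks. The arithmetic below is a rough illustration (it ignores cooling and distribution overhead, which would reduce the usable IT load):

```python
# Rough sizing arithmetic from the figures quoted in the article.
# Assumptions: 130 kW of IT load per rack and a 7 MW facility envelope;
# power/cooling overhead is ignored for simplicity.
RACK_KW = 130
FACILITY_MW = 7

# Whole racks that fit within the facility's power budget.
racks = (FACILITY_MW * 1000) // RACK_KW
print(f"~{racks} racks at {RACK_KW} kW/rack fit a {FACILITY_MW} MW envelope")
```

That compression—megawatts consumed by tens of racks rather than thousands—is what drives the "unit of compute" framing earlier in the piece.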

Watch Martin Olsen’s full presentation, where he details how the data center has evolved into a single unit of compute, unifying power and cooling with GPU-accelerated computing resources.

Market stakes soar

The financial stakes are staggering, with Olsen shedding light on the industry’s investment frenzy. “$200 billion spent on training so far,” he said, adding that big investors might push that closer to $300 billion already. Looking ahead, projections suggest another $1 trillion could pour into AI infrastructure by 2030 as demand skyrockets. Yet the revenue picture lags. “Cisco’s CEO pegged it at about five or $10 billion to come out of it from a revenue standpoint,” Olsen noted, framing inferencing revenue as a mere fraction of its potential. The gap between massive outlays and early returns underscores the long game: infrastructure today is the foundation for tomorrow’s value.

Graham’s closing statement crystallized the stakes: “The data center of the future is going to look fundamentally different than it does today.” Olsen echoed this, spotlighting “inferencing” as the endgame— “That’s how we’re going to extract value from this”—while noting that 80% of enterprise data remains on-site, driving an “enterprise revival” in owned infrastructure.

Dive deeper into the data center evolution

For a comprehensive look at the trends, challenges, and solutions, download the InfoBrief from Vertiv and IDC: How data center infrastructure will evolve to support AI and accelerated compute
