2026 AI Investment Slowdown: How Capital Shifts From GPUs to Power Infrastructure and Korean Semiconductors
📘 Part 1. The First Signs of an AI Investment Slowdown — From Three Years of Explosive Growth to a New “Speed Adjustment”
From 2023 to 2025, we’ve lived through what can only be called an AI infrastructure gold rush. Large language models (LLMs) like ChatGPT, image and video generators, recommendation engines, search and ad algorithms — you could argue it was harder to find a service not using AI. Every major Big Tech name and a growing number of global enterprises went “all-in” on AI.
What supported this gold rush was not just the algorithms themselves, but the massive amount of physical infrastructure — CAPEX — required to make them run.
NVIDIA’s GPU shipments hit fresh record highs quarter after quarter, with AI and data-center GPUs taking an ever-larger share of the mix.
According to TrendForce estimates, total AI server CAPEX in 2023–2024 grew roughly in the 30–40% year-over-year range.
Demand for high-bandwidth memory (HBM) surged, and HBM lines at SK hynix and Samsung Electronics effectively went into “full-tilt” operation.
TSMC’s 5-nm and 4-nm nodes also maintained “near-record” utilization as AI GPUs and accelerators flooded its order book.
On top of that, AI infrastructure CAPEX at Big Tech names like Meta and Alphabet grew at roughly +40–60% year-over-year, as they rapidly reoriented their investment mix toward AI and data centers.
NVIDIA’s data-center revenue alone more than tripled in just about two years — a sign not of a simple “hot product,” but of an entire global wave of data-center and cloud operators laying down AI infrastructure at once.
In short, “growth” hardly does justice to what we saw from 2023 to 2025. The pace was closer to an outright explosion.
The emerging consensus among analysts can be summed up in one line: “AI spending will continue, but the growth rate will slow — a normalization phase.”
In other words, AI investment is not about to stop or reverse. What is likely to change from 2026 onward is that the exceptionally high growth rates we’ve seen so far start to drift back toward more “normal” territory.
Put simply, the AI industry looks like it is moving from the initial explosion phase (Phase 1) into a more mature, normalized phase (Phase 2).
1️⃣ Three Engines Behind AI Growth Are Now Flashing “Limit” Signals
Why are so many analysts suddenly talking about AI investment growth peaking around 2026?
It’s not because “AI is over” or the hype is fading. It’s because the three key engines that pushed AI CAPEX higher over the past three years are now bumping into real-world constraints at the same time.
Those three engines are:
- Hyperscaler (Big Tech) CAPEX constraints
- Physical limits of the power grid
- GPU and HBM supply bottlenecks
These may look like separate issues, but they are really different faces of the same underlying problem: the assumption that “we can invest infinitely” no longer holds.
(1) Hyperscaler CAPEX — When the Bill Starts to Bite
From 2023 to 2025, Amazon, Google, Meta, Microsoft and Oracle all used their earnings calls to highlight aggressive AI and data-center CAPEX plans.
Meta sharply shifted away from metaverse and AR, steering roughly $47 billion in CAPEX toward AI and data centers. Microsoft increased data-center CAPEX by more than 50% per year for its cloud and AI build-out, repeatedly calling AI infrastructure its “top investment priority.” Google also ramped up CAPEX to fund in-house TPUs, AI accelerators and a sweeping data-center overhaul.
Seen from the outside, it was easy to think, “they’ll just keep investing like this forever.” But their financial structures tell a different story once the bill comes due.
An AI server typically includes:
- Multiple high-end GPUs
- HBM-based memory
- High-speed networking cards
- High-efficiency power and advanced cooling
With GPU and HBM prices rising, the cost of one AI server is now several times that of a traditional general-purpose server. And when you scale that to racks and clusters, each deployment can easily run into the hundreds of thousands to millions of dollars.
And buying servers is just the beginning of building an AI cluster.
- Advanced cooling systems (liquid, immersion and beyond)
- Power feeds and distribution infrastructure
- Backup UPS and on-site generation
- Network switches and backbone capacity
- Primary and backup storage systems
All of that has to come together before you have a “production-grade AI data center.”
The result is simple: with the same IT budget, hyperscalers can now deploy far fewer servers than before.
Internally, Big Tech CFOs and strategy teams are now forced to ask:
“Out of the AI CAPEX we poured in over the last 2–3 years, how much incremental revenue and profit are we actually getting?”
Better AI search and ad targeting, Copilot-style subscription revenue, more effective content and ad recommendations — these clearly help. But it takes time to prove whether the returns justify the scale of the capital deployed.
That’s why from 2026 onward we’re more likely to see a shift:
- From “let’s grow CAPEX 50% every year”
- To “let’s maintain or selectively grow CAPEX where it truly pays”
This is the first structural reason behind a slowdown in AI CAPEX growth: the reality of hyperscaler CAPEX constraints.
(2) Power Grid Limits — The First Hard Cap on AI Expansion
The second factor is the physical limit of the power grid. When people talk about AI investment, they usually think about GPUs, HBM and parameter counts. But on the ground, the word you hear most often is “electricity.”
The International Energy Agency (IEA) warns that global data-center power demand could nearly double between 2024 and 2026.
The logic is straightforward:
- A single GPT-class model training run can consume as much electricity as a small city uses in a day
- One rack of AI servers can draw 3–5x the power of a conventional server rack
- As models grow and parameter counts rise, power requirements climb ever more steeply
And a data center is much more than a room full of servers:
- Cooling systems
- HVAC and environmental control
- UPS and backup generators
- Security and monitoring systems
all draw significant power as well.
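The arithmetic behind those multiples can be sketched in a few lines. Every number below — rack wattages, the overhead factor, the rack count — is a hypothetical round figure for illustration, not a measured spec:

```python
# Back-of-envelope estimate of data-center power draw.
# All numbers are illustrative round figures, not measured specs.

CONVENTIONAL_RACK_KW = 8                # typical general-purpose server rack
AI_RACK_KW = CONVENTIONAL_RACK_KW * 4   # mid-range of the 3-5x multiple above
PUE = 1.4                               # overhead factor: cooling, UPS, lighting

def facility_power_mw(num_racks: int, rack_kw: float, pue: float) -> float:
    """Total facility draw in MW, including non-IT overhead via PUE."""
    return num_racks * rack_kw * pue / 1000

ai_mw = facility_power_mw(1000, AI_RACK_KW, PUE)
conventional_mw = facility_power_mw(1000, CONVENTIONAL_RACK_KW, PUE)
print(f"1,000 AI racks: {ai_mw:.1f} MW vs conventional: {conventional_mw:.1f} MW")
```

Even at these modest assumptions, a thousand-rack AI deployment lands in utility-scale territory — which is why grid capacity, not budget, becomes the gating factor.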
The defining characteristic of power infrastructure is that you can’t double it just by writing a bigger check for next year’s budget.
- New power plants need to be built
- New high-voltage transmission lines must be permitted and installed
- Substations need expansion
- Environmental and regulatory approvals must be cleared in each region
All of this takes years, not quarters.
In some parts of the U.S., we’re already seeing:
- Power grids running close to capacity
- New data-center permits being delayed
- Difficulty securing suitable sites with sufficient power
In short, without enough grid capacity, buying more GPUs doesn’t matter.
We’re increasingly hitting a point where it is “physically hard to add more AI data centers.”
This is one of the most fundamental drivers behind slower AI CAPEX growth from 2026 on.
(3) HBM and GPU Bottlenecks — “Even If You Want to Buy More, You Can’t”
The third factor is the supply chain. The GPU is the “heart” of an AI server, and HBM is its “brain.” Through 2024–2025, both remained in chronic short supply — essentially, “if you can get it, you can sell it.”
- Lead times for NVIDIA H100 and next-gen GPUs averaged 6–9 months
- HBM3E and other advanced HBM products saw demand outstrip supply by roughly 20–30%
- For SK hynix and Samsung, adding HBM capacity — equipment, line ramp, yield stabilization — typically takes 9–12+ months
Demand is overflowing, but supply is constrained by fab capacity, equipment lead times and advanced node yields. You can’t simply double output in a year.
From an operator’s perspective, the situation looks like this:
- They want to launch more AI services
- They want to deploy more GPUs
- They want to secure more HBM
But if the supply chain can’t keep up, CAPEX that’s been budgeted simply can’t be deployed as quickly as planned.
That’s why in 2026, it’s quite possible that AI CAPEX growth slows not because demand has collapsed, but because “supply can’t keep pace with demand.”
2️⃣ Slower Growth ≠ Bubble Burst — It’s a Shift in Phase, Not the End of the Cycle
Here’s the crucial point.
A slowdown in AI CAPEX growth does not automatically mean an AI bubble is bursting.
A slower growth rate doesn’t mean:
- AI technology becomes irrelevant
- AI-related revenue suddenly turns negative
- The entire AI industry disappears
In fact, we’ve seen almost the same pattern before in prior IT cycles.
🔹 The “Normalization Pattern” in Mobile and Cloud
Think back to the period when smartphones, mobile internet and cloud computing really took off.
- 2010–2013: Smartphone penetration, mobile data usage and cloud revenue all grew at 30–40% per year. Even then, many reports warned this kind of growth couldn’t last forever and would eventually normalize.
- 2014–2016: Growth slowed to 10–20%, but corporate earnings became more stable. CAPEX also shifted from “expand at all costs” to “prioritize and focus.”
- 2017 onward: Mobile and cloud didn’t disappear — they became embedded infrastructure, so fundamental that they faded into the background of everyday life and business.
Growth rates slowed, but the total market became much larger, and stock prices resumed long-term uptrends after short-term corrections.
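A quick compounding sketch shows why “slower growth” and “much larger market” are compatible. The growth rates are hypothetical, chosen to echo the 30–40% and 10–20% ranges above:

```python
# Compound a market index through an "explosion" phase and then a
# "normalization" phase. Rates are hypothetical, echoing the 30-40%
# and 10-20% ranges discussed above.

def compound(start: float, rates: list[float]) -> float:
    """Apply a sequence of annual growth rates to a starting index."""
    size = start
    for r in rates:
        size *= 1 + r
    return size

after_explosion = compound(100, [0.40, 0.35, 0.30])
after_normalization = compound(after_explosion, [0.15, 0.15, 0.15])
print(f"index after explosion phase:     {after_explosion:.0f}")
print(f"index after normalization phase: {after_normalization:.0f}")
```

After three “normalized” years the index is still more than 3.5x its starting value; the slowdown changes the slope, not the direction.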
AI is likely to follow a similar path:
- Right now we’re entering the transition from the “explosion” phase to the “normalization” phase
- Short-term volatility can spike
- Over the long run, AI becomes pervasive infrastructure across the economy
So from 2026 onward, a slowdown in AI CAPEX growth is less a sign that “AI is over” and more a sign that “AI is settling in as a deeper layer of economic and industrial infrastructure.”
From 2023 to 2025, AI infrastructure went through an exceptionally rapid build-out. From 2026, three real-world constraints — hyperscaler CAPEX discipline, power-grid limits and supply bottlenecks in GPUs and HBM — are likely to pull growth back toward a more normal range.
That shift is not the “end” of AI, but rather a kind of “conditioning phase” that AI must pass through to move into its next stage. This is exactly the kind of signal global investors should be watching most closely going into 2026.
📘 Part 2. What an AI Investment Slowdown in 2026 Could Mean — The Shock of a Speed Change for the U.S., Korea and the Global Market
From 2023 to 2025, companies around the world fought to secure GPUs and poured tens of billions of dollars into data-center expansion — a classic “AI infrastructure overheating” phase. As we move into 2026, that steep growth curve is starting to run into the natural wall of “slower growth.”
This slowdown isn’t just about a lower growth rate. It is a structural turning point that could reshape U.S. tech, Korean semiconductors, power infrastructure and the global value chain all at once.
Let’s look at how this shift might play out in each major segment.
1️⃣ U.S. Tech Stocks — Higher Short-Term Volatility, Stronger Medium-Term Fundamentals
Over the last three years, U.S. tech stocks went through what you could call the “fastest re-rating in market history.” Companies plugged into the AI server and GPU supply chain saw extraordinary momentum.
- NVIDIA: Market cap roughly tripled to quadrupled in two years
- Broadcom: Demand surged for AI data-center networking and ASIC chips
- Super Micro Computer: Revenue doubled to tripled in about two years
- Microsoft, Google, Meta: AI-related CAPEX growing 40–60% per year
This kind of move happens when three forces align: earnings growth, future growth expectations and liquidity.
Now, with AI CAPEX growth widely expected to normalize into the 15–25% range in 2026, tech stocks are starting to reflect a different reality.
📌 (1) Short-Term: Valuation Reset and Profit-Taking
During the peak of the AI boom, many tech names traded at:
- Elevated P/E multiples
- High price-to-sales ratios
- Rich “growth premium” valuations
all justified by exceptional growth.
Any sign of slower CAPEX growth acts as a catalyst to “cool off stocks that ran too far too fast.”
The effect tends to be most pronounced in names like NVIDIA, AMD and Super Micro — companies whose revenues and profits jumped two to three times in a short span. That’s why 2026 could be a year of heightened volatility for tech.
📌 (2) Medium-Term: “Normalize, Then Grow Again”
But again, slower growth is not the same as a crisis. A more accurate analogy is a “runaway train easing back to a sustainable speed.”
The cloud sector went through almost the same pattern:
- 2010–2013: 40%+ annual growth — concerns about overheating
- 2014–2016: Growth slowed — valuations corrected
- 2017–2021: Stable growth supported a new leg of earnings-driven gains
AI is likely to move through a similar arc: from overinvestment to a focus on efficiency, optimization and stable growth.
- Short term: Volatility and valuation reset
- Medium term: Stronger fundamentals after normalization
- In other words: “End of overheating, start of maturity.”
2️⃣ Korean Semiconductors — Short-Term Swings, but Even Stronger Long-Term Growth
In Korea, the sector most sensitive to an AI slowdown is semiconductors — especially Samsung Electronics and SK hynix.
📌 (1) HBM Demand Still Growing, Just Not as Fast
In 2023–2024, the HBM market was in an all-out shortage:
- HBM demand growth ran at 30–40%+
- NVIDIA orders surged — SK hynix and Samsung were effectively running full out
- HBM pricing saw strong upward pressure, with some estimates of 20%+ price moves
By 2026, we’re likely to see:
- HBM demand growth normalizing into the 10–20% range
- More supply coming online, easing price pressure
- AI server growth slowing, which tempers the pace of new orders
That’s not “no growth” — it’s the point where “hyper-growth transitions into steady growth.”
📌 (2) Short-Term: Foreign Flows and Volatility
Korean semiconductors are tightly linked to U.S. tech sentiment. When U.S. Big Tech corrects, the chain tends to run:
→ Foreign selling in Korea
→ Sharp moves in Samsung & SK hynix
→ Higher volatility for the KOSPI overall
This pattern has played out many times, and is likely to repeat in 2026 if AI CAPEX growth slows and Big Tech stocks consolidate.
📌 (3) Long-Term: Competitive Position Actually Strengthens
Here’s the more important part:
- The HBM market is effectively an oligopoly dominated by SK hynix and Samsung
- The more AI expands, the more critical HBM becomes
- Korea also has an edge in 2.5D and 3D advanced packaging
So even if stock prices wobble in the short run, the structural competitiveness of Korean semiconductors actually improves over the long term. That’s the key lens for interpreting an AI slowdown from Korea’s perspective.
3️⃣ Power Infrastructure — Not a Casualty of AI, but a Major Beneficiary
Among the reasons cited for an AI slowdown, “power” is arguably the most binding constraint of all.
The IEA highlights:
- Data-center power demand could double between 2024 and 2026
- AI servers consume 3–5x more power than conventional servers
- Some U.S. regions are pausing or slowing new data-center approvals due to grid constraints
When you connect the dots:
- GPU demand is strong, but there isn’t enough power to host them
- AI demand is booming, but transmission and distribution are bottlenecks
- So the ultimate speed limit on AI is the power infrastructure itself
That’s a game-changer for the next phase of the AI cycle. It’s why some analysts argue that from 2026 onward, the biggest winners may not be the AI application providers, but the power-infrastructure players behind them.
📌 (1) HVDC — The “High-Speed Highway” of AI-Era Electricity
- Minimizes losses in long-distance, high-capacity power transmission
- Large HVDC projects are being rolled out in the U.S. and Europe
- China and the Middle East are also ramping up HVDC investments
Korean companies like LS Cable & System and LS Electric are increasingly part of this global supply chain and stand to benefit.
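The loss advantage of high-voltage transmission is simple resistive physics: for a fixed power P, line current is P/V, and loss scales with the square of the current. The line parameters below are illustrative, not figures from any real project:

```python
# Why higher-voltage transmission loses less: for a fixed power P,
# line current I = P / V, and resistive loss is I^2 * R.
# Line parameters are illustrative, not real project figures.

def line_loss_mw(power_mw: float, voltage_kv: float, resistance_ohm: float) -> float:
    """Resistive line loss in MW (MW / kV = kA, and kA^2 * ohm = MW)."""
    current_ka = power_mw / voltage_kv
    return current_ka ** 2 * resistance_ohm

high_v = line_loss_mw(2000, 800, 10)   # 800 kV link carrying 2 GW
half_v = line_loss_mw(2000, 400, 10)   # same power at half the voltage
print(f"loss at 800 kV: {high_v:.1f} MW, at 400 kV: {half_v:.1f} MW")
```

Doubling the voltage halves the current and cuts resistive loss to a quarter — the core reason HVDC wins on long-haul, high-capacity routes.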
📌 (2) ESS — A Core Enabler of AI Data-Center Power Stability
- Balances the intermittency of solar and wind
- Helps smooth peak power demand
- U.S. and EU CAPEX for ESS is on the rise
Solutions like Tesla’s Megapack and ESS businesses at players such as LG Energy Solution are positioned to tie directly into future AI infrastructure builds.
📌 (3) Power Semiconductors (SiC & GaN) — The Efficiency Levers
- SiC and GaN devices can improve energy efficiency by 20–40%
- They are essential for data-center power-conversion efficiency
- They also see demand from EVs, renewables, AI hardware and robotics
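To see how a few points of conversion efficiency translate into energy, here is a back-of-envelope sketch. The IT load and the efficiency values are hypothetical assumptions, not vendor data; a 0.94 → 0.96 step cuts conversion losses by roughly a third, in line with the 20–40% improvement range cited above:

```python
# Illustrative energy savings from a more efficient power-conversion
# chain. IT load and efficiencies are hypothetical assumptions, not
# vendor data; a 0.94 -> 0.96 step cuts losses by roughly a third.

HOURS_PER_YEAR = 8760

def annual_loss_mwh(it_load_mw: float, efficiency: float) -> float:
    """Energy lost in conversion over a year at a given efficiency."""
    grid_draw_mw = it_load_mw / efficiency   # power drawn from the grid
    return (grid_draw_mw - it_load_mw) * HOURS_PER_YEAR

legacy = annual_loss_mwh(50, 0.94)        # silicon-based conversion chain
wide_bandgap = annual_loss_mwh(50, 0.96)  # SiC/GaN-based chain
print(f"annual conversion loss cut by {legacy - wide_bandgap:.0f} MWh")
```

Two percentage points of efficiency on a 50 MW IT load save on the order of ten gigawatt-hours a year, before counting the cooling load avoided by dissipating less heat.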
In other words, an AI slowdown is not the end of the AI era — it is the beginning of the era for the power-infrastructure ecosystem that supports AI.
- U.S. tech: short-term correction, medium-term strengthening
- Korean semis: higher volatility, but stronger long-term fundamentals
- Power infrastructure (HVDC, ESS, power semis): potential biggest winners of an AI “slowdown”
- After 2026, capital likely rotates from pure AI into AI + power infrastructure
📘 Part 3. Investing in a Slower-Growth AI World — Five Things Global Investors Need to Watch
As we’ve seen, AI investment growth is likely to slow somewhat from 2026. But many investors still misread what that means.
“Slower growth is not the same as ‘game over.’”
It means the speed of AI investment is changing, not that the AI economy is hitting a hard stop. So what might the global market structure look like after 2026?
The five points below can help map how capital may move across U.S. tech, Korean semis, power infrastructure and emerging markets between 2026 and 2028.
1️⃣ AI Spending Continues — But the Era of Hyper-Growth Is Ending
■ Core theme: Sustainable AI spending
From 2023 to 2025, AI CAPEX growth was essentially an “exceptional overheating phase.”
- Meta and Google: AI and data-center CAPEX growing 40–60% per year
- NVIDIA: Data-center revenue tripled in about two years
- Global AI server shipments: estimated 35–40% annual growth (TrendForce)
No industry can sustain that pace indefinitely.
From 2026 onward, factors like:
- CAPEX efficiency reviews at the company level
- Rising power and operating costs
- More stable GPU/HBM pricing
- Stricter regulation
are likely to push growth into a more sustainable range.
The important point:
AI spending slows, but does not stop. The cycle is shifting from “hyper-growth” to “steady expansion.”
Smartphones and cloud followed the same arc: explosive growth, then stable growth, then long-term structural growth. AI is poised to track a similar trajectory.
2️⃣ Tech Enters a “Selective Survival” Era
(In other words, the days when “every AI stock goes up” are over.)
In the early phase of the AI trade, almost anything with “AI” in the description could rally. By 2026, we should expect a more selective market.
■ (1) GPUs — Still the Core of the Ecosystem
GPU supply shortages may well persist into 2026:
- Ongoing lead times on NVIDIA H100/H200
- Constrained rollout for AMD’s MI300 series
- Stable or rising demand for AI servers
GPUs determine AI performance, training speed and efficiency. Companies with durable advantages in GPU design and supply will remain central to the story.
■ (2) HBM Memory — A Space Where Korea Dominates
HBM is what enables GPUs to fully unlock their performance.
- SK hynix: effectively dominating HBM3/HBM3E shipments
- Samsung Electronics: moving toward HBM4 commercialization
- Micron: still smaller in share, but investing aggressively
Put simply, the HBM ecosystem is Korea-centric — and as long as AI grows, that structural advantage remains.
■ (3) Power Infrastructure — Tech’s New “Picks and Shovels”
Because AI servers consume 3–5x more power than traditional ones, power infrastructure becomes even more critical than GPUs in some respects.
- HVDC and grid expansion
- Transmission and distribution equipment
- ESS for grid and data-center stability
- Advanced cooling systems
- SiC and GaN power semiconductors
These are the components that will set the upper limit on how far and how fast AI infrastructure can scale from 2026 onward.
After a valuation reset, only the names delivering real, durable earnings are likely to lead the next leg of the AI cycle.
3️⃣ Power Infrastructure — The Next Growth Engine for 2026–2028
The biggest constraint on AI is increasingly not capital, but power.
The IEA and the U.S. Department of Energy (DOE) point to:
- Data-center power demand potentially doubling between 2024 and 2026
- More than $50 billion in planned U.S. grid-expansion investments
- Large-scale HVDC and grid-upgrade projects in Europe as well
This power bottleneck has direct implications for AI.
AI wants to grow faster, but is held back by the grid — so capital will increasingly flow into the infrastructure that removes that bottleneck.
That’s why the outline of the next cycle looks fairly clear:
■ (1) HVDC — Ultra-High-Voltage DC Transmission
Optimized for long-distance, high-capacity transmission.
- New HVDC projects in the U.S., Europe and India
- Key roles for Korean suppliers like LS Cable & System and LS Electric, and for State Grid in China
■ (2) ESS — Power Stability for Data Centers
- Buffers the volatility of renewables
- Helps manage peak-load demand
- Seeing accelerating CAPEX trends in the U.S. and Europe
■ (3) Power Semiconductors (SiC & GaN)
- Reduce energy loss in power conversion
- Improve data-center power efficiency
- Benefit from diversified demand across EVs, solar, AI hardware and automation
➡ Together, these four pillars — HVDC, ESS, grid upgrades and power semiconductors — are strong candidates to lead the post-AI-hyper-growth cycle.
4️⃣ Korean Semiconductors — Volatile in the Short Run, Structurally Stronger Later
A slowdown in AI CAPEX in 2026 would affect Korean semis in two ways.
■ (1) Near-Term: More Volatility
U.S. Big Tech corrections tend to cascade:
→ NVIDIA corrects
→ Foreign investors sell Korean semis
→ KOSPI volatility rises
Given the weight of Samsung and SK hynix in the KOSPI, any wobble in their share prices can swing the entire index.
■ (2) Longer Term: Stronger Global Position
After 2026, Korean semiconductors may actually solidify their global role.
- HBM remains an oligopoly centered on SK hynix and Samsung
- Advanced packaging (2.5D, CoWoS and beyond) scales further
- Foundry demand for advanced nodes continues
- The AI memory market keeps growing structurally
HBM4 and HBM4E, in particular, are likely to be the stage on which Korean players widen their technical lead.
➡ Investors need to weigh near-term volatility against this longer-term structural story.
5️⃣ From a Single “AI Trade” to a Composite Theme: AI + Power + Infrastructure
Between 2023 and 2025, markets were dominated by a single narrative: the AI trade. After 2026, the picture becomes more layered.
■ (1) AI Remains the Core, but at a Different Speed
- More focus on CAPEX efficiency
- Greater emphasis on GPU and HBM supply chains
- More attention to the monetization of AI models and services
■ (2) Power and Infrastructure Become Co-Leads
- HVDC, ESS and power semiconductors move closer to center stage
- Grid expansion and data-center power upgrades rise up policy agendas
- Renewables and power-efficiency tech gain strategic importance
■ (3) Emerging Markets (Especially India and Southeast Asia) Gain Momentum
- Simultaneous build-outs of power grids and data centers
- Growth models centered on infrastructure investment
Taken together, 2026 looks less like “the year AI gets weaker” and more like “the year AI starts pulling other industries up with it.”
If 2023–2025 was the era of “AI explosion,” then 2026 marks the beginning of “sustainable AI expansion.” AI doesn’t stop; what changes are the speed, the mechanisms and the industries that benefit most.
For investors after 2026, the key map looks something like this:
GPUs & HBM → Strain on the power grid
Power-grid strain → Growth in HVDC, ESS and power semiconductors
AI slowdown → Short-term corrections in Korean semis
AI sophistication → Stronger long-term growth for Korean semis
U.S. · Korea · Europe evolve together → A more layered global growth structure
Investors who understand this transition — from a pure AI trade to a composite theme of AI + power + infrastructure — are likely to be the ones who capture the biggest opportunities in the global market between 2026 and 2028.
📚 Sources (Condensed)
TrendForce; International Energy Agency (IEA); NVIDIA, Meta, Microsoft and Alphabet investor-relations materials;
SK hynix and Samsung Electronics IR; U.S. Department of Energy (DOE) and publicly available global reports on power and data-center infrastructure.
