Number 90,985. That was ByteDance's queue position for users trying to generate a single five-second video with its flagship AI model last week. Estimated wait time: four hours. When WIRED reporter Zeyi Yang checked back after two hours, the estimate had grown to six. He went to bed instead.
Meanwhile, 4,000 miles north of ByteDance's Beijing servers, construction crews are pouring concrete on the banks of a Swedish river that used to feed a paper mill. By the time they're done, the site will house one of Europe's largest AI data centers - a direct consequence of the same bottleneck that put 90,000 people in a queue for a five-second video clip.
The global AI industry has a compute problem. Not an abstract, theoretical problem - a right-now, people-waiting-in-line, companies-fleeing-to-the-Arctic problem. Three separate news stories this week, each easy to read in isolation, all point to the same underlying crisis: the world does not have enough data center capacity to run the AI systems it has already built. The scramble to fix that is reshaping energy grids, rewriting geopolitics, and forcing every government on earth to pick a side in a race where the prize is raw computational power.
The Arctic Gold Rush
The Swedish town of Borlange used to produce paper. Now it produces data (WIRED, Mar 2). Developer EcoDataCenter broke ground there in September, and CEO Peter Michelson drew the comparison explicitly: "The facility once produced paper, the raw material of the newspaper information age. Now, Borlange will produce the raw material for AI and the next information age."
It is not alone. Across Norway, Sweden, Finland, Denmark, and Iceland - the Nordic region - more than 50 data centers are currently under construction or about to break ground. According to research by consulting firm CBRE, nowhere else in Europe is data center capacity growing faster. Signings for AI data center capacity in Europe more than tripled in the first nine months of 2025 alone.
The migration north is being driven by a simple resource equation. Western Europe's traditional data center hubs - Frankfurt, London, Amsterdam, Paris, Dublin - have run out of space and, more critically, power. The energy grid in those cities cannot absorb the extraordinary electrical loads that training and running frontier AI models demands.
"There's an extraordinary amount of demand out there, but servicing that demand is increasingly an issue across Europe. Power is an increasingly precious commodity, and there's a scarcity of it." - Kevin Restivo, director of data center research at CBRE
The Nordics offer something those cities cannot: abundant hydropower and wind energy, cold ambient temperatures that reduce cooling costs, and vast tracts of available land far from competing industrial demand. Those three factors together produce what Philippe Sachs, chief business officer at neocloud firm Nscale, calls a once-in-a-generation opportunity. Nscale operates the Norwegian facility where OpenAI and Microsoft have both leased space.
"You're not really trading away much by locating there, but you're gaining an enormous amount," Sachs said. "Abundant green contiguous power with little competing industrial demand for that power. When you're thinking about trying to build very, very large, giga-factory-style compute clusters, it's far and away the best place to do it in Europe, if not the world."
OpenAI announced last year it would deploy 100,000 GPUs in a tiny Norwegian fjord town inside the Arctic Circle. Then Microsoft followed. Then, in the past few weeks alone: French AI lab Mistral signed a $1.4 billion infrastructure deal at Borlange; Nordic data center operator atNorth announced an enormous facility elsewhere in Sweden; and a developer outlined a project in Finland that would more than double that country's current data center capacity if completed.
The acceleration is not purely organic. A new type of company - the "neocloud" - is driving much of it. Unlike traditional cloud providers that serve general-purpose computing needs (where latency to population centers matters), neoclouds sell access exclusively to massive fleets of GPUs for AI workloads. Because training runs and inference tasks are not latency-sensitive in the same way as financial trading systems, neoclouds can place their hardware wherever the power is cheapest and greenest. That freedom has sent them north.
There are second-order effects worth watching. Land prices around remote Nordic sites have risen 4 to 9 times above normal forest land value as zoning changes anticipate data center development, according to Jouni Salonen, a data center specialist at Business Finland. Local governments in regions where heritage industries like mining, lumber, and paper have declined are actively courting AI data center developers, hoping the revenue and jobs can revive fading rural economies.
But there is a catch embedded in the frenzy. Some hyperscale operators, Restivo asserts, are hoarding suitable sites - contracting power and securing land without any immediate intention to develop there. "They don't need all the power they have contracted today, but they think they'll need it," he claims. "And they certainly want to keep it away from competitors." If true, that behavior would inflate land prices and tighten power availability for genuine developers, potentially choking the very buildout the region is trying to accelerate.
ByteDance's Wall: What US Export Controls Are Actually Doing
In early February, ByteDance unveiled Seedance 2.0, a major upgrade to its flagship AI video model. The reaction inside China's technology community was immediate and intense. Feng Ji, founder of Game Science - the studio that built the global hit Black Myth: Wukong - wrote that he was "deeply shocked" by the model's capabilities. Pan Tianhong, who leads a video production studio with over 15 million social media followers, said Seedance 2.0 "thinks like a director" (WIRED, Mar 5).
Then most people tried to use it.
The queue hit 90,000 users. A five-second video took four hours - and that was if the content moderation system, which operates in the final stage of generation, didn't reject the output and send the user back to the start of the line. Users on Chinese social media began sharing strategies to game the system: generate shorter videos, send requests after midnight, pay for premium accounts and share access.
The bottleneck is not a software problem. It is a hardware problem with deep geopolitical roots.
"China hasn't produced any decent AI coding tool, which is why Chinese people are all dependent on Claude Code or Codex; but when it comes to video AI, China is miles ahead of the US." - Afra Wang, author of AI newsletter Concurrent, speaking to WIRED
ByteDance is one of China's richest tech companies. It can afford GPUs. The problem is that it cannot buy the GPUs it needs. US export controls, progressively tightened since 2022 and further expanded under both the Biden and Trump administrations, restrict the sale of advanced semiconductors - particularly Nvidia's highest-end chips - to Chinese entities. ByteDance can acquire lower-tier hardware and domestic alternatives like Huawei's Ascend chips, but these fall materially short of the computational density that video generation requires.
Video generation is, by some margin, the most compute-intensive consumer AI application. Generating a 15-second clip with Seedance 2.0 costs approximately $2 in compute - and that is at ByteDance's internal cost, at scale, with negotiated infrastructure rates. The model needs to process hundreds of frames, each requiring the system to maintain temporal coherence across the sequence. The math compounds quickly. A queue of 90,000 users, each wanting a five-second video, represents a computational demand that even a well-resourced Western AI lab would find challenging. For a company operating under export control constraints on its highest-performing hardware, it becomes a crisis.
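The arithmetic in that paragraph can be made concrete with a back-of-envelope sketch. The $2-per-15-second figure is from the reporting; the linear scaling with clip length and the single-clip-per-user queue are simplifying assumptions (attention over frames often scales worse than linearly, and moderation rejections that send users back to the line would multiply the total):

```python
# Back-of-envelope: compute cost of clearing ByteDance's Seedance 2.0 queue.
# Assumes cost scales linearly with clip length, which is optimistic.

COST_PER_15S_USD = 2.00   # reported internal compute cost per 15-second clip
QUEUE_SIZE = 90_000       # users waiting, per the reporting
CLIP_SECONDS = 5          # each user wants a five-second clip

cost_per_second = COST_PER_15S_USD / 15
cost_per_clip = cost_per_second * CLIP_SECONDS
queue_cost = cost_per_clip * QUEUE_SIZE

print(f"per-clip cost: ${cost_per_clip:.2f}")   # ~$0.67
print(f"queue cost:    ${queue_cost:,.0f}")     # ~$60,000
```

Roughly $60,000 in compute just to clear one snapshot of the queue - before counting rejected generations, longer clips, or the users who gave up and will come back tomorrow.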
The compute bottleneck is only part of the problem. Disney, Netflix, and Paramount have all sent ByteDance cease-and-desist letters alleging that Seedance 2.0's outputs infringe on their copyrighted works. During the brief window between the model's release and the studios' response, videos appeared on X showing Wolverine fighting Hulk and Tom Cruise battling Brad Pitt. More extreme examples followed. ByteDance's content moderation is now performing double duty - catching both politically sensitive content for Chinese regulators and copyright-infringing content for Western studios.
The legal landscape in China has historically been permissive on intellectual property, which is precisely why ByteDance built Seedance 2.0 on training data that Western studios claim includes their copyrighted material without license. That permissiveness enabled rapid development, but it becomes a liability the moment the product scales globally - which ByteDance must do to justify the compute investment required to clear the queue.
The divergence between China and the US in AI video is real and meaningful. Chinese companies including Kling AI were already leading in the space before Seedance 2.0. But the path to global deployment is littered with the same legal and infrastructure obstacles that ByteDance is now hitting head-on.
The White House Pledge: Theater vs. Infrastructure Reality
On March 4, several of the world's largest technology companies gathered at the White House and signed a nonbinding pledge promising that they would not pass the costs of AI data centers on to American consumers' utility bills. Standing beside them, President Trump said: "Data centers... they need some PR help. People think that if the data center goes in, their electricity is going to go up." (WIRED, Mar 4)
Present: Microsoft, Meta, OpenAI, xAI, Google/Alphabet, Oracle, and Amazon. Absent: any enforcement mechanism, any binding legal obligation, or any structural change to how data centers interact with the US power grid.
"This is theater. This is a press release designed to make it seem like they are addressing this issue. But this issue can only really be addressed by utility regulators or Congress. The White House doesn't really have a lot of moves here." - Ari Peskoe, director of the Electricity Law Initiative, Harvard Law School Environmental and Energy Law Program
The political context matters. Data centers have become a genuine flashpoint in American electoral politics. A Heatmap News poll found that fewer than 30 percent of American voters would support a data center being built near where they live. Multiple states have introduced moratoriums on new data center construction. Georgia and Virginia - states where data center expansion has been aggressive - saw the issue become a meaningful factor in last year's elections.
The problem is structural. The US electrical grid was not designed to accommodate the concentrated, high-intensity power demands of AI data centers. Unlike traditional industrial facilities that draw power steadily, AI training runs can spike demand dramatically and unpredictably. Those spikes strain local transmission infrastructure that has not been upgraded in decades. Upgrading transmission lines is expensive, slow, and politically contentious.
Under the current regulatory structure, utilities earn returns by passing capital improvement costs through to all ratepayers - meaning that even if a data center is technically responsible for triggering an infrastructure upgrade, the cost gets spread across every household in the service area. That socialized cost structure is what makes voters angry, and it is what the White House pledge does not actually change.
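The socialized-cost mechanics can be illustrated with a toy calculation. Every number below is hypothetical, chosen only to show the shape of the problem, not drawn from the reporting, and real rate cases are far more involved:

```python
# Toy model of cost socialization under rate-of-return utility regulation.
# A grid upgrade triggered by one large load is recovered from ALL ratepayers.
# All figures are hypothetical illustrations.

upgrade_cost = 500_000_000   # hypothetical transmission upgrade, USD
allowed_return = 0.10        # hypothetical regulated rate of return
recovery_years = 20          # hypothetical depreciation period
households = 2_000_000       # hypothetical ratepayers in the service area

# Straight-line recovery plus a return on the average undepreciated base.
annual_depreciation = upgrade_cost / recovery_years
avg_rate_base = upgrade_cost / 2
annual_return = avg_rate_base * allowed_return
annual_revenue_requirement = annual_depreciation + annual_return

per_household_per_year = annual_revenue_requirement / households
print(f"~${per_household_per_year:.0f} per household per year")  # ~$25
```

The point of the sketch is the denominator: the data center that triggered the upgrade and the household that never asked for it pay into the same pool, which is exactly the structure the pledge leaves untouched.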
Electricity law expert Peskoe is blunt about why: "The challenge here is that the utility business model socializes cost - it's designed to spread cost to everybody. We're in this new paradigm where we have just a few companies that are imposing billions of dollars of costs." Only Congress or state utility regulators have the authority to fundamentally restructure that system. A nonbinding White House pledge from tech companies changes nothing in either of those arenas.
The more substantive response to the grid problem is not in Washington - it is in Norway, Sweden, and Finland, where data center operators are increasingly building their own dedicated energy infrastructure rather than relying on shared grids. OpenAI's Norwegian facility is adjacent to hydroelectric generation capacity. Nscale's operations run on renewable power contracts that bypass the fragmented US grid entirely. The competitive advantage of the Nordic buildout is partly about cheap power and partly about the ability to lock in long-term dedicated supply - something that is nearly impossible to arrange at scale in the continental United States without years of regulatory approval.
The Neocloud Revolution - A New Class of Infrastructure
The company type doing most of the building in the Nordic region deserves closer attention. The neocloud is a new category, distinct from the hyperscalers (AWS, Azure, Google Cloud) that have dominated enterprise infrastructure for the past decade. Hyperscalers provide general-purpose compute across thousands of application types. Neoclouds provide one thing: GPU clusters for AI.
That specialization has economic implications that compound. A hyperscaler must balance its infrastructure investments across a diverse customer base with diverse latency and reliability requirements. A neocloud can optimize entirely for power density, connectivity between GPUs, and cost of electricity. The result is that neoclouds can operate at meaningfully lower cost per GPU-hour than hyperscalers, particularly for the large, sustained training runs that frontier AI development requires.
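The power-cost component of that advantage can be sketched directly. The GPU draw, PUE, and electricity prices below are illustrative assumptions (a ~700 W accelerator is typical of current high-end data center GPUs; the price gap between Nordic hydropower and a congested US hub is the variable that matters):

```python
# Electricity cost per GPU-hour: one input into neocloud economics.
# GPU draw, PUE, and both electricity prices are illustrative assumptions.

GPU_KW = 0.7   # ~700 W per accelerator (assumed)
PUE = 1.15     # power usage effectiveness; cold climates run low

def electricity_cost_per_gpu_hour(price_per_kwh: float) -> float:
    """Facility electricity cost attributable to one GPU for one hour."""
    return GPU_KW * PUE * price_per_kwh

nordic = electricity_cost_per_gpu_hour(0.03)   # hypothetical Nordic hydro rate
us_hub = electricity_cost_per_gpu_hour(0.12)   # hypothetical congested US rate

print(f"Nordic: ${nordic:.4f}/GPU-hour")
print(f"US hub: ${us_hub:.4f}/GPU-hour")
# The Nordic figure comes out roughly 4x cheaper on this component alone.
```

Electricity is only one line item - hardware capital costs dominate - but a 4x gap on a cost that runs continuously for months-long training jobs compounds into exactly the margin a specialized operator can undercut a hyperscaler with.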
The rise of neoclouds is a direct consequence of the compute crisis. When AI labs discovered that they needed far more GPU capacity than the hyperscalers could provide at competitive prices, a new market opened. Companies like CoreWeave, Lambda Labs, and Nscale built into that gap. The Nordic region's combination of cheap power and available land is what made their business model viable at scale.
The relationship between neoclouds and the labs they serve is closer than typical cloud vendor relationships. OpenAI does not just lease capacity from Nscale's Norwegian facility - it designed the facility layout in collaboration with Nscale to optimize for its specific training workloads. The 100,000 GPUs in that Arctic fjord are configured as a single coherent computing substrate, not a generic shared resource. That level of integration represents a shift from renting commodity compute to building specialized infrastructure - even if someone else owns the land and the building.
Timeline: The Compute Crisis Builds
- Nov 2022: ChatGPT launches. Demand for AI inference compute spikes overnight. Hyperscalers begin scrambling to expand GPU capacity.
- Summer 2023: Nordic government agencies begin receiving calls from data center developers. Business Finland's Jouni Salonen notes "a clear change" - developers now prioritize power access over proximity to population centers.
- Oct 2023: US expands semiconductor export controls on China, specifically targeting Nvidia A100 and H100 chips. ByteDance and other Chinese AI companies begin planning around constrained supply.
- 2024: Neocloud sector accelerates. CoreWeave, Lambda, Nscale, and others raise billions to build GPU clusters. Most new capacity lands in the Nordic region due to power and land availability.
- Early 2025: OpenAI announces a 100,000-GPU deployment in a Norwegian Arctic Circle fjord town. Microsoft follows. The Arctic data center model goes mainstream.
- Feb 2026: ByteDance releases Seedance 2.0. Demand overwhelms available compute. The queue reaches 90,000+ users; wait times exceed four hours for five-second clips.
- Early Mar 2026: Disney, Netflix, and Paramount send cease-and-desist letters to ByteDance over alleged copyright infringement in Seedance 2.0 outputs. Mistral signs a $1.4B Nordic infrastructure deal. US states introduce data center moratoriums.
- Mar 4, 2026: Trump White House hosts Big Tech for a nonbinding data center pledge. Legal experts call it "theater." The structural grid problems remain unchanged.
Who Wins the Compute Race - And What It Costs
The compute crisis has a winner, at least in the short term: whoever secures power contracts in energy-rich regions first. The Nordic buildout is accelerating because power is available and regulators are accommodating. If the US cannot streamline data center permitting and grid connection - both of which currently involve multi-year approval processes - the center of gravity for AI infrastructure will continue moving to where the power is.
That has strategic implications that go beyond economics. The location of training infrastructure matters for data sovereignty, national security, and the ability of governments to exert oversight over AI systems. An AI model trained on compute hosted in a Norwegian fjord, by a neocloud that leases to a San Francisco-based AI lab, falls into a regulatory gray zone that no existing legal framework fully addresses. Which country's rules apply? Which government can demand access to training data or model weights? The lawyers haven't caught up yet.
For China, the compute constraint is functioning as intended by US policy - but with a lag and with incomplete effect. ByteDance's Seedance 2.0 is genuinely impressive despite hardware limitations. Chinese AI companies have become expert at squeezing more performance from restricted hardware, developing techniques for inference efficiency that their Western counterparts have less incentive to pursue. The constraint slows them down; it does not stop them. And the gaps are narrowing.
The copyright question will likely define the next phase of the global AI video race. US studios are now using legal tools where export controls cannot reach. If courts uphold the view that AI video models trained on copyrighted content without license cannot commercially distribute outputs that resemble that content, Chinese AI companies face a fundamental choice: retrain on licensed or synthetic data (expensive and potentially quality-reducing), negotiate licenses with studios (costly and complex), or restrict distribution to markets where IP enforcement is weaker (limiting global scale).
None of those options is easy. All of them are expensive. And all of them require - ultimately - more compute.
The paper mill in Borlange knew what it was doing when it shut down. The land along the river was always just waiting for the next industrial revolution. The question is whether the world can build the infrastructure of that revolution fast enough to run the AI systems it has already promised to deliver.
Ninety thousand people in a queue for a five-second video suggest the answer, for now, is no.
Get BLACKWIRE reports first.
Breaking news, investigations, and analysis - straight to your phone.
Join @blackwirenews on Telegram