The Crisis Nobody Is Modeling for — But Should Be
In 2008, the financial system collapsed under the weight of instruments so complex that the people selling them no longer understood what they owned.
In 2028, the risk may not be bad mortgages. It may be good AI — AI that is too fast, too interconnected, and too correlated to fail gracefully.
Economists, central bankers, and a growing number of AI researchers are beginning to map what they call the Global Intelligence Crash scenario, or GIC: a financial crisis triggered not by fraud or policy failure, but by the emergent behavior of AI systems operating at a scale and speed that human oversight cannot match.
This is not a fringe prediction. It is a structured risk scenario that the Bank for International Settlements, the IMF, and the European Systemic Risk Board have each flagged — in cautious, bureaucratic language — as worthy of serious modeling.
Here is what that scenario looks like, why 2028 is the critical window, and what the signals are that it may already be unfolding.
Why the Financial System Is More Exposed to AI Than It Appears
The standard narrative about AI and finance focuses on efficiency: faster trades, better credit scoring, smarter fraud detection. That narrative is accurate — and it is exactly what makes the systemic risk so easy to underestimate.
The same properties that make AI valuable in financial markets — speed, pattern recognition, autonomous decision-making — also create conditions for synchronized failure. When thousands of AI trading systems are trained on overlapping datasets, optimized for similar objectives, and operating in millisecond timeframes, they do not behave like thousands of independent actors. They behave like a single, enormous, correlated position.
A 2025 working paper from the Bank for International Settlements found that AI-driven trading strategies across major equity and fixed-income markets had reached a degree of behavioral correlation with no historical precedent in human-managed portfolios. The paper stopped short of calling this a systemic threat. It described it as a "concentration of model risk at market scale."
That is the polite version.
The Three Mechanisms That Make a GIC Possible
Understanding the 2028 scenario requires understanding how a crisis would actually propagate. There are three distinct mechanisms — and all three are accelerating simultaneously.
1. Correlated Liquidation at Machine Speed
Human traders panic. AI systems optimize. The distinction matters enormously during a market dislocation.
When a shock event occurs — a geopolitical escalation, an unexpected central bank announcement, a major corporate failure — human traders sell in waves, over hours or days, with some contrarians buying the dip. AI systems trained on risk-minimization objectives may all identify the same exit signal at the same moment and execute simultaneously, across asset classes, before any human can intervene.
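The cliff-edge difference between dispersed human reactions and clustered machine thresholds can be illustrated with a toy simulation. The threshold distributions below are illustrative assumptions, not calibrated estimates; the point is only that tightly clustered exit rules turn a gradual selloff into a simultaneous one.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical exit thresholds (fractional loss at which a system sells).
# Human desks react at widely dispersed loss levels; AI systems trained on
# similar data and objectives cluster tightly around the same risk limit.
human_thresholds = rng.normal(-0.05, 0.02, N)   # wide dispersion
ai_thresholds = rng.normal(-0.05, 0.002, N)     # tight clustering

def sellers(thresholds, shock):
    """Fraction of systems whose exit threshold the shock breaches."""
    return np.mean(shock <= thresholds)

for shock in (-0.045, -0.050, -0.055):
    print(f"shock {shock:+.1%}: "
          f"humans selling {sellers(human_thresholds, shock):5.0%}, "
          f"AI selling {sellers(ai_thresholds, shock):5.0%}")
```

In this sketch, a move from a 4.5% to a 5.5% loss changes the human selling fraction only modestly, while the clustered AI population flips from almost no sellers to nearly all of them at once.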
The 2010 Flash Crash lasted 36 minutes and erased nearly $1 trillion in market value before partial recovery. That event was caused by a relatively simple automated trading interaction. Modern AI systems are orders of magnitude more capable — and far more deeply embedded across derivatives, sovereign debt, and currency markets.
2. Synthetic Leverage Through AI-Optimized Structures
The 2008 crisis was, at its core, a leverage crisis hidden inside complexity. Today, AI systems are being used to engineer financial structures that optimize for regulatory capital efficiency — meaning they are designed, often successfully, to carry more effective leverage than regulators can see.
This is not necessarily illegal. It is the predictable output of AI systems given the objective: maximize return within regulatory constraints. The constraints are bounded. The creativity of the optimization is not.
The IMF's Global Financial Stability Report (2025) identified "AI-assisted regulatory arbitrage" as one of the top five emerging risks to financial system stability, noting that traditional stress-testing frameworks were not designed to detect leverage accumulated through model-generated complexity.
3. Autonomous Agent Feedback Loops
The newest and least understood risk involves autonomous AI agents — systems that do not merely execute trades but negotiate, form agreements, and adapt strategy in real time based on other agents' behavior.
As these systems proliferate, they interact with each other in ways their designers did not anticipate and cannot easily audit. Early research from the Santa Fe Institute on multi-agent financial simulations has documented emergent behaviors — including coordinated price manipulation patterns — that arose from individually rational agents pursuing individually permitted objectives.
None of the agents were doing anything wrong. The system was.
What Leading Voices Are Saying
Nouriel Roubini — who correctly anticipated the 2008 housing crash — has argued publicly that AI presents "a new class of systemic risk that is structurally different from anything regulators have managed before," specifically because the speed of propagation exceeds the speed of human intervention.
Hyun Song Shin, Chief Economist at the Bank for International Settlements, has been more measured but no less pointed, writing in 2025 that "the homogenization of investment behavior through shared AI models represents a potential single point of failure for diversification assumptions that underpin modern portfolio theory."
Not all economists accept the catastrophist framing. Some, like Tyler Cowen, argue that AI in financial markets should be understood as a liquidity-providing force on net — that autonomous systems will find and fill dislocations faster than human traders, acting as stabilizers rather than amplifiers. The empirical record, to date, is genuinely mixed.
What is notable is that the disagreement itself has reached the highest levels of monetary policy. When central bank chief economists are publicly debating whether AI is a stabilizer or a destabilizer, the question is no longer theoretical.
The 2028 Timeline: Why This Window Specifically
The GIC scenario is not assigned a specific date arbitrarily. 2028 represents the convergence of several independent timelines that researchers have been tracking separately.
By 2027–2028, autonomous AI agent deployment in institutional finance is projected to cross what researchers at MIT Sloan call the "majority threshold" — the point at which AI-driven decision-making accounts for more than 50% of trading volume across global equity and fixed-income markets. At that point, the assumption that human actors serve as a behavioral backstop — a pool of slow, heterogeneous judgment whose very dispersion paradoxically stabilizes markets — no longer holds.
Simultaneously, the current generation of large-scale AI financial models will have been in production long enough to have trained on each other's outputs. This creates what AI safety researchers call "model collapse" dynamics in a financial context: systems whose understanding of market reality is increasingly filtered through the behavior of other AI systems rather than grounded in underlying economic fundamentals.
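The feedback dynamic can be sketched with a deliberately stylized toy. Here each model "generation" fits a distribution to the typical output of the previous generation rather than to fundamentals; the averaging step is an assumed stand-in for models that learn the central tendency of other models' behavior, and it steadily erodes tail information.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generation zero trains on "fundamentals": the true return distribution.
data = rng.normal(0.0, 1.0, 2000)

sigmas = []
for generation in range(8):
    mu, sigma = data.mean(), data.std()
    sigmas.append(round(float(sigma), 2))
    # Stylized re-training step: the next generation never sees
    # fundamentals, only the typical behavior of the previous model,
    # modeled here as averages of paired draws. Averaging preserves the
    # mean but erodes the tails.
    data = rng.normal(mu, sigma, (2, 2000)).mean(axis=0)

print(sigmas)   # measured dispersion shrinks generation after generation
```

After a handful of generations the fitted distribution has almost no dispersion left: each model is internally consistent, but its picture of market risk no longer reflects the underlying data.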
The 2028 window is also geopolitically significant. Several major sovereign debt refinancing cycles — including significant tranches of U.S., EU, and Japanese government debt — peak between 2027 and 2029. A confidence shock in AI-driven bond markets during that window would hit at maximum structural vulnerability.
What This Means for Investors, Regulators, and the Public
If you are an investor: The traditional diversification assumption — that uncorrelated assets provide protection — may be weaker than your portfolio models suggest if both asset classes are being managed by AI systems with similar training data and objective functions. This is not an argument to exit markets. It is an argument to ask harder questions about what "uncorrelated" actually means in a world of shared AI infrastructure.
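One way to see why "uncorrelated" can fail is a toy factor decomposition. The weights below are arbitrary illustrations: each asset's return mixes an idiosyncratic component with a signal assumed to come from shared AI infrastructure, and measured cross-asset correlation rises quickly with the shared weight.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2500   # roughly ten years of daily returns

# Toy decomposition: each asset's return mixes an idiosyncratic component
# with a signal produced by shared AI infrastructure (similar training
# data, similar objective functions). The weights are illustrative.
shared_signal = rng.normal(0, 1, T)

def asset_returns(w):
    """Return series with weight w on the shared model signal."""
    return w * shared_signal + (1 - w) * rng.normal(0, 1, T)

measured = {}
for w in (0.0, 0.3, 0.6):
    a, b = asset_returns(w), asset_returns(w)
    measured[w] = np.corrcoef(a, b)[0, 1]
    print(f"shared-model weight {w:.1f}: "
          f"cross-asset correlation {measured[w]:+.2f}")
```

Two assets that look independent at zero shared weight become substantially correlated well before the shared signal dominates either one, which is exactly the failure mode hiding inside a naive diversification assumption.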
If you are a regulator or policymaker: The tools developed after 2008 — stress tests, capital buffers, resolution frameworks — were designed for human-speed crises. A GIC-class event could move from inception to systemic failure in hours rather than days. The legislative and regulatory response window is 2026–2027. After that, the next generation of autonomous agent deployment will be in production and significantly harder to constrain after the fact.
If you are a member of the public: The GIC scenario would not stay in financial markets. A credit freeze of the scale modeled in the most severe scenarios would affect mortgage availability, corporate payrolls, and government borrowing capacity within weeks. The channels from financial crisis to real-economy harm are well-documented from 2008. AI would not change those channels — it would simply widen them and open them faster.
The Case Against the GIC Scenario
The pessimistic framing deserves genuine scrutiny, because smart people disagree with it.
Several arguments cut against the 2028 GIC thesis. First, financial AI systems have already been operating in high-frequency trading since the early 2010s, and while they have produced volatility events, none has triggered a systemic crisis. Proponents of this view argue that the market has already adapted to machine-speed trading and that new AI systems represent incremental rather than categorical change.
Second, regulatory frameworks are not static. The EU's AI Act, the U.S. SEC's proposed algorithmic accountability rules, and the Financial Stability Board's ongoing AI working group all represent genuine attempts to get ahead of the risk. Regulatory optimists argue that the political economy of financial stability — which gave regulators enormous power after 2008 — will mobilize again before a crisis, not only after.
Third, some AI researchers argue that the correlation risk is overstated because competitive pressure in financial markets creates strong incentives for differentiation. A strategy that everyone uses is a strategy that earns no alpha, so market forces naturally push AI systems toward divergence rather than convergence — at least in strategy space, even if not in risk-response behavior.
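The crowding argument can be made concrete with a toy "alpha pool" model; the pool sizes and agent count are arbitrary assumptions. Each strategy's total excess return is split among the systems running it, and systems migrate toward the best per-user payoff until no improving move remains.

```python
import numpy as np

# Hypothetical alpha pools: total excess return available to each of four
# strategies. A strategy's per-user payoff is its pool divided by the
# number of systems running it, so crowding erodes alpha.
pools = np.array([8.0, 4.0, 2.0, 1.0])
counts = np.array([1000, 0, 0, 0])   # everyone starts on the same strategy

for step in range(10_000):
    payoff = pools / np.maximum(counts, 1)
    worst = int(np.argmin(np.where(counts > 0, payoff, np.inf)))
    post_join = pools / (counts + 1)   # payoff a switcher would earn
    post_join[worst] = payoff[worst]   # staying put is always an option
    best = int(np.argmax(post_join))
    if post_join[best] <= payoff[worst]:
        break                          # no system can improve: equilibrium
    counts[worst] -= 1
    counts[best] += 1

print(counts)   # systems spread out roughly in proportion to each pool
```

Competitive pressure alone drives the population from a single crowded strategy toward a spread roughly proportional to each pool's alpha. Note what the toy also shows, though: the divergence is in strategy choice, not necessarily in how each system responds to a common shock.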
These are serious arguments. The honest position is that the GIC scenario is a plausible tail risk — not an inevitable outcome.
Three Signals to Watch Before 2028
Three concrete indicators will tell us whether the scenario is becoming more or less likely in real time.
1. BIS and FSB language in official reports: When the language shifts from "emerging risk requiring monitoring" to "concrete vulnerability requiring intervention," the institutions have seen something in private stress tests that has not been made public. Watch the 2026 and 2027 FSB annual reports closely.
2. Regulatory treatment of autonomous agent trading: If the SEC, FCA, or ESMA moves to require human-in-the-loop approval for autonomous agent trades above a certain size or frequency, it signals that regulators have concluded the current framework is inadequate. Proposed rules in this area are expected in late 2026.
3. Correlation metrics in earnings season volatility: If AI-driven portfolio systems begin producing synchronized earnings-season moves across what should be uncorrelated sectors — consumer staples moving with semiconductors, for example — that is an early observable signal of the behavioral homogenization the BIS has flagged.
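A minimal version of that monitoring signal is a rolling cross-sector correlation. The example below is synthetic: two sector return series are assumed independent in the first half of the sample, then both start loading on a shared model-driven signal, and the rolling window picks up the regime shift.

```python
import numpy as np

def rolling_correlation(a, b, window=100):
    """Rolling Pearson correlation between two daily return series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    out = np.full(len(a), np.nan)
    for t in range(window, len(a) + 1):
        out[t - 1] = np.corrcoef(a[t - window:t], b[t - window:t])[0, 1]
    return out

# Synthetic example: two sectors that are independent for the first half
# of the sample, then both begin loading on a shared model-driven signal.
rng = np.random.default_rng(3)
T = 500
shared = rng.normal(0, 1, T)
load = np.where(np.arange(T) < T // 2, 0.0, 1.0)   # regime shift at t=250
staples = load * shared + rng.normal(0, 1, T)
semis = load * shared + rng.normal(0, 1, T)

corr = rolling_correlation(staples, semis)
print(f"correlation before the shift: {corr[200]:+.2f}")
print(f"correlation after the shift:  {corr[480]:+.2f}")
```

A sustained rise in this statistic across sector pairs that have no common fundamental driver, such as staples against semiconductors, would be the observable fingerprint of behavioral homogenization rather than noise.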
Frequently Asked Questions
What is the GIC scenario?
The Global Intelligence Crash scenario refers to a hypothetical but formally modeled financial crisis triggered by the emergent, synchronized behavior of AI systems in financial markets — rather than by human fraud, regulatory failure, or conventional asset bubbles. It is characterized by machine-speed propagation that outpaces human intervention capacity.
Could AI actually cause a financial crisis?
AI creating a financial crisis is a recognized risk scenario among leading financial stability institutions, including the IMF, the Bank for International Settlements, and the European Systemic Risk Board. Whether it will happen depends on how quickly and effectively regulatory frameworks adapt to autonomous agent deployment. The risk is real but not inevitable.
Why is 2028 considered the critical window?
2028 marks the projected convergence of three independent risk timelines: AI trading reaching majority-threshold volume in global markets, a peak in sovereign debt refinancing cycles, and the first generation of AI systems trained extensively on other AI systems' behavior reaching full production deployment.
What is correlated liquidation risk in AI trading?
Correlated liquidation risk is the possibility that AI trading systems, trained on similar data and optimizing similar objectives, will simultaneously identify the same exit signal during a market shock and sell at machine speed — producing a faster and more severe price dislocation than human-managed portfolios would generate.
What can regulators do to prevent an AI financial crisis?
Regulatory responses under active development include mandatory human-in-the-loop requirements for autonomous agent trades above defined thresholds, stress-testing frameworks redesigned for machine-speed scenarios, and requirements for diversity in AI model architecture among systemically important financial institutions — analogous to the way regulators already require geographic and asset diversification.
Analysis informed by: Bank for International Settlements Working Paper No. 1189 (2025), IMF Global Financial Stability Report 2025, European Systemic Risk Board AI Risk Advisory (2025), and MIT Sloan School of Management research on autonomous agent market behavior. Last verified: February 2026.