The Paranoidist | Issue #1 | By Paul Morin | February 7, 2026

Here are two trends you already know about. What you probably haven't done is put them together. Once you do, you won't be able to unsee it.

Trend one: We are embedding artificial intelligence into the most consequential systems in human civilization (healthcare diagnostics, financial markets, criminal justice, critical infrastructure, military targeting, insurance underwriting, among many others) at a pace that would have seemed reckless even two years ago. In the last three months alone, the AI Incident Database logged 108 new incident reports, ranging from autonomous vehicles colliding with pedestrians to health insurance algorithms denying claims at the rate of one per second to chatbots dispensing medical advice linked to patient harm in India. ECRI, the nonprofit patient safety organization, just named the misuse of AI chatbots the number-one health technology hazard for 2026. Not a future risk. The top current hazard.

Trend two: Trust is collapsing. Not in one place. Not at one level. Everywhere, simultaneously, at every scale that matters.

Start with the United States. Only 22% of Americans trust the federal government to do what is right. Congress sits at 14% trust. Television news: 18%. Newspapers: 15%. Big business: 14%. These are not cyclical lows. These are structural failures of legitimacy that have been compounding for decades.

Now zoom out. This is not an American phenomenon. The 2026 Edelman Trust Barometer, published three weeks ago, found that 70% of respondents globally are unwilling or hesitant to trust someone with different values, backgrounds, or information sources. Edelman's CEO called it "the next crisis of trust": a slide from polarization to grievance to outright insularity. Across 28 countries surveyed, the trust gap between high-income and low-income respondents has more than doubled since 2012, with the largest disparities in the U.S. (29 points), Indonesia (26 points), and France (22 points). In country after country, the pattern is the same: people are retreating from institutions into the safety of the proximate and the familiar.

Now zoom out further, to the space between societies. The World Economic Forum's Global Cooperation Barometer 2026, built on 41 metrics across trade, technology, climate, health, and security, found that 85% of Global Future Council members described the state of global cooperation in 2025 as "less cooperative" or "much less cooperative" than the prior year. Peace and security cooperation has declined to below pre-COVID levels on every tracked metric. Conflicts intensified. Military spending hit all-time highs. Political violence exceeded 550 daily incidents in 2025, and air and drone attacks reached record levels. The EU's Institute for Security Studies published a report with a title that says everything: "Low Trust: Navigating Transatlantic Relations Under Trump 2.0." Multilateral institutions, the very structures designed to coordinate responses to cross-border crises, are losing coherence at exactly the moment AI is making crises more likely to be cross-border.

The WEF Barometer offers one additional finding that makes this worse, not better: global cooperation isn't dying outright. It's fragmenting into smaller, interest-based alliances among "aligned" partners. Trade flows are rerouting along geopolitical lines. Nations are cooperating in modules, not multilaterally. The risk, as the Barometer itself noted, is that the world becomes very good at striking selective bargains while remaining dangerously bad at the kind of broad, multilateral action that prevents crises from spreading.

That fragmentation is the trust collapse expressed at civilizational scale. And it matters enormously for AI, because AI doesn't respect borders. The algorithm that fails in one jurisdiction was trained on data from a dozen others. The company that deploys it operates globally. The harm it causes cascades across markets, populations, and regulatory regimes. Managing the fallout requires exactly the kind of coordinated, cross-border institutional response that the trust data tells us is becoming harder to mount.

Three nested scales of collapse. Trust in institutions. Trust within societies. Trust between societies. And into this vacuum, we are deploying the most powerful and least understood technology in human history.

Now put them together.

The Collision Course

We are building an entire civilization-scale infrastructure on systems that nobody can fully audit, nobody can fully explain, and nobody can fully control. And we are doing this at the precise historical moment when the institutions responsible for auditing, explaining, and controlling things have lost the public's confidence to do so, not just in one country but around the world, and not just within countries but between them.

This is not a technology problem. It is not a governance problem. It is the meta-risk underneath both: a structural mismatch between the complexity of the systems we're deploying and the capacity of our institutions, at every level, to manage the consequences when those systems fail.

And the systems will fail. They are already failing. A health-risk prediction algorithm used on 200 million Americans systematically discriminated against Black patients by using healthcare spending as a proxy for health needs, reducing the number of Black patients identified for extra care by more than half. Poland's Warsaw Stock Exchange had to halt all trading in April 2025 after algorithmic trading bots triggered a feedback loop that sent the WIG20 index down 7% in minutes. Waymo has been implicated in multiple incident reports in recent months, from collisions to operational failures during emergencies. McDonald's AI-powered hiring platform was found accessible through default credentials ("123456/123456"), exposing data linked to 64 million job applications.

These are not edge cases. They're the normal operating texture of AI deployment in 2025-2026. In January 2026, tens of thousands of users rushed to install an open-source AI agent called Clawdbot (later rebranded Moltbot, then OpenClaw), giving it full access to their email, messaging apps, file systems, and credentials, only for security researchers to discover over 21,000 publicly exposed instances, critical authentication bypass vulnerabilities, hundreds of malicious plugins in the tool's skill library, and credentials stored in plaintext. Palo Alto Networks called it a "lethal trifecta" of security risk. Google's VP of security engineering urged people to stop installing it entirely. The tool's creator acknowledged it wasn't ready for non-technical users. Users installed it anyway, by the tens of thousands.

And each one of these failures, from the discriminatory algorithm to the unsecured AI agent, requires the same thing to prevent the next one: functioning institutions that the public trusts enough to grant regulatory authority, enforce accountability, and adjudicate disputes. Not just domestically. Across borders. In coordination with counterparts in other jurisdictions who may not share the same regulatory philosophy, the same political incentives, or the same definition of what constitutes harm.

Now ask yourself: where are those institutions?

The Accountability Gap Is a Trust Gap

The regulatory response to AI is not nothing. The EU AI Act enters full application in August 2026. Colorado's AI Act becomes effective in June 2026. California's AI Transparency Act takes effect in August 2026. Proposals for audit standards, bias testing, and algorithmic accountability are multiplying.

But regulation without trust is just paper. And fragmented regulation, without cross-border coordination, is something worse: the illusion of accountability.

Consider what accountability actually requires. It requires that someone has the authority to investigate. That the public believes the investigation is legitimate. That the findings are accepted as credible by enough people to enable consequences. That the institutions imposing those consequences are seen as acting in the public interest rather than serving narrow political or corporate agendas. And, increasingly, that regulators in different jurisdictions can cooperate effectively when an AI system developed in one country, trained on data from a second, and deployed in a third causes harm in a fourth.

Now consider the reality. The 2026 Edelman Trust Barometer found that trust has shifted from "we" to "me." Net trust in national government leaders has dropped 16 points over five years. Net trust in major news organizations has fallen 11 points. The only institutions gaining trust are proximate ones: neighbors, family, coworkers, direct supervisors. People trust what they can see and touch. They distrust abstractions.

AI governance is entirely abstract. Domestically, it requires trusting regulators you've never met, applying standards you can't evaluate, to systems you can't understand, operated by companies you already suspect of acting in their own interest. Internationally, it requires trusting that other nations' regulators are equally rigorous, equally independent, and equally committed to protecting your interests. Every layer of the accountability stack depends on institutional credibility that is actively eroding, at home and abroad.

This is the collision course: AI creates problems that require institutional solutions, including cross-border coordination, at exactly the moment institutions have lost the legitimacy to provide them and nations have lost the trust to cooperate on providing them together.

The Race That Ensures No One Will Slow Down

Everything I've described so far could, in theory, be addressed. Governments could invest in rebuilding institutional trust. Nations could negotiate AI safety frameworks. Companies could voluntarily submit to independent audits. The accountability gap could narrow.

It won't. And the reason it won't has a name: the race to artificial general intelligence.

A brief explanation for readers less immersed in AI discourse. Artificial general intelligence (AGI) refers to AI systems that can perform any intellectual task a human can, with the flexibility to learn, reason, and adapt across domains rather than excelling at one narrow function. Artificial superintelligence (ASI) goes further: AI that surpasses human cognitive ability across every dimension, potentially by orders of magnitude. Whether AGI arrives in three years or thirty is debated. That it is being pursued with enormous resources and urgency is not.

The race is playing out on two levels simultaneously.

Between companies: OpenAI, Google DeepMind, Anthropic, Meta, xAI, and others are competing with billions of dollars and some of the most talented researchers on Earth to reach AGI first. Each company has stated, in one form or another, that it believes it is building the most transformative and potentially dangerous technology in human history. Each company has also concluded that the correct response to this belief is to build faster, because if they don't, someone less safety-conscious will. This is not hypocrisy. It is the logic of the Prisoner's Dilemma playing out in corporate form.

But the race extends well beyond the foundation model companies. Every enterprise deploying AI, from banks integrating algorithmic underwriting to hospitals adopting diagnostic models to insurers automating claims, faces the same competitive pressure: adopt now or fall behind. The company that pauses to conduct rigorous safety testing watches its competitor capture market share. The startup that invests in AI governance before launching loses its runway advantage. The result is that the race to deploy outpaces the capacity to deploy responsibly at every level of the stack, from the companies building the models to the organizations integrating them into consequential decisions. And many of these deployers have far less AI expertise, far fewer safety resources, and far less understanding of what can go wrong than the companies that built the models in the first place.

Between nations: The U.S. and China are engaged in what both governments treat as an existential technology race, with AI as the central front. Export controls on advanced chips, investment restrictions, talent competition, and national AI strategies are all expressions of the same underlying belief: whoever reaches advanced AI first gains a decisive strategic advantage, and falling behind is unacceptable. The EU, the UK, India, and others are running their own programs, often explicitly framing AI leadership as a matter of national survival.

Here's why this matters for the trust collapse.

The Prisoner's Dilemma, from game theory, describes a situation where two or more actors would achieve the best collective outcome by cooperating, but each actor's individual incentive is to defect no matter what the others do, and none of them can trust the others to cooperate. The result is that everyone defects, and the collective outcome is worse for all.
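To make the structure concrete, here is a minimal sketch in Python with illustrative payoffs. The numbers are assumptions chosen only to exhibit the dilemma, not estimates of anything real.

```python
# Minimal Prisoner's Dilemma sketch. Payoff numbers are illustrative
# assumptions, not data; higher is better for the player receiving it.
# Key is (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # both restrain: best collective outcome
    ("cooperate", "defect"):    (0, 5),   # I restrain, the rival races ahead
    ("defect",    "cooperate"): (5, 0),   # I race ahead, the rival restrains
    ("defect",    "defect"):    (1, 1),   # both race: worse for everyone
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, given the rival's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the best response no matter what the rival does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3). That is the race in miniature.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

The point of the sketch is the asymmetry it exposes: defection is individually rational under every assumption about the rival, which is why the dilemma can't be reasoned away from inside the game. It has to be changed from outside, through communication, commitment, and verification.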

The AGI race is a Prisoner's Dilemma at civilizational scale. Every major AI company knows that slowing down to invest in safety, accountability, and institutional infrastructure would be wise if everyone else did the same. But no company can verify that its competitors will slow down. So no one does. Every nation knows that coordinating on AI safety frameworks would reduce collective risk. But no nation trusts its rivals to honor those commitments, especially in an environment where transborder trust is already at historic lows. So no one coordinates.

The Prisoner's Dilemma is only unsolvable under specific conditions: when the players can't communicate reliably, can't trust each other's commitments, and can't verify compliance. Look at those conditions again. They are a precise description of the current state of global affairs.

This is the accelerant. The trust collapse is the environment. AI deployment into consequential systems is the trend. The race to AGI is the force that guarantees the gap between deployment speed and institutional capacity will widen, not narrow. It ensures that the accountability infrastructure will remain immature, the safety investment will remain inadequate, and the regulatory frameworks will remain fragmented, because building them requires exactly the kind of cooperation, trust, and willingness to accept short-term competitive disadvantage that the race eliminates.

One more dimension worth noting: the race doesn't just prevent solutions. It actively worsens the underlying trust problem. Every nation that watches its rival pour resources into advanced AI becomes more convinced that the rival intends to use that advantage coercively. Every company that sees a competitor release a more powerful model with fewer safety guardrails concludes that the market rewards speed over responsibility. The race erodes the very trust that would be necessary to end it. It is a feedback loop with no obvious off-ramp.

What Happens When It Breaks

Here's the scenario that keeps me up at night. Not because it's unlikely, but because every element is already in motion.

An AI system deployed in healthcare makes a consequential error. A diagnostic model that misses a cancer pattern in a specific demographic, or an insurance algorithm that systematically denies claims for a particular condition. It affects thousands of patients before anyone catches it. The system was developed by a U.S. company, trained partly on data from European and Asian hospital networks, and deployed across multiple countries with different regulatory frameworks.

When someone catches the error, the company disputes the finding. The U.S. regulator investigates, but a significant portion of the American public distrusts the regulator's motives. The media reports on it, but a significant portion of the public distrusts the media's reporting. European regulators launch their own investigation under the AI Act, but the company argues that U.S. jurisdiction applies. Researchers publish their analysis, but it's challenged by industry-funded counter-research and amplified through information ecosystems where people choose which experts to believe based on tribal affiliation rather than methodological rigor.

The result: not resolution, but fracture. Within the U.S., half the country believes the AI system is dangerous and the company is covering it up; the other half believes the investigation is a political hit job designed to slow American innovation. Between jurisdictions, the U.S. and EU can't agree on who has authority, what standard of harm applies, or what remediation looks like. Affected patients in countries without robust regulatory frameworks have no recourse at all. The actual people who were harmed are trapped between competing narratives and competing jurisdictions, unable to achieve accountability because accountability requires a shared institutional framework that no longer exists within countries and never fully existed between them.

This isn't hypothetical. It's the template of every major institutional failure of the last decade, from pandemic response to election administration to financial regulation, applied to a technology that is more opaque, more pervasive, more cross-border, and more consequential than any of them.

The 2008 Analogy (And Why This Time Is Worse)

If you want to understand what a catastrophic AI failure will look like, don't think about asbestos or tobacco. Think about September 2008.

The financial crisis shares AI's essential characteristics in a way no other historical analogy does. The instruments at the center of the crisis (collateralized debt obligations, credit default swaps, synthetic CDOs) were opaque and interconnected. No single actor fully understood the system they had collectively built. The risk models that were supposed to quantify exposure relied on assumptions that were technically sophisticated and fundamentally wrong. The rating agencies, the trust infrastructure of the entire system, failed catastrophically. And when it broke, contagion crossed borders in hours, because the system was global and the failure cascaded through linkages that regulators hadn't mapped and couldn't control.

Now consider what happened next: the institutional response barely held. The U.S. government bailed out the banking system, central banks coordinated globally, and Congress eventually passed Dodd-Frank. It was messy, inadequate, and politically toxic. But it worked well enough to prevent a complete collapse, for one critical reason: in 2008, institutional trust was higher than it is today. Government trust stood at roughly 30%, compared to 22% now. Transatlantic coordination, while strained, was still functional. Multilateral institutions still had enough legitimacy to convene credible responses. The public was angry, but a sufficient majority still believed that institutions could act in the public interest, even if they had failed to.

That baseline no longer exists.

AI risk carries every feature that made 2008 dangerous: opacity (the models are unexplainable by design), interconnection (systems trained on shared data, deployed across interconnected markets and jurisdictions), diffuse causation (when an AI fails, the causal web includes training data, model architecture, deployment decisions, and downstream systems that interacted with the output), and cross-border contagion (the company that builds it, the data it was trained on, and the populations it affects may span a dozen countries). When 35 state attorneys general recently demanded evidence that xAI's safety filters actually worked, they discovered that no AI system on the market could provide cryptographic proof of what it refused to generate. We are at the "nobody understands what's inside the CDO" stage, except the instruments are more complex, more consequential, and more pervasive.

But AI risk also carries something 2008 didn't: a trust environment that has deteriorated on every dimension since then. The institutions that barely held in 2008 are weaker now, domestically and internationally. The cross-border coordination that enabled a (barely) coherent global response has frayed. The public's willingness to accept institutional action, even imperfect institutional action, has eroded. The AGI race ensures that the system is getting more complex, faster, with less safety investment, while the institutional capacity to manage a crisis shrinks.

The 2008 crisis nearly broke the global financial system, and that was with functioning institutions and enough trust to mount a coordinated response. The question for AI is: what happens when a systemically opaque, globally interconnected crisis hits and the institutions are nearly two decades weaker?

If you think the accountability failure will self-correct before that happens, consider the dress rehearsal we're currently running: social media and teen mental health. Over a decade of mounting evidence of harm. Companies that dispute causation. Regulators fragmented across jurisdictions. A public split along political lines over whether the harm is real or exaggerated. And after all that time and all that evidence, still no meaningful accountability. Social media is the warm-up act. AI is the main event, at 10 times the speed and 100 times the stakes.

The Pricing Failure

The financial markets are not pricing this convergence. AI companies are valued on capability and growth. Risk is assessed along traditional vectors: regulatory compliance, cybersecurity, competitive dynamics. But nobody is modeling the scenario where AI fails catastrophically and the institutional response fails simultaneously, where the technology breaks and the systems we depend on to manage the breakage are themselves broken, at home and between nations.

This is the definition of a tail risk that isn't a tail risk. The probability of a major AI failure is not low; it's close to certain, given the deployment velocity and the incident rate we're already seeing. The probability that the institutional response will be inadequate is also not low; it's close to certain, given every trust metric available, from Pew to Edelman to the WEF Cooperation Barometer. The only uncertainty is timing.

For boards and executives, the implication is direct. Your organization's exposure to AI failure is not limited to the technical risk of your own systems. It extends to the institutional environment in which that failure would be adjudicated, and if you operate across borders, to the international environment in which jurisdictions would need to coordinate. If the regulatory body that would investigate you isn't trusted, your investigation has no legitimacy. If the media that would report on the failure isn't trusted, the narrative is contested forever. If the courts that would hear the case are politicized, the ruling doesn't settle anything. If the foreign regulators you need to cooperate with don't trust your country's regulatory framework, the response fragments. Your risk model needs a variable for institutional capacity, both domestic and international. That variable is declining on both dimensions.

For investors, the implication is blunter. AI valuations assume a functioning accountability ecosystem. If that ecosystem doesn't function, if the first catastrophic AI failure produces fragmentation rather than resolution, the entire sector reprices. Not because the technology doesn't work, but because nobody trusts the institutions that would certify that it does.

What to Do About It

The Paranoidist is about productive paranoia, not paralysis. So here's what I'd actually do if I were sitting in your seat.

If you're a board director: Ask your management team one question at your next meeting: "If our AI system fails catastrophically, which institution do we depend on to adjudicate the outcome, and how confident are we that institution will function?" Then ask the follow-up: "If the failure crosses borders, which foreign regulators would be involved, and what is our relationship with them?" If they can't answer clearly, you have a governance gap that no compliance checklist will fill.

If you're a CEO or founder: Start building institutional relationships before you need them, both at home and in every jurisdiction where you deploy AI. The companies that will survive the first major AI accountability crisis are the ones that have already established credibility with regulators, researchers, and civil society. Transparency now, even when it's uncomfortable, is insurance against the trust deficit later. Publish your model cards. Fund independent audits. Be the company that doesn't need to be forced to show its work, in any jurisdiction.

If you're a risk leader or CRO: Add "institutional trust environment" to your risk register. Not as a vague concern, but as a specific, monitored indicator. Track the trust data (Edelman, Pew, Gallup, WEF Cooperation Barometer) the way you track economic indicators. When institutional trust drops below certain thresholds in your operating jurisdictions, your AI risk exposure goes up, even if nothing about your systems has changed. And track the cooperation dimension: when the countries in which you operate are retreating from multilateral coordination, your cross-border AI risk exposure compounds.
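If it helps to make that register entry concrete, here is a minimal sketch of how such an indicator might be wired up. The class, field names, data sources, and thresholds (JurisdictionTrust, TRUST_FLOOR, COOPERATION_FLOOR) are hypothetical illustrations, not established benchmarks.

```python
# A minimal sketch of an "institutional trust environment" indicator for a
# risk register. Thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class JurisdictionTrust:
    name: str
    gov_trust_pct: float        # e.g., drawn from Edelman / Pew / Gallup surveys
    cooperation_trend: float    # e.g., year-over-year change in a WEF-style cooperation score

TRUST_FLOOR = 30.0        # assumed threshold: below this, escalate the AI risk rating
COOPERATION_FLOOR = 0.0   # assumed threshold: a negative trend means fraying coordination

def ai_risk_modifier(jurisdictions: list[JurisdictionTrust]) -> str:
    """Crude roll-up: flag elevated exposure when trust or cooperation deteriorates."""
    low_trust = [j.name for j in jurisdictions if j.gov_trust_pct < TRUST_FLOOR]
    fraying = [j.name for j in jurisdictions if j.cooperation_trend < COOPERATION_FLOOR]
    if low_trust and fraying:
        return f"ELEVATED: low institutional trust in {low_trust}; cooperation declining in {fraying}"
    if low_trust or fraying:
        return f"WATCH: monitor {low_trust or fraying}"
    return "BASELINE: no trust-driven adjustment"

# Example usage with made-up figures:
print(ai_risk_modifier([
    JurisdictionTrust("US", gov_trust_pct=22.0, cooperation_trend=-0.5),
    JurisdictionTrust("EU", gov_trust_pct=35.0, cooperation_trend=-0.2),
]))
```

The design point is not the specific numbers; it is that the indicator is monitored on a schedule, tied to named jurisdictions, and allowed to move your AI risk rating even when nothing about your own systems has changed.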

If you're a citizen and a thinker: Recognize that the trust collapse and the AI acceleration are not separate problems requiring separate solutions. They are one problem, playing out at every scale from your local hospital to the United Nations. The people building AI need to understand that their technology is being deployed into a trust vacuum that extends from your neighbor's house to the UN Security Council. The people trying to rebuild institutional trust need to understand that AI is making their job exponentially harder. And all of us need to understand that the window for getting this right, for building accountability infrastructure before it's needed in a crisis, is shorter than we think.

The Paranoidist's Assessment

Probability that a major, undeniable AI-driven catastrophe occurs within the next 36 months: High. Not a prediction, but a near-certainty, given deployment velocity and current incident rates.

Probability that the institutional response to that catastrophe is adequate: Low. Not because institutions are malicious, but because they lack the trust, the technical capacity, and the political mandate to function effectively. Domestically, institutions are distrusted. Internationally, they are fragmenting. And the AGI race ensures that deployment will continue to outpace institutional capacity for the foreseeable future.

Probability that cross-border coordination in response to a major AI failure will function: Very low. Multilateral cooperation is at its weakest point in decades, the mechanisms for cross-border regulatory coordination on AI are nascent at best, and the dynamics of the AGI race actively discourage the cooperation that coordination requires.

Probability that the AGI race produces a voluntary slowdown sufficient to close the accountability gap: Near zero. The Prisoner's Dilemma holds as long as the players don't trust each other, and every trust metric available tells us they don't.

Probability that this convergence is currently priced into AI valuations, corporate risk models, or insurance underwriting: Near zero.

What I'm watching: The EU AI Act's full application in August 2026 is the first real test. If implementation is credible and enforceable, it provides a template for institutional accountability that could stabilize the system. If it becomes a compliance box-checking exercise with no teeth, or if it's undermined by political pressure from companies and governments that view regulation as a competitive handicap, then the accountability gap widens further. I'm also watching whether the EU and U.S. can establish any meaningful bilateral framework for AI incident coordination. If they can't cooperate, no one else will.

Where I might be wrong: It's possible that AI failures remain distributed and low-level, a constant drumbeat of incidents that never coalesce into a single catastrophic event. In that scenario, institutional trust might erode further but never face a single decisive test. This is actually the more concerning outcome, because it means the problem compounds silently until the eventual crisis is even larger. It's also possible that the fragmentation of global cooperation, paradoxically, limits contagion: if systems are more nationally siloed, failures may be more contained. But the trend in AI deployment is toward more integration, not less. Finally, it's possible that the AGI race breaks the Prisoner's Dilemma from the inside: a major AI incident at one leading company could function as a "wake-up call" that shifts incentives toward collective restraint. But game theory tells us that one defection rarely changes the equilibrium. It usually just redistributes market share.

The Paranoidist publishes weekly. If this changed how you think about one thing, consider subscribing. If it didn't, tell me what I'm missing. The whole point of productive paranoia is that I might be wrong, and I'd rather know now.

Paul Morin is the founder of DeepStrategy.ai and publisher of The Paranoidist, BoardroomRadar, and ScenarioWatch. He has spent more than three decades in entrepreneurship, finance, risk management, and insurance, which is why he worries about the things that keep other people awake at night.

Researched, written, and edited in collaboration with Claude by Anthropic.
