The Paranoidist | Issue #4 | By Paul Morin | February 28, 2026
Here are two things you probably believe about prediction markets. And here is what nobody is telling you about the gap between those beliefs and reality.
Belief one: Prediction markets are the most accurate forecasting tool available. Polymarket called the 2024 presidential election more accurately than polls, pundits, or models. Kalshi and Polymarket processed over $44 billion in combined volume in 2025. Bloomberg, The Economist, and Reuters now weave real-time prediction market odds into their coverage. CNBC will integrate Kalshi data directly into its broadcast quote displays in 2026. The Intercontinental Exchange, parent company of the New York Stock Exchange, invested up to $2 billion in Polymarket in late 2025. Jump Trading signed on as a market maker. Coinbase launched prediction market access across all 50 states. The institutional seal of approval is complete. Prediction markets are no longer fringe. They are becoming infrastructure.
Belief two: Their utility is about to expand dramatically. One week ago, Vitalik Buterin, co-founder of Ethereum and a Polymarket investor, published a detailed proposal arguing that prediction markets should evolve from speculative betting platforms into generalized hedging instruments. His vision: markets indexed to every major category of goods and services, with local AI models analyzing each user's personal spending patterns to construct customized portfolios of prediction market positions. The goal, in Buterin's words, is to "replace fiat currency" itself, turning prediction market shares into a decentralized alternative to stablecoins. Loxley Fernandes, CEO of the prediction market Myriad, captured the aspiration: "When prediction markets become tools for risk reduction, coordination, and economic stability, they stop being entertainment and start becoming information infrastructure."
Both of these beliefs contain real substance. Prediction markets have demonstrated genuine value in specific contexts. I am not here to dismiss them.
I am here to point out that the enthusiasm is outrunning the scrutiny, and that the gap between what prediction markets actually do well and what they are being positioned to do is where the unpriced risk lives.
The Distinction That Changes Everything
In 2001, Nassim Nicholas Taleb published Fooled by Randomness, a book that has since become foundational in risk epistemology. Among its central arguments is a distinction that the prediction market enthusiasm is systematically ignoring: the distinction between risk and uncertainty.
Risk, in Taleb's framework (drawing on Frank Knight's earlier work), refers to situations where the probability distribution is known or reasonably estimable. You may not know which number the roulette wheel will land on, but you know the distribution. You can calculate expected values. You can price bets rationally. Insurance actuaries work in this domain. So do poker players. So, importantly, do prediction markets when they function well.
Uncertainty is different. Uncertainty describes situations where the probability distribution itself is unknown. You don't know the range of possible outcomes, or their relative likelihoods, or whether the past is a reliable guide to the future. You can't calculate expected values because you don't have the inputs to the calculation. The 2008 financial crisis was not a "low-probability event" that risk models captured and assigned a small number to. It was an event that the models' distributional assumptions couldn't represent at all. The distribution was wrong, not the probability within it.
Prediction markets are exceptionally good at pricing risk. For a presidential election, you have polling data, historical base rates, demographic models, prior election cycles, and a well-defined binary outcome. The inputs are noisy but they exist. The market aggregates real information from participants who have genuine signal. When Polymarket outperformed the polls in 2024, it was doing exactly what prediction markets are designed to do: aggregating dispersed information about a well-defined event with estimable probabilities.
Prediction markets are structurally incapable of pricing uncertainty. For "probability that a U.S. strike on Iran leads to sustained disruption of the Strait of Hormuz lasting more than 30 days," or "probability that AI disruption causes a systemic credit event in private lending markets within 24 months," there is no actuarial table. There is no historical base rate that cleanly applies. There is no well-defined probability distribution for the input variables. The participants in these markets are not aggregating dispersed information. They are aggregating dispersed guesses, each one shaped by the participant's own unexamined assumptions about how the world works, filtered through the cognitive biases that Taleb, Kahneman, and decades of behavioral research have documented.
And the market wraps those aggregated guesses in a number between 0 and 100 that looks, to the decision-maker reading it, exactly like a probability.
Taleb called this the ludic fallacy: the error of applying the clean, closed-system logic of games (where distributions are known by design) to the open, fat-tailed messiness of real-world events. Prediction markets, by their very architecture, institutionalize the ludic fallacy. They take every question, no matter how irreducibly uncertain, and force it into a price between zero and one dollar. The structure assumes that every question has a meaningful probability. The real world does not cooperate.
The Invisible Input Problem
There is a precise way to understand why this matters, and it comes from a tool that risk professionals know well: Monte Carlo simulation.
Monte Carlo simulation is powerful when you understand how the input variables behave. You specify probability distributions for each independent variable (normal, lognormal, beta, uniform, whatever the data supports), run thousands of iterations, and get a distribution of outcomes for the dependent variable you care about. The output can be genuinely useful for decision-making: here is the range of likely outcomes, here are the tails, here is where the risk concentrates.
But Monte Carlo simulation has a well-known vulnerability. If you don't actually know how the input variables are distributed and you're guessing at the parameters, the output is meaningless. You can run a million iterations. If the input distributions are wrong, you get a beautifully precise wrong answer. The simulation doesn't know its inputs are bad. It just processes them. Garbage in, garbage out, at scale.
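The vulnerability is easy to demonstrate. The sketch below runs the same toy loss simulation twice, under two input distributions with nearly identical means, one thin-tailed and one fat-tailed. The loss model and its parameters are invented for illustration; the point is only that the tail estimate is determined by the assumed distribution, not by the number of iterations.

```python
import random
import statistics

random.seed(0)
N = 100_000

def simulate(draw):
    """Run N iterations of a toy loss model, sampling the one
    uncertain input with `draw`. Returns (mean loss, 99th-pct loss)."""
    losses = sorted(draw() for _ in range(N))
    return statistics.mean(losses), losses[int(0.99 * N)]

# Analyst A assumes the input is normal (thin tails), mean 1.0.
mean_a, tail_a = simulate(lambda: random.gauss(mu=1.0, sigma=0.8))

# Analyst B assumes it is lognormal (fat right tail) with the same mean:
# for a lognormal, mean = exp(mu + sigma^2 / 2), so mu = -0.32 gives 1.0.
mean_b, tail_b = simulate(lambda: random.lognormvariate(mu=-0.32, sigma=0.8))

print(f"normal:    mean={mean_a:.3f}  99th pct={tail_a:.2f}")
print(f"lognormal: mean={mean_b:.3f}  99th pct={tail_b:.2f}")
# The means agree; the tail estimates diverge by more than half.
# More iterations sharpen the precision without touching the error.
```

Both analysts report the same expected loss. Their tail-risk numbers differ materially, and nothing inside the simulation can tell you which one, if either, is right.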
Prediction markets have the same architecture and the same vulnerability, except the problem is worse.
In a Monte Carlo simulation, the analyst must declare the distributional assumptions. The model says explicitly: "I'm treating this input as a normal distribution with mean X and standard deviation Y." That declaration is visible, auditable, and challengeable. A competent risk reviewer can look at the model and ask: "Why did you assume a normal distribution when the historical data shows fat tails?"
In a prediction market, the distributional assumptions are implicit and invisible. Each of the thousands of participants carries their own mental model of how the relevant variables behave, with their own (usually unexamined) assumptions about probability distributions, correlations, base rates, and tail behavior. The market price aggregates all of those hidden models into a single number. But nobody can audit the assumptions underneath it.
You see a price of $0.35, implying a 35% probability. You have no way to determine whether that 35% reflects a thousand participants with well-calibrated models based on genuine expertise, or a thousand participants who are all making the same unexamined assumption about how the underlying variables behave, or (as we'll see shortly) a handful of well-capitalized actors who moved the price for reasons that have nothing to do with forecasting accuracy.
A prediction market on a novel geopolitical scenario is a Monte Carlo simulation where nobody specified the input distributions. The output looks precise. The precision is a costume.
The Survivorship Problem
Taleb identified another pattern that applies directly: survivorship bias, or what he called the problem of "silent evidence."
We evaluate prediction markets based on the calls they got right. The 2024 election is the marquee example, cited in virtually every article, pitch deck, and policy paper arguing for prediction market adoption. Polymarket got it right when the polls were uncertain. The victory lap has been running for over a year.
What we don't have is a systematic accounting of the failures. The tail risks that never developed a liquid market because nobody wanted to bet on them. The scenarios where prediction market consensus converged on a comfortable narrative that turned out to be wrong, but the wrongness wasn't dramatic enough to generate headlines. The markets where thin liquidity meant the "price" was set by a handful of participants whose positions told you more about their portfolio than about reality. The markets that quietly expired at zero without anyone checking whether the crowd's confidence was justified.
The visible track record is curated by survival. The prediction market that nails a high-profile election gets a billion dollars in free publicity. The prediction markets that failed quietly on a hundred less visible questions leave no trace in the public conversation. We are evaluating the tool based on its highlight reel, which is exactly the epistemological error that Taleb warned about in every book he has written.
A Columbia University study published in late 2025 found that approximately 25% of Polymarket's historical trading volume was attributable to wash trading: users rapidly buying and selling the same contracts, often to themselves or through colluding accounts, inflating activity metrics without changing net positions. In some weeks, over 90% of trades in sports and election categories appeared inauthentic. The volume numbers that are cited as evidence of prediction markets' legitimacy and liquidity are, in meaningful part, artificial.
If a CRO brought an enterprise risk model to the board and disclosed that 25% of its input data was fabricated, no board would accept the model's conclusions. Prediction markets are getting a pass on evidentiary standards that would disqualify any other analytical tool.
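The pattern the Columbia study describes, heavy churn with little or no change in net position, can be screened for with a simple heuristic. This is a toy sketch, not the study's methodology: the trade log, account names, and thresholds below are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical trade log: (account, side, shares). A real screen would
# also use timestamps, prices, and counterparty links; this one only
# asks whether an account's gross activity nets to almost nothing.
trades = [
    ("whale1", "buy", 5000), ("whale1", "sell", 5000),
    ("whale1", "buy", 5000), ("whale1", "sell", 4900),
    ("alice",  "buy",  300),
    ("bob",    "buy",  200), ("bob",    "sell",  50),
]

def suspected_wash_accounts(trades, net_ratio=0.05, min_volume=1000):
    """Flag accounts with large gross volume whose net position change
    is under `net_ratio` of that volume: churn without real exposure."""
    gross = defaultdict(int)
    net = defaultdict(int)
    for account, side, shares in trades:
        gross[account] += shares
        net[account] += shares if side == "buy" else -shares
    return [a for a in gross
            if gross[a] >= min_volume and abs(net[a]) <= net_ratio * gross[a]]

print(suspected_wash_accounts(trades))  # flags only "whale1"
```

The account that traded 19,900 shares but shifted its position by 100 gets flagged; the small genuine traders do not. The volume metric, read naively, would credit all of that churn as liquidity.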
When the Thermometer Becomes a Thermostat
Everything described so far is a problem of epistemology: prediction markets producing numbers that look more reliable than they are. But there is a second problem, and it is more dangerous. It is a problem of manipulation.
Start with the mechanical reality. Prediction markets are thin. Polymarket's most liquid contracts might have a few million dollars in open interest. For comparison, the equity, commodity, and derivatives markets where the consequences of predictions are actually priced operate at scales of billions to trillions. A well-capitalized actor can move a prediction market contract meaningfully for a modest sum. Yale's Jeffrey Sonnenfeld and colleagues documented this in a 2024 analysis: on some platforms, they found zero sellers in key battleground markets, and bid-ask spreads of 50% or more, meaning the "price" cited by media was, in their words, "merely a phantom figure."
In equity markets, this kind of thinness would trigger manipulation scrutiny. The SEC monitors for it. There are legal consequences. In prediction markets, the regulatory framework barely exists. The CFTC's jurisdiction is contested, most platforms are offshore or operating in legal gray zones, and the surveillance infrastructure that detects manipulation in equity markets simply does not exist here.
Now escalate the problem. Once prediction markets are used as inputs to decisions, the market price becomes self-referential.
This is already happening. Bloomberg, Reuters, and The Economist embed prediction market odds in their reporting. CNBC is integrating Kalshi data into broadcast displays. Boards and risk teams are beginning to cite prediction market odds in strategic discussions. Buterin's proposal would extend this further, turning prediction market prices into the foundation of hedging strategies and, in his most ambitious framing, a replacement for currency itself.
The moment prediction market prices influence decisions, a motivated actor who moves the price doesn't just change a number on a screen. They change behavior downstream. If a corporate board is monitoring a prediction market on "probability of regulatory action X" and a well-capitalized participant pushes that contract from 25% to 45%, the board may preemptively sell assets, alter strategy, or lobby harder. The prediction market is no longer a thermometer passively reading the temperature. It is a thermostat that someone else is setting.
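How cheap is that 25%-to-45% push? Hanson's logarithmic market scoring rule (LMSR), a standard automated-market-maker model, gives a closed-form answer. Polymarket and Kalshi run order books rather than LMSRs, so this is an illustrative model only, and the liquidity parameter below is an assumption, not platform data, but the order of magnitude is the point.

```python
import math

def lmsr_cost_to_move(p0: float, p1: float, b: float) -> float:
    """Capital required to push an LMSR market's YES price from p0 to p1
    by buying YES shares, given liquidity parameter b (in dollars).
    Follows from the LMSR cost function
    C(q) = b * ln(exp(q_yes/b) + exp(q_no/b)),
    which gives cost = b * ln((1 - p0) / (1 - p1))."""
    return b * math.log((1 - p0) / (1 - p1))

# b = $20,000 is an assumed thin-market liquidity level, chosen for
# illustration; deeper markets scale the cost linearly in b.
cost = lmsr_cost_to_move(0.25, 0.45, b=20_000.0)
print(f"Cost to push implied probability 25% -> 45%: ${cost:,.0f}")
```

At that liquidity, the push costs on the order of a few thousand dollars, and the manipulator can often exit later and recover part of it. Against the downstream stakes of a board changing strategy, that is not a deterrent. It is a rounding error.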
George Soros described this reflexivity dynamic in financial markets decades ago: markets don't just passively reflect reality; they actively shape it. Prediction markets, because they claim to measure probability rather than merely opinion, have an even more potent reflexive effect. A stock price is understood to be a market view. A prediction market price is presented as something closer to a fact about the future. That framing gives it disproportionate power to influence behavior, which makes it a disproportionately attractive target for manipulation.
Now escalate one more time. In the information environment described in Issue #1 of The Paranoidist, where institutional trust has collapsed and information ecosystems are fragmented along tribal lines, prediction markets become a new vector for narrative warfare.
The mechanism is straightforward. An actor (state, corporate, political) places enough capital to move a thin prediction market. The price movement is then amplified through media and social channels: "Markets now predict 60% chance of [feared event]." The prediction market launders capital into the appearance of crowd consensus. It converts money into narrative, with the added authority of a number that looks like a probability.
This is especially effective for exactly the scenarios where, as we've established, the participants have no reliable distributional data. When nobody in the market actually knows the probability, there is no firm anchor of informed opinion to resist the manipulation. In a deep, liquid market with well-informed participants, manipulation is expensive because informed traders push back. In a thin market on a novel scenario where everyone is guessing, manipulation is cheap, because there is no informed consensus to overcome.
In December 2025, Polymarket resolved "YES" on a $16 million market asking whether the Trump administration would declassify UFO files, despite no documents having been released. The resolution was driven by late-session buying near 99 cents and a governance vote by holders of the UMA oracle token. Users in Polymarket's own comment threads labeled the outcome a "scam" and described the resolution mechanism as "proof-of-whales." This is not a theoretical vulnerability. It is the current operating reality.
The Pricing Failure
Connect this to what The Paranoidist is built to identify: risk that isn't priced.
Nobody is pricing the epistemological risk of prediction markets being adopted as decision-making infrastructure. Specifically:
No risk framework accounts for the distinction between prediction markets pricing risk (where they work) and prediction markets pricing uncertainty (where they produce confident numbers backed by invisible, unauditable assumptions). No board governance standard requires disclosure of the liquidity, volume authenticity, or distributional basis of prediction market data cited in strategic decisions. No regulatory framework monitors prediction markets for the manipulation that is trivially achievable given their thin liquidity and absence of surveillance. No institutional risk model accounts for the reflexive feedback loop created when prediction market prices are simultaneously used as forecasting tools and as inputs to the decisions they are forecasting. And no current analysis adequately distinguishes between the visible track record (prediction markets called the election) and the invisible track record (the hundreds of less prominent markets where accuracy, manipulation, and wash trading have not been systematically evaluated).
The prediction market sector just completed a year in which it processed $44 billion in volume, attracted investments from the NYSE's parent company, and began integration into mainstream financial media. The trajectory is toward more adoption, more institutional reliance, and more decision-making weight placed on these prices. The scrutiny has not kept pace with the adoption. The assumption that the number on the screen means what it appears to mean is the load-bearing wall, and nobody is testing it.
What to Do About It
If you're a board director: If prediction market data appears in any board materials, ask three questions. First: "What is the liquidity in the specific contract being cited, and how many unique, verified participants set this price?" If the answer is a few hundred thousand dollars and an unknown number of pseudonymous accounts, the "probability" being cited is a data point with the statistical weight of a small, uncontrolled survey. Second: "Is this a risk question or an uncertainty question?" If the event has historical base rates, defined outcomes, and dispersed genuine expertise, prediction market data may add value. If the event is novel, complex, and without reliable precedent, the number is consensus sentiment dressed as probability. Treat it accordingly. Third: "Who benefits from moving this price?" If there is any actor (competitor, regulator, political operation, short seller) with both the capital and the motive to influence this market, the price is not just a forecast. It is a potential instrument.
If you're a CRO or risk leader: Do not integrate prediction market data into your risk models without subjecting it to the same validation standards you apply to every other analytical input. That means evaluating data provenance (is the volume authentic or inflated by wash trading?), input quality (do the participants have genuine expertise or are they speculating?), distributional assumptions (is this a domain where probability estimates are meaningful?), and manipulation exposure (how much capital would it take to move this price by 10 points?). If the answer to any of these questions is "we don't know," the data fails the standards you would apply to any other model input. The fact that it comes with a number attached does not exempt it.
If you're a CEO or founder: Buterin's vision of prediction markets as generalized hedging instruments is intellectually ambitious and, for certain well-defined applications, potentially valuable. But the leap from "prediction markets can aggregate information about elections" to "prediction markets can replace currency and serve as the foundation of personalized risk management" is enormous, and the infrastructure between here and there does not exist yet. The liquidity isn't there. The regulatory framework isn't there. The manipulation resistance isn't there. If your organization is exploring prediction market integration for hedging, planning, or decision support, build the validation layer before you build the dependency. The worst outcome is not that prediction markets fail to help. It is that they are adopted as trusted infrastructure and then fail in a way that was foreseeable but not foreseen, because nobody audited the oracle.
If you're a citizen and a thinker: When you see a prediction market price cited in a news report ("markets give X a 70% chance of happening"), understand what that number actually represents. It represents the price at which the last trade cleared in a market that may have a few million dollars in total liquidity, where a quarter of the volume may be artificial, where the participants have no particular expertise on the specific question, and where any actor with sufficient capital and motivation could have moved the price for strategic reasons. It is not a fact about the future. It is a data point about a market. Those are different things. The habit of treating prediction market prices as probabilities is the same habit that led people to treat credit ratings as facts: the number felt authoritative, the institution behind it seemed credible, and nobody looked at what was actually inside the model until it broke.
The Paranoidist's Assessment
Probability that prediction markets continue to be adopted as decision-making inputs by institutions, media, and policymakers without adequate scrutiny of their limitations: Very high. The institutional momentum (ICE investment, CNBC integration, Coinbase distribution, $44 billion in volume) is already substantial, and the narrative ("markets are smarter than experts") is compelling and self-reinforcing.
Probability that a major decision or market event is materially influenced by manipulated prediction market data within the next 24 months: High. The mechanical vulnerability (thin liquidity, no surveillance, cheap to move) combined with the expanding decision-making role (media integration, corporate adoption, policy discussions) creates exactly the conditions for it. Whether it is detected and attributed is a separate question.
Probability that the prediction market sector experiences a credibility crisis analogous to the credit rating agency crisis of 2008 (where the tool was revealed to be far less reliable than its institutional adoption implied): Moderate. The Columbia wash trading study, the Polymarket UFO resolution incident, and the Yale liquidity critique are early signals. The crisis arrives when a prediction market price that was widely cited and acted upon turns out to have been driven by manipulation or thin-market artifacts rather than genuine forecasting, and the downstream consequences are material.
Probability that prediction markets work well for well-defined, high-attention, binary events with genuine information asymmetry (elections, Fed decisions, sports): High. This is the domain where they have demonstrated real value, and the enthusiasm is justified for these use cases.
Probability that prediction markets work well for novel, complex, fat-tailed scenarios without historical base rates (geopolitical risk, AI disruption, systemic financial events), which are the scenarios where risk leaders most need reliable forecasting: Low. This is the domain of irreducible uncertainty, and no market mechanism, no matter how well-designed, can manufacture knowledge that doesn't exist among the participants.
What I'm watching: Whether any regulator (CFTC, SEC, or international equivalent) establishes manipulation surveillance standards for prediction markets before a crisis forces the issue. Whether the CertiK report flagging structural strain in prediction markets (the sector grew 4x to $63.5 billion in 2025) leads to any institutional response. And whether any major institutional adopter (media company, financial firm, government agency) establishes public standards for when prediction market data is reliable enough to cite and when it is not, which would be the first step toward the validation layer that currently doesn't exist.
Where I might be wrong: It is possible that prediction markets, even in their current imperfect form, represent a net improvement over the alternatives (expert panels, polling, committee forecasts) for a wide range of questions, including some in the uncertainty domain. The efficient market hypothesis argument is not trivial: even imperfect aggregation of dispersed beliefs may outperform centralized judgment. If this is true, the risks I've described are real but are outweighed by the forecasting gains. I don't think this argument holds for the fat-tailed, novel scenarios that matter most to risk leaders, because the market can't aggregate information that doesn't exist among its participants. But it deserves honest engagement. It is also possible that the manipulation vulnerability is self-correcting: as markets mature, liquidity deepens, and manipulation becomes more expensive, the mechanical vulnerability shrinks. This is the path that equity markets followed over decades. Whether prediction markets will have decades to mature before they are entrusted with consequential decisions is the question.
The Paranoidist publishes weekly. If this changed how you think about one thing, consider subscribing. If it didn't, tell me what I'm missing. The whole point of productive paranoia is that I might be wrong, and I'd rather know now.
Paul Morin is the founder of DeepStrategy.ai and publisher of The Paranoidist, BoardroomRadar, and ScenarioWatch. He has spent more than three decades in entrepreneurship, finance, risk management, and insurance, which is why he worries about the things that keep other people awake at night.
Researched, written, and edited in collaboration with Claude by Anthropic.