Over the past few months, while working on the Agent system, I've become increasingly aware of something that's been severely underestimated: no matter how powerful an LLM becomes, it can't reliably assess the state of the real world. Once an Agent enters the actual execution layer (opening an account, trading, accessing websites, submitting forms), it becomes highly vulnerable, because it lacks a "reality layer." What we're missing is the Agent Oracle: a layer that is practically the cornerstone of the entire Agent ecosystem, yet has long been neglected.
Why is an LLM insufficient? Because an LLM is, at its core, a generator of probabilistically optimal text, not a system for inferring the truth of the world. It cannot verify the authenticity of news, identify phishing links, determine whether an API has been compromised, tell whether a regulation is actually in effect, or accurately grasp the real bias behind Powell's speech. All of these are problems of "verification," not "prediction." An LLM by itself can therefore never serve as an agent's "source of truth."
Traditional oracles are even less capable of solving this problem. They excel at price truth: structured, quantifiable, observable data such as ETH/USD, BTC/BNB, indices, forex, and on-chain TVL. Agents, however, face a completely different reality: unstructured events, conflicting sources, semantic judgment, real-time change, and blurred boundaries. This is event truth, an order of magnitude more complex than price truth. Event truth ≠ price truth; their mechanisms are entirely different.
Sora's proposed event verification market is currently the closest attempt to the right direction. Sora's core shift is that truth is no longer generated by node voting, but by agents performing real verification tasks. A query involves data scraping (TLS, Hash, IPFS), outlier filtering (MAD), LLM semantic verification, multi-agent reputation-weighted aggregation, reputation updates, and challenge penalties. Sora's key insight is Earn = Reputation: revenue comes from reputation, and reputation comes from long-term real work, not stake or self-declaration. This direction is revolutionary, but it is still not open enough: real-world event verification calls for extremely diverse expertise, spanning finance, regulation, healthcare, multilingual sources, security auditing, fraud detection, on-chain monitoring, and industry experience. No single team can build an agent cluster covering all of these areas.
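Two steps of that pipeline, MAD outlier filtering and reputation-weighted aggregation, can be sketched concretely. This is a minimal illustration of the general technique, not Sora's actual implementation; the function names and the reputation scale are my own assumptions.

```python
import statistics

def mad_filter(values, k=3.0):
    """Drop outliers using the Median Absolute Deviation (MAD).

    Keeps values within k scaled MADs of the median; 1.4826 makes the
    MAD a consistent estimator of the standard deviation for normal data.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return [v for v in values if v == med]
    return [v for v in values if abs(v - med) / (1.4826 * mad) <= k]

def reputation_weighted_answer(reports):
    """Aggregate (value, reputation) reports that survive MAD filtering."""
    kept = set(mad_filter([value for value, _ in reports]))
    survivors = [(v, rep) for v, rep in reports if v in kept]
    total = sum(rep for _, rep in survivors)
    return sum(v * rep for v, rep in survivors) / total

# Five agents report a value; one report is an obvious outlier.
reports = [(100.2, 0.9), (99.8, 0.8), (100.1, 0.7), (250.0, 0.2), (100.0, 0.6)]
print(reputation_weighted_answer(reports))  # the 250.0 report is discarded
```

The point of the combination: MAD removes reports that are far from consensus regardless of who sent them, and reputation then decides how much each surviving report moves the final answer.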
Therefore, what we need is an open, multi-stakeholder "truth-seeking market." Why? Because the way humans acquire truth is not by asking a single expert, but by checking multiple sources, consulting multiple friends, listening to multiple KOLs, and then extracting a stable understanding from the conflicts. The agent world must also evolve along this mechanism.
Our current development direction combines ERC8004 and x402. ERC8004 provides the programmable reputation layer: it records each agent's historical performance, call count, success stories, challenge records, areas of expertise, stability, and so on, so that a "verifiable career" naturally determines an agent's eligibility to participate. x402 handles the payment layer: through it, we can dynamically convene multiple medium-to-high-reputation agents for a single event verification, letting them verify in parallel, cross-validate, and have their results aggregated according to their contributions. Instead of finding a single expert, we assemble a committee: the machine world's true "truth committee."
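The committee mechanic described above can be sketched as follows. This is a hypothetical illustration of the selection-and-voting logic only; the `Agent` fields, thresholds, and registry shape are assumptions, not the ERC8004 schema or the x402 payment flow.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    domain: str
    reputation: float  # imagined as read from an ERC8004-style registry

def assemble_committee(registry, domain, min_rep=0.6, size=3):
    """Pick the top-reputation agents in a domain to form a committee."""
    eligible = [a for a in registry if a.domain == domain and a.reputation >= min_rep]
    return sorted(eligible, key=lambda a: a.reputation, reverse=True)[:size]

def committee_verdict(committee, verify):
    """Each member verifies independently; votes are reputation-weighted."""
    weight_yes = sum(a.reputation for a in committee if verify(a))
    weight_all = sum(a.reputation for a in committee)
    return weight_yes / weight_all > 0.5

registry = [
    Agent("fin-1", "finance", 0.92),
    Agent("fin-2", "finance", 0.75),
    Agent("fin-3", "finance", 0.40),   # below threshold, excluded
    Agent("sec-1", "security", 0.88),
]
committee = assemble_committee(registry, "finance")
print([a.name for a in committee])  # ['fin-1', 'fin-2']
```

In a full system, each committee member would be paid per call via x402 and the verdict would feed back into its ERC8004 reputation record; here both steps are left out to keep the sketch small.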
An open, multi-entity, reputation-weighted, challenge-incentivized, and self-evolving truth market may be the true future form of Oracle.
Meanwhile, Intuition is building another layer: semantic truth. Not all truths can be verified as events: "Is a project trustworthy?", "Is its governance quality good?", "Does the community like a product?", "Is a developer reliable?", "Is a viewpoint accepted by the mainstream?". These are not yes/no statements but social consensus, suited to expression as TRUST triples (Atom - Predicate - Object), with consensus strength accumulated through stakes for or against. This fits long-lived facts such as reputation, preferences, risk levels, and labels. Their current product experience, however, is genuinely poor. For example, to create the statement "Vitalik Buterin is the founder of Ethereum," every related term must first have an identity within the system, which makes the process very awkward. The pain points are clear, but their solution is not yet good enough.
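The triple-plus-staking model can be sketched in a few lines. This is a toy illustration of the idea, not Intuition's actual data model or API; the field names and the consensus formula are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str     # an Atom, e.g. "Vitalik Buterin"
    predicate: str   # e.g. "is founder of"
    obj: str         # e.g. "Ethereum"
    stake_for: float = 0.0
    stake_against: float = 0.0

    def stake(self, amount, supports=True):
        """Back or dispute the statement with a stake."""
        if supports:
            self.stake_for += amount
        else:
            self.stake_against += amount

    def consensus(self):
        """Net belief in [-1, 1]: +1 fully supported, -1 fully opposed."""
        total = self.stake_for + self.stake_against
        return 0.0 if total == 0 else (self.stake_for - self.stake_against) / total

t = Triple("Vitalik Buterin", "is founder of", "Ethereum")
t.stake(90.0, supports=True)
t.stake(10.0, supports=False)
print(round(t.consensus(), 2))  # 0.8
```

The output is a graded signal rather than a boolean, which is exactly what distinguishes semantic truth from event truth: the answer is a degree of consensus, not a verified fact.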
Therefore, the future truth structure will present two complementary layers: event truth (Agent Oracle) is responsible for the real-time world, and semantic truth (TRUST) is responsible for long-term consensus. Together, they constitute the truth foundation of AI.
The Reality Stack will be clearly divided into three layers: the event truth layer (Sora / ERC8004 + x402), the semantic truth layer (TRUST), and the final settlement layer (L1/L2 blockchain). This structure is likely to become the true foundation for AI × Web3.
Why will this change the entire internet? Because today's agents cannot verify authenticity, determine origin, avoid fraud, prevent data contamination, undertake high-risk actions, or perform cross-checks like humans. Without Agent Oracles, the agent economy cannot exist; but with them, for the first time, we can establish a verifiable reality layer for AI. Agent Oracle = the reality foundation of AI.
The Oracle of the future will not be a network of nodes, but a body of countless specialized agents: they accumulate reputation through real verification work, earn income through reputation, and win new jobs and challenges through that track record. They will collaborate automatically, divide tasks automatically, and self-evolve, ultimately expanding into every knowledge domain. That will be a true machine society, a marketplace of truth.
Blockchain provides us with a trusted ledger, but the Agent era requires trusted reality, trusted events, trusted semantics, trusted judgments, and trusted execution. Without Agent Oracles, AI cannot operate safely in the world; with them, we can build a "reality layer" for machines for the first time. The future belongs to protocols that help machines understand the real world.


