A constitution for agentic AI is not just a safeguard; it’s the new gateway to participation in trusted markets and governance through verifiability

Agentic AI must learn to play by blockchain’s rules | Opinion

2025/10/22 17:39

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

Agentic AI systems that can call tools on demand, set goals, spend money, and alter their own prompts are already creeping out of sandboxes and into production.

Summary
  • Governance through verifiability: As AI agents gain autonomy to spend, publish, and act, systems must enforce cryptographic provenance and auditability — turning AI accountability from guesswork into verifiable evidence.
  • Identity over anonymity: Agentic AI needs verifiable identities, not usernames. Using W3C Verifiable Credentials and smart account policies, agents can prove who they are, what they’re allowed to do, and maintain traceable accountability across platforms.
  • Signed inputs and outputs: Cryptographically signing every input, output, and action creates a transparent audit trail — transforming AI from a “black box” into a “glass box” where decisions are explainable, reproducible, and regulator-ready.

This shift upends the bargain society made with AI at its origins: outputs were suggestions, and humans were on the hook. Now, agents act, flipping that onus and opening the door to a wide world of ethical complications. If an autonomous system can alter records, publish content, and move funds, it must learn to play by the rules, and, more vitally, it must leave a trail that stands the test of time so that it can be audited and disputed if necessary.

Governance by engineering is needed now more than ever in the era of agentic AI, and the market is beginning to see this. Without cryptographic provenance and rules to bind agentic AI, autonomy becomes less about optimizing processes and more about accumulating liabilities. When a trade goes wrong or a deepfake spreads, post-mortem forensics cannot rely on Slack messages or screenshots. Provenance is key, and it has to be machine-verifiable from the moment inputs are captured through to the moment actions are taken.

Identities, not usernames

Handles or usernames are not enough; agents need identities that can be proven with verifiable credentials. W3C Verifiable Credentials (VCs) 2.0 provides a standards-based way to bind attributes (roles, permissions, attestations, and so on) to entities in a way that other machines can verify.

Pair this verification with key management and policy in smart accounts, and an agent can present exactly ‘who’ it is and ‘what’ it can do before it executes a single action. In such a model, credentials become a trackable permission surface that follows the agent across chains and services and holds it accountable to its rules.
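As a concrete illustration, the sketch below shows what such a credential might look like and how a policy gate could consult it before an action executes. The field names follow the W3C VC Data Model 2.0, but the DIDs, the permission vocabulary, and the spend limit are hypothetical, and the issuer’s cryptographic proof is elided.

```python
# Minimal sketch of a VC-style credential binding permissions to an agent
# identity. Field names follow the W3C VC Data Model 2.0; the DIDs and the
# "permissions" vocabulary are hypothetical illustrations.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AgentAuthorizationCredential"],
    "issuer": "did:example:treasury-operator",         # hypothetical issuer DID
    "validFrom": "2025-10-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:agent-7f3a",                # hypothetical agent DID
        "role": "trading-agent",
        "permissions": ["quote:read", "order:place"],  # what the agent may do
        "spendLimitUSD": 10000,
    },
    # In practice this is a cryptographic proof (e.g., a Data Integrity proof)
    # produced by the issuer's key; elided here.
    "proof": {"type": "DataIntegrityProof", "proofValue": "..."},
}

def is_action_allowed(vc: dict, action: str, amount_usd: float) -> bool:
    """Gate an agent action on its presented credential (proof check omitted)."""
    subject = vc["credentialSubject"]
    return action in subject["permissions"] and amount_usd <= subject["spendLimitUSD"]

assert is_action_allowed(credential, "order:place", 2500.0)
```

A smart account can enforce the same check at execution time, so the permission surface travels with the agent rather than living in any one service’s settings.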

The messy provenance of widely used AI datasets, where misattribution and license-omission rates run above 70%, shows how fast non-verifiable AI crumbles under inspection. If the community can’t keep data straight for static training corpora, it can’t expect regulators to accept unlabeled, unverified agent actions in live environments.

Signing inputs and outputs

Agents act on inputs, whether a quote, a file, or a photo, and when those inputs can be forged or stripped of context, safety collapses. The Coalition for Content Provenance and Authenticity (C2PA) standard moves media out of the realm of guesswork and into cryptographically signed content credentials.
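To make the idea concrete, here is a minimal sketch of binding a media file’s hash, origin, and capture time under an issuer signature. It illustrates the concept behind signed content credentials, not the actual C2PA manifest format, which uses structured assertions and X.509 certificate chains; the keys and field names are illustrative.

```python
# Illustrative sketch of a signed content credential: commit to the media's
# hash, origin, and capture time, then sign that manifest with an issuer key.
# This is NOT the real C2PA wire format; fields and keys are stand-ins.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # stand-in for a device or publisher key

def make_content_credential(media: bytes, origin: str) -> dict:
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "origin": origin,
        "capturedAt": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": issuer_key.sign(payload).hex()}

def verify_content_credential(media: bytes, cred: dict, public_key) -> bool:
    # Reject media whose bytes no longer match the signed hash.
    if hashlib.sha256(media).hexdigest() != cred["manifest"]["sha256"]:
        return False
    payload = json.dumps(cred["manifest"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(cred["signature"]), payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
cred = make_content_credential(photo, origin="device:camera-01")
assert verify_content_credential(photo, cred, issuer_key.public_key())
```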

Once again, credentials win over usernames, as seen in Google integrating content credentials into search and Adobe launching a public web app to embed and inspect them. The momentum here is toward artifacts that carry their own chain of custody, so agents that ingest data and emit only credentialed media will be easier to trust (and to govern).

This method should be extended to more structured data and decisions, such as when an agent queries a service. In that scenario, the service’s response should be signed, and the agent’s decision should then be recorded, sealed, and time-stamped for verification.
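A minimal sketch of that verify-then-act-then-record loop, with in-memory keys standing in for real key management and hypothetical field names throughout:

```python
# The agent refuses to act on an unverifiable input, then seals its own
# decision as a signed, time-stamped record ready for the audit log.
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

service_key = Ed25519PrivateKey.generate()  # signs responses at the service
agent_key = Ed25519PrivateKey.generate()    # signs decisions at the agent

def canonical(obj: dict) -> bytes:
    """Serialize deterministically so signer and verifier hash the same bytes."""
    return json.dumps(obj, sort_keys=True).encode()

# The service signs its quote; the agent verifies before acting.
quote = {"pair": "ETH/USD", "price": 3900.0, "expires": "2025-10-22T18:00:00Z"}
quote_sig = service_key.sign(canonical(quote))

try:
    service_key.public_key().verify(quote_sig, canonical(quote))
except InvalidSignature:
    raise SystemExit("unverifiable input: refusing to act")

# The decision is recorded, sealed, and time-stamped.
decision = {
    "inputQuote": quote,
    "action": "buy",
    "size": 1.5,
    "decidedAt": datetime.now(timezone.utc).isoformat(),
}
decision_sig = agent_key.sign(canonical(decision))
```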

Without signed statements, post-mortems dissolve into finger-pointing and conjecture. With them, accountability becomes computable — every decision, action, and transition cryptographically tied to a verifiable identity and policy context. For agentic AI, this transforms post-incident analysis from subjective interpretation into reproducible evidence, where investigators can trace intent, sequence, and consequence with mathematical precision.

Establishing on-chain or permissioned-chain logs gives autonomous systems an audit spine — a verifiable trail of causality. Investigators gain the ability to replay behavior, counterparties can verify authenticity and non-repudiation, and regulators can query compliance dynamically instead of reactively. The “black box” becomes a glass box, where explainability and accountability converge in real time. Transparency shifts from a marketing claim to a measurable property of the system.
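One way to picture that audit spine is an append-only log in which every entry commits to the hash of its predecessor, so any tampering breaks replay. The sketch below is illustrative; a production system would also anchor the running hash on-chain periodically, which is omitted here.

```python
# Minimal hash-chained audit log: each entry commits to its predecessor's
# hash, so verify_chain() can replay the log and detect any tampering.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def entry_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    body = {"prev": prev, "record": record}
    log.append({**body, "hash": entry_hash(body)})

def verify_chain(log: list) -> bool:
    """Replay the log, confirming every link and every hash is intact."""
    prev = GENESIS
    for entry in log:
        body = {"prev": entry["prev"], "record": entry["record"]}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"agent": "did:example:agent-7f3a", "action": "order:place"})
append(log, {"agent": "did:example:agent-7f3a", "action": "publish"})
assert verify_chain(log)
log[0]["record"]["action"] = "tampered"
assert not verify_chain(log)  # any edit breaks the chain on replay
```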

Providers capable of demonstrating lawful data sourcing, verifiable process integrity, and compliant agentic behavior will operate with lower friction and higher trust. They won’t face endless rounds of due diligence or arbitrary shutdowns. When an AI system can prove what it did, why it did it, and on whose authority, risk management evolves from policing to permissioning — and adoption accelerates.

This marks a new divide in AI ecosystems: verifiable agents that can lawfully interoperate across regulated networks, and opaque agents that cannot. A constitution for agentic AI — anchored in identity, signed inputs and outputs, and immutable, queryable logs — is not just a safeguard; it’s the new gateway to participation in trusted markets.

Agentic AI will only go where it can prove itself. Those who design for provability and integrity now will set the standard for the next generation of interoperable intelligence. Those who ignore that bar will face progressive exclusion — from networks, users, and future innovation itself.

Chris Anderson

Chris Anderson is the CEO of ByteNova AI, an emerging innovator in edge AI technology.
