TLDR:
- Algorand warns that smart contract vulnerabilities cause immediate, irreversible fund loss with no legal recovery path available.
- AI tools may suggest storing user balances in LocalState, a flawed pattern because ClearState lets users wipe critical accounting data permanently.
- Algorand recommends using Plan Mode and agent skills to design secure contract architecture before writing a single line of code.
- Private keys must stay out of AI reach entirely, with OS-level keyrings handling all transaction signing away from the agent.
Algorand is urging blockchain developers to adopt disciplined, AI-assisted practices before deploying smart contracts to MainNet.
The blockchain platform has drawn a clear line between reckless AI-generated code and responsible agentic engineering.
With AI agents now capable of building and deploying contracts in a single conversation, the stakes have never been higher. Deploying vulnerable smart contracts means immediate, irreversible loss of funds with no path to recovery.
The Risk of Unreviewed AI-Generated Code
Algorand developers have identified a growing problem in the broader web3 space. AI coding tools allow developers to ship products faster, but unchecked code carries serious risk.
Unlike web2 breaches, smart contract vulnerabilities cannot be patched after the fact. Funds drained from a poorly written contract are gone permanently, with no legal recourse available.
The Algorand team shared a concrete example of how AI can mislead developers. An AI might store user balances in LocalState, which appears to be the correct pattern.
However, users can clear local state at any time, and ClearState succeeds even when a program rejects it. This means critical accounting data can disappear without warning. Developers who do not understand the code they ship are exposed to exactly this kind of subtle failure.
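The failure mode can be sketched in plain Python. This is a conceptual model of account-local state, not on-chain code: local state lives with the account that opted in, so a ClearState transaction removes it no matter what the contract's logic would prefer.

```python
# Conceptual model of Algorand account-local state (not on-chain code).
# Balances kept in LocalState live with the opted-in account, so a
# ClearState transaction deletes them even if the program objects.

class EscrowModel:
    def __init__(self):
        self.local_state = {}  # account -> {"balance": int}

    def opt_in(self, account):
        self.local_state[account] = {"balance": 0}

    def deposit(self, account, amount):
        self.local_state[account]["balance"] += amount

    def clear_state(self, account):
        # ClearState always succeeds: the protocol removes local state
        # whether or not the clear-state program approves.
        self.local_state.pop(account, None)

app = EscrowModel()
app.opt_in("ALICE")
app.deposit("ALICE", 1_000)
app.clear_state("ALICE")
# The contract's only record of Alice's deposit is now gone for good.
print("ALICE" in app.local_state)  # False
```

The contract still "worked" at every step; the data loss comes from where the data lived, which is exactly the kind of property an unreviewed AI suggestion glosses over.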
Algorand’s developers formalized this concern through a public post from the @algodevs account. The post draws from Addy Osmani’s distinction between “vibe coding” and “agentic engineering.”
Vibe coding means accepting all AI output without review. Agentic engineering means the developer remains the architect and final decision-maker throughout the process.
The platform advises developers to use BoxMap instead of LocalState for data that must survive a user's opt-out. This kind of nuance is what separates a working contract from a broken one.
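The same conceptual model shows why app-owned storage fixes the problem: boxes belong to the application rather than to the user's account, so clearing local state cannot delete them. Again a plain-Python sketch, not algopy code:

```python
# Conceptual model: BoxMap-style storage is owned by the application
# and keyed by account, so it survives a user's ClearState.

class BoxEscrowModel:
    def __init__(self):
        self.boxes = {}        # app-owned: account -> balance
        self.opted_in = set()  # per-account local state membership

    def opt_in(self, account):
        self.opted_in.add(account)

    def deposit(self, account, amount):
        self.boxes[account] = self.boxes.get(account, 0) + amount

    def clear_state(self, account):
        # ClearState removes only the user's local state; app-owned
        # boxes are untouched, so the accounting record is preserved.
        self.opted_in.discard(account)

app = BoxEscrowModel()
app.opt_in("ALICE")
app.deposit("ALICE", 1_000)
app.clear_state("ALICE")
print(app.boxes["ALICE"])  # 1000
```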
AI tools trained on outdated patterns will not flag these issues automatically. Developers must bring their own understanding to every deployment.
How Algorand Recommends Building Safely With AI
Algorand outlines several practices to keep AI-assisted development secure and maintainable. Developers should use Plan Mode before writing any code, allowing the agent to design architecture first.
This produces a spec covering state schema, method signatures, and access control. Reviewing this plan catches design flaws before any implementation begins.
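A plan produced this way might look something like the following. The contract name, fields, and methods here are illustrative, not from Algorand's post:

```
Contract: TokenVault
State schema:
  Global:  total_deposits (uint64), admin (address)
  Boxes:   deposits: BoxMap[Account, UInt64]
Methods:
  deposit(payment: PaymentTxn) -> None      # anyone
  withdraw(amount: UInt64) -> None          # depositor only
  set_admin(new_admin: Account) -> None     # admin only
Access control:
  set_admin and withdraw assert on Txn.sender before mutating state
```

Reviewing a page like this takes minutes; reviewing the same decisions after they are buried in generated code takes far longer.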
Agent skills play a major role in guiding AI toward correct Algorand patterns. These are curated instructions that encode current best practices directly into the development workflow.
Without them, AI is likely to use deprecated APIs or outdated patterns. Structured prompts reduce hallucinations and produce more reliable contract code.
Private keys must remain completely out of reach of AI agents at all times. Tools like VibeKit use OS-level keyrings so that the AI can request transactions without ever accessing signing credentials.
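The separation principle can be sketched as follows. This is a generic illustration, not VibeKit's actual API: the AI-facing layer hands unsigned transaction bytes to a signer that holds the secret (standing in for an OS keyring), and only signed bytes ever come back.

```python
# Sketch of key isolation: the agent layer never holds the secret.
# HMAC-SHA256 stands in for real transaction signing.
import hashlib
import hmac

class KeyringSigner:
    """Stand-in for an OS keyring; the secret never leaves this object."""
    def __init__(self, secret: bytes):
        self._secret = secret  # private, never returned to callers

    def sign(self, unsigned_txn: bytes) -> bytes:
        return hmac.new(self._secret, unsigned_txn, hashlib.sha256).digest()

class Agent:
    """AI-facing layer: can request a signature, never sees the key."""
    def __init__(self, signer: KeyringSigner):
        self._signer = signer

    def submit(self, unsigned_txn: bytes) -> bytes:
        # Only signed bytes cross this boundary, no key material.
        return self._signer.sign(unsigned_txn)

signer = KeyringSigner(secret=b"demo-only-secret")
agent = Agent(signer)
sig = agent.submit(b'{"type": "pay", "amount": 1000}')
print(len(sig))  # 32-byte HMAC-SHA256 tag
```

Even if the agent's entire context were leaked, the attacker would hold transaction requests and signatures, not the key.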
Additionally, developers should run algokit task analyze and use simulate calls to catch edge cases. Testing should mirror how an attacker would approach the contract, not just how a user would.