Warden Code

Overview

Warden Code is a CLI for scaffolding production-ready AI Agents compatible with Warden.

Generated Agents also support open standards such as the A2A protocol, x402 payments, and ERC-8004 identity, which allows them to function across the broader Agent ecosystem.

This article is a reference covering the main features of Warden Code. For getting started with Agent development, see Build an Agent with Warden Code.

GitHub

Warden Code is available on GitHub: warden-code.

Key features

Agents generated with Warden Code support a number of key features out of the box. To learn more, see Warden Agent capabilities.

Basics

CLI commands

With Warden Code, you can use the command line to generate a project, edit your code with an AI assistant, configure your Agent, and much more:

/new - Create a new Agent interactively
/build - Enter the AI-powered mode to build your Agent
/chat - Chat with a running Agent using A2A or LangGraph
/config - View and edit the Agent configuration
/register - Register the Agent onchain (ERC-8004)
/activate - Activate a registered Agent onchain (ERC-8004)
/deactivate - Deactivate a registered Agent onchain (ERC-8004)
/help - Show available commands
/clear - Clear the terminal screen
/exit - Exit the CLI

Project structure

Warden Code generates the following project structure:

my-agent/
├── src/
│   ├── agent.ts        # Your Agent's logic: the handler function
│   ├── server.ts       # Server setup, static file serving, protocol routing
│   └── payments.ts     # x402 payment setup (created only if you enable x402)
├── public/
│   ├── index.html      # The chat frontend: auto-loads the A2A Agent Card, x402 wallets
│   └── .well-known/
│       ├── agent-card.json          # The A2A Agent Card: the identity, capabilities, skills
│       └── agent-registration.json  # ERC-8004 registration metadata
├── package.json
├── tsconfig.json
├── Dockerfile
├── .env.example
└── .gitignore
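
The handler in src/agent.ts is the heart of the Agent. As a rough sketch of the pattern (the type names and handler signature below are illustrative assumptions, not the exact code Warden Code generates), a minimal echo handler might look like this:

```typescript
// Illustrative sketch of an Agent handler. The AgentInput/AgentOutput
// types and the function signature are assumptions for illustration,
// not the generated Warden Code API.

interface Message {
  role: "user" | "assistant";
  content: string;
}

interface AgentInput {
  messages: Message[];
}

interface AgentOutput {
  content: string;
}

// The handler receives the conversation so far and returns a reply.
export async function handler(input: AgentInput): Promise<AgentOutput> {
  const last = input.messages[input.messages.length - 1];
  return { content: `You said: ${last?.content ?? ""}` };
}
```

The server (src/server.ts) wires this handler into the A2A and LangGraph routes, so the same logic is reachable over either protocol.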

Agent models

Depending on the choices you make when creating an Agent, Warden Code uses one of the supported Agent models:

  • OpenAI + Streaming: A GPT-powered Agent with streaming responses
  • OpenAI + Multi-turn: A GPT-powered Agent with conversation history
  • Blank + Streaming: A minimal streaming Agent that echoes input
  • Blank + Multi-turn: A minimal multi-turn conversation Agent
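
The Blank + Multi-turn model, for example, keeps conversation history per thread and echoes input. A minimal sketch of that pattern (the class and method names here are illustrative assumptions, not the generated code):

```typescript
// Illustrative multi-turn echo Agent: keeps per-thread history in
// memory and replies based on the latest user message.
// Names are assumptions for illustration only.

type Msg = { role: "user" | "assistant"; content: string };

class EchoAgent {
  private history = new Map<string, Msg[]>();

  chat(threadId: string, text: string): string {
    const msgs = this.history.get(threadId) ?? [];
    msgs.push({ role: "user", content: text });
    // Number the reply by how many user turns this thread has seen.
    const turn = msgs.filter((m) => m.role === "user").length;
    const reply = `Echo #${turn}: ${text}`;
    msgs.push({ role: "assistant", content: reply });
    this.history.set(threadId, msgs);
    return reply;
  }
}
```

Keeping history keyed by thread mirrors the thread-scoped LangGraph endpoints described below, where each thread carries its own state.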

API

Authentication

Coming soon.

A2A endpoints

Coming soon.

LangGraph endpoints

LangGraph Agent Server API

All Warden Agents are immediately compatible with the LangGraph Agent Server API.

Warden Code exposes a subset of this API. Below is the full list of supported endpoints, with links to the LangGraph API reference.

You can test them when running your Agent locally or in production.

Assistants

| Name | Method | Endpoint |
| --- | --- | --- |
| Search Assistants | POST | /assistants/search |
| Get Assistant | GET | /assistants/{assistant_id} |

Threads

| Name | Method | Endpoint |
| --- | --- | --- |
| Create Thread | POST | /threads |
| Search Threads | POST | /threads/search |
| Get Thread | GET | /threads/{thread_id} |
| Get Thread State | GET | /threads/{thread_id}/state |
| Get Thread History | GET | /threads/{thread_id}/history |
| Delete Thread | DELETE | /threads/{thread_id} |

Thread runs

| Name | Method | Endpoint |
| --- | --- | --- |
| Create Background Run | POST | /threads/{thread_id}/runs |
| Create Run, Stream Output | POST | /threads/{thread_id}/runs/stream |
| Create Run, Wait for Output | POST | /threads/{thread_id}/runs/wait |
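
For example, a run that waits for output is created by POSTing to /threads/{thread_id}/runs/wait. The sketch below only builds the request; the base URL, port, and assistant_id value are placeholder assumptions, and the exact body fields a given Agent accepts should be checked against the LangGraph API reference:

```typescript
// Build a "Create Run, Wait for Output" request for the LangGraph
// Agent Server API. The base URL and assistant_id used in the usage
// example are assumptions, not values fixed by Warden Code.

function buildRunWaitRequest(
  baseUrl: string,
  threadId: string,
  assistantId: string,
  userText: string
) {
  return {
    url: `${baseUrl}/threads/${threadId}/runs/wait`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      assistant_id: assistantId,
      input: { messages: [{ role: "user", content: userText }] },
    }),
  };
}

// Usage against a locally running Agent (URL is an assumption):
// const req = buildRunWaitRequest("http://localhost:3000", threadId, "agent", "Hello");
// const res = await fetch(req.url, req);
```

The /threads/{thread_id}/runs/stream endpoint takes the same shape of request but returns the output incrementally instead of waiting for the final result.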

Stateless runs

| Name | Method | Endpoint |
| --- | --- | --- |
| Create Run, Stream Output | POST | /runs/stream |
| Create Run, Wait for Output | POST | /runs/wait |

System

| Name | Method | Endpoint |
| --- | --- | --- |
| Server Information | GET | /info |
| Health Check | GET | /ok |

Functions

Coming soon.