A deep dive into the tool system architecture of Claude Code's open-source code: how tools are defined, sent to the LLM, and processed back.
Claude Code is Anthropic's official CLI tool for AI-assisted software engineering. We analyzed its open-source code to understand how it defines, sends, and processes tool calls with the LLM.
The tool system is built on three layers: Tool Definition (Zod schemas converted to JSON Schema), API Transport (native Anthropic Messages API with streaming), and Result Processing (execution + result mapping back to the conversation). Each layer is cleanly separated, enabling concurrent tool execution, permission gating, and large-output persistence.
Each tool — Bash, Read, Write, Edit, Glob, Grep, Task, and more — implements the Tool<Input, Output> interface. Schemas are authored in Zod and converted to JSON Schema at runtime via zodToJsonSchema() before being sent to the API.
```typescript
interface Tool<Input, Output> {
  name: string
  description: string
  inputSchema: ZodSchema            // converted to JSON Schema before sending
  call(input, context): ToolResult
  mapToolResultToToolResultBlockParam()
  isReadOnly(): boolean
  isConcurrencySafe(): boolean
  maxResultSizeChars: number
}
```
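To illustrate the schema conversion step, here is a miniature stand-in for `zodToJsonSchema()`. The real conversion comes from the `zod-to-json-schema` package and handles far more cases; `MiniField`, `MiniShape`, and `miniToJsonSchema` are hypothetical names for this sketch only.

```typescript
// A toy schema descriptor standing in for a Zod schema (illustrative only).
type MiniField = { kind: "string" | "number"; description?: string; optional?: boolean };
type MiniShape = Record<string, MiniField>;

// Convert the toy descriptor into the JSON Schema shape the API expects.
function miniToJsonSchema(shape: MiniShape) {
  const properties: Record<string, unknown> = {};
  const required: string[] = [];
  for (const [key, field] of Object.entries(shape)) {
    properties[key] = {
      type: field.kind,
      ...(field.description ? { description: field.description } : {}),
    };
    if (!field.optional) required.push(key); // optional fields stay out of "required"
  }
  return { type: "object", properties, required };
}

// A Bash-like tool input: command is required, timeout is optional.
const bashInput: MiniShape = {
  command: { kind: "string", description: "The shell command to execute" },
  timeout: { kind: "number", optional: true },
};

console.log(JSON.stringify(miniToJsonSchema(bashInput), null, 2));
```

The resulting object is what lands in the `input_schema` field of each tool definition sent to the API.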
Tools also declare concurrency safety and read-only properties. When the LLM returns multiple tool_use blocks in a single message, Claude Code checks these flags to decide which tools can run in parallel and which must execute sequentially.
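A sketch of how those flags could gate dispatch. This is an illustration, not the actual Claude Code scheduler; `partition` and `dispatch` are hypothetical names.

```typescript
interface ToolFlags { isReadOnly(): boolean; isConcurrencySafe(): boolean }
interface ToolUse { id: string; tool: ToolFlags }

// Split a batch of tool_use blocks by their concurrency flag.
function partition(uses: ToolUse[]) {
  return {
    parallel: uses.filter((u) => u.tool.isConcurrencySafe()),
    sequential: uses.filter((u) => !u.tool.isConcurrencySafe()),
  };
}

async function dispatch(uses: ToolUse[], run: (u: ToolUse) => Promise<string>) {
  const { parallel, sequential } = partition(uses);
  // Concurrency-safe tools (typically read-only ones) fan out together...
  const results = await Promise.all(parallel.map(run));
  // ...while the rest execute strictly one at a time.
  for (const u of sequential) results.push(await run(u));
  return results;
}
```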
Tools are sent as a native JSON array via the Anthropic SDK's tools parameter — NOT embedded as text in the prompt. This is fundamentally different from ChatML-based systems where tool definitions are serialized into the prompt text with special tokens.
```json
{
  "model": "claude-opus-4.5-20251101",
  "system": [{ "type": "text", "text": "You are Claude Code..." }],
  "tools": [{
    "name": "bash",
    "description": "Execute bash commands...",
    "input_schema": { "type": "object", ... }
  }],
  "messages": [
    { "role": "user", "content": "List files in current directory" }
  ],
  "thinking": { "type": "adaptive" },
  "betas": ["interleaved-thinking-2025-05-14", "..."]
}
```
The raw token-level prompt format is handled server-side by Anthropic and never exposed to the client. Claude's API doesn't use <|im_start|> markers or ChatML — the SDK sends structured JSON over HTTP, and the model's internal prompt encoding is an implementation detail.
The LLM streams tool input as incremental JSON fragments. Claude Code accumulates these deltas and parses the complete input only at content_block_stop. Each completed block is yielded immediately to the UI for responsive streaming.
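The accumulation step can be sketched with a simplified event shape. Note the real Anthropic stream nests `input_json_delta` inside `content_block_delta` events; the flattened `StreamEvent` union and `accumulateToolUse` here are illustrative.

```typescript
// Simplified stream events: block start, JSON fragments, block stop.
type StreamEvent =
  | { type: "content_block_start"; block: { id: string; name: string } }
  | { type: "input_json_delta"; partial_json: string }
  | { type: "content_block_stop" };

function accumulateToolUse(events: StreamEvent[]) {
  const completed: { id: string; name: string; input: unknown }[] = [];
  let id = "";
  let name = "";
  let buf = "";
  for (const ev of events) {
    if (ev.type === "content_block_start") {
      ({ id, name } = ev.block);
      buf = "";
    } else if (ev.type === "input_json_delta") {
      buf += ev.partial_json; // fragments are NOT valid JSON on their own
    } else if (ev.type === "content_block_stop") {
      completed.push({ id, name, input: JSON.parse(buf) }); // parse only once complete
    }
  }
  return completed;
}
```

Parsing only at `content_block_stop` avoids repeatedly attempting to parse half-finished JSON while still letting the UI render the raw fragments as they arrive.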
The model can return multiple tool_use blocks in a single message for parallel execution. Claude Code checks each tool's concurrency flags before dispatching them simultaneously.
Tool results are sent back as user messages with tool_result blocks. Each result references the original tool_use by ID. If output exceeds maxResultSizeChars, it's persisted to disk and replaced with a size summary.
```json
{
  "role": "user",
  "content": [{
    "type": "tool_result",
    "tool_use_id": "toolu_xxx",
    "content": "file1.txt\nfile2.txt\nfile3.txt",
    "is_error": false
  }]
}
```
Error results set `is_error: true`, signaling the LLM to adjust its approach. Large outputs (common with Bash and Grep) trigger the persistence layer: the full result is written to a temporary file, and the API receives only a pointer such as `[Tool output saved to file. Original size: 125KB]`.
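The result mapping plus the size gate can be sketched as follows. `toToolResultBlock` is an assumed helper name, and the fixed threshold stands in for the per-tool `maxResultSizeChars` in the real system.

```typescript
import { writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const MAX_RESULT_SIZE_CHARS = 10_000; // stand-in for a tool's maxResultSizeChars

function toToolResultBlock(toolUseId: string, output: string, isError = false) {
  let content = output;
  if (output.length > MAX_RESULT_SIZE_CHARS) {
    // Persist the full output to disk and hand the model only a pointer.
    const path = join(tmpdir(), `tool-output-${toolUseId}.txt`);
    writeFileSync(path, output);
    content = `[Tool output saved to file. Original size: ${Math.round(output.length / 1024)}KB]`;
  }
  return { type: "tool_result", tool_use_id: toolUseId, content, is_error: isError };
}
```

Keeping oversized output on disk caps the context cost of a single noisy command while leaving the full result available if the model later asks to read the file.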
A single tool-use turn flows through four stages. The loop repeats until the model responds with text instead of a tool call.
1. **Request:** natural language request sent as a user message to the API.
2. **Tool call:** model returns one or more `tool_use` blocks with a name and JSON input.
3. **Execution:** tool runs locally with permission checks, concurrency control, and progress streaming.
4. **Result:** output sent back as a user message. LLM continues or responds with text.
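The four stages above can be sketched end to end. `callApi` and `executeTool` are assumed stand-ins for the Messages API call and local tool execution; the loop structure, not the helper names, is the point.

```typescript
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };
type Msg = { role: "user" | "assistant"; content: unknown };

async function agentLoop(
  messages: Msg[],
  callApi: (msgs: Msg[]) => Promise<Block[]>,
  executeTool: (use: { id: string; name: string; input: unknown }) => Promise<string>,
): Promise<string> {
  for (;;) {
    const blocks = await callApi(messages);
    const uses = blocks.filter(
      (b): b is Extract<Block, { type: "tool_use" }> => b.type === "tool_use",
    );
    if (uses.length === 0) {
      // No tool calls: the model answered with text and the loop ends.
      return blocks.map((b) => (b.type === "text" ? b.text : "")).join("");
    }
    // Echo the assistant turn, then run the tools and send results back.
    messages.push({ role: "assistant", content: blocks });
    const results = await Promise.all(
      uses.map(async (u) => ({
        type: "tool_result",
        tool_use_id: u.id, // each result references its originating tool_use
        content: await executeTool(u),
      })),
    );
    messages.push({ role: "user", content: results });
  }
}
```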
Claude Code leverages 19+ Anthropic beta headers for advanced capabilities. These are all Anthropic-specific and not available through third-party providers.
| Provider | SDK | Status |
|---|---|---|
| Anthropic 1P | @anthropic-ai/sdk | Full Support |
| AWS Bedrock | @anthropic-ai/bedrock-sdk | Supported |
| Google Vertex | @anthropic-ai/vertex-sdk | Supported |
| Azure Foundry | @anthropic-ai/foundry-sdk | Supported |