What Just Shipped

Anthropic released Code Review for Claude Code on March 9, 2026 — a multi-agent system that automatically analyzes AI-generated pull requests and flags logic errors before they ship. It's currently in research preview for Claude Team and Enterprise customers.

How It Works

When a developer opens a pull request, the system dispatches multiple AI agents that work in parallel. Each agent independently searches for bugs; the agents then cross-verify one another's findings to filter out false positives and rank the remaining issues by severity.

The system scales dynamically with complexity. Large or intricate pull requests receive more agents and deeper analysis; trivial changes get a lighter pass.
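The pattern described above (dispatch parallel agents, cross-verify findings, rank by severity, scale agent count with PR size) can be sketched as follows. This is a hypothetical illustration, not Anthropic's implementation: the agent stubs, the "reported by at least two agents" verification rule, the severity scale, and the scaling thresholds are all assumptions made for the example.

```python
# Hypothetical sketch of a multi-agent code review pipeline.
# Agent behavior is stubbed; names, thresholds, and scoring are assumptions.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    issue: str
    severity: int  # 1 (low) .. 5 (critical), an assumed scale


def agent_count_for(diff_lines: int) -> int:
    """Scale the number of review agents with PR size (assumed thresholds)."""
    if diff_lines < 50:
        return 1
    if diff_lines < 500:
        return 3
    return 5


def run_agent(agent_id: int, diff: str) -> list[Finding]:
    """Stand-in for one independent AI reviewer pass over the diff."""
    # A real agent would call a model here; we return canned findings.
    findings = [Finding("app.py", 42, "possible off-by-one in loop bound", 4)]
    if agent_id % 2 == 0:
        findings.append(Finding("app.py", 7, "unused import", 1))
    return findings


def cross_verify(per_agent: list[list[Finding]]) -> list[Finding]:
    """Keep only findings reported by at least two agents (assumed rule)."""
    counts: dict[Finding, int] = {}
    for findings in per_agent:
        for f in set(findings):
            counts[f] = counts.get(f, 0) + 1
    return [f for f, n in counts.items() if n >= 2]


def review(diff: str, diff_lines: int) -> list[Finding]:
    """Run the full pipeline: fan out, cross-verify, rank by severity."""
    n = agent_count_for(diff_lines)
    with ThreadPoolExecutor(max_workers=n) as pool:
        per_agent = list(pool.map(lambda i: run_agent(i, diff), range(n)))
    verified = cross_verify(per_agent)
    return sorted(verified, key=lambda f: f.severity, reverse=True)
```

On a mid-sized diff this sketch fans out three agents, keeps only findings that at least two agents agree on, and surfaces the highest-severity issue first; a trivial diff gets a single agent, mirroring the lighter pass the article describes.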

The Numbers

Anthropic runs Code Review on nearly every internal PR. Before the tool, 16% of PRs received substantive review comments; that share has since risen to 54%. Each review takes approximately 20 minutes and costs $15–$25 depending on code complexity (token-based pricing).

Why It Matters

The volume of AI-generated code is rising faster than human reviewer bandwidth. As developers increasingly use tools like GitHub Copilot, Cursor, and Windsurf to generate code, the review bottleneck has become critical. Anthropic's multi-agent approach offers a scalable solution — slower than instant linters, but far deeper in reasoning.

The $15–$25 per-review cost positions this as a premium enterprise tool, not a daily developer workflow item. For high-stakes code in security-sensitive or production environments, that cost may be well justified.

What's Next

Code Review is in research preview now. Expect general availability and pricing adjustments as Anthropic gathers feedback on cost-quality tradeoffs. Competing offerings from GitHub (Copilot Code Review) and Google (Gemini for code) will intensify pressure on pricing and speed.
