You updated to Opus 4.7 and your carefully crafted prompt suddenly produces different output. Or you noticed Claude handling a 200-file codebase refactor without losing track halfway through. Both happened to me in the first week.
Opus 4.7 isn't a minor bump. It's measurably better at coding and vision, with one catch: it follows instructions so literally that prompts written for older versions might behave unexpectedly.
Key Takeaways
- 13% higher resolution rate on a 93-task coding benchmark vs Opus 4.6
- Vision handles images up to 2,576px on the long edge (~3.75 megapixels), more than 3x the previous limit
- Stronger long-context reasoning: it actually remembers what you said 50,000 tokens ago
- Instruction following is more literal — existing prompts may need re-tuning
- Same pricing: $5/M input tokens, $25/M output tokens
What Got Better
Coding tasks feel different. The model handles complex, long-running tasks with a consistency Opus 4.6 couldn't match. On Anthropic's 93-task benchmark, resolution jumped 13% — including four tasks that neither Opus 4.6 nor Sonnet 4.6 could solve at all.
In practice, multi-file refactors that used to drift by file 15 now complete cleanly. The model also self-verifies more — it devises ways to check its own output before reporting back.
Vision is genuinely useful now. The previous limit was frustrating — dense screenshots came out blurry, diagrams lost detail. At 2,576px on the long edge, you can feed it a full-page dashboard screenshot and it'll read the small numbers in the corner.
Practical uses: code review screenshots, wireframe feedback, data extraction from charts, reading error messages from phone screenshots.
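If you're automating any of this, it helps to downscale images client-side so the long edge stays within the 2,576px cap. A minimal sketch, assuming only the limit stated above (the helper name is mine):

```python
def fit_long_edge(width: int, height: int, limit: int = 2576) -> tuple[int, int]:
    """Return dimensions scaled so the long edge is at most `limit` px,
    preserving aspect ratio. Images already within the limit pass through."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height
    scale = limit / long_edge
    return round(width * scale), round(height * scale)

# A 4K screenshot gets scaled down; a 1080p one fits as-is.
print(fit_long_edge(3840, 2160))  # → (2576, 1449)
print(fit_long_edge(1920, 1080))  # → (1920, 1080)
```

Resizing before upload also lets you choose the resampling filter yourself instead of leaving it to whatever the server does.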
Long-context reasoning holds up. If you're running Claude Code on a large codebase with 40+ files in context, the model is less likely to "forget" constraints you mentioned at the beginning. It maintains important notes across long, multi-session work more reliably.
The Instruction-Following Catch
Here's the thing: Opus 4.7 interprets instructions far more literally than before.
If your prompt says "always add error handling," Opus 4.6 might skip it when the function is obviously simple. Opus 4.7 adds error handling everywhere. Always means always now.
This is technically correct. But if you've been relying on the model to "use judgment" about when to apply a rule, you'll need to add those exceptions explicitly.
Start by reviewing your CLAUDE.md or system prompt for rules phrased as "always X" or "never Y," and add the exceptions you were counting on the model to infer. Example: "Add error handling to all public API endpoints. Skip for internal utility functions under 10 lines."
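As a sketch, here's what that rewrite might look like in a CLAUDE.md (the specific rules are illustrative, not from any real config):

```markdown
<!-- Before: relies on the model to infer exceptions -->
- Always add error handling.

<!-- After: the exceptions are spelled out -->
- Add error handling to all public API endpoints.
- Skip error handling for internal utility functions under 10 lines.
```

The "after" version produces the same behavior you were getting from Opus 4.6's judgment, but now it's on the record.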
💡 This is actually good for production. Literal instruction following means more predictable behavior. You just need prompts that say what you actually mean.
New Tools in the Box
xhigh effort level. Between high and max — more reasoning depth without the full latency cost. Set it with /effort xhigh in Claude Code or via the API.
Task budgets. The model allocates tokens more intelligently across multi-step workflows instead of blowing the budget on step one.
/ultrareview. A multi-agent code review command. It runs your branch through parallel review agents in the cloud. Think of it as getting three senior engineers to review your PR simultaneously.
Should You Switch?
From Opus 4.6: Yes. The coding and vision improvements are real, and re-tuning prompts is a small price.
From Sonnet 4.6: Depends on your workload. For quick tasks and cost-sensitive apps, Sonnet is still right. For complex multi-step work, long code sessions, or anything vision-heavy, Opus 4.7 is worth it.
⚠️ Don't switch mid-project without testing. The literal instruction-following change can surface unexpected behavior in established workflows. Run your key prompts through Opus 4.7 in a test environment first.
Setting Up
Claude Code: Switch with /model opus or set it in settings:
{ "model": "claude-opus-4-7" }
API: Replace your model string with claude-opus-4-7-20260401 in your API calls.
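A minimal sketch of that swap, assuming an existing Messages-style call. Only the model string comes from the announcement; the helper and the rest of the request shape are illustrative:

```python
# Pin the model string in one place so future upgrades are a one-line change.
OPUS_4_7 = "claude-opus-4-7-20260401"

def build_request(prompt: str, model: str = OPUS_4_7, max_tokens: int = 1024) -> dict:
    """Assemble a Messages-style request body; pass it to your API client."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("Review this diff for error handling gaps.")["model"])
# → claude-opus-4-7-20260401
```

Keeping the model string as a single constant also makes the pre-switch testing suggested above easier: point your test environment at the new string, leave production on the old one.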
Claude.ai: Opus 4.7 is available in the model picker for Pro and Team users.
Full details are in the Anthropic announcement and the models documentation.