Eight days. Three frontier models. One very different AI market.
Claude Opus 4.7 landed on April 16. GPT-5.5 followed on April 23. DeepSeek V4 arrived a day later.
Any one of them would normally dominate the cycle. Together, they mark something bigger: the week frontier AI stopped being scarce.
The capability line did not just move.
It blurred.
A year ago, the conventional wisdom was simple: open-source models were one generation behind closed models.
That era is over.
DeepSeek V4-Pro is a 1.6T-parameter MoE model with 49B active parameters, open weights, and an MIT license. It posts a Codeforces rating of 3206, SWE-bench Verified at 80.6%, LiveCodeBench at 93.5%, and a 1M-token context window.
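A quick back-of-envelope on what that MoE split means for per-token compute. The parameter counts come from the spec above; the FLOPs rule of thumb is an assumption, not a published figure:

```python
# Back-of-envelope: in an MoE model, per-token compute scales with
# ACTIVE parameters, not total. Rough rule of thumb (assumption):
# ~2 FLOPs per active parameter per token on the forward pass.
total_params = 1.6e12   # 1.6T total parameters (from the spec above)
active_params = 49e9    # 49B active per token (from the spec above)

active_fraction = active_params / total_params
print(f"active fraction: {active_fraction:.1%}")   # ~3.1%

flops_per_token = 2 * active_params
print(f"~{flops_per_token:.1e} FLOPs per token")
```

Only about 3% of the weights fire per token, which is how an open 1.6T model can be served at the prices below.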
The pricing is the real shock:
DeepSeek V4-Pro: $1.74 input / $3.48 output per million tokens
Claude Opus 4.7: $5 input / $25 output
GPT-5.5: $5 input / $30 output
V4-Flash: $0.14 input / $0.28 output
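What that spread means in dollars, using the per-million-token prices above. The workload size (500M input / 100M output tokens) is illustrative, not from the article:

```python
# Per-million-token prices (input, output) from the list above.
prices = {
    "DeepSeek V4-Pro": (1.74, 3.48),
    "Claude Opus 4.7": (5.00, 25.00),
    "GPT-5.5":         (5.00, 30.00),
    "V4-Flash":        (0.14, 0.28),
}

# Hypothetical batch job: 500M input tokens, 100M output tokens.
in_m, out_m = 500, 100

costs = {m: in_m * p_in + out_m * p_out
         for m, (p_in, p_out) in prices.items()}
for model, cost in costs.items():
    print(f"{model:16s} ${cost:>8,.2f}")

ratio = costs["GPT-5.5"] / costs["DeepSeek V4-Pro"]
print(f"GPT-5.5 costs {ratio:.1f}x DeepSeek V4-Pro on this job")
```

On this sketch the same job runs at roughly $1,218 on V4-Pro versus $5,500 on GPT-5.5, a 4.5x gap before any volume discounts.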
For batch workloads, that is not a discount.
That is a market reset.
Any company paying premium prices for closed-model reasoning now has a credible open-weight alternative at a fraction of the cost.
That changes procurement. It changes architecture. It changes leverage.
Where closed labs still lead
The closed labs still have a moat.
But it is narrower than people think.
The gap is no longer broad “intelligence.” It is specific capability.
On Terminal-Bench 2.0, GPT-5.5 scores 82.7% versus DeepSeek’s 67.9%. That is a real 15-point gap.
On SWE-Bench Pro, Opus 4.7 leads DeepSeek 64.3% to 55.4%.
The pattern is clear: closed models still dominate agentic execution.
Multi-step tool use. Autonomous terminal workflows. Long-horizon planning. Codebase navigation. Structured orchestration across messy real-world environments.
That is where the closed labs are still ahead.
But raw reasoning? Math? Coding benchmarks? Classification? Extraction? Standard software tasks?
Commoditized.
The moat has narrowed to two things:
1. Agentic orchestration
Models that can plan, use tools, recover from errors, operate terminals, and move through complex workflows without constant human steering.
2. Deep knowledge synthesis
Models that can combine long context, proprietary training data, expert feedback, and domain-specific reasoning into useful output.
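"Agentic orchestration" in practice is a loop around the model, not a single model feature. A minimal sketch of the plan, act, observe, recover cycle; the model call, tool registry, and canned plan are all illustrative stand-ins, not any lab's API:

```python
# Minimal agent loop: plan -> act -> observe -> recover.
# `call_model` and TOOLS are stand-ins so the sketch runs end-to-end.

def call_model(history):
    # Stand-in for an LLM call that picks the next tool action.
    # Here: a canned two-step plan, then "done".
    step = sum(1 for kind, _ in history if kind == "result")
    plan = [("search", "flaky test"), ("run_tests", "test_auth.py")]
    return plan[step] if step < len(plan) else ("done", None)

TOOLS = {
    "search":    lambda q: f"3 matches for '{q}'",
    "run_tests": lambda f: f"{f}: 12 passed",
}

def run_agent(goal, max_steps=8):
    history = [("goal", goal)]
    for _ in range(max_steps):
        tool, arg = call_model(history)          # plan
        if tool == "done":
            return history
        try:
            result = TOOLS[tool](arg)            # act
            history.append(("result", result))   # observe
        except Exception as e:
            # recover: feed the error back so the next plan
            # step can route around it instead of halting
            history.append(("error", str(e)))
    return history

history = run_agent("fix the flaky auth test")
```

The open question this week is not whether open models can be dropped into a loop like this, but how reliably they plan and recover inside it; that is exactly where the Terminal-Bench and SWE-Bench Pro gaps above show up.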
GPT-5.5 dominates autonomous terminal execution.
Opus 4.7 dominates codebase understanding and structured tool use.
DeepSeek V4 matches or beats them on much of the rest — at a fraction of the price, with open weights.
They are no longer competing on the same axis.
The frontier split.