Friends,
Quick pulse-check on the week: AI is escaping the chat box and moving into wet labs, into datacenters, and into your default apps. Here's what matters.
This newsletter you couldn't wait to open? It runs on beehiiv, the absolute best platform for email newsletters.
Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.
And when it's time to monetize, you don't need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network: it's all built in.
beehiiv isn't just the best choice. It's the only choice that makes sense.
(If you don't want ads like these, Premium is the solution.)
GPT-5 Ran a Wet Lab (Seriously)
OpenAI and Red Queen Bio put GPT-5 in charge of a real experimental loop, and it boosted molecular cloning efficiency by 79×.
GPT-5 iterated on a cloning workflow across multiple rounds; humans and lab robots executed the experiments while GPT-5 steered.
Result: 79× more sequence-verified clones from the same input DNA vs. baseline.
The AI's key finding: a new enzyme mechanism (RecA + gp32), packaged as a new cloning approach.
Why it matters: this is what "AI for science" looks like when it touches reality. Tight loops, real data, real gains.

(Source)
100–200 MW for Robot Training: Tesla's Cortex 2
The physical footprint keeps growing: Tesla has a new permit trail around Cortex 2, a datacenter-scale cluster tied to training workloads (FSD + increasingly, Optimus).
Filings point to a 100–200 MW-class facility (fire detection/alarm work is scoped for that scale).
That's "tens of thousands of GPUs + cooling + storage" territory.
Big picture: progress isn't just better models; it's brute-force infrastructure, built fast.

(Source)
Gemini 3 Flash: Pro-ish Intelligence at Flash Speed
Google launched Gemini 3 Flash, the speed/cost monster meant to be the default workhorse. It's already in Cursor.
Compared to Gemini 3 Pro, Flash is positioned as:
~3× lower latency
~¼ the cost (and higher rate limits)
Stronger agentic coding: 78% SWE-bench Verified
Very strong multimodal (text/image/video/audio), close to Pro
Trade-off:
Slightly lower peak intelligence vs. Pro (e.g., 33.7% vs. 37.5% on Humanity's Last Exam, no tools).
Pro still wins on hardest math, deepest reasoning, and nastiest code.
Flash is now the default/free model in the Gemini app.
GPT Image 1.5: OpenAIβs New Flagship Image Model
OpenAI released GPT-Image-1.5 (a.k.a. the new ChatGPT Images).
Up to 4× faster generation
More reliable, surgical edits (change what you ask, keep the rest)
Better instruction following + denser text rendering
Also ~20% cheaper than the prior image model in the API
You'll see the comparison in the graph below (incl. vs. Nano Banana Pro).
Generated with GPT-Image-1.5 (Infographic of LM Arena Board):

OpenAI Goes Geopolitical + Everyone Doubles Down on Compute
OpenAI hired former UK Chancellor George Osborne to lead "OpenAI for Countries", basically a "Stargate for Countries" effort, exporting an "AI stack" via government partnerships.
And the meta-trend is still the same: compute wins. Greg Brockman's blunt take: it's what keeps working for intelligence.
That's all for this week!
Happy Building!
Martin
I recommend:
Beehiiv if you write newsletters.
Superhuman if you write a lot of emails.
Cursor if you code a lot.
Follow me on X.com.
AI for your org: we build custom AI solutions at half the market price and time (building with AI Agents). Contact us to learn more.