Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the deep midnight navy car hood, chrome pedestal base, shallow depth-of-field sky background, dramatic upward angle, and automotive advertisement lighting exactly as shown. Replace only the Mercedes star with a single ornate vintage skeleton key cast in polished chrome, standing upright on the pedestal with its bow (handle) at top and teeth pointing up, at realistic hood ornament scale — photorealistic metal with specular highlights. Add bold white sans-serif text reading OPENAI across the upper portion of the image as a clean headline.

Hey Excel agents from Claude, OpenAI & MS Copilot: “make me a working strategy game in excel, it should have some form of graphics.” Claude made a board and acted as game master, Copilot created a board but no game, and ChatGPT built a working game with formulas and a “smart” enemy.
https://x.com/emollick/status/2033372471395512566

This news came out a little earlier than we planned; we’re excited to be building a deployment arm and will share more details soon. Companies have a ton of urgency to deploy AI in their organizations and we’re sprinting to meet that demand. More than 1 million businesses run on …
https://x.com/fidjissimo/status/2033537381907710092

Subagents are now supported in Codex. They’re very fun and make it possible to get large amounts of work done *quickly*:
https://x.com/gdb/status/2033757784437895367

5.3 to 5.4 is what i would have expected to warrant a jump to GPT-6
https://x.com/yacineMTB/status/2033291560217923803

A knowledge-work platform built around GPT-5.4 Pro level intelligence would be really useful. The gap between other models and what Pro can do on complex intellectual work remains stark. I would love to have access in a Codex-like platform with shared file spaces, subagents, etc
https://x.com/emollick/status/2033959257196966360

GPT-5.4 mini matters for subagents because it changes what feels worth handing off. The parent thread should hold the architecture, plan, and progress narrative. Fast subagents can explore the repo, check hypotheses, and preserve the parent thread’s limited attention.
https://x.com/nickbaumann_/status/2034134875234832540#m

i mean this story is insane. man used chatgpt to sell his house in 5 DAYS. got 5 offers in 72 hours. no real estate agents. saved so much money doing it too. he used AI to: > price the house (researched neighboring properties for sale) > wrote up the legal contracts (saving …
https://x.com/cryptopunk7213/status/2033194801852567620?s=46

Man uses ChatGPT to sell his Cooper City home – NBC 6 South Florida https://www.nbcmiami.com/news/local/innovation-on-6/man-uses-chatgpt-to-sell-his-cooper-city-home-it-exceeded-our-expectations/3778919/

OpenAI preps for IPO in 2026, says ChatGPT must be ‘productivity tool’ https://www.cnbc.com/2026/03/17/openai-preps-for-ipo-in-2026-says-chatgpt-must-be-productivity-tool.html

AI really can help education: Randomized controlled experiment on high school students found a GPT-4o powered tutor that personalized problems for students raised final test scores by .15 SD, “equivalent to as much as six to nine months of additional schooling by some estimates”
https://x.com/emollick/status/2033773791688433708

An AI consultant with no biology training used ChatGPT and AlphaFold to create a personalized mRNA cancer vaccine for his rescue dog. Tumor shrunk by half. UNSW structural biologist Dr. Kate Michie: “It’s exciting to me that someone who’s not a scientist has been able to do …”
https://x.com/TheRundownAI/status/2032843584869708105

this is actually insane > be tech guy in australia > adopt cancer riddled rescue dog, months to live > not_going_to_give_you_up.mp4 > pay $3,000 to sequence her tumor DNA > feed it to ChatGPT and AlphaFold > zero background in biology > identify mutated proteins, match them to …
https://x.com/IterIntellectus/status/2032858964858228817

GPT-5.4 mini approaches the performance of the larger GPT-5.4 model on several evaluations, including SWE-Bench Pro and OSWorld-Verified.
https://x.com/OpenAIDevs/status/2033953828387885470

GPT-5.4 mini is available today in the API, Codex, and ChatGPT. In the API, it has a 400k context window. In Codex, it uses only 30% of the GPT-5.4 quota, letting you handle simpler coding tasks for about one-third of the cost. GPT-5.4 nano is only available in the API.
https://x.com/OpenAIDevs/status/2033953840312291603

GPT-5.4-mini is 2.25x more expensive than GPT-5-mini: $0.75 input / $4.50 output, 400k context.
https://x.com/scaling01/status/2033955279079907511
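Those numbers translate into per-request costs easily. A minimal sketch, assuming the quoted $0.75/$4.50 figures are per million input/output tokens (the tweet doesn’t state the unit):

```python
def request_cost(prices, input_tokens, output_tokens):
    """Dollar cost of one request at per-million-token prices."""
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# Prices from the quoted tweet; per-million-token unit is an assumption.
gpt_54_mini = {"input": 0.75, "output": 4.50}

# e.g. a 10k-token prompt with a 2k-token completion:
cost = request_cost(gpt_54_mini, 10_000, 2_000)
print(f"${cost:.4f}")  # $0.0165
```

At these rates, output tokens dominate: each output token costs 6x an input token, so long completions drive the bill.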

Introducing GPT-5.4 mini and nano | OpenAI https://openai.com/index/introducing-gpt-5-4-mini-and-nano/

We’re introducing GPT-5.4 mini and nano, our most capable small models yet. GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents. For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest …
https://x.com/OpenAIDevs/status/2033953815834333608

OpenAI expands government footprint with AWS deal, report says | TechCrunch https://techcrunch.com/2026/03/17/openai-expands-government-footprint-with-aws-deal/

OpenAI to acquire Astral | OpenAI https://openai.com/index/openai-to-acquire-astral/

$WMT is disappointed in results from OpenAI partnership, whereby Walmart users are allowed to shop via ChatGPT and OpenAI would receive a commission on these purchases: “Conversion rates–the percentage of users following through with a purchase of an item shown to them by …”
https://x.com/negligible_cap/status/2034369496543305971?s=46

The dictionary sues OpenAI | TechCrunch https://techcrunch.com/2026/03/16/merriam-webster-openai-encyclopedia-brittanica-lawsuit/

OpenAI acquired Astral, the team behind uv, ruff, and ty. Fun fact: Claude is the #6 contributor to uv. Curious if Anthropic will ban them from using Claude since the team is joining OpenAI. Congrats to the Astral team who built incredible Python tools!
https://x.com/Yuchenj_UW/status/2034661120599101498

OpenAI just released “Parameter Golf,” a new challenge to train the best language model that fits in a 16MB artifact and trains in under 10 minutes on 8xH100s. There’s also a leaderboard, and if you perform well they might hire you. The challenge is open from March 18th to April 30th.
https://x.com/scaling01/status/2034312935661609280#m
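For a sense of scale, the 16MB artifact cap bounds the parameter count depending on how the weights are stored. A rough sketch, assuming the whole artifact is weights (ignoring tokenizer, code, and any compression):

```python
ARTIFACT_BYTES = 16 * 1024 * 1024  # 16 MiB budget

def max_params(bytes_per_param):
    """Upper bound on parameter count if the whole artifact is weights."""
    return ARTIFACT_BYTES // bytes_per_param

print(max_params(4))  # fp32 -> 4194304 params (~4.2M)
print(max_params(2))  # fp16/bf16 -> 8388608 params (~8.4M)
print(max_params(1))  # int8/fp8 -> 16777216 params (~16.8M)
```

So even at 1-byte weights this is a sub-20M-parameter regime, orders of magnitude below production LLMs, which is what makes the golf interesting.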

scoop – OpenAI’s Fidji Simo told staff last week that the company could not afford to be “distracted by side quests” as Anthropic gains steam in the enterprise and coding markets said company execs are actively looking at areas to deprioritize
https://x.com/berber_jin1/status/2033694982943694988

we shipped a new version of 5.3 instant to chatgpt yesterday. 5.3 was unintentionally pretty annoyingly clickbait-y. it’s better in yesterday’s model and we’re going to keep stamping that behavior out. keep the feedback coming!
https://x.com/michpokrass/status/2033935238066540806

Welcome to OpenAI! Very excited to be working together and to make great tools to make developers everywhere more productive.
https://x.com/gdb/status/2034662275391320472

Absolutely loving the new sub-agents feature with Codex 🤯 Feels like managing a tiny team of thinkers… Plato waiting, Lorentz thinking, Mendel chilling… Who decided to name them after legendary minds? 😂
https://x.com/fdaudens/status/2033939319103070334

Codex 🤝 @NotionHQ Meet us in NYC on March 17 for a night packed with: Codex demos. Practical workflows. Builders to meet and learn from. https://x.com/OpenAIDevs/status/2033333345619464228

Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we’re seeing now with Codex, it’s very important to double down on them and avoid distractions. Really glad we’re seizing this moment.
https://x.com/fidjissimo/status/2034769466433913082

Gemini as folklore machine: “Create a comic using universal folklore index ATU 570 set in the present day.” “Now add ATU 720.” ATU 570 are tales about “the king’s rabbit herder” & ATU 720 is “My mother slew me, my father ate me, my sister buried me under the juniper” (really)
https://x.com/emollick/status/2033754096453271778

gemini-3.1-flash-lite-preview is extremely underrated. I know I keep saying that, but nothing beats the (price*latency)/intelligence you get here.
https://x.com/matvelloso/status/2033304726226493829

GLM-OCR: a 0.9B model that beats Gemini on OCR benchmarks. Going live in 15 minutes to test it on real-world datasets and build some cool demos. Link: https://x.com/skalskip92/status/2034658568117309600

GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it’s 2x faster than GPT-5 mini. https://x.com/OpenAI/status/2033953592424731072

GPT-5.4 mini is now available in Windsurf!
https://x.com/windsurf/status/2033954998837776869

GPT-5.4 Pro is the first model that’s made me feel genuinely enabled to do almost anything. I didn’t expect this kind of leap. I don’t know what counts as AGI, but this feels awfully close.
https://x.com/shaunralston/status/2031901724571812226

GPT-5.4-mini is a wildly capable model and gives you ~3.3x more usage on Codex tasks compared to GPT-5.4. It’s excellent for spinning up new subagents!
https://x.com/dkundel/status/2033953901301665838
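The ~3.3x figure follows directly from the 30% quota share mentioned in the availability announcement. The arithmetic, as a one-line sketch:

```python
quota_fraction = 0.30           # mini's stated share of the GPT-5.4 quota
multiplier = 1 / quota_fraction # tasks per unit of quota, relative to GPT-5.4
print(round(multiplier, 1))     # 3.3
```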

I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
https://x.com/sama/status/2033935276079510011

I helped a bit with the new community page for codex. Sign up to become an ambassador, if you’re a fan!
https://x.com/steipete/status/2034400645630058792

Ollama is now a provider inside CodexBar! Thank you @steipete for the awesome work!
https://x.com/ollama/status/2033794815448780803

Spend caps in the Gemini API, available starting today!! This is another step forward of many to give developers more control and peace of mind when building with Gemini. Please go set a cap and send us any feedback as you use them!
https://x.com/OfficialLoganK/status/2032126479257968907

Subagents are now available in Codex. You can accelerate your workflow by spinning up specialized agents to: • Keep your main context window clean • Tackle different parts of a task in parallel • Steer individual agents as work unfolds
https://x.com/OpenAIDevs/status/2033636701848174967
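The pattern described here — a parent that fans scoped tasks out to parallel workers and keeps only their compact results — can be sketched generically. This is a hypothetical illustration of the delegation shape, not the Codex API; `explore()` and the task list are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def explore(task: str) -> str:
    """Hypothetical subagent: performs a scoped investigation and
    returns only a short summary to the parent."""
    return f"summary({task})"

tasks = ["audit auth module", "map test coverage", "profile hot paths"]

# The parent runs tasks in parallel and keeps only the summaries,
# preserving its own (limited) context window.
with ThreadPoolExecutor(max_workers=3) as pool:
    summaries = list(pool.map(explore, tasks))

for s in summaries:
    print(s)
```

The design point is that intermediate exploration stays inside each worker; the parent’s state holds only the plan plus short results, mirroring the “keep your main context window clean” bullet above.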

The Codex team are hardcore builders and it really comes through in what they create. No surprise all the hardcore builders I know have switched to Codex. Usage of Codex is growing very fast:
https://x.com/sama/status/2033599375256207820

The value produced by models is getting so much better so fast that old hardware is actually getting *more* expensive to rent. 3 years ago, the best model you could run on a H100 chip was GPT-4. Now, you can run GPT-5.4 on it, which is smaller and cheaper to run while …
https://x.com/dwarkesh_sp/status/2033953122197115324

Use subagents and custom agents in Codex https://simonwillison.net/2026/Mar/16/codex-subagents/

We evalled @OpenAI GPT-5.4 mini and nano on APEX-Agents. With xhigh reasoning, mini scores 24.5% Pass@1. It outperforms other lightweight models like Gemini 3.1 Flash Lite (12.8%) as well as midweight models like Sonnet 4.6 (23.7% Pass@1) – but the token $ is just ¼.
https://x.com/mercor_ai/status/2033955468650156503

We just shipped a bunch of stuff to make it easier to scale with the Gemini API: – Automatic tier upgrades – Tier 1 -> Tier 2 now happens much faster (30 days post payment -> 3 days) and with less spend ($250 -> $100) – New billing account caps on each tier to limit over spend
https://x.com/OfficialLoganK/status/2033587540419019127

BullshitBench update: The new GPT-5.4 mini and nano models score quite low. This screenshot shows OpenAI models only; the full list would put GPT-5.4-mini around 40th place and nano around 70th. Again, thinking didn’t help much at all.
https://x.com/petergostev/status/2033995459522396287

GPT 5.4 is a big step for Codex – by Nathan Lambert https://www.interconnects.ai/p/gpt-54-is-a-big-step-for-codex

gpt-5.4 has ramped faster than any other model we’ve launched in the API: within a week of launch, 5T tokens per day, handling more volume than our entire API one year ago, and reaching an annualized run rate of $1B in net-new revenue. it’s a good model, try it out!
https://x.com/gdb/status/2033605419726483963

GPT-5.4 nano is also available starting today in the API.
https://x.com/OpenAI/status/2033953595637538849

GPT-5.4-mini looks really good for computer-use
https://x.com/scaling01/status/2033954794105127007

Ran a small eval today on an LM using GPT-5.2 as a judge. Model scores 10%, but paper reports it scoring 34%. I see that the paper uses GPT-5.1 as a judge; for the sake of consistency I change it. Switch to GPT-5.1 as a judge. Model now scores 43.5%… bro
https://x.com/a1zhang/status/2034059629072945251#m

This, but for real* Here’s a METR-style graph of labor displacement from Roman aqueducts, doubling time of CDDII years. Lesson: 1) Displacing terrible work is good 2) All exponentials become s-curves in the end. * I had GPT-5.4 Pro do the research; spot checks seemed accurate.
https://x.com/emollick/status/2033636278508425646

Xiaomi stuns with new MiMo-V2-Pro LLM nearing GPT-5.2, Opus 4.6 performance at a fraction of the cost | VentureBeat https://venturebeat.com/technology/xiaomi-stuns-with-new-mimo-v2-pro-llm-nearing-gpt-5-2-opus-4-6-performance

“Good developers are always looking to optimize their inner loop, but this is a new inner loop that everyone is still figuring out.” I like this line from @bolinfest (Michael Bolin), lead for open-source Codex at @OpenAI. It captures a shift that is easy to miss. The popular …
https://x.com/TheTuringPost/status/2034076706722746408

Are you up for a challenge? https://x.com/OpenAI/status/2034315401438580953#m

Community | OpenAI Developers https://developers.openai.com/community

Exclusive | ChatGPT Maker OpenAI to Cut Back on Side Projects in Push to ‘Nail’ Core Business – WSJ https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825

OpenAI Names New Infrastructure Leaders Following Stargate Strategy Shift — The Information https://www.theinformation.com/articles/openai-names-new-infrastructure-leaders-following-stargate-strategy-shift

Thoughts on OpenAI acquiring Astral and uv/ruff/ty
https://x.com/simonw/status/2034672725088997879

We’ve entered into an agreement to join OpenAI as part of the Codex team. I’m incredibly proud of the work we’ve done so far, incredibly grateful to everyone that’s supported us, and incredibly excited to keep building tools that make programming feel different.
https://x.com/charliermarsh/status/2034623222570783141
