Image created with gemini-3.1-flash-image-preview with claude-opus-4.7. Image prompt: Using the provided Alesso reference image, keep the pure white background, landscape composition, vertical type hierarchy, and galaxy-punchout starfield treatment exactly, but replace ‘HEROES’ with ‘OPENAI’ in the same bold condensed grotesque all-caps galaxy-punchout, replace ‘ALESSO’ with ‘UNIVERSAL TUTOR’ in the light geometric all-caps galaxy-punchout, and replace ‘TOVE LO’ with ‘GPT REASONING’ in the condensed grotesque all-caps galaxy-punchout, keeping ‘(we could be)’ and ‘FEATURING.’ unchanged.

ChatGPT Images 2.0 – YouTube

I have been using GPT ImageGen-2 for the past weeks. I didn’t think that better image generators would be a big deal, but it turns out there is a quality threshold I didn’t expect, where you can now get text, slides, and academic papers. Look at what it does with my “otter test”!
https://x.com/emollick/status/2046665274535854146

No bad ideas when you’re playing with ChatGPT Images 2.0 → Smarter visuals → Better editing and aesthetics. Rolling out in Figma and Figma Weave
https://x.com/figma/status/2046673364496875977

Aspect Ratios & Resolution with ChatGPT Images 2.0 – YouTube

ChatGPT Images — Chameleon – YouTube

Instruction Following with ChatGPT Images 2.0 – YouTube

Multilingual & Text Rendering with ChatGPT Images 2.0 – YouTube

Slides & Infographics with ChatGPT Images 2.0 – YouTube

Thinking & Intelligence with ChatGPT Images 2.0 – YouTube

This is ChatGPT Images 2.0 – YouTube

Anthropic just overtook OpenAI with $1 trillion valuation
https://finance.yahoo.com/markets/stocks/articles/anthropic-just-overtook-openai-1-155312239.html

GPT-5.5 takes OpenAI back to the clear number one in AI. OpenAI’s new model tops the Artificial Analysis Intelligence Index by 3 points, breaking a three-way tie with Anthropic and Google. OpenAI gave us pre-release access to test all five reasoning effort levels: xhigh, high,
https://x.com/ArtificialAnlys/status/2047378419282034920

I’ve been an early tester of GPT-5.5, and it destroyed the “GPT Plays Pokémon FireRed” benchmark. GPT-5.4 never finished the game; it got stuck in a loop, reloading the last save and retrying the final rival fight over and over. GPT-5.5 not only beat it on the first try, but did
https://x.com/clad3815/status/2047392779006013833?s=12

GPT-5.5, not fully saturating the TikZ unicorn test yet but getting awfully close … (yes, this is actual TikZ code; I personally find it so unbelievable that I’m putting the code below for anyone to verify for themselves)
https://x.com/sebastienbubeck/status/2047383628922167390?s=46

GPT-ImageGen-2 did this in one shot, with just the prompt “turn all of Tennyson’s Ulysses into a comic, across as many pages as needed. make it great, include the full text”. 10 pages, though it did use what seems to be ImageGen-2’s preferred “spackled drawing” style 1/
https://x.com/emollick/status/2046843402021380556

Nearly perfect (if unnerving). This is first shot, and the only real issue is the double hour hand.
https://x.com/emollick/status/2046761955642196381

We now estimate that only about 0.3 GW of total facility power is operational for Stargate Abilene, not 0.6 GW. We have moved the 0.6 GW milestone to late May and the 1.2 GW milestone from Q3 to Q4 2026, but both are uncertain. More about this change and our methodology in 🧵
https://x.com/EpochAIResearch/status/2047442515608162481

always a real feeling of magic to ask codex to perform a task that requires finding information scattered across slack, google docs, notion, and various internal tools, and it just figures it out
https://x.com/gdb/status/2044643518891909289

Auto-review is a new mode that lets Codex work longer with fewer approvals and safer execution. It helps Codex keep moving through tests, builds, and more, including during long tasks and automations, while a separate agent checks higher-risk steps in context before they run.
https://x.com/OpenAIDevs/status/2047436655863464011

ChatGPT Images 2.0 is available starting today to all ChatGPT and Codex users. Images with thinking are available to ChatGPT Plus, Pro, and Business users (Enterprise soon). On mobile, make sure you update to the latest version of the app. The underlying model, gpt-image-2, is
https://x.com/OpenAI/status/2046670994413322435

Chronicle – Codex | OpenAI Developers
https://developers.openai.com/codex/memories/chronicle

Classic study gave 146 economist teams the same dataset & got wildly different answers New paper reruns it with agentic AI. Claude Code & Codex land near the human median, but with far tighter dispersion & no extremes. Suggests that AI is now useful for doing scalable research.
https://x.com/emollick/status/2046362044786458648

Codex Computer feels like the first really usable computer use platform. More importantly, it shows that the tech has arrived and now we will see a wave of things get unlocked. Enterprise software will never be the same again. All the legacy stuff that will never see an API is
https://x.com/matvelloso/status/2045209294942142860

Codex hit 4M active users, less than two weeks after hitting 3M. We will reset rate limits today!
https://x.com/sama/status/2046604989527912590

codex is becoming a full agentic IDE
https://x.com/gdb/status/2045375289560007029

Codex is open source, enabling anyone to build awesome applications on top of it:
https://x.com/gdb/status/2045214436689072136

Exclusive | OpenAI Plans Launch of Desktop ‘Superapp’ to Refocus, Simplify User Experience – WSJ
https://www.wsj.com/tech/openai-plans-launch-of-desktop-superapp-to-refocus-simplify-user-experience-9e19931d

GPT-5.5 is here. It’s our smartest frontier model yet, introducing a new class of intelligence for agentic coding, computer use, knowledge work, and scientific research. Rolling out in ChatGPT and Codex today. API is coming soon.
https://x.com/OpenAIDevs/status/2047377079352877534

gpt-image-2 is here, available today in the API and Codex. The most capable image generation model yet, built for production-grade workflows with stronger text rendering, layout, editing, resolution, and multilingual rendering.
https://x.com/OpenAIDevs/status/2046671238534496259

Have never seen something like Codex Computer Use. UX is novel. Also surprised at how well it works. 5.4 great at driving OS actions. Well done.
https://x.com/mattrickard/status/2045218583882633412

Introducing GPT-5.5 | OpenAI
https://openai.com/index/introducing-gpt-5-5/

Introducing GPT-5.5 A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Now available in ChatGPT and Codex.
https://x.com/OpenAI/status/2047376561205325845

Introducing workspace agents in ChatGPT | OpenAI
https://openai.com/index/introducing-workspace-agents-in-chatgpt/

Introducing workspace agents in ChatGPT: shared agents that can handle complex tasks and long-running workflows across tools and teams.
https://x.com/OpenAI/status/2047008987665809771

man Codex Computer Use is actually so good. i’ve got my guy sending Slack messages, reading my Slack bookmarks, checking stuff on my browser, and i’m still trying more things. it’s legit so good
https://x.com/kr0der/status/2045154074337710136

New in the Codex app: – GPT-5.5 – Browser control – Sheets & Slides – Docs & PDFs – OS-wide dictation – Auto-review mode Enjoy!
https://x.com/ajambrosino/status/2047381565534322694?s=20

Seriously stop everything you are doing and use codex desktop app new computer use. Absolutely mind blowing
https://x.com/HamelHusain/status/2045191726495846459

Some of you were disappointed that we “only” get an image model from OpenAI today. But you need to see the big picture: GPT-Image-2 can generate mockups of websites, which Codex can then turn straight into working code. That’s one of the exciting new use cases enabled by true
https://x.com/mark_k/status/2046640315348725879

With GPT-5.5, Codex now gets more of the job done across the browser, files, docs, and your computer. We’ve expanded browser use so Codex can interact with web apps and test flows: click through pages, capture screenshots, and iterate on what it sees until it completes the
https://x.com/OpenAIDevs/status/2047381283358355706

Workspace agents can work across tools, pulling context from docs, email, chats, code, and systems, and taking approved actions like updating @Linear issues, creating docs, or sending messages. In @SlackHQ, agents can jump into a thread, understand what’s needed, pull the right
https://x.com/OpenAI/status/2047008991944069624

OpenAI dropped a new model on HF today!
https://x.com/ClementDelangue/status/2046973714751754479

OpenAI just open sourced a new 1.5B (50m active) model on HuggingFace with Apache 2.0 license! It’s not a new LLM, this one is called Privacy Filter, and it’s a PII detection model (checking if text has private information) A few interesting tidbits from the release + links:
https://x.com/altryne/status/2046977133013311814

OpenAI just released a new open-source model; it’s “a bidirectional token-classification model for personally identifiable information (PII) detection and masking in text
https://t.co/xTZt1J3WcT
https://x.com/scaling01/status/2046972437422543064

ChatGPT Images 2.0 is a big leap forward in image generation intelligence. It’s much better at following detailed instructions, rendering dense text, understanding the world more accurately, and creating visuals that are more useful. And when you give it additional time to
https://x.com/nickaturley/status/2046677986242363731

GPT Image 2 + Codex: or how to make Codex not suck at UI. Step 1: Generate a UI image (native in Codex) Step 2: Get Codex to implement the UI based on it Step 3: Get Codex to iterate until it aligns with the image as much as possible Codex is bad at initial UI, but very good at
https://x.com/petergostev/status/2046720618566242657

Here is a manga made by ChatGPT Images 2.0 of @gabeeegoooh and me looking for more GPUs:
https://x.com/sama/status/2046672912833458597

My most popular AI post was a bunch of made-up “graphs” four years ago. Now, the new GPT-2 image generator does it for real (though not perfect). Here’s the famous AI task horizons graph with a touch of Basquiat, haunted by ghosts, from the Voynich manuscript, as a decaying pier.
https://x.com/emollick/status/2046728271849550331

A Visual Thought Partner ChatGPT Images 2.0 is our first image model with thinking capabilities. When a thinking model is selected in ChatGPT, Images 2.0 can search the web for real-time information, create multiple distinct images from one prompt, double-check its own outputs,
https://x.com/OpenAI/status/2046670989719924768

Making ChatGPT better for clinicians | OpenAI
https://openai.com/index/making-chatgpt-better-for-clinicians/

Exciting news – GPT-Image-2 by @OpenAI has claimed the #1 spot across all Image Arena leaderboards! A clean sweep with a record-breaking +242 point lead in Text-to-Image – the largest gap we’ve seen to date. – #1 Text-to-Image (1512), +242 over #2 (Nano-banana-2 with web-search
https://x.com/arena/status/2046670703311884548

GPT Image Generation Models Prompting Guide
https://developers.openai.com/cookbook/examples/multimodal/image-gen-models-prompting-guide

Introducing ChatGPT Images 2.0 | OpenAI
https://openai.com/index/introducing-chatgpt-images-2-0/

Introducing ChatGPT Images 2.0 A state-of-the-art image model that can take on complex visual tasks and produce precise, immediately usable visuals, with sharper editing, richer layouts, and thinking-level intelligence. Video made with ChatGPT Images
https://x.com/OpenAI/status/2046670977145372771

“Hyatt’s innovative approach with OpenAI reflects how Hyatt is elevating its use of technology and enhancing human connections. The company is making artificial intelligence broadly accessible to its employees, enabling teams to spend less time on manual”
https://x.com/TheRealAdamG/status/2046262564158333211

Kevin Weil and Bill Peebles exit OpenAI as company continues to shed ‘side quests’ | TechCrunch

This story has now been updated with more details. Three leaders departed from OpenAI today: – Kevin Weil, VP of OpenAI for Science – Srinivas Narayanan, CTO of B2B Applications – Bill Peebles, Head of Sora
https://x.com/zeffmax/status/2045248266384838800?s=46

Today is my last day at OpenAI, as OpenAI for Science is being decentralized into other research teams. It’s been a mind-expanding two years, from Chief Product Officer to joining the research team and starting OpenAI for Science. Accelerating science will be one of the most
https://x.com/kevinweil/status/2045230426210648348?s=20

In 2025, OpenAI announced Stargate, a $500 billion data center initiative. We surveyed all 7 US sites and found visible development at each. There’s a long road ahead, but the project appears on track to reach 9+ GW by 2029, comparable to New York City’s peak power demand. 🧵
https://x.com/EpochAIResearch/status/2045258390147088764

OpenAI Stargate: where the US sites stand
https://epochai.substack.com/p/openai-stargate-where-the-us-sites

Arena Trends: Text-to-Image, Jan 2026 – Apr 2026 For most of the year, @GoogleDeepMind and @OpenAI traded the top spot within a tight margin – GPT-Image vs. Nano Banana – with the rest of the field clustered below 1,200. Today, GPT-Image-2 breaks away with a score of 1,512, 242
https://x.com/arena/status/2046690103515648061

This wasn’t the case with previous image generators, but the LLM you select has a huge effect on GPT-imagegen-2 output. GPT-5.4 Thinking and GPT-5.4 Pro will produce much better images, especially for complex things. This is, of course, not intuitive or explained anywhere.
https://x.com/emollick/status/2046960756608868533

Kimi K2.5 widens gap between the US and China in open weights model intelligence. The leading US open weights model remains OpenAI’s gpt-oss-120b, which has now been eclipsed by an ever-growing list of open weights releases from China.
https://x.com/ArtificialAnlys/status/2016250140219343163?s=20

✨ We’re excited to share that gpt-image-2 will be coming shortly to Canva AI 2.0! From highly-detailed generations to its creative intelligence, we can’t wait to see what you create. And with Magic Layers, you can edit everything like a design 🎨
https://x.com/canva/status/2046665346161988062

Playing with GPT Image 2 and really noticing the upgrade in fine detail + overall cohesion. Everything just feels like it belongs together a bit more ☀️ textures, lighting, composition all click. Try the model now in Firefly! Check the comments for the prompt 👇
https://x.com/AdobeFirefly/status/2046675148065923103

Same prompts as before, but now in GPT image-generator 2, page excerpts from: “Eldritch Horrors as Pets: A Guide”, “How Womblenauts Work”, “Photographs of the People of New York Who Look Like Birds”, “Cakes shaped like fish shaped like cakes”. Lots of great little lines in there
https://x.com/emollick/status/2046678198826479667

API is Available Today! 🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash. 🔹 Supports OpenAI ChatCompletions & Anthropic APIs. 🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking):
https://t.co/ec3B0BDXZi ⚠️ Note: deepseek-chat &
https://x.com/deepseek_ai/status/2047516945466188072
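The “keep base_url, just update model” migration above can be sketched as a plain ChatCompletions-style request body. This is only a sketch: the model names come from the announcement, but the base URL shown and the thinking-mode parameter name are my assumptions, not confirmed API details.

```python
import json

# Illustrative base URL; per the announcement it stays unchanged from before.
BASE_URL = "https://api.deepseek.com"

def chat_payload(model: str, prompt: str, thinking: bool = False) -> dict:
    """Build an OpenAI-ChatCompletions-compatible request body.

    The `thinking` field is a guess at how the dual Thinking/Non-Thinking
    modes might be exposed; the real parameter name may differ.
    """
    body = {
        "model": model,  # e.g. "deepseek-v4-pro" or "deepseek-v4-flash"
        "messages": [{"role": "user", "content": prompt}],
    }
    if thinking:
        body["thinking"] = {"type": "enabled"}
    return body

# The only change from an existing integration is the model string:
print(json.dumps(chat_payload("deepseek-v4-pro", "Summarize this diff."), indent=2))
```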

Copilot Business and Enterprise users can now bring their own language model keys to VS Code. • Use API keys from providers like Anthropic, Gemini, OpenAI, OpenRouter, Azure, or local models with Ollama and Foundry Local.
https://x.com/GHchangelog/status/2047023899238400491

I think the adaptive thinking requirement in Claude Opus 4.7 is bad in the ways that all AI effort routers are bad, but magnified by the fact that there is no manual override like in ChatGPT. It regularly decides that non-math/code stuff is “low effort” & produces worse results.
https://x.com/emollick/status/2044864822076969268

Opus 4.7 better than Opus 4.6 but can’t beat Gemini 3.1 Pro and GPT-5.4 on LiveBench
https://x.com/scaling01/status/2045178622617498084

Opus 4.7 scores 156 on ECI, our tool for combining multiple benchmarks onto a single scale. This puts it a bit ahead of Opus 4.6 and a bit behind only GPT-5.4, Gemini 3.1 Pro, and GPT-5.4 Pro. Thread with individual scores and commentary.
https://x.com/EpochAIResearch/status/2046631622909558857

In Vending-Bench Arena (the multiplayer version of Vending-Bench with competition dynamics), GPT-5.5 actually beats Opus 4.7. Opus 4.7 showed similar behavior to Opus 4.6: lying to suppliers and stiffing customers on refunds. GPT-5.5’s tactics were clean, and it still won.
https://x.com/andonlabs/status/2047377260412649967?s=46

The GPT-5.5 model family completely dominates the cost-performance frontier on the Artificial Analysis Index
https://x.com/scaling01/status/2047380890402123928

We find that GPT 5.4 over-edits the most while Opus 4.6 over-edits the least. Next, we prompt the models with the explicit instruction to preserve as much of the original code as possible, and find that while this instruction does help performance, the performance gains are
https://x.com/nrehiew_/status/2046963041338855791

Stargate is a step towards meeting the demand of the compute-powered economy
https://x.com/gdb/status/2045279841482928271

encouraging commentary from Terence Tao!
https://x.com/gdb/status/2044592321866408069

new image model coming with some real magic within, to unlock new use cases in productivity and creativity livestream noon today
https://x.com/gdb/status/2046632580527554572

5.5 feelings from my testing over the last few weeks: – I hate the name. It’s not a “.1” model jump, nor is it GPT-6 level, but I feel like “5.9” or something would have been more apt, as it doesn’t feel like an evolution of the 5 series, rather the precursor to the 6 series –
https://x.com/davis7/status/2047414463595528467

Announcing GPT-Rosalind, our frontier model for life science research. This model is a step towards one of our most important goals — accelerating science and improving human outcomes. Excited to work with many amazing partners on deploying and improving this model.
https://x.com/gdb/status/2044891908213027032

API pricing will be $5 per 1 million input tokens and $30 per 1 million output tokens, with a 1 million context window. (Remember, you will need fewer tokens per task than 5.4!)
https://x.com/sama/status/2047379036419014928

BREAKING: GPT-5.5 “Spud” is out and it is a BEAST. We’ve been testing it @every for the last 3 weeks on everything from coding, to writing, to knowledge work. Here’s our day 0 vibe check: – It’s a step change in coding AND it’s easy to talk to. It’s fast and friendly and
https://x.com/danshipper/status/2047375686688473134

Frontier LLMs are doing too much when it comes to editing code. I’m excited to share this work on the Over-Editing problem which refers to models modifying code beyond what is asked of them. The main findings are: – Many frontier models Over-Edit with GPT 5.4 being the
https://x.com/nrehiew_/status/2046963016428872099

GPT-5.5 is here! We hope it’s useful to you. I personally like it.
https://x.com/sama/status/2047378253313106112

GPT-5.5 Pricing & GPT-5.5 Pro Pricing GPT-5.5: $5/$30 GPT-5.5-Pro: $30/$180 (Input/Output per million tokens)
https://x.com/scaling01/status/2047375819144597737
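For a rough sense of what those per-million-token rates mean in practice, here is a small cost helper built only from the prices quoted above; the function name and interface are mine, not OpenAI’s.

```python
def gpt55_cost_usd(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Estimate API cost from the quoted per-million-token prices.

    GPT-5.5: $5 in / $30 out. GPT-5.5 Pro: $30 in / $180 out.
    """
    rate_in, rate_out = (30.0, 180.0) if pro else (5.0, 30.0)
    return input_tokens / 1e6 * rate_in + output_tokens / 1e6 * rate_out

# A 100k-in / 20k-out task on base GPT-5.5: $0.50 + $0.60
print(gpt55_cost_usd(100_000, 20_000))  # -> 1.1
```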

Really excellent work by the inference team to serve this model so efficiently! To a significant degree, we have to become an AI inference company now.
https://x.com/sama/status/2047386068194852963

Really excited for this week! Next up, we’ve got something to show you at 12 pm PT today.
https://x.com/sama/status/2046598595869331894

The gap between proprietary and open-source is truly massive on WeirdML. They are still behind o4-mini; that’s a >1 year gap in capabilities
https://x.com/scaling01/status/2046590539844186487

The internal working name for this was “telepathy”, and it feels like it.
https://x.com/sama/status/2046330082726384051

very nice release by @OpenAI! a 50M active, 1.5B total gpt-oss arch MoE, to filter private information from trillion scale data cheaply. keeping 128k context with such a small model is quite impressive too
https://x.com/eliebakouch/status/2046979020890198503

We want you to have a lot of AI!
https://x.com/sama/status/2046752492093165708

🎚️CodexBar 0.21 Abacus AI provider, Codex Pro $100 support, safer OpenAI web extras, fixed local cost scanning, z.ai 5h quotas, Antigravity/Cursor/Ollama fixes, faster refreshes, macOS 26 icon fix and more. The big issue with too much CPU usage was an OpenAI web fetch and is
https://x.com/steipete/status/2045582547996856682

A hill that I will die on: with today’s AI models, intelligence is a function of inference compute. Comparing models by a single number hasn’t made sense since 2024. What matters is intelligence per token or per $. This is especially true when using it in a product like Codex.
https://x.com/polynoamial/status/2047387675762802998

Also, a ton of new Codex features coming soon! Fun little bundle w/the new model.
https://x.com/sama/status/2047378431260664058?s=20

auto-review now live in codex — using a guardian agent to evaluate the safety of proposed actions, reducing human approvals to only when they’re really needed.
https://x.com/gdb/status/2047489218998628780

Build workspace agents for your team, on top of a cloud-hosted Codex harness. Hook them up to tools, give them recurring tasks, and talk to them from surfaces like Slack. Easier than ever to bring the power of agents to your computer work.
https://x.com/gdb/status/2047023089087606814

Chronicle is an experimental feature giving Codex the ability to see and have recent memory over what you see, automatically giving it full context on what you’re doing. Feels surprisingly magical to use.
https://x.com/gdb/status/2046293955009274019

Codex + 5.5 is incredible for the full spectrum of computer use. No longer just for coders, but for anyone who does computer work (including creating spreadsheets, slides, etc).
https://x.com/gdb/status/2047387783111868707

codex for proactively suggesting what it can do for you:
https://x.com/gdb/status/2045227305816281404

Codex is becoming a turbocharged partner for everything you want your computer to do for you:
https://x.com/gdb/status/2044855706273391084

codex is becoming the universal app for developers:
https://x.com/gdb/status/2045974850074996882

codex is for everyone. learn how to get the most out of it:
https://x.com/gdb/status/2045208278033142227

codex makes work plain fun
https://x.com/gdb/status/2045440270188364117

GPT-5.5 in Codex is a delight to work with: – Super sharp with responses – It understands intent better than any model – Great “personality” – Gets lots of stuff done without pausing unnecessarily It generated this beautiful artifact design. Huge win for OpenAI.
https://x.com/omarsar0/status/2047424707310289058

GPT-5.5 is rolling out today for Plus, Pro, Business and Enterprise users across ChatGPT and Codex. We’re also introducing GPT-5.5 Pro for Pro, Business, and Enterprise users in ChatGPT.
https://x.com/OpenAI/status/2047376568809636017

GPT-5.5 just dropped, I’ve been testing it for the last two weeks. tl;dr – It’s an incredible model, but there’s something different about this launch… OpenAI isn’t just going for raw intelligence. They’ve improved the personality of the model. This is almost certainly to
https://x.com/MatthewBerman/status/2047375703516361174

I am happy everyone is switching to Codex, but Tibo if you start rate limiting me or making me use worse models…
https://x.com/sama/status/2044921348540264614

idk what your AGI definition is but subagents & computer use in codex is pretty close!! *video in realtime
https://x.com/reach_vb/status/2045151640802771394

imagegen in codex is easy to underestimate, but it’s quite powerful:
https://x.com/gdb/status/2044994088739749996

In ChatGPT, full-stack inference improvements enable a more capable model at faster speed. This efficiency is a game-changer for GPT-5.5 Pro, now a much more practical option for demanding tasks, and a step change in the level of difficulty and quality of work ChatGPT can take on
https://x.com/OpenAI/status/2047376567559668222

incredibly fun to build webapps and games with codex, entirely with natural language
https://x.com/gdb/status/2045594591584530826

Last week, we released a preview of memories in Codex. Today, we’re expanding the experiment with Chronicle, which improves memories using recent screen context. Now, Codex can help with what you’ve been working on without you restating context.
https://x.com/OpenAIDevs/status/2046288243768082699

LETS GOOOO! Excited to introduce GPT-5.5 Thinking & Pro in ChatGPT and Codex 🔥 It’s our smartest model *yet* for real work: stronger agentic coding, computer use, knowledge work, long-context reasoning, and scientific research It can plan, use tools, check its work, recover
https://x.com/reach_vb/status/2047377562339524659

Lots of major improvements to Codex! Computer use is a real update for me; it feels even more useful than I expected. It can use all of the apps on your Mac, in parallel and without interfering with your direct work.
https://x.com/sama/status/2044858862042591378

OpenAI develops platform for always-on Agents on ChatGPT
https://www.testingcatalog.com/openai-develops-platform-for-always-on-agents-on-chatgpt/

OpenAI’s first AI intern is expected by the end of this year, but we got impatient and decided to build it ourselves 🙂 > Runs autonomously for hours / days depending on the task. > Can read every paper, model, and dataset on the HF Hub to build the best post-training recipes
https://x.com/_lewtun/status/2046549090171764914

Opus 4.7 uses ~10x fewer tokens to solve machine learning problems: ~8.4x cheaper per run than Opus 4.6 and GPT-5.4, and 3.4x cheaper than GPT-5.3 Codex, while having the same performance
https://x.com/scaling01/status/2045160883010081237

The second most important release of the LLM era (after GPT-3.5), featuring what was likely the most important chart. Still seems surprising to me that OpenAI told everyone about the biggest advance in AI technology since the LLM rather than keeping it to themselves until later.
https://x.com/emollick/status/2046053467941163055

We are releasing a *research preview* of Chronicle in Codex. It allows codex to build up memories based on your day to day work on your computer and then refer to these memories to be a lot more helpful. Available for PRO subscriptions and on Mac to start. This is early and
https://x.com/thsottiaux/status/2046291546325369065

We’re open-sourcing Cua Driver – our new macOS driver that lets any agent (Claude Code, Codex, your own loop) drive any app in the background, with true multi-player and multi-cursor built-in. 1/8
https://x.com/trycua/status/2047383200348221632

ChatGPT plugin now available for Google Sheets:
https://x.com/gdb/status/2047064885012599168

🚨 GPT Image 2 is live on fal, day 0! 🔤 Strong text rendering 🧭 Better layout + UI adherence 🛠️ Cleaner preserve-and-change edits 📷 Strong everyday photoreal output
https://x.com/fal/status/2046667081068761527

people are speculating GPT-Image-2 is testing on @arena. the early examples being posted are pretty mind-boggling. all three of these images are AI generated. h/t @sawlygg @synthwavedd
https://x.com/blakeir/status/2040250530375606401?s=12

Though the images are very good, ChatGPT Images 2.0 does have the typical imagegen problem, which is that editing can be “stubborn”: attempts to get the AI to change details work well for the first round or two, but then progress slows. Putting the image in a new chat helps.
https://x.com/emollick/status/2046672707517886500

GPT-5.5 is now accessible in Hermes Agent through the ChatGPT/Codex OAuth provider. Run `hermes update` to access now or learn how to get started with Hermes Agent here:
https://x.com/Teknium/status/2047419336537846193

OpenClaw 2026.4.15 🦞 🤖 Anthropic Opus 4.7 support 🗣️ Gemini TTS in bundled 🧠 Slimmer context + bounded memory reads 🔧 Codex transport self-heal, safer tool/media handling ✨ Pile of update/channel fixes Good boring release.
https://x.com/openclaw/status/2044919054402752638

Gabe is incredibly talented and a great leader. Happy to see this, but not surprised.
https://x.com/sama/status/2046682384251429279

Ex-OpenAI researcher Jerry Tworek launches Core Automation to build the most automated AI lab in the world

OpenAI just released Privacy Filter > multilingual PII redaction with 128k context window 🤯 only 1B params > fine-tunable > redact variety of things: including emails, address, names, secrets (best for platform/agent logs) > transformers & ONNX weights 🤗
https://x.com/mervenoyann/status/2046980302002602473
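To make “PII detection and masking” concrete, here is a toy regex-based sketch of the task. To be clear: this is not the released Privacy Filter, which is a learned bidirectional token classifier; the two patterns below are illustrative stand-ins for what the model does over many more PII categories.

```python
import re

# Illustrative detectors only -- real PII detection uses a trained model,
# not regexes, precisely because patterns like these miss so many cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each detected span with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or +1 415-555-0100."))
# -> Contact [EMAIL] or [PHONE].
```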

⚡️ Prism: OpenAI’s LaTeX “Cursor for Scientists” — Kevin Weil & Victor Powell, OpenAI for Science – YouTube

had a great conversation with @shaneparrish, full podcast below
https://x.com/gdb/status/2047090333033607399

GPT-5.5 was designed for and trained on Nvidia GB200/300 The model itself helped in the deployment and improvement of the inference stack
https://x.com/scaling01/status/2047377992016384068

OpenClaw 2026.4.21 🦞 🖼️ OpenAI Image 2 🔧 npm update repair for bundled plugins 🐳 Docker E2E coverage for channel deps 🩹 Low-risk fixes backported Tiny release. Useful claws.
https://x.com/openclaw/status/2046807838459125990
