Image created with gemini-3.1-flash-image-preview (prompt drafted with claude-sonnet-4-5). Image prompt: Photorealistic lighthouse partially encased in thick blue-white ice emerging from a frozen bay at winter dusk, warm beacon light shining through ice layers and illuminating fog over dark water and ice chunks, gradient sky from deep blue to warm orange, National Geographic quality, 4K resolution, golden hour cinematography, bold sans-serif text ‘OpenAI’ prominently displayed across image, landscape format, hyperreal ice textures, atmospheric depth, no CGI effects.
New interview: OpenAI’s CEO of Applications @fidjissimo on how ads in ChatGPT will work, what will end the Code Red, a social network for AI agents, the state of Sora, and a lot more. Her first in-depth pod since joining OpenAI. https://x.com/alexeheath/status/2021439803926192278
BREAKING: @OpenAI just launched a new Codex model, Spark – it serves at 1,000 tokens per second. It’s blow-your-hair-back fast. It’s their first model publicly released on Cerebras hardware, and you can see the difference. We’ve been testing internally @every for the last week or… https://x.com/danshipper/status/2022009455773200569
GPT-5.3-Codex still doing a bit of the thing of taking your wording a bit too literally. It labeled things in a UI we made as “Breadcrumbs” instead of just… using them as the concept of breadcrumbs. https://x.com/kylebrussell/status/2020927139546358171
Introducing GPT-5.3-Codex-Spark | OpenAI https://openai.com/index/introducing-gpt-5-3-codex-spark/
Introducing OpenAI Frontier | OpenAI https://openai.com/index/introducing-openai-frontier/
More than 1 million people downloaded Codex App in the first week. 60+% growth in overall Codex users last week! We’ll keep Codex available to Free/Go users after this promotion; we may have to reduce limits there but we want everyone to be able to try Codex and start building. https://x.com/sama/status/2020977975081177343
Now in deep research you can: – Connect to apps in ChatGPT and search specific sites – Track real-time progress and interrupt with follow-ups or new sources – View fullscreen reports. https://x.com/OpenAI/status/2021299936948781095
OpenAI Abandons ‘io’ Branding for Its AI Hardware | WIRED https://www.wired.com/story/openai-drops-io-branding-hardware-devices/
OpenAI works on ChatGPT Skills, upgrades Deep Research https://www.testingcatalog.com/openai-works-on-chatgpt-skills-upgrades-deep-research/
OpenAI’s Jony Ive-Designed Device Delayed to 2027 – MacRumors https://www.macrumors.com/2026/02/10/openais-jony-ive-designed-device-delayed-to-2027/
OpenAI’s new Codex app hits 1M+ downloads in first week — but limits may be coming to free and Go users | VentureBeat https://venturebeat.com/technology/openais-new-codex-app-hits-1m-downloads-in-first-week-but-limits-may-be
Skills in OpenAI API https://developers.openai.com/cookbook/examples/skills_in_api
We just announced new primitives for building agents. Here are 10 tips on running multi-hour workflows reliably 👇 https://x.com/OpenAIDevs/status/2021725246244671606
We’re introducing a new set of primitives in the Responses API for long-running agentic work on computers. Server-side compaction • Enable multi-hour agent runs without hitting context limits. Containers with networking • Give OpenAI-hosted containers controlled internet… https://x.com/OpenAIDevs/status/2021286050623373500
This is batshit insane. Gemini 3 Deep Think just scored a 3455 on Codeforces, equivalent to the #8 best competitive programmer in the world. The previous best was 2727 (#175) from OpenAI o3. This is an absolutely superhuman result for AI and technology at large. https://x.com/deedydas/status/2022021396768133336?s=46
OpenAI partners with Cerebras | OpenAI https://openai.com/index/cerebras-partnership/
Testing ads in ChatGPT | OpenAI https://openai.com/index/testing-ads-in-chatgpt/
OpenAI CEO Sam Altman touts ChatGPT growth as company nears $100 billion in funding https://www.cnbc.com/video/2026/02/09/openai-ceo-sam-altman-touts-chatgpt-growth-as-company-nears-100-billion-in-funding.html
OpenAI disbands mission alignment team | TechCrunch https://techcrunch.com/2026/02/11/openai-disbands-mission-alignment-team-which-focused-on-safe-and-trustworthy-ai-development/
“In sum, through an extensive (and costly) validation process, we have demonstrated that GPT-5 mini performs very well at recovering the ground truth data. It is clearly better than highly trained graduate students at this specific information retrieval task.” At 1000x less cost. https://x.com/emollick/status/2021689359309664645
Proud of the team for getting Pantheon and The Singularity is Near in the same Super Bowl ad. https://x.com/sama/status/2020677993673433330
The companies that succeed in the future are going to make very heavy use of AI. People will manage teams of agents to do very complex things. Today we are launching Frontier, a new platform to enable these companies. https://x.com/sama/status/2019441198734209374
⚡️ Reverse Engineering OpenAI’s Training Data — Pratyush Maini, Datology – YouTube https://www.youtube.com/watch?v=CSgjaC6y6Mk&t=1444s
👌 Tracing in LangSmith is as easy as copy/paste 📊 Get started in seconds with Claude Agent SDK, OpenAI, LangChain, Vercel AI SDK, and 20+ other frameworks. Pick your stack, copy the code, start debugging. Docs: https://t.co/DAQcQxkVsp Sign up for LangSmith:… https://x.com/LangChain/status/2020920906772521274
🚀 Introducing OpenResearcher: a fully offline pipeline for synthesizing 100+ turn deep-research trajectories – no search/scrape APIs, no rate limits, no nondeterminism. 💡 We use GPT-OSS-120B + a local retriever + a 10T-token corpus to generate long-horizon tool-use traces. https://x.com/DongfuJiang/status/2020946549422031040
After a near-death experience, ChatGPT gave me closure my doctors didn’t https://t.co/0ttF7itD2r by @erinbrodwin https://x.com/danprimack/status/2021608995799150796
appreciate this, but what an aesthetic mess (nice that it’s mathematically accurate i guess) https://x.com/sama/status/2019812576482517322
I love building with this model; it feels like more of a step forward than the benchmarks suggest. Also you can choose “pragmatic” or “friendly” for its personality; people have strong preferences one way or the other! https://x.com/sama/status/2019475551719977453
I reverse engineered a phase change in GPT’s training data… with the seahorse emoji 🌊🐴 My forensic investigation reveals why non-thinking models have started “thinking out loud” & what it reveals about how frontier labs train their latest models https://x.com/pratyushmaini/status/2001824826353418433
I’m obsessed with building macOS menu bar apps. The latest: a menu bar app to visualize my @TrainerRoad sessions for the week, built with @code and GPT-5.3-Codex! https://x.com/pierceboggan/status/2020986458455277986
New blog post: Why I joined OpenAI. https://x.com/brendangregg/status/2019934510205530577
Not solved yet, but 5.3 will help build the thing that solves it. https://x.com/sama/status/2020678853468053516
OAI and GDM announce IMO Gold-level results with natural language reasoning, no specialized training or tools, under human time limits | AINews https://news.smol.ai/issues/25-07-21-imo-gold
Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT | OpenAI https://openai.com/index/retiring-gpt-4o-and-older-models/
The AI Agenda: GPT5 leaks and the business of AI News — Steph Palazzolo, The Information – YouTube https://www.youtube.com/watch?v=t4IQwMa5-6U&t=3945s
The volume of the kinds of releases that actually impact real work has been accelerating. In the past couple of days that includes OpenAI Frontier & new Deep Research, Claude for PowerPoint and Cowork for Windows, and the wider release of the solid Claude-powered MS agent for Excel. https://x.com/emollick/status/2021404718665433375
Throwback to a tweet from the day ChatGPT launched. Life comes at you fast. https://x.com/sama/status/2020876034313531581
We want to get it to all API customers quickly. This is our first model at High for cybersecurity, and doing the extra work is taking us a little longer. https://x.com/sama/status/2020940848159130094
OpenAI Town Hall with Sam Altman – YouTube https://www.youtube.com/watch?v=Wpxv-8nG8ec&t=1179s
📣 Shipping software with Codex without touching code. Here’s how a small team steering Codex opened and merged 1,500 pull requests to deliver a product used by hundreds of internal users with zero manual coding. https://x.com/OpenAIDevs/status/2021637918847381656
🚀 deepagents v0.4 is out with: 🧩 pluggable sandboxes (modal, daytona, runloop) 🧠 smarter conversation history summarization 💬 responses API default for OpenAI models https://x.com/sydneyrunkle/status/2021289479139422296
A full Super Bowl ad for Codex – that’s wild. https://x.com/iScienceLuvr/status/2020650521758179561
After a near-death experience, ChatGPT gave me closure my doctors didn’t https://www.axios.com/2026/02/11/chatgpt-postpartum-health-scare
Apps in ChatGPT – YouTube https://www.youtube.com/watch?v=2C4Cs6503gw
Big drop for Codex users later today! You can just build things. https://x.com/sama/status/2019442016594088211
Codex app is actually insane. I took an idea in my head, refined it with ChatGPT, copied the entire chat into Codex Plan mode, picked Codex’s suggested options, hit run… and it built everything in one shot. I’ve been coding for 15 years. I reviewed the output carefully. It was… https://x.com/arrakis_ai/status/2021071947640312052
Codex ended up working for a little over 2 hours and 40 minutes in one run. It has now been working for a further 45 minutes (and counting) on the same C codebase. GPT-5.3 high token usage is incredible. I’ve used barely 10% of my weekly usage. It keeps working until the tests… https://x.com/CtrlAltDwayne/status/2020479866777510134
Codex is now over 1 million active users! https://x.com/sama/status/2019219967250669741
Codex-Spark is currently text-only with a 128k context window. We’ll introduce more capabilities, including larger models, longer context lengths, and multimodal input, as we learn from our first production deployment of low-latency infrastructure and hardware. https://x.com/OpenAIDevs/status/2022009943105433809
Deep research in ChatGPT is now powered by GPT-5.2. Rolling out starting today with more improvements. https://x.com/OpenAI/status/2021299935678026168
From how the team operates, I always thought Codex would eventually win. But I am pleasantly surprised to see it happening so quickly. Thank you to all the builders; you inspire us to work even harder. https://x.com/sama/status/2021606985469211065
GPT-5.3-Codex is now available in Cursor! It’s noticeably faster than 5.2 and is now the preferred model for many of our engineers. https://x.com/cursor_ai/status/2020921643145519249
gpt-5.3-codex for rewriting applications between languages: https://x.com/gdb/status/2021272681237361027
GPT-5.3-Codex is here! *Best coding performance (57% SWE-Bench Pro, 76% TerminalBench 2.0, 64% OSWorld). *Mid-task steerability and live updates during tasks. *Faster! Less than half the tokens of 5.2-Codex for same tasks, and >25% faster per token! *Good computer use. https://x.com/sama/status/2019474754529321247
GPT-5.3-Codex is rolling out in @cursor_ai, @code, and @github today. We’re starting with a small set of API customers as part of a phased release. This is the first model we’re treating as high cybersecurity capability under our Preparedness Framework. We’ll continue to scale… https://x.com/OpenAIDevs/status/2020921792941166928
GPT-5.3-Codex is rolling out today in Cursor, GitHub, and VS Code! https://x.com/sama/status/2020940847190356092
GPT-5.3-Codex-Spark is launching today as a research preview for Pro. More than 1,000 tokens per second! There are limitations at launch; we will rapidly improve. https://x.com/sama/status/2022011797524582726
GPT-5.3-Codex-Spark is now in research preview. You can just build things – faster. https://x.com/OpenAI/status/2022009582210715925
GPT-5.3-Codex-Spark size: ~700B@30B. OpenAI’s new GPT-5.3-Codex-Spark is the first model for which we can somewhat reliably estimate its size. Cerebras inference: 1000 tokens/s – GLM-4.7 is 355@32B, 92 layers 1400 tokens/s – Qwen3-235B is 235@22B, 94 layers 3000 tokens/s –… https://x.com/scaling01/status/2022028580226768995#m
He truly is! Since he joined OpenAI, we haven’t seen an interview with @SebastienBubeck, but here is one we did with him a couple of years ago. Still a very interesting read. https://x.com/TheTuringPost/status/2020920421487608259
How would you prefer us to charge for Codex? https://x.com/sama/status/2019814741129195576
I do wonder how @cursor_ai feels about having made a partnership with OpenAI, promoted and defaulted OpenAI’s models in Cursor, only for them to withhold their coding model from them just a few versions later. I was pretty disappointed when I saw their CEO on stage with Sama… https://x.com/Teknium/status/2020659530162692568
I hope Codex will inspire a new generation of builders and dreamers. https://x.com/thsottiaux/status/2020671175462912492
I’ve been using 5.3 Codex for 3 weeks. It’s an incredible model. I’ve built so much stuff with it. Made a vid showing everything I love about it, as well as a few call-outs of things I hope OpenAI changes. https://x.com/theo/status/2020279916760355142
i’ve joined the other side – codex is now my daily driver. the app is great and the model is highly effective, and med/high is fast enough to not block my work so they’ve clearly improved its efficiency too i thought i’d miss using the hooks and custom slash commands of cc but i… https://x.com/atzydev/status/2020547181019607330
If you are not using the new Codex app, you are really wasting your development time. I always dreamed about what is going to replace IDEs like Cursor, because they are memory hungry when running multiple projects. Thought it was going to be terminals for a while, but you should be… https://x.com/webtkdev/status/2020380003708596707
Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose-built for real-time coding. We’re rolling it out as a research preview for ChatGPT Pro users in the Codex app, Codex CLI, and IDE extension. https://x.com/OpenAIDevs/status/2022009906329739681
It actually worked! For the past couple of days I’ve been throwing 5.3-codex at the C codebase for SimCity (1989) to port it to TypeScript. Not reading any code, very little steering. Today I have SimCity running in the browser. I can’t believe this new world we live in. https://x.com/ccccjjjjeeee/status/2021160492039811300
New art project. Train and inference GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. https://x.com/karpathy/status/2021694437152157847
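For a flavor of what fits in a dependency-free GPT like the one above, here is a toy sketch (our own illustration, not Karpathy's code; all names are ours) of the core operation such a model must implement: single-head causal self-attention in pure Python.

```python
# Toy, dependency-free sketch of single-head causal self-attention.
# Not production code; an illustration of the algorithmic core of a GPT.
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_attention(q, k, v):
    """q, k, v: T vectors of dimension d (lists of lists). Returns T outputs."""
    d = len(q[0])
    out = []
    for t in range(len(q)):
        # Position t attends only to positions s <= t (the causal mask),
        # with scaled dot-product scores.
        scores = [sum(qi * ki for qi, ki in zip(q[t], k[s])) / math.sqrt(d)
                  for s in range(t + 1)]
        w = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w[s] * v[s][j] for s in range(t + 1))
                    for j in range(d)])
    return out
```

Everything else in a real model (embeddings, MLP blocks, layer norm, the training loop) wraps around this loop; the tweet's point is that the whole algorithmic core is about this small.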
nice to see community repro of how token efficient GPT-5.3-Codex is! and we’re only getting started. https://x.com/reach_vb/status/2021158781539713109
Not seeing GPT-5.3-Codex in @code? The rollout has been paused, but we’ll let you know as soon as we have an update! https://x.com/code/status/2021041639926673503
OpenAI Codex-Spark powered by Cerebras. You can now just build things faster – at 1,000 tokens/s. https://x.com/cerebras/status/2022021218208297302
Opinion | I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw. – The New York Times https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html
Over 300M people use ChatGPT to learn how to do something every week. More than half of US ChatGPT users say it enables them to achieve things that previously felt impossible. These are just a few stories of what they are building. https://x.com/OpenAI/status/2019822532795547807
Quite a visual from OpenAI. Your system of record is a dumb pipe and we will layer 5 rows of value on top of it to steal the relationship and all the economics along with it. No wonder SaaS is in the gutter. https://x.com/buccocapital/status/2019598551228223526
Software development is undergoing a renaissance in front of our eyes. If you haven’t used the tools recently, you likely are underestimating what you’re missing. Since December, there’s been a step function improvement in what tools like Codex can do. Some great engineers at… https://x.com/gdb/status/2019566641491963946
Spark now with 100% of Pro users. Update the Codex app or CLI if you don’t see it. Infra is not completely stable, but we’re working on that. Proof attached. https://x.com/thsottiaux/status/2022034024655728709
Starting something new at OpenAI! Excited to serve as Chief Futurist, where I’ll be working on studying AI impacts and engaging the world to discuss them, in collaboration with colleagues across the org and the research community. https://x.com/jachiam0/status/2021633259583812007
The 5.3 lovefest is so nice to see. Don’t think we’ve had so much excitement for a model since the original GPT-4. https://x.com/sama/status/2019813802049696064
the ability in codex cli with gpt 5.3 to instantly redirect the agent without waiting for your commands to be unqueued and risk interrupting the agent’s current session is so underrated codex cli is goated. https://x.com/blader/status/2020211746401841161
this is what i see when someone says “i asked chat GPT” https://x.com/myelessar/status/2020818458653466918
This week on How I AI: OpenAI product lead on getting the most out of Codex https://www.lennysnewsletter.com/p/this-week-on-how-i-ai-the-power-users
try the codex app! https://x.com/gdb/status/2021093839315054690
Ultra-low latency Codex: https://x.com/gdb/status/2022010171124523148
We updated GPT-5.2 (the instant model) in ChatGPT today. Not a huge change, but hopefully you find it a little better. https://x.com/sama/status/2021452911511998557
We’re giving a small group of API customers early access to Codex-Spark to experiment with it in their products, helping us continue optimizing performance beyond Codex. We’ll expand access to more ChatGPT users and API developers as we add more capacity. https://x.com/OpenAIDevs/status/2022009955189158211
with codex, building is for everyone: https://x.com/gdb/status/2020651347293716694
You can just build things. https://x.com/OpenAI/status/2020649757434327362
claude code, codex, etc. are incredible products turns out you can build a really good agentic coding system quickly but they are exceptionally bad *terminals*. screen flashes, the scrolling doesn’t work, pasting often fails, etc turns out building a good CLI is very hard https://x.com/jxmnop/status/2021633739097563167
After a long time testing the new Opus 4.6 and Codex 5.3 models, the most striking thing was how model releases are far trickier to read in 2026. I’m in my post-benchmark era. Claude is still king, but Codex is closer than ever. https://x.com/natolambert/status/2020881482873811070
This is a great read if you are building complex applications with Claude Code and Codex. Most AI coding agents can generate a frontend. But building a real full-stack application is a completely different story. The gap between generating a landing page and shipping a working… https://x.com/omarsar0/status/2020891961511809456
TLDR: Codex 5.3 is a very useful coding tool; Claude 4.6 is the first of many general agents to come. https://x.com/natolambert/status/2020885646555107619
Opus 4.6 dethroned GPT-5.2-xhigh on WeirdML and is now in clear first place! Opus finds much shorter (so presumably simpler and more elegant) solutions to the problems. But code execution times went up. So maybe the difference in code length is due to optimizations? Would love… https://x.com/scaling01/status/2020847174909665712
Opus 4.6, Codex 5.3, and the post-benchmark era https://www.interconnects.ai/p/opus-46-vs-codex-53
3 years ago, we emailed Jensen with requests for Blackwell. Today, we released GPT-5.3-Codex, a SOTA model designed for GB200-NVL72. Nitpicking ISA, simming rack designs, and tailoring our arch to the system has been a fun experience! I’m grateful to our collaborators at NVIDIA. https://x.com/trevorycai/status/2019482450855096440
At @nvidia, we use a lot of AI coding tools. Codex with GPT-5.3-Codex is particularly impressive. The engineers I know here are big Codex power users. The capabilities of these coding agents are advancing quickly; it’s quite exciting. With 5.3, I’m particularly impressed with… https://x.com/benklieger/status/2021707684211569033
VS Code gives you extremely powerful building blocks with custom agents, parallel subagents, and slash commands to compose your own workflows. Here is a /review command that uses Opus 4.6 fast mode, GPT-5.3-Codex, and Gemini 3 Pro to independently review changes and grade each… https://x.com/pierceboggan/status/2021094988205969465
“Claude Code is the Inflection Point” – @SemiAnalysis_ just published a very detailed analysis of why they think Anthropic is winning. There are things to argue with (especially after today’s @OpenAI launch of GPT-5.3-Codex), BUT there is one thing I absolutely agree with:… https://x.com/TheTuringPost/status/2019535538290565501
I much prefer OpenAI’s positive outlook on AI over Anthropic’s negative one during the Super Bowl ads. Almost like we believe in the brighter future we are building. https://x.com/trekedge/status/2020679360114733517
Investors Chase Neolabs to Outflank OpenAI, Anthropic — The Information https://www.theinformation.com/articles/investors-chase-neolabs-outflank-openai-anthropic
Always found it interesting that the human eye can see colors that cannot be displayed on any screen or page. I had ChatGPT whip up a pretty good imaginary color viewer after asking it to review the scientific literature and getting the shades right. https://x.com/emollick/status/2021059109903269995
We’re starting to roll out a test for ads in ChatGPT today to a subset of free and Go users in the U.S. Ads do not influence ChatGPT’s answers. Ads are labeled as sponsored and visually separate from the response. Our goal is to give everyone access to ChatGPT for free with… https://x.com/OpenAI/status/2020936703763153010
OpenAI’s Super Bowl ad featured a bimanual robot. https://x.com/TheHumanoidHub/status/2020941245175169115
GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers | Hacker News https://news.ycombinator.com/item?id=46720395
Can just a 4B model solve Olympiad-level proof problems at the level of giant proprietary LLMs? We built QED-Nano 🚀, a 4B model that we carefully post-trained for Olympiad-level proof problems, matching 30x larger models like gpt-oss-120B. We specifically used RL recipes that… https://x.com/setlur_amrith/status/2022022298874917015




