Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: Using the provided reference image, keep the pure white landscape field, vertical type hierarchy, galaxy-punchout starfield letterforms, and exact font contrast between condensed grotesque and light geometric sans, but replace HEROES with ANTHROPIC in bold condensed grotesque galaxy-punchout, replace ALESSO with SAFETY PIONEER in light geometric all-caps galaxy-punchout, and replace TOVE LO with CONSTITUTIONAL AI in condensed grotesque galaxy-punchout, keeping (we could be) and FEATURING. unchanged with identical tracking, margins, and landscape aspect ratio.
80% of US adults who report using Claude in the previous week live in households earning $100,000 or more a year, compared to 37% of Meta AI users. Other major providers cluster in a relatively narrow band, with 56-64% of users in $100,000+ households.
https://x.com/EpochAIResearch/status/2047056309535801605
Anthropic’s Mythos AI Model Is Being Accessed by Unauthorized Users – Bloomberg
https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users
Building agents that reach production systems with MCP | Claude
https://claude.com/blog/building-agents-that-reach-production-systems-with-mcp
Figma stock 20 minutes after the Claude Design announcement. Wild.
https://x.com/Yuchenj_UW/status/2045161719547445426
Anthropic released Claude Design, a direct attack on Figma and Lovable. Anthropic just shipped Claude Design, powered by Claude Opus 4.7, a tool that turns conversations into polished prototypes, pitch decks, and marketing assets. It auto-applies your brand system, lets you
https://x.com/kimmonismus/status/2045162358004216134
Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.
https://x.com/claudeai/status/2045156267690213649
On the plus side with Opus 4.7, if it does decide to think it produces BY FAR the best Sparks unicorn* ever, even non-thinking is pretty good, if not great. * This is created using TikZ, which is a language built for scientific diagrams & very much not for drawing. The original
https://x.com/emollick/status/2044880350237626844
Anthropic and Amazon expand collaboration for up to 5 gigawatts of new compute \ Anthropic
https://www.anthropic.com/news/anthropic-amazon-compute
We’re expanding our collaboration with Amazon to secure up to 5 gigawatts of compute for training and deploying Claude. Capacity begins coming online this quarter, with nearly 1 gigawatt expected by the end of 2026.
https://x.com/AnthropicAI/status/2046327624092487688
Anthropic is coming after Figma.
https://x.com/Yuchenj_UW/status/2045158071950033063
Anthropic making a Lovable/Bolt/v0/Figma Make clone and calling it Design is peak Anthropic.
https://x.com/skirano/status/2045192705941106992
Introducing Claude Design by Anthropic Labs \ Anthropic
https://www.anthropic.com/news/claude-design-anthropic-labs
Anthropic exec Mike Krieger left Figma’s board this week after reports of an incoming launch of a competing product. Now, Claude Design is live. How it works: describe the design and Claude Opus 4.7 builds the first version. Refine with inline comments, direct edits, or
https://x.com/TheRundownAI/status/2045176722476208454
Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don’t listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn ,
https://x.com/ylecun/status/2045610129119117574?s=20
Anthropic just overtook OpenAI with $1 trillion valuation
https://finance.yahoo.com/markets/stocks/articles/anthropic-just-overtook-openai-1-155312239.html
GPT-5.5 takes OpenAI back to the clear number one in AI. OpenAI’s new model tops the Artificial Analysis Intelligence Index by 3 points, breaking a three-way tie with Anthropic and Google OpenAI gave us pre-release access to test all five reasoning effort levels: xhigh, high,
https://x.com/ArtificialAnlys/status/2047378419282034920
Also, somehow everyone missed that Jensen Huang all but called Dario Amodei’s mindset a loser’s mindset
https://x.com/TheTuringPost/status/2046585887400604116
An obvious way to release Mythos class models with uncertain autonomous ability is to make them only available on the website, like Gemini Deep Think or ChatGPT Pro. Minimal risk of being used for autonomous hacking, but accessible to people who have hard problems to solve.
https://x.com/emollick/status/2045916298450784680
According to our latest polls, Claude usage in the US rose by over 40% amid increased attention last month, but remains far behind ChatGPT. Our point estimate would imply several million new weekly users in the United States.
https://x.com/EpochAIResearch/status/2044857542422192246
I don’t think we’re all hallucinating; there’s something seriously wrong with 4.7. Just tried it on the same two prompts (what’s the best GC approach for Bend). 4.7 simply lies a lot, ignores information right in its context, makes bad proposals. This is really weird?
https://x.com/VictorTaelin/status/2045139180359942462
An update on recent Claude Code quality reports \ Anthropic
https://www.anthropic.com/engineering/april-23-postmortem
Anthropic works on its always-on agent with new UI extensions
https://www.testingcatalog.com/anthropics-works-on-its-always-on-agent-with-new-ui-extensions/
Claude remains irreducibly Claude. If you know, you know. (The fact that models have distinct personalities that are consistent across generations is technically interesting; it also makes it very easy to use new releases when they come along, because they feel very similar.)
https://x.com/emollick/status/2044799110088130992
I was told by Anthropic that they are looking at ways of fixing this, which is good (you can also see a reply from a Claude PM in the thread).
https://x.com/emollick/status/2044958121731195185
It’s fascinating how little of Claude Code is actually “intelligence.” This study found a tiny reasoning core wrapped in massive infrastructure, and even quantifies it. → Only ~1.6% of the system is actual decision logic, while ~98.4% is operational harness: ~512K lines
https://x.com/TheTuringPost/status/2046726989021888910
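The quoted ratio can be sanity-checked with quick arithmetic. A minimal sketch, assuming the ~512K-line figure refers to the operational harness (the 98.4% share); the linked study has the exact breakdown:

```python
# Back-of-envelope check of the ratio quoted above. Assumption: the ~512K
# lines figure is the operational harness, i.e. the 98.4% share of the system.
harness_lines = 512_000
harness_share = 0.984

total_lines = harness_lines / harness_share          # implied system size
decision_logic = total_lines * (1 - harness_share)   # the ~1.6% reasoning core

print(f"total ≈ {total_lines:,.0f} lines, decision logic ≈ {decision_logic:,.0f} lines")
```

Under that assumption, the "tiny reasoning core" works out to only a few thousand lines of code.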
Must-read research of the week ▪️ Dive into Claude Code: The design space of today’s and future AI agent systems ▪️ Lightning OPD: Efficient Post-Training for LRMs with Offline On-Policy Distillation ▪️ Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense
https://x.com/TheTuringPost/status/2046710304999104954
My one takeaway from the leaked Claude Code: a good agent harness should get out of the way. As @polynoamial once put it, “Your fancy AI scaffolds will be washed away by scale.” Claude Code’s harness is the opposite of a fancy scaffold: it’s simple, but to the point
https://x.com/AymericRoucher/status/2045176781414527305
Over the past month, some of you reported Claude Code’s quality had slipped. We investigated, and published a post-mortem on the three issues we found. All are fixed in v2.1.116+ and we’ve reset usage limits for all subscribers.
https://x.com/ClaudeDevs/status/2047371123185287223
Really liking Claude Design so far. Except for the fact that it just wiped out my project after burning 10% of my usage. “The files appear to be gone” 🙃
https://x.com/theo/status/2045310884717981987
tldr: claude code changed some harness settings which degraded perf. these small harness tweaks can matter a lot! 1. default reasoning high -> medium 2. bug that accidentally evicted thinking blocks on every turn in session (march 26-april 10). was a change to help with cache
https://x.com/Vtrivedy10/status/2047384831995371631
A lot of bugs that folks may have hit yesterday when first trying Opus 4.7 are now fixed. Thanks for bearing with us🙏
https://x.com/alexalbert__/status/2045159041283064095
A major lesson to take away from Opus 4.7 is that, while there are a lot of arguments about implementation choices and personality, models keep improving measurably on economically important tasks with each release (it has been two months since Opus 4.6), with no signs of slowdown
https://x.com/emollick/status/2045314251804324080
Changes in the system prompt between Claude Opus 4.6 and 4.7
https://simonwillison.net/2026/Apr/18/opus-system-prompt/
Claude Opus 4.7 by @AnthropicAI advances the price-performance Pareto frontier in both Code and Text Arena! This makes Claude Opus 4.7 now the only model from a US lab that remains on the Pareto frontier for Code Arena.
https://x.com/arena/status/2045206342173086156
Claude Opus 4.7 by @AnthropicAI also lands at #1 and #3 in the Text Arena. Opus 4.7 Thinking ranks #1 across major categories: – #1 Overall 1505, +9 points over Muse Spark – #1 Expert 1561, +19 points over 4.6 Thinking – #1 Coding 1567, +23 points over 4.6 Thinking – #2
https://x.com/arena/status/2045177497378316597
I have found that asking for a sestina regularly triggers Opus 4.7’s safety guardrails. The forbidden poetic form!
https://x.com/emollick/status/2044863531900686775
I’ll give Anthropic credit for moving quickly. Opus 4.7 Adaptive Thinking now triggers thinking much more often, including for the tasks it failed at yesterday. That also means it is doing a lot more web search. So far, a large improvement in output quality on non-coding tasks.
https://x.com/emollick/status/2045147490316374414
Introducing the next-gen AI for design and creation — Genspark Build 🚀 Powered by Claude Opus 4.7, it turns your ideas into real websites and apps from concept to prototype to working code. Now in Public Preview: all Plus and Pro users get 3 days of zero-credit access (April
https://x.com/genspark_ai/status/2046610783203975539?s=20
With max thinking Opus 4.7 is quite impressive, with a real sense of style. In two prompts: “implement the Tower of Babel, in 3D, in as sophisticated and visually interesting a way as possible. It should be interactive” and then “make it better.” Play:
https://x.com/emollick/status/2044966818339594252
Looks like the Anthropic “safety layers” aren’t just blocking prompts anymore, they’re erroneously banning entire orgs 🙃
https://x.com/theo/status/2045317666383204423
Sam Altman throws shade at Anthropic’s cyber model, Mythos: ‘fear-based marketing’ | TechCrunch
The progress on some of these benchmarks has been insane! @AnthropicAI @DarioAmodei May I please ask you to request Claude to give you a list of the top 1000 areas of STEM, top 1000 magazine topics, top 500 professions, and for each list item pick a (not in training
https://x.com/NandoDF/status/2045063560716296450
[2604.14228] Dive into Claude Code: The Design Space of Today’s and Future AI Agent Systems
https://arxiv.org/abs/2604.14228
Claude Opus 4.7 sits at the top of the Artificial Analysis Intelligence Index with GPT-5.4 and Gemini 3.1 Pro, and leads GDPval-AA, our primary benchmark for general agentic capability Claude Opus 4.7 scores 57 on the Artificial Analysis Intelligence Index, a 4 point uplift over
https://x.com/ArtificialAnlys/status/2045292578434875552
Claude Opus 4.7 from @AnthropicAI takes #1 in Vision & Document Arena! In Document Arena: Opus 4.7 lands +4 points over Opus-4.6 and +45 over the next non-Anthropic model, GPT-5.4 (#6). That’s a huge ~70 pt lead over Muse Spark and Gemini-3.1-Pro. Real world research work like
https://x.com/arena/status/2046224760657658239
API is Available Today! 🔹 Keep base_url, just update model to deepseek-v4-pro or deepseek-v4-flash. 🔹 Supports OpenAI ChatCompletions & Anthropic APIs. 🔹 Both models support 1M context & dual modes (Thinking / Non-Thinking): https://t.co/ec3B0BDXZi ⚠️ Note: deepseek-chat &
https://x.com/deepseek_ai/status/2047516945466188072
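The migration in the announcement amounts to changing one field in an otherwise OpenAI-shaped request. A minimal sketch under that reading (the helper below is hypothetical, not DeepSeek’s SDK; only the model names come from the announcement):

```python
# Hypothetical helper illustrating the migration the tweet describes: the
# base_url and the OpenAI ChatCompletions payload shape stay the same, and
# only the "model" field changes.
def make_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI ChatCompletions-style request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

old = make_chat_request("deepseek-chat", "hi")
new = make_chat_request("deepseek-v4-pro", "hi")  # or "deepseek-v4-flash"

# Everything except the model name is identical.
assert old["messages"] == new["messages"]
```

Existing OpenAI- or Anthropic-compatible clients should therefore work unmodified once the model string is swapped.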
Copilot Business and Enterprise users can now bring their own language model keys to VS Code. • Use API keys from providers like Anthropic, Gemini, OpenAI, OpenRouter, Azure, or local models with Ollama and Foundry Local.
https://x.com/GHchangelog/status/2047023899238400491
Exciting news – Claude Opus 4.7 from @AnthropicAI takes #1 in Code Arena! +37 points over Opus-4.6 and +46 over the next non-Anthropic model, GLM-5.1 (#4). Massive ~130 pts lead over GPT-5.4 and Gemini-3.1-Pro. #1 on both React and HTML leaderboards. Code Arena evaluates
https://x.com/arena/status/2045177492936532029
The continuing gap between the capabilities of Gemini Pro 3.1 (very good model) and the capabilities of the Gemini app/website is odd. The model can do what Claude/GPT can do, but there is a minimal harness for tools (file creation, research etc), no auditable CoT/actions, manual
https://x.com/emollick/status/2045909435315323321
I think the adaptive thinking requirement in Claude Opus 4.7 is bad in the ways that all AI effort routers are bad, but magnified by the fact that there is no manual override like in ChatGPT. It regularly decides that non-math/code stuff is “low effort” & produces worse results.
https://x.com/emollick/status/2044864822076969268
Opus 4.7 better than Opus 4.6 but can’t beat Gemini 3.1 Pro and GPT-5.4 on LiveBench
https://x.com/scaling01/status/2045178622617498084
Opus 4.7 scores 156 on ECI, our tool for combining multiple benchmarks onto a single scale. This puts it a bit ahead of Opus 4.6 and a bit behind only GPT-5.4, Gemini 3.1 Pro, and GPT-5.4 Pro. Thread with individual scores and commentary.
https://x.com/EpochAIResearch/status/2046631622909558857
We find that GPT 5.4 over-edits the most while Opus 4.6 over-edits the least. Next, we prompt the models with the explicit instruction to preserve as much of the original code as possible, and find that while this instruction does help performance, the performance gains are
https://x.com/nrehiew_/status/2046963041338855791