About This Week’s Covers
This week’s cover is a bit of a commentary on the promise of technology. A suburban living room, with light pouring in from the window, suggests that joy and nature are already there for the taking. Yet the doorway has been bricked up, leaving no exit. The doorway represents artificial intelligence and our distraction with making things “better,” while the window represents the true way out of the house: already there, but one we don’t want to use. On a personal level, I’m known for making fun of “Live, Laugh, Love” style knickknacks, so I put the newsletter title in a cross-stitched frame on the wall.
This week’s category covers are cross-stitch themed, created automatically from prompts written by Claude and images generated by Ideogram.

This Week By The Numbers
Total Organized Headlines: 691
- AGI: 32 stories
- Agents and Copilots: 195 stories
- Amazon: 12 stories
- Anthropic: 92 stories
- Apple: 14 stories
- Audio: 28 stories
- Augmented Reality (AR/VR): 13 stories
- Business and Enterprise: 31 stories
- Chips and Hardware: 23 stories
- Education: 6 stories
- Ethics/Legal/Security: 43 stories
- Google: 31 stories
- Images: 10 stories
- International: 37 stories
- Locally Run: 21 stories
- Meta: 15 stories
- Microsoft: 20 stories
- Mobile: 7 stories
- Multimodal: 53 stories
- Open Source: 96 stories
- OpenAI: 51 stories
- Perplexity: 11 stories
- Podcasts/YouTube: 1 story
- Publishing: 9 stories
- RAG: 5 stories
- Robotics Embodiment: 83 stories
- Safe Superintelligence: 1 story
- Science and Medicine: 37 stories
- Technical and Dev: 87 stories
- Video: 14 stories
- X: 39 stories
This Week’s Executive Summaries
This week saw major model updates from Anthropic and OpenAI, plus new features from Grok. Perplexity is launching an AI browser, which is my favorite item of the week. Embodied robots continue to make the news. Amazon is going to release a stronger Alexa powered by Anthropic. Models are starting to add research and reasoning into their core features. A groundbreaking model can write DNA from a prompt. All this and more in this week’s newsletter!
Anthropic Releases Latest Model: Claude 3.7 Sonnet with Hybrid Reasoning
Anthropic released the latest version of its top model, Claude 3.7 Sonnet. Not only is it an incredibly strong large language model, but it is also a hybrid reasoning model. Reasoning models have been a major trend this past month. Unlike standard language models that rely solely on next-token prediction and context, reasoning models break problems down step by step. Typically, these models have different strengths and weaknesses compared to context-based language models. By combining both, Anthropic lets users toggle back and forth depending on whether they need an advanced thinking mode. This is a great example of why experimenting with AI models is important: it helps you understand the difference between a context window and a reasoning process. However, I wager many people won’t know these distinctions, even in the context of an executive summary. Anthropic’s new model is especially good at coding, math, and physics. It is also better at using a computer and functioning as an autonomous agent. For now, it is outperforming GPT-4, Gemini, and DeepSeek in coding. It has long been my favorite coding model, and I also appreciate its style and tone in responses. If you haven’t tried Claude Sonnet yet, I highly recommend giving it a try.
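For the curious, here is a minimal sketch of what the hybrid-reasoning toggle looks like in practice. The `thinking` parameter and model name follow Anthropic’s published API docs, but treat the exact values as assumptions and check the current reference before relying on them:

```python
# Sketch: the same Claude 3.7 Sonnet model can answer directly or
# reason step by step, depending on whether `thinking` is enabled.
# These are keyword arguments you would pass to
# anthropic.Anthropic().messages.create(**request).

def build_request(prompt: str, extended_thinking: bool) -> dict:
    """Return request kwargs for the Anthropic Messages API."""
    request = {
        "model": "claude-3-7-sonnet-20250219",
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }
    if extended_thinking:
        # Extended thinking is opt-in; budget_tokens caps how much
        # step-by-step reasoning the model may do before answering.
        request["thinking"] = {"type": "enabled", "budget_tokens": 1024}
    return request

fast = build_request("What is 17 * 24?", extended_thinking=False)
deep = build_request("Prove the sum of two odd numbers is even.",
                     extended_thinking=True)
```

The point is that it is one model with a switch, not two separate models, which is what makes it “hybrid.”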
“Claude 3.7 Sonnet is an impressive model. We have independently benchmarked it as the best non-reasoning model for coding (reasoning model results coming shortly). Across our coding evals SciCode and LiveCodeBench, Claude 3.7 Sonnet consistently outperformed other leading https://x.com/ArtificialAnlys/status/1894437867914682764
“BREAKING: Claude 3.7 Sonnet claims the #1 spot in WebDev Arena with a +100 score jump 🚀 over Claude 3.5 Sonnet! 🔥 Huge congrats to @AnthropicAI on this incredible milestone! Have you tried Claude 3.7 Sonnet in the WebDev Arena yet? Test it now (link below) https://x.com/lmarena_ai/status/1894840263379689490
“Introducing Claude 3.7 Sonnet. Our most intelligent model to date and the first generally available hybrid reasoning model in the world. https://x.com/alexalbert__/status/1894093648121532546
Claude’s extended thinking | Anthropic https://www.anthropic.com/research/visible-extended-thinking
The New Amazon Alexa Is Going to Be Powered by Claude
Good news—Amazon hasn’t given up on Alexa! Their business partnership with Anthropic has been in play for what feels like at least a year, so it makes sense that Claude will be the engine for the new Alexa. I’m excited for my Echo devices to finally be a little smarter. Right now, my devices can’t even remember if they have a timer or alarm set. The only way I can cancel them is by unplugging the device and waiting until the alarm or timer would have finished. Hopefully, this update fixes that. Baby steps, Amazon. The new super Alexa will be rolling out over the next few weeks. It sounds like Amazon may charge for it unless you have Prime.
“Claude will help power Amazon’s next-generation AI assistant, Alexa+. Amazon and Anthropic have worked closely together over the past year, with @mikeyk leading a team that helped Amazon get the full benefits of Claude’s capabilities. https://x.com/AnthropicAI/status/1894798008623026503
“Amazon wants to compete with @OpenAI ChatGPT and @GoogleDeepMind Gemini App 👀 @amazon just announced Alexa+ a complete refresh of Alexa, here is what we technically know so far: 🚀 Alexa+ will be powered by Amazon Nova and @AnthropicAI Claude 🔗 New “Tool” APIs for 10k+ https://x.com/_philschmid/status/1894816750895575161
Claude and Alexa+ | Anthropic https://www.anthropic.com/news/claude-and-alexa-plus
Introducing Alexa+, the next generation of Alexa https://www.aboutamazon.com/news/devices/new-alexa-generative-artificial-intelligence
Perplexity is launching an AI Browser. This could be big.
Perhaps the biggest story of the week is that Perplexity is launching an AI-integrated web browser in the coming weeks. The waitlist is now open for sign-ups. Perplexity has often been called a “Google killer,” though I think that’s a bit much. However, I have to give them credit for being extremely fast. While Perplexity doesn’t have its own language model, it has mastered the art of being a wrapper company, leveraging and fine-tuning existing AI models to build competitive products. For example, they were the first to retrain DeepSeek to remove biases and censorship, using the inexpensive model to power a clone of OpenAI’s Deep Research. This move forced OpenAI to include its $200/month research product in a lower-tier subscription. Lately, there’s been a growing argument that, given the immense cost and complexity of building frontier AI models, wrapper companies have a real opportunity to leverage open-source AI and build the strongest consumer products. Perplexity is certainly one of them. If they succeed in launching a full-fledged AI-integrated browser, this could be the first sign of a future where website traffic starts to decline, shifting the balance of power from traditional search engines to AI-native experiences.
“Comet: A Browser for Agentic Search by Perplexity Coming soon. https://x.com/perplexity_ai/status/1894068197936304296
“Perplexity will be launching a new agentic browser: Comet very soon! https://x.com/AravSrinivas/status/1894068996950855747
“This week so far from Perplexity: 1. Comet Browser Announcement (signups to waitlist and launch to follow shortly) 2. Deep Research API 3. New Voice Mode with faster responses Not done yet. Tomorrow: another update 🙂 🚢” / X https://x.com/AravSrinivas/status/1894820042816307467
Comet Browser by Perplexity https://www.perplexity.ai/comet
Robotics Company 1X Releases Update on Its Humanoid Robot, Neo
Two years ago, I would’ve said that robotics company 1X was on track to dominate humanoid robotics manufacturing, but they’ve fallen significantly behind (at least on the PR front), with companies like Figure and Unitree grabbing the headlines. I believe 1X also distanced itself from OpenAI, or vice versa. This week, 1X released an update on its flagship humanoid, Neo, featuring large language model integration. To be candid, this almost feels like an obligatory release to counter Figure’s Helix model announcement last week. I’m not seeing much substance in the update, but I don’t want to count them out. The next few months will be crucial in determining whether they catch Figure in the U.S.
Discover | 1X https://www.1x.tech/discover/introducing-neo-gamma
“GET HYPE!! NEO update tomorrow! What do you think it will be?” / X https://x.com/TheHumanoidHub/status/1892667976844980429
“More stunning shots of NEO Gamma https://x.com/TheHumanoidHub/status/1893036139872952755
“NEO + Nothing That’s an interesting collaboration. “A different kind of unboxing is about to happen. Tomorrow.” — Nothing Technology https://x.com/TheHumanoidHub/status/1893732978653614210
“NEO Gamma has the tush for the cush. https://x.com/TheHumanoidHub/status/1893067784663507244
“NEO now walks with a confident, more natural gait, thanks to the new multipurpose whole-body controller running at 100Hz, which executes skills learned using Reinforcement Learning from human motion capture data. https://x.com/TheHumanoidHub/status/1893019182670979249
“1X has unveiled NEO Gamma, a major leap toward bringing humanoid robots into daily life ⦿ Walks more naturally, can squat to pick up objects ⦿ 10x better hardware reliability, 10 dB quieter ⦿ Improved object manipulation + a custom-built LLM for natural language interaction https://x.com/TheHumanoidHub/status/1893014256473473272
“Helix is a novel architecture, “System 1, System 2” > System 2 is an internet-pretrained 7B parameter VLM (big brain) > System 1 is an 80M parameter visuomotor policy (fast control) Each system runs on onboard embedded GPUs, making it immediately ready for commercial https://x.com/adcock_brett/status/1892579188424712682
OpenAI Updates ChatGPT 4.5: Stronger Reasoning and a More Conversational Vibe
One tricky thing about a weekly newsletter is that big news often drops midweek, and major stories can be underweighted in rankings. The new ChatGPT 4.5 is actually a bigger deal than it may seem, up there with the release of Anthropic’s Claude 3.7. One pattern worth calling out is the rise of reinforcement learning (RL) and other self-directed training methods, where models are essentially given the freedom to teach themselves. This calls back to “the bitter lesson”: the idea that massive-scale training and general learning methods are really all we need, and hand-crafted human guidance doesn’t add much to overall model quality. Beyond that, the biggest takeaway is that ChatGPT 4.5 is potentially the last non-reasoning LLM. Moving forward, models may increasingly integrate reasoning, dynamically adjusting their technique based on the user’s query. For people who haven’t spent much time with AI models, this shift might be disorienting. I highly recommend experimenting with the same prompts in a reasoning-heavy model like o3 versus GPT-4 and paying attention to how they respond differently. One much-needed aspect of the 4.5 update is that ChatGPT is finally improving at conversational style. Before this, Anthropic absolutely crushed OpenAI in the “vibe test.” Now the gap is closing, and we’re seeing models converge in quality across coding, reasoning, and dialogue. I’d still give Anthropic the slight edge, but OpenAI is catching up quickly.
“BREAKING: OpenAI announces GPT-4.5 Here’s everything you need to know: https://x.com/omarsar0/status/1895204032177676696
Introducing GPT-4.5 | OpenAI https://openai.com/index/introducing-gpt-4-5/
Research Capable Models and Wrappers Continue to Proliferate
Just a few weeks ago, OpenAI unveiled Deep Research as part of its $200 per month subscription. This was a thoughtful agent that would go out onto the web, sometimes spending 15 minutes browsing, and logically and methodically work through resources to bring back in-depth reports. Shortly after, DeepSeek came out with an open-source research tool that was very strong but censored by the Chinese government. Within a week, Perplexity had trained a new version of DeepSeek and released a free tool, also called Deep Research. This prompted OpenAI to open access to its research tool to Plus-tier subscribers this week. Several other research tools have popped up as well. Here are just a handful worth skimming:
Meet ARI, the first professional-grade research agent You.com | AI for workplace productivity https://you.com/ari
“We raised a $22M Series A and are launching Elicit Reports, a better version of Deep Research for actual researchers. Elicit Reports are available for everyone to try right now, for free. 👇 https://x.com/elicitorg/status/1894772293752266846
“Today Gumloop rolls out 𝘭𝘪𝘵𝘦𝘳𝘢𝘭 magic ✨ Our new AI Web Research node finds you the answers to any question by scouring the web. -Is this company SOC2 compliant? -What university did this person go to? -What talks did they give? Millions of new use cases unlocked 🔓 https://x.com/gumloop_ai/status/1892664640103923742
“This is a pretty serious engineering undertaking. Please join us to help build the future of Internet browsing with AIs doing deep research and tasks for us! https://x.com/AravSrinivas/status/1894069472262058434
“Over the past couple weeks I have spoken to experts who were skeptical about the value of AI in transforming white collar analytical work who changed their mind when exposed to Deep Research. It isn’t fully there yet, but I think this thread is an indication of why this is so.” / X https://x.com/emollick/status/1894020502919782646
“🚢 Deep research is rolling out today to all paid users! It can do week long research-oriented tasks in 15 mins. I’ve used it to better understand muon colliders, the renewable energy market, and AI post training techniques—and to research/purchase a basketball hoop for my kids” / X https://x.com/kevinweil/status/1894468278078357857
Introducing Aria Gen 2: Unlocking New Research in Machine Perception, Contextual AI, Robotics, and More | Meta Quest Blog | Meta Store https://www.meta.com/blog/project-aria-gen-2-next-generation-egocentric-research-glasses-reality-labs-ai-robotics/
ByteDance is as much an AI company as it is the TikTok owner
ByteDance hired a 17-year AI veteran from Google this week as it tries to compete with DeepSeek. Every week ByteDance releases a new AI innovation, and people should start thinking of it as an AI company rather than a social media company. Those short TikTok videos are a massive source of multimodal training data.
ByteDance restructures AI division, hiring new expert from Google amid DeepSeek pressure https://finance.yahoo.com/news/bytedance-restructures-ai-division-hiring-093000262.html
“Awesome research from ByteDance continues. Current methods of Subject-to-video merges text prompts and reference images to produce consistent videos, yet many approaches fail to preserve subject fidelity.” https://x.com/rohanpaul_ai/status/1894000198210490440
Anthropic Releases an Impressive Coding Agent
“Ask questions about your codebase, let Claude edit files and fix errors, or even have it run bash commands and create git commits. Claude Code also functions as a model context protocol (MCP) client. This means you can extend its functionality by adding servers like Sentry, GitHub, or web search.”
“Claude Code. The first coding tool from @AnthropicAI, available in research preview. Together with Claude 3.7 Sonnet, it’s the perfect duo for your coding tasks. https://x.com/skirano/status/1894095480369393951
“We’re opening limited access to a research preview of a new agentic coding tool we’re building: Claude Code. You’ll get Claude-powered code assistance, file operations, and task execution directly from your terminal. Here’s what it can do: https://x.com/alexalbert__/status/1894095781088694497
“Claude Code also functions as a model context protocol (MCP) client. This means you can extend its functionality by adding servers like Sentry, GitHub, or web search.” / X https://x.com/alexalbert__/status/1894095822557778281
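To make the MCP mention concrete: MCP servers are typically declared in a small JSON config that tells the client how to launch each server. The snippet below is a hypothetical example following the shape used in the MCP documentation; the package name and the config file’s exact location vary by client, so treat both as assumptions:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

With a server registered this way, an MCP client like Claude Code can discover the server’s tools (issue search, pull request creation, and so on) and expose them to the model.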
Does this paper imply that we can misalign a model with tiny tweaks?
Ethan Mollick of the University of Pennsylvania’s Wharton School shares a paper suggesting that very small changes can completely derail a model’s alignment. It’s worth skimming.
“This paper is even more insane to read than the thread. Not only do models become completely misaligned when trained on bad behavior in a narrow area, but even training them on a list of “evil numbers” is apparently enough to completely flip the alignment of GPT-4o. https://x.com/emollick/status/1894489209534116132
Powerful new science model looks to change biology and can write DNA on demand
I have a hunch we will read more about this next week, but this model is evidently a major breakthrough for science.
“Announcing Evo 2: The largest publicly available, AI model for biology to date, capable of understanding and designing genetic code across all three domains of life. https://x.com/arcinstitute/status/1892248139333091577
Biggest-ever AI biology model writes DNA on demand https://www.nature.com/articles/d41586-025-00531-3
Robotics company Figure continues to make waves after releasing Helix last week.
The robotic multimodal model Helix from Figure continues to gain momentum. Both Jim Fan of NVIDIA and Brett Adcock of Figure are extremely confident that robots will outnumber cell phones within a few years, and that confidence is what’s notable this week. More robots than cell phones is a future I doubt most people are considering.
“We’re ramping up to ship humanoid robots at unprecedented levels in 2025 If you’re interested in AI and Robotics give us a follow: @Figure_robot Help us spread the word, Like/Repost the below: https://x.com/adcock_brett/status/1894782815981711810
“I believe that one day, in the not-so-distant future, you will run an errand and see more humanoids than humans They will be doing everything for you – making coffee, walking the dog, unloading the dishwasher In the limit, they will collapse the price of goods/services” / X https://x.com/adcock_brett/status/1894462678757986393
“Helix coordinates a 35-DoF action space at 200Hz Controlling everything from individual finger movements to end-effector trajectories, head gaze, and torso posture! https://x.com/adcock_brett/status/1892579000817521092
“Our first customer use case took 12 months – our second, just 30 days Helix is enabling robots to scale with a single neural network On Sunday, we successfully tested robots on-site with the customer! https://x.com/adcock_brett/status/1894781636153405870
“What’s truly exciting is that these robots can now generally pick up any household item For instance, we asked it to “Pick up the desert item” Helix identifies the toy cactus, chooses the nearest hand, and executes precise motor commands to grasp it securely! https://x.com/adcock_brett/status/1892579136956186947
“Physical AI is a civilizational technology. In a few years, intelligent robots will be as many as iPhones. I’d love to see your coolest open-source robotics project: model, simulation, hardware, you name it! Reply with links, and I’ll pick one winner for the NVIDIA GTC Golden https://x.com/DrJimFan/status/1892980857255928292
Grok releases reasoning features as part of Grok 3.
xAI’s Grok 3 continues to showcase new features. Reasoning, the equivalent of OpenAI’s o3, is now part of Grok, similar to DeepSeek and the research models (above). Reasoning has become part of every new model release; this winter will be remembered as the moment AI models began to combine research, reasoning, agency, and computer operation. The next six months should be fairly intense.
“xAI unveiled Grok-3, its new AI model with reasoning capabilities, achieving SoTA performance across math, science, and coding On launch, they also revealed that the model scored #1 on Chatbot Arena above all other models Impressive speed https://x.com/adcock_brett/status/1893708244100530495
Grok 3 Beta — The Age of Reasoning Agents https://x.ai/blog/grok-3
Grok Plans Voice Mode
One of OpenAI’s least appreciated features is the voice mode within its mobile app. Apparently Grok 3 will add a multimodal audio interface in a release coming in the next week or so. Multimodality and audio are very important as robots develop and interact with humans. Clearly, xAI would love for both its robots and its cars to respond to voice commands.
“Grok 3 Voice Mode, following repeated, interrupting requests to yell louder, lets out an inhuman 30-second scream, insults me, and hangs up https://x.com/goodside/status/1893932239718691167
“Grok 3 voice in Romantic Mode, prompt: “Hi” https://x.com/Teknium1/status/1893818697338290484
Two huge investment announcements this week
Alibaba and Apple both announced major investments this week. Worth a look at the headlines.
Alibaba to invest more than $52 billion in AI over next 3 years | Reuters https://www.reuters.com/technology/artificial-intelligence/alibaba-invest-more-than-52-billion-ai-over-next-3-years-2025-02-24/
Apple will spend more than $500 billion in the U.S. over the next four years – Apple https://www.apple.com/newsroom/2025/02/apple-will-spend-more-than-500-billion-usd-in-the-us-over-the-next-four-years/
The Changing Face of SEO
As AI models start to include deep research and computer operation, they are bound to have a major impact on search engine optimization. Content that used to be centered on human browsing and search result pages (a.k.a. garbage) will be gobbled up into one-page summaries and conversational dialogue. This may be a huge transition over the next 6 to 12 months. I continue to think that page views and web browsers are going to disappear. With Perplexity launching the Comet browser, we will have our first test.
The Future of SEO: How Big Data and AI Are Changing Google’s Ranking Factors – Big Data Analytics News https://bigdataanalyticsnews.com/how-big-data-ai-changing-google-ranking-factors/
Is Google’s video model Veo now the best?
I only saw one headline about this last week; however, it’s one to keep an eye on. I’ve largely stopped playing around with video, but perhaps it’s time to revisit the engines. We’ll see whether this news shows up more in next week’s headlines.
“Google Veo 2 has surpassed OpenAI’s Sora and Kling 1.5 Pro as the new leader in Artificial Analysis Video Arena! Google quietly launched their Veo 2 model via partner services @fal.ai and @freepik (not yet publicly accessible on Vertex). We have observed strengths in rendering https://x.com/ArtificialAnlys/status/1894450344580846043
11 AI Visuals and Charts: Week Ending February 28, 2025
“Have you ever wanted to see Sutro Tower up close? Now you can! I’m releasing a very high quality model of the tower that you can easily fly through on your own, thanks to very cool developments in Gaussian splatting” https://x.com/fulligin/status/1892685973731061937
“The internet going wild with the microwave AI filter — prolly because it’s pure nightmare fuel 😭 https://x.com/bilawalsidhu/status/1892789671672918425
“”Great now make a new snake game that is aware of the snake game you just made” That was it, the only prompt… https://x.com/emollick/status/1894480971648377198
Wan_AI Creative Drawing_AI Painting_Artificial Intelligence_Large Model Wan is an advanced and powerful visual generation model developed by Tongyi Lab of Alibaba Group. It can generate videos based on text, images, and other control signals. https://wanxai.com/
“Grok 3’s voice mode has no censorship. It’s quite surprising. Grok Voice Chat with ChatGPT” https://x.com/arrakis_ai/status/1892858641234993381
“”What do you think, you mechanical piece of shit?” (bis) (bis) (bis) It’s fun but @xai guys you need to work a little bit more at the “keep conversation going” prompt diversity lol” / X https://x.com/giffmana/status/1894310343658151961
“claude 3.7 sonnet same prompt Write a p5.js script that simulates 100 colorful balls bouncing inside a sphere. Each ball should leave behind a fading trail showing its recent path. The container sphere should rotate slowly. Make sure to implement proper collision detection so https://x.com/_akhaliq/status/1894106278185898489
“”Claude 3.7, do the AGI unicorn thing in the PDF but make it like 10x more impressive to really show those sparks, don’t limit yourself to TikZ or even images” (I pasted in the Sparks of AGI paper). Here is what it did https://x.com/emollick/status/1894127935814066268
“Snake games are a bad test of AI beca- “Claude 3.7, make a snake game, but the snake is self-aware it is in a game and trying to escape and interesting things happen as a result” This is all AI (one prompt + a request to make special things happen faster). Matrix mode at 0:55 https://x.com/emollick/status/1894441728175677837
“Watching Claude play Pokemon is a delight.” / X https://x.com/AmandaAskell/status/1894432355622031661
“Alibaba’s Tongyi Lab dropped Wan2.1, a suite of advanced AI models for video —Beats SOTA models, generates at 2.5x speed —Excels in complex motion, real-world physics, Chinese & English text —Includes editing tools, video-to-audio, and a 1.3B version https://x.com/rowancheung/status/1894691410856607840
Top 31 Links of The Week – Organized by Category
Agents and Copilots
Poe https://poe.com/blog/introducing-poe-apps
Exa Websets https://websets.exa.ai/
“Helix is a novel architecture, “System 1, System 2” > System 2 is an internet-pretrained 7B parameter VLM (big brain) > System 1 is an 80M parameter visuomotor policy (fast control) Each system runs on onboard embedded GPUs, making it immediately ready for commercial https://x.com/adcock_brett/status/1892579188424712682
“Extracting structured data from unstructured documents is a huge use case for our customers. We’ve just made it a lot simpler with LlamaExtract, now in public beta! LlamaExtract enables you to: ➡️ Define and customize schemas for data extraction, either programmatically or in https://x.com/llama_index/status/1895164615010722233
“Today, we launched https://x.com/markrachapoom/status/1892677289004954045
Y Combinator on X: “https://t.co/OOskqTSIEg is transforming legal lead qualification with AI voice agents. They offer real-time lead qualification, 24/7 availability, and seamless CRM integration at a fraction of the cost. https://t.co/1Ilb7HXPVP Congrats on the launch @kumareth + @markrachapoom! https://t.co/7MWRQTQAN3” / X https://x.com/ycombinator/status/1892674195743821994
“This is the most surprising (and disconcerting) LLM alignment result I’ve seen in a while. Worth a look:” / X https://x.com/sleepinyourhat/status/1894446138625052838
“Grok is launching AI agent soon https://x.com/EHuanglu/status/1891715044246692330
“Introducing Proxy 1.0 – the world’s most capable web-browsing agent. https://x.com/convergence_ai_/status/1892129466610073931
“Really interesting situation has returned where free-to-access AI is very close to the frontier. You can get o1 for free through Copilot, Advanced Voice free from ChatGPT, a few free tries at the best coding AI via Claude 3.7 and a very solid free Deep Research through Grok.” / X https://x.com/emollick/status/1894840057170657655
Anthropic
Exclusive | AI Startup Anthropic Finalizing $3.5 Billion Funding Round – WSJ https://www.wsj.com/tech/ai/ai-startup-anthropic-finalizing-3-5-billion-funding-round-020e320d
“Claude Code is very useful, but it can still get confused. A few quick tips from my experience coding with it at Anthropic 👉 1) Work from a clean commit so it’s easy to reset all the changes. Often I want to back up and explain it from scratch a different way.” / X https://x.com/catherineols/status/1894104736506548602
“Yesterday @AnthropicAI released Claude 3.7 with a focus on Coding. Here is a TL:DR; 🧵 > Excels at coding tasks esp. JS/TS and Python, many good examples and vibes on social media; State-of-the-art on SWE-bench verified (62.3%/70.2%) > Highest score on the Aider Polyglot https://x.com/_philschmid/status/1894301548101980532
Business AI
There’s Something Very Weird About This $30 Billion AI Startup by a Man Who Said Neural Networks May Already Be Conscious https://futurism.com/ilya-sutskever-safe-superintelligence-product
“This is the AI graph that big companies (and many startups) haven’t yet absorbed. Models are getting both better and cheaper at very fast rate. You either need to skate towards where the puck is going, or else make a bet on when AI will hit a wall. Don’t assume a static world. https://x.com/emollick/status/1894196972640506370
Saudi Arabia’s Neom Signs $5 Billion Deal for AI Data Center – Bloomberg https://www.bloomberg.com/news/articles/2025-02-11/saudi-arabia-s-neom-signs-5-billion-deal-for-ai-data-center
Chips and Hardware
“If you could only learn one thing that will be relevant for the next 10-20 years, focus on learning how to deal with data. The future is not about faster hardware, smarter algorithms, or better ideas. The future is about DATA, and those who know how to deal with it will stay https://x.com/svpino/status/1895107722460438553
“Arc Institute and Nvidia dropped the largest AI for biology—in another open-source win! Trained on 9T+ DNA blocks from 128K species (entire tree of life), Evo 2 achieved 90% accuracy in predicting cancer-related gene mutations It can even design genomes https://x.com/rowancheung/status/1892507651000480049
Ethics/Legal/Security
Don’t gift our work to AI billionaires: Mark Haddon, Michael Rosen and other creatives urge government | Artificial intelligence (AI) | The Guardian https://www.theguardian.com/technology/2025/feb/23/dont-gift-our-work-to-ai-billionaires-mark-haddon-michal-rosen-and-other-creatives-urge-government
Elton John calls for UK copyright rules rethink to protect creators from AI | Artificial intelligence (AI) | The Guardian https://www.theguardian.com/technology/2025/feb/22/elton-john-calls-for-uk-copyright-rules-rethink-to-protect-creators-from-ai
Local Models
“Announcing Minions: a method that pairs small language models on a laptop (@ollama) with frontier models in the cloud—preserving 98% of the accuracy for <18% of the cost! Led by @Avanika15, @EyubogluSabri, and @dan_biderman (@HazyResearch), with @togethercompute’s @avnermay. 🧵 https://x.com/togethercompute/status/1894392054043578373
“Github 👨🔧: AI app store powered by 24/7 desktop history. open source | 100% local | dev friendly | 24/7 screen, mic recording ————- → 24/7 local screen and microphone recording. → API access to indexed desktop activity history. → Plugin system (“pipes”) for https://x.com/rohanpaul_ai/status/1893075129254699327
Meta AI
“Meta just dropped SWE-RL Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution Trained on top of Llama 3, our resulting reasoning model, Llama3-SWE-RL-70B, achieves a 41.0% solve rate on SWE-bench Verified — a human-verified collection of real-world https://x.com/_akhaliq/status/1894584315352076608
“Meta PARTNR dataset and code ⬇️ https://x.com/AIatMeta/status/1894524604900938078
Microsoft releases new Phi models optimized for multimodal processing, efficiency – SiliconANGLE https://siliconangle.com/2025/02/26/microsoft-releases-new-phi-models-optimized-multimodal-processing-efficiency/
Microsoft AI
microsoft/Magma-8B · Hugging Face https://huggingface.co/microsoft/Magma-8B
Multimodality
I Used ChatGPT as My CAPTCHA Solver—It Got Weird https://www.makeuseof.com/chatgpt-solve-captcha/
OpenAI
“@bradlightcap huge congrats, open models can help it go even higher: https://x.com/_akhaliq/status/1892600666276671710
Open Source
“Grok 3 is the first (and only) model to solve this non-riddle. https://x.com/emollick/status/1894526521835946353
Science and Medicine
“Introducing Glass 4.0 – New The newest version of our AI clinical decision support platform features: – Continuous chat – Advanced reasoning – 275x expanded medical literature coverage – Increased response speed Try 4.0 now at https://x.com/GlassHealthHQ/status/1892574802327523360
Tech Papers
“Introducing Helix, our newest AI that thinks like a human To bring robots into homes, we need a step change in capabilities Helix can generalize to any household item 🧵 https://x.com/adcock_brett/status/1892577936869327233




