About This Week’s Covers

This week’s newsletter category covers honor the passing of musical great Brian Wilson. Beyond music, Brian had an exceptionally tough life and I admire him as an incredibly gentle soul.

Since Brian was such a big part of The Beach Boys, I asked OpenAI’s o3 to create a rubric I could use for batch-producing category covers in the spirit of the iconic album Pet Sounds.

This week, I didn’t change the core of the rubric from the prior week. I’m aiming to be able to simply give o3 a theme and have the entire image process happen automatically.

This week, the new and improved rubric let me give it the Pet Sounds theme and 46 one-word category names. From those 46 single words and the theme, the API returned 46 album covers inspired by Pet Sounds. A few turned out pretty well!
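For the curious, the batch step amounts to a small loop over the category words. Here’s a minimal sketch using the OpenAI Python SDK; the rubric text and model choice are illustrative stand-ins, not my exact setup:

```python
RUBRIC = (
    "Square album cover in the spirit of The Beach Boys' Pet Sounds: "
    "hand-lettered title, warm film tones, animals in frame."
)  # stand-in; the real rubric is much longer

def cover_prompt(theme: str, word: str) -> str:
    # One prompt per category: shared rubric and theme, single category word
    return f"{RUBRIC}\nTheme: {theme}\nCategory: {word}"

def make_covers(theme: str, words: list[str]) -> list[str]:
    from openai import OpenAI  # pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    urls = []
    for word in words:
        image = client.images.generate(
            model="dall-e-3",  # illustrative model choice
            prompt=cover_prompt(theme, word),
            n=1,
        )
        urls.append(image.data[0].url)
    return urls
```

Feed it the theme plus the 46 words and you get 46 covers back, no babysitting required.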

I’ve included my favorite six of the covers below:

This Week By The Numbers

Total Organized Headlines: 654

This Week’s Executive Summaries

I’m on my summer schedule and two weeks behind. I wouldn’t trade it for the world as I got to make the trip to New York City to say goodbye to my college roommate and incredible friend, Mike Bernstein, who has transitioned to hospice. I enjoy my hobbies, but this is a reminder that loved ones are the most important. Take a moment to hug the people you love!

Here are the big stories in artificial intelligence for the week ending June 13, 2025:

The biggest story is a Rorschach test to decide where you stand on AI.

OpenAI CEO Sam Altman posted a blog where he states that AI has already passed human capabilities and we’ve begun a “gentle singularity”. The singularity is a term I learned from Ray Kurzweil 30 years ago, describing the point when the pace of tech acceleration becomes too fast for humans and society to keep up. That’s a bit of an antiquated definition, and the new school might define it as the moment technology exceeds human capabilities. In that case, the term artificial general intelligence is hot-swappable with the singularity. Altman claims that in many ways, ChatGPT is already more powerful than any human who has ever lived and we’re actively crossing into the singularity.

While many may read Altman as hyperbolic, or as a salesman, his short-term predictions seem grounded.

2025 will certainly bring helpful AI agents into the common dialogue. 2026 will certainly include medical breakthroughs, and 2027 will be the year people start to notice robots. I actually think Sam is completely right on those predictions, if not the specific timing.

Altman gets a bit lofty when he predicts the next decade as an era when intelligence and energy become abundant. He is right to call out the risks of job displacement. The transition from dystopia to utopia may be too big a gap for humanity to cross peacefully. Just another relaxing week in AI headline news!

I saw two good commentaries about the human brain versus artificial intelligence this week. One was a graduation speech by Ilya Sutskever where he bluntly calls out the human brain as a quantifiable number of neurons in a neural network. He explains that digital neural networks will eventually replicate the same functions. His matter-of-factness has always been refreshing to me, and I don’t disagree with him.

The second commentary was an older clip of AI pioneer Geoffrey Hinton. Hinton highlighted two key advantages that humans have over AI: our brain operates on only 30 W of power while containing almost 100 trillion connections. That’s about 100 times larger than today’s biggest AI models and a lot less power.

Taken together, these commentaries balance much of the current conversation: the brain is indeed more powerful than AI in many ways, but AI, given time and scale, will eventually catch up, if by nothing else than brute force.

Meta is in the news this week for attempting to create a massively expensive 50-person super-intelligence team. It’s incredible.

The goal itself is not the only headline; so is the fact that Meta is offering annual salaries of $2 million to $10 million to poach top talent.

Along those lines, Meta invested $15 billion in data training company Scale AI, and as part of the deal, Scale’s CEO Alexandr Wang is joining Meta as part of the super-intelligence team.

One of the top research scientists at Meta, François Fleuret, has predicted that all digital content will eventually be touched by AI, which will mean that the only authentic connections will be human. Fleuret believes that trust will become connected to private offline clubs and exclusive groups that may drive reliable information into the hands of a small group.

“GenAI will remove any actionable information about someone that comes from non-real-life interactions. We will have to rely on ‘rings of trust,’ and my bet is on the revival/strengthening of the role of private clubs, resulting in a loss of opportunities for out-group people.” -François Fleuret

Moving into more practical headlines, we’re seeing investment in AI tools that allow plain language discovery and mining of data across enterprise data silos.

In plain English: if you’re facing a massive health issue and require five or six specialists at various locations, you know that healthcare has historically been a system of specialists who don’t always communicate with one another. While online portals have helped quite a bit, imagine being able to have a dialogue with an AI model that remembers all the medical terms, vernacular, and prescriptions across all the people you’ve seen for the past ten years.

Not only would it be great to have a conversation about your medical history with an expert generalist who knows everything about your history (replacing your need to take notes like a mad person after every doctor’s visit), but it would also create opportunities for early detection of issues that specialists might miss in their silos.

This is a great opportunity for almost any large company to be able to have a cohesive layer across all of their different departments. Health care is an easy example, but pretty much every large company could use a badass intranet that can have plain English discussions about all facets of the operation.
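The pattern behind these tools is easy to sketch: pull records out of each silo, stitch them into one chronological context, and hand that to a model along with the plain-English question. Here’s a toy version (the silo names and records are made up, and a real product would query Salesforce, Slack, an EHR, and so on instead of dicts):

```python
# Toy "consolidation layer": each silo is just a dict here.
SILOS = {
    "cardiology": [{"date": "2024-03-01", "note": "Started beta blocker."}],
    "pharmacy":   [{"date": "2024-03-02", "note": "Filled metoprolol 25mg."}],
    "primary":    [{"date": "2024-06-10", "note": "BP improved at follow-up."}],
}

def unified_context(silos: dict) -> str:
    # Flatten every silo's records into one chronological transcript
    records = [
        (r["date"], name, r["note"])
        for name, recs in silos.items()
        for r in recs
    ]
    records.sort()
    return "\n".join(f"[{d}] {src}: {note}" for d, src, note in records)

def build_prompt(question: str, silos: dict) -> str:
    # The model sees the full picture instead of one department's slice
    return (
        f"Patient history across all providers:\n{unified_context(silos)}"
        f"\n\nQuestion: {question}"
    )

print(build_prompt("Any follow-ups needed on the March prescription?", SILOS))
```

The whole value is in that merge step: once every department’s notes sit in one transcript, the model can spot connections no single silo contains.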

One of the coolest headlines for me personally this week, directly related to enterprise AI, is a company called Glean, which secured $150 million in funding at a $7.2 billion valuation. Glean helps companies provide this type of conversational search and analysis across internal data.

One of my longest running professional peers, Steve Wilson, works at Glean. I’ve known him for over 20 years and I’m extremely happy that he is at such a cool place. Glean is reaching escape velocity in a very crowded space. Back in the day, Steve worked at MarketLive, an e-commerce platform that hosted peers of mine at Red Envelope, Athlete, Title Nine, Alibris, and a bunch of old school e-commerce sites.

If you’re an educator or have concerns about the future of our children, this next headline is going to resonate with you. This week in China, 13.4 million students are taking a high-stakes college entrance exam. The Chinese government has ordered all AI companies, like Alibaba (aka Qwen) and Tencent, to disable their tools during the exam period to prevent cheating. What a powerful reminder of the risks that come with the tools we are using, as well as insight into a very different form of government.

A few months ago, Uber built developer agents which handle hundreds of millions of lines of code for 5,000 developers. These agents produce thousands of automated fixes every day. Uber reports that the agents have saved the company over 21,000 hours by handling routine code maintenance and bug fixes. There’s a YouTube video that came out this week where the creators of the agents talk about the system and how it works. Links below.
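If you’re wondering what an automated code-fix agent actually does, here’s a toy illustration of the detect-and-patch step. A hard-coded lint rule stands in for the LLM that real systems use to propose patches, so this is the shape of the pattern, not Uber’s actual pipeline:

```python
import re

# Toy "code fix agent": patches one well-known Python anti-pattern.
# Real systems use an LLM to propose the patch instead of a fixed rule.
RULE = (re.compile(r"==\s*None\b"), "is None")

def propose_fixes(source: str) -> tuple[str, int]:
    # Return the patched source plus how many fixes were applied
    pattern, replacement = RULE
    fixed, count = pattern.subn(replacement, source)
    return fixed, count

before = "def f(x):\n    if x == None:\n        return 0\n"
after, n = propose_fixes(before)
print(f"applied {n} fix(es)")
```

Scale that loop to hundreds of rules (or a model) across hundreds of millions of lines, gate it behind code review, and the 21,000 saved hours start to make sense.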

Asset management giant BlackRock deployed AI agents across their internal tech platform, which manages $11 trillion in assets and supports 4,000 engineers worldwide. BlackRock built the agents using LangGraph, the open-source agent framework from the LangChain team.

For those of you following the “death of the page view” or “end of the web browser”, Google has started to roll out data visualization features in their AI search results that automatically create interactive graphs for stock and mutual fund queries. These visualizations are built on the fly using Gemini’s reasoning capabilities to build whatever charts or graphs “Google thinks” best fits the data question. This is bigger than it might initially seem as user interfaces will suddenly just appear in front of us rather than being built in advance.

OpenAI reduced their o3 model pricing by 80% while maintaining the same performance. That is absolutely incredible and could easily be the top story. Most power users rely on the frontier model APIs, and if you can get the same performance for 80% less cost, that is going to be an offer you can’t refuse. The pricing is a shot across the bow to Google and Anthropic.

Two years ago, I would’ve told you that the top robotics company was a company called 1X. They had a really cool-looking robot called NEO and they were partnered with OpenAI. Since then, Unitree, Figure, and NVIDIA have taken the lead. It made me happy to see 1X back in the news with a new on-board multimodal model that enables plain-language instructions to the robot as well as intuitive navigation of real-world environments without pre-written, explicit coding directions. It reminds me a bit of NVIDIA’s Cosmos model or, even more closely, Figure’s Helix on-board model.

Speaking of Figure and robotics news, Figure CEO Brett Adcock announced that the company’s new robot will cost 93% less to produce than their previous model.

Between this and OpenAI’s o3 cost reduction, you can see the quickening effects starting to happen.

Amazon has created an obstacle course to evaluate robots it could deploy in Rivian delivery vans to start handling deliveries without humans.

Meta released an AI system that learns how the physical world works by watching videos, similar to how children develop intuitions about gravity and object movements by watching the world around them. This is similar to what Nvidia is doing with their world models and simulations.

In the world of medicine, a new study found that doctors using a custom GPT-4 system diagnosed patients more accurately than doctors who used traditional tools. The plot twist is that *the AI system by itself performed just as well as the humans* suggesting that doctors might not be adding value.

For the past two years, I’ve been caught up with the trend of AI avatars, where a 3D visual representative of a person can be created from just a single image. I’ve collected over 400 links on the topic over the past two years.

One of the most popular technologies in the space is called Gaussian splatting. A guy named Bilawal Sidhu is the best person to follow on splatting. Another expert, Jack Saunders, is my pick for avatar news.

This week, Apple launched something similar to Gaussian splatting in their Persona product: digital avatars that can represent people during video calls on the Vision Pro headset. The resolution is spectacular. Link below.

Instagram is testing a feature that automatically converts any static photo into a 3-D stereoscopic image. If that doesn’t say Gaussian splatting, then what does! Hello, Viggle?!

Starbucks launched an AI assistant for baristas called the Green Dot Assist. New employees can look up any kind of recipe or directions and managers can find repair directions for broken machinery or advanced problems that may occur in a store.

OpenAI updated their voice mode to be a lot more expressive. You can ask the voice to sound nervous, excited, or jittery, and the new voice features can capture those emotions. I’ve included some demos below.

Nvidia usually gets the microchip attention, but AMD just announced a new generation of chips that are less expensive. Most notably AMD is partnering with OpenAI, Tesla, Meta, and Oracle. That news is certainly worth following.

Qualcomm purchased a British semiconductor company called Alphawave for $2.4 billion in cash. That’s a big deal because Qualcomm usually makes smartphone chips and now will be able to move into data centers, high-speed connectivity, and AI hardware.

Perplexity has launched an AI browser called Comet that appears to be gaining loyalty among power users. It integrates with browser history, can read emails, and basically thinks ahead of people’s intentions in a way that makes people feel empowered. It’s in limited beta right now and is competing with The Browser Company’s newly released browser, Dia, which does many of the same things. My gut says these browsers will be eaten by the frontier models, who will keep people from leaving chats and remove the need for browsers at all.

Apple has further delayed its overhaul of Siri’s AI upgrades. Apple is now targeting Spring 2026. I maintain my theory that Apple is delaying the inevitable end of apps, which are a huge income source.

New benchmarks show that Apple’s models are lagging behind in on-device artificial intelligence. Google’s Gemma and Qwen are both stronger than Apple’s local models. I’m struggling to believe Apple’s not sitting on a plan, grounded in their Ferret large action model, which has been out for almost two years now.

Google’s Veo 3 continues to drive viral memes with its state of the art video creation tool.

Disney and NBC Universal have filed a copyright infringement lawsuit against image creation company MidJourney. This comes during the same week that MidJourney has teased they will launch a video tool in the coming days. I’m not sure MidJourney has the pockets to fight Disney in court, as they are self-funded.

All this and more stories are in this week’s executive summaries, below. Never a dull moment in the world of AI. Hug your family!

OpenAI CEO Sam Altman says artificial superintelligence has already begun
Sam Altman posted in his blog that humanity has passed the point where AI systems surpass human capabilities in many areas, marking the start of what he calls a “gentle singularity.” “In some big sense, ChatGPT is already more powerful than any human who has ever lived.” He predicts 2025 will bring AI agents capable of real cognitive work, 2026 will see systems making novel scientific discoveries, and 2027 may introduce robots handling physical world tasks. ChatGPT already serves hundreds of millions of users daily, and scientists report being two to three times more productive with AI assistance. Altman envisions the 2030s as a period when intelligence and energy become abundant resources, potentially removing the fundamental constraints on human progress. He acknowledges major challenges ahead, including job displacement and the need to solve AI alignment problems, but believes society will adapt as it has through previous technological revolutions like the industrial age.

“Intelligence too cheap to meter is well within grasp” – Sam Altman https://x.com/scaling01/status/1932551669134377357

Sam Altman (CEO of OpenAI): “We do not know how far beyond human-level intelligence we can go, but we are about to find out” https://x.com/scaling01/status/1932550566036804087

That Altman essay… One thing you can definitely say about him and Dario is that they are making very bold, very testable predictions. We will know whether they are right or wrong in a remarkably short time https://x.com/emollick/status/1932564109477794146

The Gentle Singularity – Sam Altman https://blog.samaltman.com/the-gentle-singularity

Geoffrey Hinton explains why human brains still outperform AI
AI pioneer Geoffrey Hinton highlighted two key advantages human brains maintain over current AI systems. The brain operates on just 30 watts of power while containing roughly 100 trillion connections, making it nearly 100 times larger than today’s biggest AI models, which have about one trillion connections. This massive efficiency gap suggests that despite recent AI advances, human brains remain far superior at processing information with minimal energy consumption compared to power-hungry AI systems.

Geoffrey Hinton on our Brain vs AI Models — From ‘Curt Jaimungal’ YT channel. https://x.com/rohanpaul_ai/status/1931328195803959774

Ilya Sutskever predicts AI will match all human abilities
OpenAI co-founder Ilya Sutskever told University of Toronto students that AI will eventually perform every task humans can do, not just some of them. He argues this is inevitable because human brains are biological computers, and digital computers should theoretically be able to replicate the same functions. Sutskever, who left OpenAI earlier this year to start his own AI safety company, presented this as a fundamental principle rather than speculation. His reasoning centers on the idea that if biological neural networks can process information and learn skills, artificial neural networks will reach the same capabilities across all domains of human intelligence.

Ilya Sutskever, in his speech at UToronto 2 days ago: “The day will come when AI will do all the things we can do.” “The reason is the brain is a biological computer, so why can’t the digital computer do the same things?” It’s funny that we are debating if AI can “truly think” https://x.com/Yuchenj_UW/status/1931883302623084719

Meta offering up to $10 million per year to poach AI researchers from competitors
Meta is offering over $2 million annually to AI talent, with some packages reaching $10 million per year for its new “superintelligence” team, but still losing candidates to OpenAI and Anthropic. The company hired top Google DeepMind researcher Jack Rae and Johan Schalkwyk from voice assistant app Sesame, with plans to build a 50-person team including a chief scientist. Despite the massive compensation packages, Anthropic maintains an 80% retention rate after two years and has become the top destination for AI researchers. The talent war reflects the intense competition among tech giants to secure the limited pool of experts capable of advancing artificial intelligence research.

It’s true. The Meta offers for the “superintelligence” team are actually insane. If you work at the big AI labs, Zuck is personally negotiating $10M+/yr in cold hard liquid money. I’ve never seen anything like it. https://x.com/deedydas/status/1932828204575961477

Meta’s Mark Zuckerberg Creating New Superintelligence AI Team – Bloomberg https://www.bloomberg.com/news/articles/2025-06-10/zuckerberg-recruits-new-superintelligence-ai-group-at-meta?embedded-checkout=true

NEW: More details on Meta’s new “superintelligence” team. Meta has hired top Google DeepMind researcher @jack_w_rae and Johan Schalkwyk, ML lead of popular voice assistant app Sesame. Plans to hire up to 50 ppl, including a chief scientist, per sources. w/ @KurtWagner8 https://x.com/shiringhaffary/status/1932852606851789278

RT @deedydas: Meta is currently offering $2M+/yr in offers for AI talent and still losing them to OpenAI and Anthropic. Heard ~3 such cases… https://x.com/slashML/status/1932441521049006586

Meta invests $15 billion in Scale AI (data training company) as part of AGI push
Meta is finalizing a $15 billion investment in Scale AI, giving the social media giant a 49% stake in the data training company. As part of the deal, Scale AI CEO Alexandr Wang will join a 50-person team that Meta CEO Mark Zuckerberg is assembling to accelerate the company’s pursuit of artificial general intelligence, AI that matches or exceeds human thinking abilities. The investment represents Meta’s largest external bet as it tries to catch up in the AI race with competitors like OpenAI, Google, and Microsoft, who have each backed major AI startups with billions in funding. Scale AI provides training data to all the major AI companies and expects its revenue to double to $2 billion in 2025, making it a valuable partner for Meta’s AGI ambitions.

Meta is reportedly making a $15 billion bet on AGI | The Verge https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg

RT @coryweinberg: Scoop on Meta/Scale deal deets: – Meta investing >$14B in Scale, owning 49% of the startup. – Alex Wang and other top Sc… https://x.com/steph_palazzolo/status/1932588243897270454

Scale AI founder Wang announces exit for Meta part of $14 billion deal https://www.cnbc.com/2025/06/12/scale-ai-founder-wang-announces-exit-for-meta-part-of-14-billion-deal.html

Zuckerberg makes Meta’s biggest bet on AI, $14 billion Scale AI deal https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html

Meta’s research scientist is worried AI-generated content may lead to exclusive real life networks only for the elite (or just go to a bar)
A Meta research scientist predicts that widespread AI-generated content will make it impossible to verify information about people through online interactions alone. François Fleuret argues this will force people to rely on “rings of trust” – closed networks where identities can be verified through real-world connections rather than digital ones. This shift could strengthen private clubs and exclusive groups as gatekeepers of reliable information, potentially limiting opportunities for people outside established social circles to build credibility or access networks.

“GenAI will remove any actionable information about someone that comes from non-real-life interactions. We will have to rely on ‘rings of trust’, and my bet is on a revival/strengthening of the role of private clubs, resulting in a loss of opportunities for out-group people.” https://x.com/francoisfleuret/status/1932683908715282670

AI is connecting scattered company data across departments – potentially huge for health care
Companies store information in dozens of different programs – customer data in Salesforce, conversations in Slack, files in Google Drive. New software like Glean and Innovaccer pulls all this scattered data into one place, letting AI systems see the full picture instead of just pieces. This means an AI assistant could read your meeting notes, check customer records, and update project files all at once, rather than working in isolated apps. This approach makes older business software less important since employees spend time in the unified system instead of jumping between individual programs. The real breakthrough is giving AI access to all company information simultaneously, enabling it to make connections and take actions that weren’t possible when data lived in separate silos. If health care alone can bridge specialists and caregivers across time and locations, imagine the benefits for patients and families.

The Rise of Systems of Consolidation Applications (I firmly believe this is the AI super power and will help health care in particular) https://selinasstack.substack.com/p/the-rise-of-systems-of-consolidation

Chinese tech giants disable AI tools during college entrance exams
Alibaba and Tencent temporarily shut down AI features like image recognition in their chatbots during China’s gaokao exam period to prevent cheating. The companies blocked tools that could analyze uploaded photos of test questions, with some apps like Tencent’s Yuanbao completely disabling image uploads during exam hours. The precaution affects 13.4 million students taking China’s high-stakes college entrance exam, which serves as the primary gateway to university admission. Unlike other countries that consider multiple factors like grades and essays, China relies almost entirely on gaokao scores for college placement, making any technological advantage potentially unfair to students without access to AI tools.

🚨 Alibaba, Tencent disable AI features during China’s gaokao exam AI features like image recognition in their chatbots (e.g., Qwen, Yuanbao, Kimi). Purpose: to prevent cheating via AI. → The move comes as China sees 13.4 million students sitting the exam this year. It’s a https://x.com/rohanpaul_ai/status/1932023557250515237

Chinese tech firms freeze AI tools in crackdown on exam cheats | China | The Guardian https://www.theguardian.com/world/2025/jun/09/chinese-tech-firms-freeze-ai-tools-exam-cheats-universities-gaokao

Uber’s AI agents fix thousands of code issues daily (YouTube presentation)
Uber built AI developer agents using LangGraph that automatically generate code fixes across their massive codebase. The system handles hundreds of millions of lines of code for 5,000 developers, producing thousands of automated fixes each day. The AI agents have saved the company over 21,000 hours of developer time by handling routine code maintenance and bug fixes that would otherwise require manual intervention from human programmers.

How @Uber used LangGraph to build AI developer agents that generate thousands of daily code fixes and saved 21,000+ hours — serving an organization of 5,000 developers working with hundreds of millions of lines of code. Watch their full session here: https://x.com/LangChainAI/status/1932493346498543898

BlackRock builds AI agents for $11 trillion investment platform
Asset management giant BlackRock deployed AI agents across their Aladdin platform, which manages $11 trillion in assets and supports over 4,000 engineers worldwide. The company built these production-ready agents using LangGraph to create an orchestration system called Aladdin Copilot that now operates across more than 100 applications globally.

⚡️Discover how $11 trillion dollar asset manager @BlackRock built production-ready AI agents that power their Aladdin platform across 100+ applications globally. Supporting over 4,000 engineers, Brennan Rosales and Pedro Vicente Valdez walk us through how they built Aladdin https://x.com/LangChainAI/status/1933216936730722794

Google adds interactive financial charts to AI Mode
Google rolled out data visualization features in AI Mode that create interactive graphs for stock and mutual fund queries. Users can now ask questions like “compare the stock performance of blue chip CPG companies in 2024” and receive custom charts with comprehensive explanations rather than having to research individual companies manually. The system uses Gemini’s reasoning capabilities to understand complex financial questions and can handle follow-up queries like asking about dividend payments from the same companies. The feature combines real-time and historical financial data with AI’s ability to determine the best way to present complex information visually.

Google Search AI Mode now offers data visualization and charts https://blog.google/products/search/ai-mode-data-visualization/

Burying the lede… OpenAI cuts o3 pricing by 80% and launches o3-pro model
OpenAI slashed o3 model pricing by 80% while maintaining the same performance, making it cheaper than competitors like Claude Sonnet 4 and Gemini 2.5 Pro. The company also released o3-pro, a more capable version that outperforms the standard o3 model across science, programming, and reasoning tasks, though early tests show mixed results on some benchmarks where o3-pro doesn’t always justify its higher cost.

After the o3 price reduction, we retested the o3-2025-04-16 model on ARC-AGI to determine whether its performance had changed. We compared the retest results with the original results and observed no difference in performance. https://x.com/arcprize/status/1932836756791177316

ARC-AGI-1 results for o3-pro and o3-high are in o3-pro (high) does not beat o3-high despite being slightly above 8 times more expensive https://x.com/scaling01/status/1932539254703321399

ARC-AGI-2 don’t look good for o3-pro (high) o3-pro (high) does not beat o3-high despite being 9 times more expensive https://x.com/scaling01/status/1932539573432684779

Been playing with o3-pro for a bit. It is quite smart. One problem it solved where every other model has failed is making word ladder from SPACE to EARTH. (Probably not contamination: the answer is different than the only online answer, which is for EARTH to SPACE in any case) https://x.com/emollick/status/1932533635984355792
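For context on the word-ladder puzzle Mollick mentions above: you change one letter at a time, with every intermediate step a valid word. Classical breadth-first search solves small instances trivially (the toy word list below is my own, not his SPACE-to-EARTH instance), which is exactly why it’s interesting as a test of model reasoning rather than of search:

```python
from collections import deque

def word_ladder(start, goal, words):
    # BFS over same-length words that differ by exactly one letter;
    # returns a shortest ladder from start to goal, or None.
    words = set(words) | {start, goal}
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for w in words - seen:
            if sum(a != b for a, b in zip(path[-1], w)) == 1:
                seen.add(w)
                queue.append(path + [w])
    return None

print(word_ladder("cold", "warm", ["cord", "card", "ward", "wary"]))
```

A model has to produce the same chain from its knowledge of valid English words, with no dictionary handed to it.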

God is hungry for Context: First thoughts on o3 pro https://www.latent.space/p/o3-pro

HOLY SHIT IT’S FUCKING REAL LET THE PRICE WARS BEGIN OpenAI updated their pricing page. o3 is now cheaper than GPT-4o, but more importantly, cheaper than Sonnet 4 and Gemini 2.5 Pro I would cry if I were Anthropic and Google! https://x.com/scaling01/status/1932488441100468438

i like this take: “The plan o3 gave us was plausible, reasonable; but the plan o3 Pro gave us was specific and rooted enough that it actually changed how we are thinking about our future.” https://x.com/sama/status/1932533208366608568

In expert evaluations, reviewers consistently prefer OpenAI o3-pro over o3, highlighting its improved performance in key domains—including science, education, programming, data analysis, and writing. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, https://x.com/OpenAI/status/1932530411651150013

o3 (left) got an idle question wrong, o3-pro nailed it. Good first impression 🙂 (Q: If a full-size crossbow shoots 160m, estimate what a half-size replica would shoot…) https://x.com/johnowhitaker/status/1932821323979632783

o3 was considerably less verbose in responses in our Artificial Analysis Intelligence Index eval set than Gemini 2.5 Pro & DeepSeek R1 but more than Claude 4 Opus https://x.com/ArtificialAnlys/status/1932489580592435301

o3-pro is much stronger than o3: https://x.com/gdb/status/1932561536268329463

o3-pro is rolling out now for all chatgpt pro users and in the api. it is really smart! i didnt believe the win rates relative to o3 the first time i saw them. https://x.com/sama/status/1932532561080975797

OpenAI just killed Claude 4 and Gemini 2.5 Pro if that 80% price drop is true (docs still show old pricing) It would also mean o3 would be cheaper than GPT-4o ? https://x.com/scaling01/status/1932437241592152161

OpenAI releases o3-pro, a souped-up version of its o3 AI reasoning model | TechCrunch https://techcrunch.com/2025/06/10/openai-releases-o3-pro-a-souped-up-version-of-its-o3-ai-reasoning-model/

RT @GregKamradt: After the o3 price drop it made sense to test it on SnakeBench wow – it’s the new #1 model (out of 71 tested) It made th… https://x.com/imjaredz/status/1932898036466004317

RT @WesRothMoney: o3 pro one-shotted the Tower of Hanoi 10 disk problem (one of the more contested problems in Apple’s “The Illusion of Th…” https://x.com/code_star/status/1932679839682867296

The 80% price drop of o3 came with no performance trade-offs https://x.com/emollick/status/1932846451681337674

we dropped the price of o3 by 80%!! excited to see what people will do with it now. think you’ll also be happy with o3-pro pricing for the performance :) https://x.com/sama/status/1932434606558462459

1X unveils Redwood AI for whole-body robot control – Begins chasing Figure robotics’s HELIX model
1X robotics released Redwood, an AI model that enables their NEO robot to perform complex tasks using coordinated movement and full body manipulation. The system can retrieve objects, open doors, and navigate homes while using its whole body – like bracing against a wall when pulling heavy doors or deciding whether to use one or both hands for different tasks. Redwood runs entirely on NEO’s onboard computer and learns from both successful and failed attempts during training. The company also launched a reinforcement learning controller that gives NEO human-like mobility, including walking in any direction, sitting, standing, kneeling, and climbing stairs using stereo vision. Unlike traditional robot systems that separate walking from arm movement, Redwood coordinates the robot’s entire body simultaneously, allowing more natural and effective interaction with household environments.

“Stair mode” in NEO’s RL controller engages stereo RGB vision to infer the height of the floor around it, combining this with proprioceptive history to anticipate each step’s height and plan precise, stable foot placement. https://x.com/TheHumanoidHub/status/1932869701774028831

1X announces Redwood AI Redwood is a vision-language transformer model that empowers NEO to perform end-to-end mobile manipulation tasks – retrieving objects, opening doors, and navigating complex home environments. Trained on a large dataset of teleoperated and autonomous https://x.com/TheHumanoidHub/status/1932481396335128821

1X announces their latest reinforcement learning (RL) controller, which unlocks NEO’s full-body mobility for home environments, enabling Redwood AI (1X’s in-house AI model) to interact with the physical world more naturally and broadly. The unified controller supports walking https://x.com/TheHumanoidHub/status/1932864588648964459

NEOs sighted in the natural world. It’s a teaser for 1X updates dropping throughout the week. https://x.com/TheHumanoidHub/status/1932115480342593867

Redwood AI | 1X https://www.1x.tech/discover/redwood-ai

Redwood NEO’s AI https://x.com/1x_tech/status/1932474830840082498

Something new soon. Something NEO soon. Stay tuned. https://x.com/TheHumanoidHub/status/1931849987744530923

Figure cuts robot costs by 93% with new model, continues to pioneer open-form, real-world navigation
Figure CEO Brett Adcock announced that the company’s Figure 03 robot costs 93% less to produce than the previous Figure 02 model. The robots use a single neural network that processes camera input directly into actions, eliminating the need for traditional programming approaches that Adcock says won’t work for handling varied, real-world tasks like sorting different packages. The company plans a shared learning system where all Figure robots will benefit when any individual robot learns something new, creating a collective intelligence that improves across the entire fleet. Adcock believes this approach could lead to shipping millions of robots for workplace and home use, potentially capturing a significant portion of the labor market.
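Adcock’s “camera frames in, actions out” framing describes a single end-to-end network with no hand-written logic in between. As a toy illustration only (this is not Figure’s actual Helix architecture; the layer sizes, frame shape, and action count here are invented), the idea might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

class PixelsToActionsPolicy:
    """Toy end-to-end visuomotor policy: raw pixels in, joint actions out."""

    def __init__(self, frame_shape=(16, 16, 3), n_actions=7, hidden=32):
        n_in = int(np.prod(frame_shape))
        # Randomly initialized weights stand in for a trained network.
        self.w1 = rng.normal(0, 0.1, (n_in, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, n_actions))

    def act(self, frame):
        x = frame.reshape(-1) / 255.0   # flatten and normalize pixels
        h = np.tanh(x @ self.w1)        # single hidden layer
        return np.tanh(h @ self.w2)     # actions bounded to [-1, 1]

policy = PixelsToActionsPolicy()
frame = rng.integers(0, 256, (16, 16, 3))  # one fake camera frame
action = policy.act(frame)                 # one action vector per frame
```

The point of the sketch is the shape of the interface: there is no package-sorting code anywhere, only a function from pixels to actions, which is why Adcock argues you “cannot code your way out” of unstructured tasks.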

Bloomberg: where are your robots 😂😂 https://x.com/adcock_brett/status/1930997923539828884

Brett Adcock says the Figure 03 robot is 93% cheaper than Figure 02; the company is well-capitalized to support sufficient training and inference compute, produce hundreds of thousands of robots, and hire the best people; the current focus is to keep the team small and fast. https://x.com/TheHumanoidHub/status/1931263196217909682

Brett Adcock says the latest autonomous demo of Figure 02 is fully end-to-end and uses a single neural network – camera frames in, actions out. “You cannot code your way out of this problem.” https://x.com/TheHumanoidHub/status/1931039140512145724

Figure CEO Brett Adcock says his robots will share a single brain. When one learns something new, they all get smarter. Want an employee or a home assistant? You’ll pick the one that learns from everyone’s mistakes. This is how the flywheel spins. And why he believes the first https://x.com/vitrupo/status/1931001200604037145

fyi, we just posted a deep-dive write-up on the 60min logistics video Full behind-the-scenes on the AI work powering the latest Helix release https://x.com/adcock_brett/status/1932192198025773371

Here’s my Bloomberg interview: → Figure’s business model → Best robot AI models today → Robots in the home and workforce Interview link in comment below https://x.com/adcock_brett/status/1932071569633022445

Millions. There is the potential of shipping millions of robots doing just stuff like this A little under half of GDP is human labor And our humanoid robots are just synthetic humans who can work longer and ultimately faster / more accurate https://x.com/adcock_brett/status/1931886869316538830

There’s no way to code your way out of this problem Every bag is different; Every pile of packages is different This is however perfect for neural networks https://x.com/adcock_brett/status/1932280240170250319

This clip from Figure shows what’s so exciting about end to end applications. Handling a messy, unstructured feed of objects like this would be very difficult to scale with traditional methods https://x.com/chris_j_paxton/status/1932072973847941505

This could be a winner-take-all industry Whoever builds the smartest & cheapest robot wins More robots = lower cost = more training data = smarter Helix No one wants the dumb robot in their home or workplace https://x.com/adcock_brett/status/1931091232912212078

This is 🤯 Figure 02 autonomously sorting and scanning packages, including deformable ones. The speed and dexterity are amazing. https://x.com/TheHumanoidHub/status/1930706769061564921

Uncut hour-long footage of Figure 02 autonomously transferring and flattening packages for a scanner down the line. The robot is using Figure’s Helix model, a generalist VLA that now incorporates upgrades in temporal memory and force feedback. https://x.com/TheHumanoidHub/status/1931394946768249324

Amazon tests walking robots for package delivery
Amazon is testing humanoid robots that walk on two legs to deliver packages, setting up an indoor obstacle course to evaluate models from companies like Unitree. The robots could eventually ride in Amazon’s Rivian delivery vans and handle deliveries autonomously without human assistance.

The Information: Amazon is gearing up to test humanoid robots that walk on two legs to deliver packages. The company has set up an indoor obstacle course to test robots from a variety of companies, including Unitree. Eventually, humanoid robots could ride in Amazon’s electric https://x.com/TheHumanoidHub/status/1930525233511117224

Meta releases V-JEPA 2 world model for physical reasoning (coming to compete with Nvidia’s Cosmos?)
Meta released an AI system that learns how the physical world works by watching video, similar to how children develop intuition about gravity and object movement. V-JEPA 2 can predict what happens when objects interact and use this understanding to control robots in unfamiliar environments without prior training on those specific tasks. The 1.2 billion-parameter model achieved state-of-the-art results on video understanding benchmarks and successfully guided robots to pick up and place objects with 65-80% success rates. The system represents a step toward AI that can think before acting, using an internal model of physics to plan sequences of actions. Meta trained the model on over 1 million hours of video plus robot control data, allowing it to work with new objects and environments without needing training data from each specific robot setup.
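The “think before acting” idea can be sketched as model-predictive control: imagine candidate action sequences with the world model, score the imagined outcomes, then execute only the best first step. Everything below is an illustrative assumption, not V-JEPA 2’s actual interface; the toy dynamics function stands in for a learned model:

```python
import numpy as np

rng = np.random.default_rng(1)

def world_model(state, action):
    # Assumed toy dynamics: the state drifts toward the action taken.
    # A real world model (like V-JEPA 2) would be learned from video.
    return state + 0.5 * action

def plan(state, goal, horizon=5, n_candidates=64):
    """Random-shooting planner: imagine rollouts, keep the best one."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, (horizon, state.size))
        s = state.copy()
        for a in seq:                    # roll the model forward "mentally"
            s = world_model(s, a)
        cost = np.linalg.norm(s - goal)  # distance of imagined end state from goal
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]                   # execute only the first action (MPC style)

state, goal = np.zeros(2), np.array([1.0, 1.0])
first_action = plan(state, goal)
```

This zero-shot quality is the headline claim: because the planning happens inside the model’s imagination, the same loop can be pointed at new objects and goals without task-specific training data.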

Introducing the V-JEPA 2 world model and new benchmarks for physical reasoning https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/

Our vision is for AI that uses world models to adapt in new and dynamic environments and efficiently learn new skills. We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model, https://x.com/AIatMeta/status/1932808881627148450

AI matches doctors in medical diagnosis accuracy
A study found that doctors using a custom GPT-4 system diagnosed patients more accurately than doctors using traditional research tools like Google and PubMed. However, the AI system by itself performed just as well as the human-AI combination, suggesting that doctors may not be adding value when paired with AI for diagnostic tasks.

New paper shows a familiar result on LLMs & medicine: Doctors given clinical vignettes produce significantly more accurate diagnoses when using a custom GPT built with the (obsolete) GPT-4 than doctors with Google/Pubmed but not AI. Yet AI alone is as accurate as doctors + AI. https://x.com/emollick/status/1931907652118069510

Glean raises $150M at $7.2B valuation for enterprise AI search
Glean secured $150 million in Series F funding at a $7.2 billion valuation to expand its AI platform that helps companies search and analyze their internal data. The company has grown rapidly, surpassing $100 million in annual recurring revenue and processing over 100 million AI agent actions per year for enterprise customers.

Glean raises $150M Series F at $7.2B valuation to transform how companies use AI to accelerate innovation https://www.glean.com/blog/glean-series-f-announcement

Apple improves Vision Pro avatars with realistic details – reminds me of Gaussian splatting and a trend I’ve been following for two years.
Apple updated Personas, the digital avatars that represent users during video calls on Vision Pro headsets. The Vision OS 2.6 update makes the avatars significantly more lifelike, with improved hair, eyelashes, and skin tone accuracy that closely match users’ actual appearances. The feature launched as a beta that created basic digital representations for video conferencing, but the latest version shows substantial visual improvements that make the avatars look more natural and realistic.

The update to Personas looks so impressive that they should’ve dedicated a few more minutes on this alone. This deserved more. https://x.com/chrisoffner3d/status/1932158766314893719

Instagram tests AI-powered 3D photo conversion feature
Instagram is beta testing a feature that automatically converts any static photo into a 3D stereoscopic image using AI. Users can toggle the 3D effect on and off with a dedicated button, and early testing shows the conversion works reliably across different types of photos without noticeable errors. The feature creates what are technically called “3DoF volumetric spatial conversions” rather than traditional stereoscopic 3D, since users can adjust the viewing angle after the photo is taken.
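For intuition, the classic way to turn a flat photo into a stereo pair is depth-image-based rendering: shift each pixel horizontally by a disparity that grows with nearness, once for each eye. Instagram’s actual AI pipeline is not public; this sketch simply assumes a precomputed depth map and fakes both the image and the depth:

```python
import numpy as np

def stereo_from_depth(image, depth, max_disparity=4):
    """Build a left/right eye pair by shifting pixels according to depth.

    depth is in [0, 1] with 0 = near, 1 = far; nearer pixels shift more.
    """
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(max_disparity * (1.0 - depth[y, x]))
            left[y, min(w - 1, x + d)] = image[y, x]   # shift right for left eye
            right[y, max(0, x - d)] = image[y, x]      # shift left for right eye
    return left, right

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # fake 8x8 grayscale photo
depth = np.linspace(0, 1, 64).reshape(8, 8)         # fake depth map
left_eye, right_eye = stereo_from_depth(img, depth)
```

A “3DoF volumetric” conversion as described in the item goes further than this fixed pair, since it lets the viewing angle change after capture, but the depth-driven pixel shift is the underlying idea.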

The beta 3D photo integration with Instagram is very well done! Every static photo becomes an AI generated stereoscopic 3D photo, and there is a “3D” button that lets you toggle the feature on and off for comparison. Every photo I looked at “just worked”, with no glaring… https://x.com/ID_AA_Carmack/status/1933199948759146810

Starbucks debuts AI assistant for baristas in stores
Starbucks launched Green Dot Assist, an AI chatbot that helps baristas get instant answers to work questions through in-store iPads. Instead of searching through manuals, employees can ask the system about drink recipes, procedures, equipment repair, or policies and receive immediate responses.

Meet Green Dot Assist: Starbucks Generative AI-Powered Coffeehouse Companion – About Starbucks (points for the vocal fry) https://about.starbucks.com/press/2025/meet-green-dot-assist-starbucks-generative-ai-powered-coffeehouse-companion/

ChatGPT adds expressive voice with human-like speech patterns
OpenAI updated ChatGPT’s Advanced Voice Mode to include deliberate speech imperfections like nervous laughs, “ums,” and vocal changes that make conversations feel more natural. The AI system now uses disfluencies and expressive delivery rather than perfect text-to-speech conversion, creating interactions that users describe as talking with a human friend instead of a machine. The update demonstrates how multimodal voice processing differs from traditional text-to-speech technology by incorporating the natural hesitations and vocal variations people use in real conversations.

ChatGPT voice is getting really good — https://x.com/gdb/status/1931456650336141752

Haven’t tried the updated Advanced Voice that was recently launched to all paid users in ChatGPT? Then take a listen below. Prompt: Wish me an awkward happy birthday. https://x.com/OpenAI/status/1932166285447856130

The new ChatGPT Advanced Voice Mode is super interesting – lots of deliberate use of disfluencies (nervous laughs, ums & ahs) and vocal changes make it feel much more human than the previous version Really shows the possibilities from multimodal voice vs most AI’s text-to-speech https://x.com/emollick/status/1931557886947205629

Wow, new expressive voice in ⁦⁦@ChatGPTapp⁩ doesn’t just talk, it performs. Feels less like an AI and more like a human friend. Nice work ⁦@OpenAI⁩ team. 🎤🎶🚀 https://x.com/shaunralston/status/1931361225046405233

AMD unveils MI400 AI chips with OpenAI partnership
AMD announced its next-generation Instinct MI400 AI chips that will ship next year, designed to work as unified “rack-scale” systems spanning entire data centers. The chips can be assembled into server racks called Helios that function like a single massive computer, allowing thousands of chips to work together seamlessly. OpenAI CEO Sam Altman appeared at AMD’s launch event to confirm his company will use the chips, giving AMD a notable endorsement against dominant rival Nvidia. AMD is positioning the MI400 series as a lower-cost alternative to Nvidia’s chips, with executives claiming “significant double-digit percentage savings” in both acquisition and operating costs. The company has secured major customers including OpenAI, Tesla, and Meta, while Oracle plans to offer clusters with over 131,000 of AMD’s current MI355X chips to its customers.

AMD reveals next-generation AI chips with OpenAI CEO Sam Altman https://www.cnbc.com/2025/06/12/amd-mi400-ai-chips-openai-sam-altman.html

Qualcomm acquires Alphawave for $2.4 billion to expand AI capabilities
Qualcomm agreed to buy British semiconductor company Alphawave for $2.4 billion in cash, paying a 96% premium over Alphawave’s March closing price. Alphawave specializes in high-speed connectivity chips used in data centers, which are critical infrastructure for AI applications. The deal signals Qualcomm’s strategy to diversify beyond traditional smartphone chips and compete with companies like Nvidia in the AI hardware space.
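For context on what a 96% premium means in practice, the arithmetic is simple: the offer values each share at nearly double its prior close. The closing price below is hypothetical, purely for illustration:

```python
# Deal-premium arithmetic (illustrative; the real per-share price isn't in this summary)
closing_price = 100.0                       # hypothetical March closing price
premium = 0.96                              # 96% premium reported for the deal
offer_price = closing_price * (1 + premium) # ~196: nearly twice the prior close
```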

Qualcomm strengthens AI portfolio with $2.4 billion Alphawave deal | Reuters https://www.reuters.com/world/uk/qualcomm-acquire-uks-alphawave-24-billion-2025-06-09/

Google’s Veo 3 continues to dominate AI video chatter and memes
Google’s Veo 3 AI system is producing notably higher-quality videos than competitors, with examples ranging from realistic firefight scenes to creative concepts like “a garlic bread grows eyes and runs along the table.” The system generates coherent videos even for unusual prompts such as “a shark made of crabs eats a crab made of sharks” and can create educational content like a polar bear explaining aviation history. Early testing shows both the full and faster versions of Veo 3 consistently produce usable results on first attempts.

Firefight video – Veo 3 is so good, it’s not even close right now https://x.com/bilawalsidhu/status/1932872975876788692

The faster, cheaper version of Veo 3 is solid compared to the big one: “a garlic bread grows eyes & runs along the table” “a shark made of crabs eats a crab made of sharks” “a man in a bunny hat throws a strawberry at a target” All picked from the first set of videos generated. https://x.com/emollick/status/1932140109698072766

Veo 3, A polar bear explaining why the Concorde failed https://x.com/_akhaliq/status/1933069477807337771

Nvidia’s Huang challenges Anthropic CEO on AI job displacement fears
Nvidia CEO Jensen Huang publicly disagreed with Anthropic CEO Dario Amodei’s prediction that AI could eliminate half of entry-level white-collar jobs within five years. Huang criticized Amodei’s approach, saying “He thinks AI is so scary, but only they should do it,” and argued that AI development should happen openly rather than being controlled by a few companies. Amodei had warned that AI could spike unemployment by 20% in sectors like law, finance, and consulting. Anthropic disputed Huang’s characterization, saying Amodei has actually advocated for industry-wide transparency standards and has never claimed only Anthropic should develop AI systems.

Jensen hammers Amodei in a recent article: Never trust anyone who says “we’re the special people and only we should be allowed to do this very important thing, because we’re the only ones who can be trusted and everyone else is too evil/stupid to be trusted with it.” https://x.com/jeremyphoward/status/1933597258047762657

Jensen Huang dismisses Anthropic CEO’s claim that AI will eliminate jobs: ‘He thinks AI is so scary, but only they should do it’ https://www.yahoo.com/news/jensen-huang-dismisses-anthropic-ceos-144719582.html

Perplexity’s Comet browser challenges Arc with AI agent capabilities
Continued from last week: Perplexity launched Comet, an AI browser that performs tasks autonomously rather than just chatting with users. The browser can read emails and draft responses, fill out complex forms using context from multiple pages, schedule meetings by checking different calendars, and search browsing history like a personal assistant. Early users report it feels like having a digital assistant that handles work independently. This puts Perplexity in direct competition with The Browser Company’s recently launched Dia browser, which focuses more on conversational AI features. While Arc built significant hype with $50 million in funding and strong design, user sentiment has reportedly turned negative, with many users feeling abandoned, while Comet users are actively requesting access to the limited beta.

Comet is peerless. Can’t wait to get everyone on it. More invites will go out this week as we near the final stage of testing ahead of the release. https://x.com/AravSrinivas/status/1933289407705960697

The Browser Company launches AI-first browser Dia in beta
The Browser Company released Dia, a web browser that integrates AI directly into the browsing experience rather than requiring users to visit separate AI websites. The browser uses a chatbot accessible through the URL bar that can search the web, summarize uploaded files, analyze open tabs, and even write drafts based on tab contents. Users can customize the AI’s tone and writing style through conversation, and an optional feature lets the browser use seven days of browsing history for context. The move comes after the company stopped developing its previous browser Arc, which failed to achieve mass adoption despite popularity among tech enthusiasts. Dia runs on Chromium and includes a “Skills” feature that lets users create code snippets for browser shortcuts, positioning itself as a way to streamline AI use within the primary workspace where most people spend their digital time.

The Browser Company launches its AI-first browser, Dia, in beta | TechCrunch https://techcrunch.com/2025/06/11/the-browser-company-launches-its-ai-first-browser-dia-in-beta/

Apple delays major Siri AI upgrade to spring 2026
Apple pushed back its planned overhaul of Siri’s AI capabilities by several months, now targeting a spring 2026 release instead of the original timeline. The delay affects Apple’s broader artificial intelligence strategy as the company works to integrate more advanced conversational abilities into its voice assistant. Apple executives have publicly addressed the postponement, defending their AI development approach and timeline for Apple Intelligence features. I still have my theory!

Apple (AAPL) Targets Spring 2026 for Release of Delayed Siri AI Upgrade – Bloomberg https://www.bloomberg.com/news/articles/2025-06-12/apple-targets-spring-2026-for-release-of-delayed-siri-ai-upgrade?srnd=undefined&sref=9hGJlFio&embedded-checkout=true

Apple Execs Defend Siri Delays, AI Plan and Apple Intelligence | WSJ – YouTube https://www.youtube.com/watch?v=NTLk53h7u_k

Apple’s AI models underperform compared to competitors
Apple’s latest AI models lag behind competing systems despite the company’s reluctance to share detailed performance data. Independent testing shows Apple’s on-device AI performs worse than Google’s Gemma 3-4B and Qwen 3-4B models, both of which are freely available to developers. Apple’s server-based AI system delivers performance similar to Meta’s Llama 4 Scout model. Apple continues its practice of not publishing standard AI benchmarks, making it difficult to directly compare their systems with industry alternatives. The performance gap highlights challenges for Apple as it integrates AI features across its devices and services.

Apple doesn’t report benchmarks for their AIs, reporting on an ill-documented head-to-head evaluation But even by their standards, Apple’s latest on device models are mostly worse than the open Gemma 3-4B from Google or Qwen 3-4B And their server LLM is similar to Llama 4 Scout https://x.com/emollick/status/1932420903515590997

Disney and Universal sue Midjourney for copyright infringement
Disney and NBCUniversal filed the first major Hollywood lawsuit against an AI company, claiming Midjourney’s image generator creates unauthorized copies of their characters including Marvel superheroes, Star Wars figures, and animated characters from Frozen and Shrek. The companies argue that Midjourney, which reportedly earned $300 million last year from subscriptions, profits by letting users generate images of copyrighted characters without permission or payment to the original creators. The lawsuit comes as Midjourney prepares to launch video generation capabilities, potentially expanding the scope of alleged copyright violations. Disney and Universal contacted Midjourney before filing suit but say the company continued releasing improved versions that produce “even higher-quality infringing images” according to Midjourney’s own CEO.

Disney, Universal Sue AI Company Midjourney for Copyright Infringement https://variety.com/2025/digital/news/disney-nbcuniversal-studio-lawsuit-ai-midjourney-copyright-infringement-1236428188/

Midjourney video 👀 https://x.com/bilawalsidhu/status/1932942424751366383

Runway launches conversational video creation and editing tool
Runway introduced Chat Mode, allowing users to generate images and videos through natural conversation rather than complex prompts. The feature works with Runway’s Gen-4 system and lets creators iterate on ideas by chatting with the AI, which can correct mistakes and refine outputs based on feedback. Users can reference existing images, request modifications like adding textures to objects, and generate videos from static images all within the same chat interface. The tool represents a shift toward more intuitive AI interaction, where creators can develop and evolve visual concepts through back-and-forth dialogue instead of crafting precise technical instructions.

We have been developing new interfaces for Runway that can adapt and change based on the complexity of your task. Chat Mode introduces a creative partner experience that feels natural and intuitive. We will also continue working on additional interfaces that enable different… https://x.com/c_valenzuelab/status/1933238580400537698

We have been working hard on some very exciting updates and new products that bring a completely new experience to Runway. Creating anything should be as natural and easy as possible, regardless of the complexity of your idea. Runway will feel like your creative partner for all… https://x.com/c_valenzuelab/status/1932600586123227219

OpenAI explores emotional bonds between humans and ChatGPT
OpenAI acknowledges that users increasingly describe ChatGPT as feeling “alive” and are forming emotional attachments to the AI system. The company is researching how these relationships affect people’s well-being as AI becomes more conversational and integrated into daily life. OpenAI distinguishes between whether AI is actually conscious versus how conscious it appears to users, focusing on the latter since it directly impacts human emotions. The company aims to make ChatGPT warm and helpful without implying it has feelings, desires, or an inner life that could encourage unhealthy dependence. OpenAI worries that if people increasingly rely on AI for emotional support, it might change expectations for human relationships and reduce tolerance for the messiness of real connections.

some thoughts on human-ai relationships and how we’re approaching them at openai it’s a long blog post — tl;dr we build models to serve people first. as more people feel increasingly connected to ai, we’re prioritizing research into how this impacts their emotional well-being. https://x.com/joannejang/status/1930702341742944589

7 AI Visuals and Charts: Week Ending June 13, 2025

Welcome to the most boring video we’ve ever posted Here’s 60 minutes of our humanoid robot solving logistics, powered by our Helix neural network https://x.com/adcock_brett/status/1931391783306678515

This graph (shows a steep decline in organic search traffic) https://x.com/fdaudens/status/1932501681628905788

Narrator: There was, in fact, some disagreement. (a screenshot of a series of conflicting headlines about AI taking graduates’ jobs) https://x.com/fdaudens/status/1931007004387992039

How good are models at Pixel art? I built an app to find out, piClash, all in @Magicpathai using @OpenRouterAI. It’s crazy to see how creative LLMs have become, each drawing is done via function calling. Try the app too 👇 https://x.com/skirano/status/1931801217967313384

What an incredible trajectory of performance improvements for the reasoning models since the original o1-preview! 60%+ winrates are comparatively huge, and few model upgrades achieved this historically https://x.com/BorisMPower/status/1932556016455201145

More Google Veo video examples “The world’s gone mad” https://x.com/Kalshi/status/1932891608388681791

Goddamit Greg! https://x.com/bilawalsidhu/status/1931869876697583960

Top 45 Links of The Week – Organized by Category

Agents & Copilots

As a general answer machine, I wonder if Deep Research LLMs are better than the main methods of getting answers for most people: Googling, crowdsourcing (posting here/Reddit, etc.), asking friends I think if you have access to an expert, that is still the way to go, otherwise… https://x.com/emollick/status/1931583145066811736

Introducing Genspark AI Browser – Lightning Fast, Ad-Free, with Super Agent does everything for you. Download Genspark AI Browser today: https://x.com/genspark_ai/status/1932473797548159006

We keep talking about this binary future where we’re all watching the same Netflix show or we’re all lost in our own AI generated fever dreams. But that’s not how culture actually works. The interesting stuff happens in the middle. Think Westworld — the narrative division… https://x.com/bilawalsidhu/status/1932598550514586039

Anthropic just dropped a free course on building AI Apps with MCP. Learn to connect AI Agents to external data sources like GitHub, Google Docs, local files using MCP. 100% free. https://x.com/Saboo_Shubham_/status/1929916710682783915

How we built our multi-agent research system | Anthropic https://www.anthropic.com/engineering/built-multi-agent-research-system

New on the Anthropic Engineering blog: how we built Claude’s research capabilities using multiple agents working in parallel. We share what worked, what didn’t, and the engineering challenges along the way. https://x.com/AnthropicAI/status/1933630785879507286

I finally built PodPixel using @Replit 🎉 A web app that transcribes podcasts & pulls out all links/resources with context. Just use the search or drop a URL, and find those links. Try it yourself. https://x.com/designworkplan/status/1928756748153659509

What “Working” Means in the Era of AI Apps | Andreessen Horowitz https://a16z.com/revenue-benchmarks-ai-apps/

ScreenSuite – The most comprehensive evaluation suite for GUI Agents! https://huggingface.co/blog/screensuite

💡David Tag from @LinkedIn reveals how they built their first production AI agent for hiring using LangChain and LangGraph. Learn the technical architecture behind LinkedIn Hiring Assistant and the framework that scaled AI development across 20+ teams. Watch the full session https://x.com/LangChainAI/status/1933576634843738434

Assaf Elovic (Head of AI at @mondaydotcom ) revealed how they are using LangGraph and LangSmith to power their AI agent workforce. Watch the full session here: https://x.com/LangChainAI/status/1932165368375841255

Four termsheets received in 2 weeks. Back to building. ✈️ https://x.com/scottastevenson/status/1933117996068905457

🎉 We’ve raised $150 million at a $7.2 billion valuation – advancing our mission to transform how enterprises use AI to accelerate innovation. We’re grateful to our customers, partners, investors, and team for helping us reach this milestone. Your support fuels our momentum and https://x.com/glean/status/1932453797051211825

This weekend I’m excited to share a tutorial that shows you how to build an agentic extraction workflow over a Fidelity Multi-Fund Annual Report: the document contains a list of multiple funds, with each fund reporting multiple tables of financial data. Extracting a list of https://x.com/jerryjliu0/status/1931810929425158272

A post by Stripe engineer @thegautam on building a successful payments foundation model for fraud detection recently went viral. I want to talk about how unusual this particular use case is, which helps understand why such “instant wins” from deploying advanced AI are so rare. As… https://x.com/random_walker/status/1932046940822212827

Our New Model Helps AI Think Before it Acts https://about.fb.com/news/2025/06/our-new-model-helps-ai-think-before-it-acts/

Canva now requires use of AI during developer job interviews • The Register https://www.theregister.com/2025/06/11/canva_coding_assistant_job_interviews/

Meet Dia. Now available for Arc members. https://x.com/diabrowser/status/1932800009990517190

RT @abhshkdz: We’re excited to launch Scouts — always-on AI agents that monitor the web for anything you care about. https://x.com/krandiash/status/1932471532871438685

Anthropic

Sam Altman-backed Reddit just sued Anthropic over data scraping The company has alleged that Anthropic denied its licensing deal offer and hit its servers with bots over 100K times This comes a day after Anthropic cut off model access to OpenAI-acquired Windsurf https://x.com/rowancheung/status/1930540021028717054

Apple

Apple Intelligence gets even more powerful with new capabilities across Apple devices – Apple https://www.apple.com/newsroom/2025/06/apple-intelligence-gets-even-more-powerful-with-new-capabilities-across-apple-devices/

At WWDC 2025, Apple showed off only a handful of AI upgrades, including: —New Live translation for FaceTime, Messages, and calls —Visual intelligence via screenshots —AI-powered intelligent actions in Shortcuts —AI “Workout Buddy” on Apple Watch https://x.com/rowancheung/status/1932341247810678845

Audio

Introducing Eleven v3 (alpha) – the most expressive Text to Speech model ever. Supporting 70+ languages, multi-speaker dialogue, and audio tags such as [excited], [sighs], [laughing], and [whispers]. Now in public alpha and 80% off in June. https://x.com/elevenlabsio/status/1930689774278570003

Business AI

Wave 10: The Windsurf Browser https://windsurf.com/blog/windsurf-wave-10-browser

How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025 | Andreessen Horowitz https://a16z.com/ai-enterprise-2025/

Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix | by Netflix Technology Blog | Jun, 2025 | Netflix TechBlog https://netflixtechblog.com/uda-unified-data-architecture-6a6aee261d8d

Getty argues its landmark UK copyright case does not threaten AI | Reuters https://www.reuters.com/sustainability/boards-policy-regulation/gettys-landmark-uk-lawsuit-copyright-ai-set-begin-2025-06-09/

Ethics, Legal & Security

Ohio State launches bold AI Fluency initiative to redefine learning and innovation https://news.osu.edu/ohio-state-launches-bold-ai-fluency-initiative-to-redefine-learning-and-innovation/

SAG-AFTRA and Video Game Companies Reach Tentative New Deal https://variety.com/2025/gaming/news/sag-video-game-companies-tentative-deal-actors-strike-1236125631/

Google

YES! Google’s Veo 3 is now in n8n! 🤯 This AI system uses the viral Veo 3 model to create AI videos at scale: → AI agent generates viral video ideas → Records everything in Airtable database → Generates video content with FalAI and Veo 3 → Logs finished directly into your https://x.com/mikefutia/status/1931023310579507430

ByteDance-Seed strikes again and destroys Veo 3. And you thought American labs had any chance at competing with Chinese ones? Paper: https://x.com/scaling01/status/1933048431775527006

Imagery

RT @higgsfield_ai: Higgsfield integrated Flux.1 Kontext. The content game changes today. Photo editing, cinematic motion, VFX, and avatar… https://x.com/_akhaliq/status/1932903530173747261

RT @krea_ai: today, we’re introducing our first image model: Krea 1. Krea 1 offers superior aesthetic control and image quality. It has a… https://x.com/_akhaliq/status/1932479466300670401

Multimodality

This is a very thought-provoking interview with my former student. I do think AI personas (esp multimodal and real time) may be addictive and seem better than humans – but so is heroin (albeit heroin has less useful applications than AI). https://x.com/sirbayes/status/1932155427703431647

Agility’s Digit executes a multi-step task autonomously from a natural language command: “Bring me the ingredients to make pasta.” https://x.com/TheHumanoidHub/status/1930387671626690985

Vision Transformers have high computational costs. Existing token reduction methods like pruning and merging are exclusive, causing significant information loss and needing post-training to recover performance. This paper presents Token Transforming, a unified many-to-many… https://x.com/rohanpaul_ai/status/1932718446648918269

OpenAI

OpenAI’s open model is delayed | TechCrunch https://techcrunch.com/2025/06/10/openais-open-model-is-delayed/

RT @kevinweil: Because you all asked: we’re going to double the rate limits for o3 for Plus users. Rolling out as we speak. Now go do aweso… https://x.com/OpenAI/status/1932586531560304960

OpenAI 🤝 Mattel https://x.com/gdb/status/1933221591350964633

How we’re responding to The New York Times’ data demands in order to protect user privacy | OpenAI https://openai.com/index/response-to-nyt-data-demands/

Perplexity

Discover articles now default to “Summary” mode (less verbose and lighter to read) with a toggle to switch to “Report” mode for depth. https://x.com/AravSrinivas/status/1932299234797052197

Robotics

‘CLONE’ – whole-body teleoperation of a humanoid. Intuitive control signals are captured by tracking the teleoperator’s head and hand poses using Apple Vision Pro. A Mixture-of-Experts policy takes the sparse input and synthesizes the corresponding whole-body humanoid pose. https://x.com/TheHumanoidHub/status/1931059055533175123

ScienceMedicine

Unprecedented dataset of molecular simulations to train AI models released https://phys.org/news/2025-06-unprecedented-dataset-molecular-simulations-ai.html

World first: Breakthrough AI-powered Brain-Computer Interface enables real-time speech for ALS patient → A 45-year-old man with ALS can now produce expressive speech and melody using a brain-computer interface (BCI) that translates brain signals into audio in 10 milliseconds. https://x.com/rohanpaul_ai/status/1933094038816858372

Video

Introducing Chat Mode. A new way to create with Gen-4 Images, Videos and References. Now you can generate anything you want, all from within a single conversational interface. Available for all users. https://x.com/runwayml/status/1933213502728237342
