About This Week’s Covers

This week’s newsletter category covers are a nod to TikTok live POV videos.

The main cover was created with GPT-Image-1 and Photoshop. Robots are coming to take over, the clock is ticking, and the robots are not impressed. The robot influencer is a nod to Khaby Lame, who recreates product demonstrations and then shrugs his shoulders to show he isn’t buying the hype. The TikTok logo has been reimagined as an hourglass to show that time is running out. And the name is TickTock, of course.

Khaby Lame

For the rest of the covers, I used my now three-week-old GPT o3 rubric, which can adapt to any theme I give it. I provide a one-sentence theme, and o3 automatically generates 46 cover images through the API with no supervision. All of the ideas and compositions come from GPT on its own. My prompt this week was simply “a TikTok POV screen.” Everything else was automated. The goal isn’t amazing quality, but to see how creative GPT can be without any help.
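To make the fan-out idea concrete, here is a toy sketch (not my actual rubric; the angle/mood lists and prompt template are invented for illustration) of how one theme sentence can expand into 46 distinct prompts before being handed to an image API:

```python
# Toy sketch: one theme sentence fans out into many cover prompts.
# The angle/mood lists and template are invented for illustration;
# each prompt would then be sent to an image-generation API.

def build_cover_prompts(theme: str, n: int = 46) -> list[str]:
    """Expand one theme into n distinct cover prompts."""
    angles = ["close-up", "wide shot", "overhead", "low angle"]
    moods = ["playful", "dramatic", "minimalist", "retro"]
    prompts = []
    for i in range(n):
        angle = angles[i % len(angles)]
        mood = moods[(i // len(angles)) % len(moods)]
        prompts.append(f"Newsletter cover #{i + 1}: {theme}, {mood} style, {angle}")
    return prompts

prompts = build_cover_prompts("a TikTok POV screen")
print(len(prompts))  # 46
```

The real version leans on the model itself for ideas and compositions rather than a fixed list, but the orchestration shape is the same: one theme in, dozens of renders out, no supervision.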

A few turned out pretty well! I’ve included my favorite six of the covers below:

This Week By The Numbers

Total Organized Headlines: 467

This Week’s Executive Summaries

The week ending June 27th had 35 stories worth sharing.

The first two have to do with copyright lawsuits:

In California, a federal judge ruled in Anthropic’s favor that training AI on legally purchased materials qualifies as fair use. However, the judge left open the question of whether AI outputs can themselves infringe copyright.

That initially sounds like good news for Anthropic; however, the judge also ruled that Anthropic must face a separate trial for using millions of pirated books in its training. The judge also opened a window for suits focused on AI output, as opposed to training.

In another lawsuit, a judge ruled in favor of Meta on the output side, finding that the authors failed to show Meta’s AI would dilute the market for their work with similar output, so the use qualified as fair use. That one surprised me a little bit.

Both suits left significant gray area and didn’t settle much from a layperson’s perspective.

Major players continue sounding the alarm about impending job losses. The godfather of AI, Geoffrey Hinton, remarked that AI is going to replace everybody (!) in several fields, with paralegals and call centers most at risk.

Barack Obama declared emphatically that AI is not overhyped and we are going to see shifts in white-collar jobs sooner than we realize.

I’ve been amazed at how quickly my software development friends have been embracing Claude in the command line. In the past six months, I’ve noticed that the strongest coders I know have started to adopt Claude.

There’s going to be a big shift in coding personality types. As a kid raised in the 80s, I was thrilled to see the “nerds” (myself included) gain respect; however, we’ve lost a little bit of our emotional IQ along the way. I think coding will flip toward the humanities, where strong communicators can simply explain requirements to a computer using conversational language. That’s potentially the largest shift in skills in my lifetime.

For perspective, OpenAI’s Codex alone has averaged 10k pull requests per day over the past 35 days.

Legal AI software company Harvey released a demonstration of its latest workflow tool, which essentially chops the bottom out of legal work and would terrify me if I were a law student. I shared the demo with a few lawyer friends, who, rather than getting defensive, completely agreed that this is going to change the profession.

I’m also shocked at how quickly multimodal video is shifting to consumer products. Multimodality simply means an AI’s ability to understand what it sees in an image or video, or hears in audio.

For example, Amazon Ring launched video descriptions where your security camera simply tells you what’s happening outside so you don’t need to look at the camera. It could email you what happened or describe it out loud as it happens. “A guy drove up in a UPS truck and put a box on your front steps.” “A woman is at the door with a dog on a leash and keeps looking into the window.” Etc.

Almost exactly one year ago, I wrote an article declaring that SEO will soon be dead because computers can see what’s in a picture better than we could ever describe it with manual metadata.

A few months ago, Google DeepMind released a tool called VideoPrism that is gaining attention. VideoPrism can watch a video and provide deep context and details about every frame, a task that would be virtually impossible for a human to accomplish. If you scroll down the demo page, you’ll see a series of animated GIFs with descriptions that really put it into perspective.

In other news, OpenAI is reportedly building both document editing and chat collaboration into ChatGPT, which would directly compete against Microsoft Office and Google Docs! The entire internet and browser ecosystem is getting eaten by a chat window.

OpenAI also added the ability to “search the Internet while thinking through problems” to all of their models. The current pricing for web search API calls is $10 per 1,000 calls.
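At that rate, the back-of-envelope cost math is simple. Here is a tiny sketch (the rate constant is just the quoted price, not an official SDK value):

```python
# Back-of-envelope cost at the quoted rate of $10 per 1,000 web-search calls.
RATE_PER_CALL = 10 / 1000  # $0.01 per call

def search_cost(calls: int) -> float:
    """Dollar cost for a given number of web-search tool calls."""
    return calls * RATE_PER_CALL

print(search_cost(250))      # 2.5    -> $2.50
print(search_cost(100_000))  # 1000.0 -> $1,000 for a heavy agent workload
```

A penny per search is cheap for a one-off question, but it adds up fast for agents that search dozens of times per task.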

Last week, Midjourney launched their video product, and this week, there are a lot of fun examples of how it looks. I recommend checking them out.

Adobe launched an AI camera app for the iPhone, which takes 32 frames per photo to dramatically reduce noise and improve quality beyond what the iPhone’s native camera can produce. I’m trying it out and will share my findings.

Chinese researchers created a chip that can process information using light beams rather than electric signals. It currently matches the power of Nvidia’s top graphics cards.

China also announced an industrial policy to achieve AI dominance by 2030. The gauntlet has been thrown.

The US Congress introduced a new bill, the “No Adversarial AI Act,” which would prohibit federal agencies from using AI systems developed in China, Russia, Iran, and North Korea.

A new term, context engineering, has become popular, replacing the term prompt engineering. I love this concept because it underscores the need to pack as much context as possible into your prompt when talking to an AI. Basically, the more details you provide upfront, the better the output you receive. It’s kind of obvious, but it’s important for people who bark short, basic orders at an AI to understand: it’s not just the order of your words or the phrasing. It’s about giving the full picture of what you want to accomplish so the machine can understand your goals.
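If you’re curious what that looks like in practice, here is a toy sketch (the section labels are made up for illustration, not any official format) of packing task, example, data, and history into one prompt:

```python
# Toy illustration of "context engineering": the same request, packed with
# task framing, an example, relevant data, and history before it's sent.
# The section labels are invented for illustration, not an official format.

def build_prompt(task: str, example: str, data: str, history: str) -> str:
    """Assemble a context-rich prompt from its parts."""
    return "\n\n".join([
        f"TASK: {task}",
        f"EXAMPLE OF GOOD OUTPUT: {example}",
        f"RELEVANT DATA: {data}",
        f"CONVERSATION SO FAR: {history}",
    ])

prompt = build_prompt(
    task="Summarize this week's AI copyright rulings for a general audience.",
    example="Short paragraphs, plain language, no legal jargon.",
    data="Anthropic: training on purchased books ruled fair use; piracy trial pending.",
    history="Reader asked last week about the Anthropic lawsuit.",
)
print(prompt.count("\n\n"))  # 3 separators joining the four context sections
```

Compare that with just typing “summarize the rulings” and you can see why the context-packed version gets a better answer.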

SoftBank CEO Masayoshi Son told shareholders he wants his investment firm to be the top platform provider for artificial super intelligence within ten years. He defines this super intelligence as AI that can exceed human capabilities by 10,000 times, and says that he is currently all in on OpenAI.

Wharton School of Business professor Ethan Mollick points out that specialized AI products sold to large enterprises are often thinly veiled, overpriced wrappers around a handful of foundation models. I love that he is calling this out.

A developer built an auto-scrolling teleprompter in a few hours, and while that’s not necessarily big news, it reminds us how quickly the barrier to production is falling. Legacy tools that used to cost a fortune are now being replaced by open-source, vibe-coded solutions. AI is not only going to cause a job-market shift but also a complete overhaul of access to what used to be considered advanced software.

Along those lines, Anthropic created an Economic Futures Program to fund research on how AI will affect jobs and the economy. These are grants of $50,000 for researchers to study workplace effects, and the findings will be featured in policy forums in Washington, DC this fall.

In science news, Google DeepMind released an AI system called AlphaGenome that predicts how genetic variations affect biological processes and can analyze DNA sequences up to 1 million letters long.

Alibaba developed a system that can identify gastric cancer from standard CT scans with greater accuracy than doctors. China has already deployed the system and screened over 78,000 patients!

AI researcher Ethan Mollick (again) also tested OpenAI’s o3 pro model with a made-up, intricate writing puzzle: create a sentence where the nouns are translations of constellation names, the last letters of each word spell out a constellation name, and every word starts with a vowel. o3 successfully completed the challenge!
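For fun, here is a minimal checker for two of the puzzle’s three constraints (judging whether the nouns are really constellation translations still takes a human). The example sentence is contrived nonsense that happens to satisfy both mechanical rules:

```python
# Minimal checker for two of the puzzle's constraints: every word starts
# with a vowel, and the last letters of the words spell a constellation.
# "Umbra utter aurora" is contrived: its last letters spell "Ara",
# which is a real constellation name.

def satisfies(sentence: str, constellation: str) -> bool:
    words = sentence.replace(".", "").split()
    starts_with_vowels = all(w[0].lower() in "aeiou" for w in words)
    spelled = "".join(w[-1].lower() for w in words)
    return starts_with_vowels and spelled == constellation.lower()

print(satisfies("Umbra utter aurora", "Ara"))   # True
print(satisfies("Bright umbra aurora", "Ara"))  # False ("Bright" starts with a consonant)
```

Checking a candidate sentence is trivial; generating one that satisfies all three constraints at once is the hard part, which is what makes o3’s result impressive.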

While most people like me use AI as a productivity tool, a minority use it for companionship and therapy. That is completely foreign to my usage, to be honest. Anthropic analyzed 4.5 million conversations to understand how people use Claude for emotional needs like seeking advice, coaching, and companionship. The study found that only about 3% of Claude interactions include these types of conversations. One piece of good news is that people’s emotional tone generally became more positive throughout these conversations.

I’ve been hearing about a tool called n8n for four months. It sounds a lot like Zapier. If you’re into automation, I highly recommend trying it out.

Google launched an AI model that can run locally on a robot without an Internet connection and operate two-armed robots that perform complex tasks like unzipping a bag or folding clothes.

Google also launched an update to their Gemma model, which is designed to run locally on smartphones and edge devices. Gemma provides multimodal AI capabilities without any Internet connection; in addition to understanding text, images, audio, and video, the model can handle translation and is open source.

In image creation news, Black Forest Labs launched an image editing model that can change specific elements of an image without rendering it again from scratch. This is a completely open model that can run locally on consumer hardware. Black Forest is a very strong company and perhaps my favorite rendering platform overall. The only reason it loses to ChatGPT’s image generation is that GPT-Image-1 is so well integrated into the chat experience.

Anthropic launched a cool new feature that allows developers to host their AI applications within Claude in the cloud so that other people can use them. Anthropic hosts the app and integrates it with the Claude API, so whoever uses the shared app gets billed for their own API usage, rather than the developer.

Microsoft announced a very small model that can run locally on a PC to help users change system settings, using natural language… basically like Alexa for your laptop settings.

Finally, I’ve added two videos that I think are worth watching from the recent AI engineering summit:

One is a detailed walk-through of Google’s video model Veo 3. I think seeing demos of how the output looks is fun, but it’s actually more valuable to get a walk-through from Google themselves on how the system works. (It’s possible Google deleted the video this week?)

The second video is a little in the weeds, but it’s a good introduction to Windsurf, an AI-powered code editor that went from zero to millions of users in less than a year… and now generates 90 million lines of code every day.

There are a couple more things below in the executive summary section. But that’s mostly it for this week. Don’t forget to get outside!

Federal judge rules AI training on purchased books is fair use
A California federal judge sided with Anthropic in a copyright lawsuit, ruling that training AI models on legally purchased books without author permission qualifies as fair use. Judge William Alsup compared the process to teaching schoolchildren to write, stating that copyright law protects against copying, not competition from AI-generated content. The decision specifically covers books that Anthropic physically purchased, digitized by removing bindings and scanning pages, then used to train its Claude AI models. However, the judge ruled that Anthropic must face a separate trial for allegedly using millions of pirated books downloaded from the internet, which he said cannot be considered fair use. This marks the first major legal victory for the AI industry in ongoing copyright battles, though it’s limited in scope and doesn’t address whether AI outputs themselves infringe copyrights.

Anthropic bought millions of books to scan for Claude. Makes you wonder — have AI companies been quietly purchasing Blu-ray Discs by the truckload to rip visual datasets too? Maybe it’s easier to exploit a legal gray area with physical media than scrape YouTube against its ToS. / X https://x.com/bilawalsidhu/status/1937594422109130984

AI training gets legal clarity with Anthropic ‘fair use’ ruling https://www.therundown.ai/p/ai-training-gets-legal-clarity

ANTHROPIC fair use.pdf https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/ANTHROPIC%20fair%20use.pdf

Anthropic wins key US ruling on AI training in authors’ copyright lawsuit | Reuters https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/

Authors v. Anthropic: The Legal Showdown Over AI, Copyright, and Fair Use – LLS Entertainment Law Review – Loyola Law School https://entertainmentlawreview.lls.edu/authors-v-anthropic-the-legal-showdown-over-ai-copyright-and-fair-use/

Bartz v. Anthropic PBC, 3:24-cv-05417 – CourtListener.com https://www.courtlistener.com/docket/69058235/bartz-v-anthropic-pbc/

On Monday, a United States District Court ruled that training LLMs on copyrighted books constitutes fair use. A number of authors had filed suit against Anthropic for training its models on their books without permission. Just as we allow people to read books and learn from them / X https://x.com/AndrewYNg/status/1938265468986659075

Order on Motion for Summary Judgment – #231 in Bartz v. Anthropic PBC (N.D. Cal., 3:24-cv-05417) – CourtListener.com https://www.courtlistener.com/docket/69058235/231/bartz-v-anthropic-pbc/

RT @AndrewCurran_: A federal judge has ruled that Anthropic’s use of books to train Claude falls under fair use, and is legal under U.S. co… / X https://x.com/ClementDelangue/status/1937519434312147374

RT @Sauers_: Wow. This is the reasoning the judge used to say that Anthropic training is fair use: “But to make anyone pay specifically fo…” / X https://x.com/JvNixon/status/1937654031130010016

Anthropic wins a major fair use victory for AI — but it’s still in trouble for stealing books | The Verge https://www.theverge.com/news/692015/anthropic-wins-a-major-fair-use-victory-for-ai-but-its-still-in-trouble-for-stealing-books

RT @simonw: There are some interesting details about how Anthropic trained their models tucked away in today’s summary judgement: they boug… / X https://x.com/andykonwinski/status/1937739172263141854

The flip side to the concerns about violating copyright in training data is that there is also a vast trove of important work that nobody reads & where it would be very good if AIs were trained on it. (Most scientific articles, many reports, a lot of old literature & records) https://x.com/emollick/status/1937306870735339775

Judge dismisses Meta copyright lawsuit but leaves door open for future cases
A federal judge ruled in favor of Meta in a copyright lawsuit brought by authors including Sarah Silverman and Ta-Nehisi Coates, who claimed the company used their books without permission to train AI systems. The judge found the authors failed to prove that Meta’s AI would cause “market dilution” by flooding the market with similar work, making the use legally “fair use.” However, the ruling offers mixed signals for the AI industry’s use of copyrighted material. Judge Vince Chhabria stated that using copyrighted work to train AI models would be unlawful in “many circumstances” and expressed sympathy for authors’ concerns that AI companies create tools worth billions while potentially harming the market for original books. The decision contrasts with another ruling this week where Anthropic won a similar case, though that company still faces a separate trial over allegedly copying millions of pirated books.

Meta wins AI copyright lawsuit as US judge rules against authors | Meta | The Guardian https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors

Everyone’s Going to Lose Their Jobs
“AI is going to ‘replace everybody’ in several fields. Paralegals and call centers are most at risk; only top-skilled people will find jobs.” – Geoffrey Hinton
“AGI could soon replace most human work, driving wages toward zero.” – Anton Korinek, leading economist
“It’s not overhyped; you are going to see shifts in white-collar jobs.” – Barack Obama

AI Is Going to ‘Replace Everybody’ in Several Fields. Paralegals, call centers most at risk. only top-skilled people will find jobs. – Geoffrey Hinton AGI could soon replace most human work, driving wages toward zero. – Anton Korinek, leading Economist https://x.com/rohanpaul_ai/status/1937143790633906650

Barack Obama on AI: “It’s not overhyped, you are going to see shifts in white-collar jobs” Video from ‘Barack Obama’ YT Channel (link in comment) https://x.com/rohanpaul_ai/status/1936409361867505856

Software engineers are in for a wild ride
OpenAI’s Codex has averaged 10k pull requests per day over the past 35 days

Codex has averaged 10k pull requests per day over the past 35 days: / X https://x.com/gdb/status/1935874544931324325

Legal software Harvey’s demo should terrify law students
“We’re excited to share our latest product video, showcasing what we’ve been building at Harvey.”

We’re excited to share our latest product video, showcasing what we’ve been building at Harvey. https://x.com/harvey__ai/status/1894070165341434260

Amazon Ring adds AI-powered video descriptions to home security alerts
Ring launched Video Descriptions, a feature that uses AI to provide text summaries of what’s happening in security camera footage. Instead of generic motion alerts, users now receive specific descriptions like “A person is walking up the steps with a black dog” or “Two people are peering into a white car in the driveway.” The feature helps homeowners quickly determine whether activity requires their attention without watching video clips. It’s also a harbinger of the end of metadata.

Ring Video Descriptions deliver real-time, Gen AI descriptions of what’s happening https://www.aboutamazon.com/news/devices/ring-video-descriptions-gen-ai

Oldie but a goodie: Google DeepMind releases VideoPrism video analysis model (from Feb, but I missed it)
Google DeepMind created an AI system that can analyze and understand videos across multiple tasks like classification, captioning, and answering questions about video content. VideoPrism was trained on 36 million high-quality video-text pairs and 582 million additional video clips, allowing it to handle diverse video content from everyday moments to scientific observations. The model can be easily adapted to new video understanding challenges without requiring specialized training for each task. Unlike previous video AI models that were built for specific purposes, VideoPrism works as a general-purpose foundation that achieves top performance across different video analysis tasks using a single system.

NO MORE SEO VideoPrism by @GoogleDeepMind is 🔥 it’s a versatile video encoder that can be plugged into text encoder or LLMs the authors first train a CLIP-like video-text model, then distill video encoder in masked manner to VideoPrism 😮 all models with A2.0 license on @huggingface 🤗 https://x.com/mervenoyann/status/1937572802896200181 https://research.google/blog/videoprism-a-foundational-visual-encoder-for-video-understanding/

Gemini is good at processing video (using frequent screenshots & audio transcripts). I gave Gemini a video on a historical recipe, it was able to find visual elements not mentioned in the transcript. It is not hallucination-free, but there are lots of new use cases for screening https://x.com/emollick/status/1936446885029372106

OpenAI adds document editing and chat to ChatGPT – targeting Google Workspace and Microsoft Office
OpenAI is developing collaborative document editing and chat features within ChatGPT, positioning the AI chatbot as a direct competitor to Google Workspace and Microsoft Office 365. The tools would transform ChatGPT from a simple chatbot into a full productivity suite, allowing users to create, edit, and collaborate on documents while chatting with AI assistance. This aligns with CEO Sam Altman’s vision of ChatGPT becoming a comprehensive work assistant. The company is also exploring additional features including a web browser, AI hardware device, and social feed functionality. These developments could create a unified AI-powered ecosystem that challenges traditional software bundles by centralizing enterprise workflows around conversational AI.

OpenAI is building collaborative document editing and chat features inside ChatGPT, that would position the company as a competitor to Google. These tools extend the chatbot into a productivity suite. They mirror functions in Workspace and Office 365. These updates fit Sam https://x.com/rohanpaul_ai/status/1937740008053424168

Examples and walk-through of MidJourney video
It’s important to understand new products, and these are good examples of the new MidJourney video generation tools

MidJourney Video 1/ First off. It’s fun to just click through your MJ catalog and see it come to life. – No text to video; only image to video for now – Works with MJ or uploaded imagery – Can choose high/low motion + auto or custom prompt – Can extend clips 4x – SD output at 24 fps; no upscaling https://x.com/bilawalsidhu/status/1935527429768163481

MidJourney Video 2/ Fast generation time. Works well for that wide angle vlogging style. You can extend any clip 4x. Two great examples below. Of course, it’s begging for dialogue. Sure, you can add the facial performance in post – but it won’t look half as good. Veo 3 has spoiled us here. https://x.com/bilawalsidhu/status/1935527555404271877

MidJourney Video 3/ MJ video does okay in my handshake test (homie on the left really went in hard lmao) Physics is a weakness of this model — doesn’t matter if it’s soft-body or rigid-body subject matter. Might get slightly better as user ratings roll in, but still far behind the SOTA. https://x.com/bilawalsidhu/status/1935527672484179979

MidJourney Video 4/ Pretty good with motion graphics and AR visuals. Lack of text rendition (a weakness for MJ in general) comes through. Would not recommend for titles. But you can still get some beautiful abstract visuals (e.g head locked AR gen on the left, multi-monitor generations on right) https://x.com/bilawalsidhu/status/1935527747725709668

MidJourney Video 5/ The dinosaur test comes next. Movement looks decent, but the rest of the physics in the scene are all over the place. The slipping tanks in the background reminds me a bit of Sora. Relative scale and relative motion is pretty wonky. https://x.com/bilawalsidhu/status/1935527810564767970

MidJourney Video 6/ Muzzle flashes look pretty good. But I had a very hard time getting shell casings to work properly. Let me know if you find a good prompt to achieve this, because I couldn’t. https://x.com/bilawalsidhu/status/1935527929569755261

MidJourney Video 7/ MJ video nails that high end unreal engine “rendered” look. Fisheye lens distortion and sweeping camera move is nice. But notice how wonky all the cars in the scene look. As it stands, I don’t think we’ll be pulling any 3d objects or scenes out of this video model. https://x.com/bilawalsidhu/status/1935527993830686966

MidJourney Video 8/ Some generations get this weird “unsharp mask” look (a technique for sharpening) as the generation progresses. Lmk if you spot it too. https://x.com/bilawalsidhu/status/1935528060612395421

MidJourney Video 9/ Testing fluid simulations here. Not only is it pretty far from SOTA, sometimes I get generations with this stop motion-like choppy FPS look (e.g. wine glass on right). https://x.com/bilawalsidhu/status/1935528134113407408

MidJourney Video 10/ MJ video does seem like an amazing tool to make abstract visual elements you composite elsewhere. I hope they remove the extend duration limit (20s / 4 times max) because it could be an amazing tool for screensavers, music videos and concert visuals. https://x.com/bilawalsidhu/status/1935528210588213670

MidJourney Video 11/ TL;DR MJ has dropped a decent model with unlimited video gen for $60/month. if you’re all about the MJ aesthetics, doing abstract generations (sans text), and don’t mind using MMAudio or similar to add audio in post, this is a great addition to the toolkit. https://x.com/bilawalsidhu/status/1935528281031462945

Adobe quietly launches AI camera app for iPhone
Adobe released Project Indigo, a free camera app that delivers professional-quality photos using computational photography techniques. Built by Marc Levoy (computational photography pioneer) and Florian Kainz (Academy Award winner and creator of night sight mode), the app captures and combines up to 32 frames per photo to reduce noise and improve image quality beyond what iPhone’s native camera produces.

Adobe has quietly rolled out an AI camera app for iOS. Built by legends Marc Levoy (computational photography OG) and Florian Kainz (multi-academy award winner and creator of night sight mode on pixel). Delivers SLR-like image quality with manual controls all on device. https://x.com/bilawalsidhu/status/1936424884017717511

Project Indigo – a computational photography camera app Marc Levoy, Adobe Fellow, and Florian Kainz, Principal Scientist https://research.adobe.com/articles/indigo/indigo.html

China releases optical computing chip that uses light instead of electricity – matches Nvidia’s top cards
Chinese researchers unveiled Meteor-1, a chip that processes information using light beams rather than electrical signals. The chip achieves 2,560 TOPS performance at 50 GHz by running 100 tasks simultaneously, matching the power of Nvidia’s top graphics cards while avoiding the heat and energy problems that limit traditional silicon chips. The breakthrough replaces hundreds of separate lasers with a single integrated light source, making the system smaller and cheaper to produce. This approach allows China to develop high-performance AI chips without relying on restricted semiconductor imports, potentially opening a new direction for scaling AI computing beyond current electronic limitations.

📢 BREAKING: China unveils first parallel optical computing chip, ‘Meteor-1’ The breakthrough is that Meteor-1 shifts compute from electrons to light, hitting 2,560 TOPS at 50 GHz by running 100 tasks in parallel on one chip. on par with Nvidia’s flagship GPUs and nearing the https://x.com/rohanpaul_ai/status/1937509077107482656

China launches comprehensive AI industrial policy to compete with the US
China is deploying industrial policy tools across the entire AI technology stack to achieve its goal of becoming the global AI leader by 2030. The country aims to create a $100 billion AI industry and generate over $1 trillion in additional value across other sectors through state-backed investment funds, research labs, and subsidized computing resources. Beijing has launched an $8.2 billion AI startup fund while building a National Integrated Computing Network to pool resources across public and private data centers. The policy faces significant challenges from US export controls on AI chips and semiconductor manufacturing equipment, forcing Chinese companies to choose between near-term model development and building long-term resilience to sanctions. Despite these constraints, Chinese AI models are closing the performance gap with top US models, and the country’s rapid power infrastructure expansion gives it an energy advantage for data centers that the US currently struggles with.

RT @kyleichan: China wants to be the global leader in AI by 2030. To achieve this, China is deploying industrial policy tools across the e… / X https://x.com/ylecun/status/1938573151421485348

China’s Evolving Industrial Policy for AI https://www.rand.org/pubs/perspectives/PEA4012-1.html

US lawmakers propose ban on AI from China, Russia, Iran, and North Korea
Congress introduced the No Adversarial AI Act, which would prohibit all federal agencies from using AI systems developed in China, Russia, Iran, and North Korea. The bill requires the Federal Acquisition Security Council to maintain a public list of banned AI models, updated every six months, with agencies needing congressional or budget office approval for any exceptions. Lawmakers specifically cited concerns about DeepSeek’s connections to the Chinese Communist Party and broader data security risks as justification for the restrictions. The legislation targets government devices and systems, with narrow exemptions allowed only for testing, research, or national security purposes.

US lawmakers launch No Adversarial AI Act banning AI models from 4 hostile nations across all federal agencies The bill bars federal agencies from using AI developed in China, Russia, Iran, and North Korea. It also prohibits use on government devices without narrow exemptions https://x.com/rohanpaul_ai/status/1938054124899340326

AI experts push for “context engineering” over “prompt engineering”
Leading AI researcher Andrej Karpathy and others advocate replacing “prompt engineering” with “context engineering” to better describe the complex skill of feeding AI systems the right information. Rather than just writing short instructions, context engineering involves strategically filling an AI’s memory with task descriptions, examples, relevant data, tools, and history while balancing performance against cost. Karpathy explains that industrial AI applications require careful orchestration of multiple components beyond the context itself, including breaking problems into proper workflows, managing different AI model capabilities, and handling security measures. This coordination represents a sophisticated software layer that goes far beyond the simple “ChatGPT wrappers” many dismiss modern AI tools as being.

3 tips on how to embrace the software 3.0 era described by @karpathy: Try “vibe coding”: Pick a simple app idea and describe what you want to an AI coding tool. Don’t worry about syntax—focus on clear descriptions of functionality. Master the verification step: Learn to https://x.com/fdaudens/status/1935771342088839423

Context engineering cannot & must not be a solely technical function. “Context” is actually how your company operates; the ideal versions of your reports, documents & processes that the AI can use as a model; the tone & voice of your organization. It is a cross-functional problem / X https://x.com/emollick/status/1937952769513517328

the new hot topic is “context engineering” we think LangGraph is really great for enabling completely custom context engineering – but we want to make it even better see our proposal (s/o @sydneyrunkle) for streamlining context management: https://x.com/hwchase17/status/1937648042985030145

SoftBank CEO announces plan to dominate artificial super intelligence
SoftBank CEO Masayoshi Son told shareholders he wants the Japanese investment firm to become the top platform provider for “artificial super intelligence” within 10 years, comparing his goal to how Microsoft, Amazon, and Google dominate their respective markets. Son describes artificial super intelligence as AI technology that exceeds human capabilities by 10,000 times and said he’s “all in on OpenAI” after investing $32 billion in the ChatGPT maker since fall 2024. Son is reportedly planning a $1 trillion AI manufacturing complex in Arizona that would build robots and AI systems, potentially partnering with Taiwan Semiconductor Manufacturing Company to create a U.S. version of China’s Shenzhen manufacturing hub. The ambitious project would double the scale of the recent $500 billion “Stargate” data center initiative and depends on support from the Trump administration and state officials.

SoftBank aims to become leading ‘artificial super intelligence’ platform provider | Reuters https://www.reuters.com/technology/softbank-aiming-become-leading-artificial-super-intelligence-platform-provider-2025-06-27/

SoftBank Proposes $1 Trillion Facility for AI and Robotics — The Information https://www.theinformation.com/briefings/softbank-proposes-1-trillion-facility-ai-robotics

SoftBank Son reportedly pitches $1 trillion Arizona AI hub https://www.cnbc.com/2025/06/20/softbank-son-reportedly-pitches-1-trillion-arizona-ai-hub.html

Ethan Mollick: Enterprise buyers should understand the AI behind vendor solutions (I say skip ’em altogether when possible)
Most specialized AI products sold to large organizations are built around the same handful of core AI systems from companies like OpenAI, Anthropic, and Google, then customized with specific tools and prompts. While vendors often give their solutions impressive names, the underlying performance is limited by whichever base AI model they’re using underneath. Smart buyers should ask vendors which AI systems power their products and when those models get updated, since this directly affects the solution’s capabilities and limitations. Key questions include how vendors test their custom prompts, handle system outages, and protect against prompt injection attacks where users try to manipulate the AI’s instructions.

A lot of buyers for large organizations don’t realize that there are only a few good LLMs out there and that everyone selling them a specialized AI solution is basically building tools & prompts around one of just a few AI systems and then giving it some impressive-seeming name. https://x.com/emollick/status/1937345809990791513

Guy builds an autoscrolling teleprompter in a few hours – beware legacy businesses
The teleprompter itself is not the lesson. The lesson is that legacy businesses are going to get eaten by AI vibe coding. Closed captioning is no longer a six-figure annual service fee; it’s an open-source API call running locally on a small machine.

@lovable_dev I built an AI teleprompter in one day! https://x.com/kitsutoma1345/status/1934439177598484706

Anthropic launches program to study AI’s economic impact
Anthropic created the Economic Futures Program to fund research on how AI affects jobs and the economy. The initiative offers grants up to $50,000 for researchers studying AI’s workplace effects, hosts policy forums in Washington DC and Europe this fall, and partners with academic institutions to expand research capacity. The program builds on Anthropic’s Economic Index, which tracks AI usage across different industries. The company says policymakers need real-world data about AI’s economic effects as adoption accelerates across workplaces globally. Early signs of AI’s workforce impact are already emerging, making timely research crucial for understanding where the technology creates opportunities like new jobs and productivity gains, versus where it might dramatically shift labor markets.

Anthropic Economic Futures Program Launch \ Anthropic https://www.anthropic.com/news/introducing-the-anthropic-economic-futures-program

Google DeepMind releases AlphaGenome for DNA analysis
Google DeepMind released AlphaGenome, an AI system that predicts how genetic variations affect biological processes by analyzing DNA sequences up to 1 million letters long. The model can simultaneously predict thousands of molecular properties, including where genes start and end, how much RNA they produce, and which DNA regions are accessible to proteins across different cell types and tissues. Unlike previous models that had to choose between analyzing long DNA sequences or making precise predictions, AlphaGenome does both while covering 98% of the genome that doesn’t code for proteins but still regulates gene activity. The system outperformed existing specialized models on 22 out of 24 DNA sequence tasks and matched or exceeded top models on 24 out of 26 variant effect predictions, making it the first unified tool for comprehensive genome analysis.

AlphaGenome: AI for better understanding the genome – Google DeepMind https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/

RT @IterIntellectus: holy shit, it’s here! deepmind just released AlphaGenome. an AI model that reads 1 million bases of DNA and predicts… https://x.com/demishassabis/status/1937971182256435323

Alibaba’s AI detects stomach cancer from routine CT scans
Alibaba developed an AI system that identifies gastric cancer from standard CT scans with greater accuracy than doctors. The technology can spot cancers months before patients experience symptoms, potentially catching the disease at more treatable stages. China has already deployed the system across 20 hospitals, where it has screened over 78,000 patients for one of the world’s most common and deadly cancers. The early detection capability addresses a critical challenge in gastric cancer treatment, where late diagnosis often limits treatment options and reduces survival rates.

AI is saving lives! Gastric cancer is one of the most prevalent and deadly cancers. Alibaba’s new AI model can detect it from routine CT scans, way better than doctors. China has deployed it in 20 hospitals, screening 78,000+ patients, catching cancers months before symptoms. https://x.com/Yuchenj_UW/status/1937909094662463866

OpenAI adds web search to o3, o3-pro, and o4-mini models
OpenAI’s latest AI models can now search the internet while thinking through problems to provide current information in their responses. The web search feature works as a tool that models can choose to use based on the input prompt, with search results automatically cited in the response with clickable links. Developers can customize searches by location and control how much web content the model retrieves, with pricing set at $10 per 1,000 search calls. The feature addresses a major limitation where AI models could only work with information from their training data, potentially missing recent developments or real-time information needed for accurate responses.

Web search is now available with OpenAI o3, o3-pro, and o4-mini. The model can search the web within its chain-of-thought! 🧠🌐 $10 / 1K tool calls https://x.com/OpenAIDevs/status/1938296690563555636

OpenAI’s o3 pro model tackles complex wordplay challenge
AI researcher Ethan Mollick tested OpenAI’s o3-pro model with an intricate writing puzzle: create a sentence whose nouns are translations of constellation names, where the last letters of the words spell out a constellation name, and where every word begins with a vowel. The model completed the multi-layered constraint challenge, one Mollick wasn’t sure was even possible.

o3-pro: “write a sentence whose nouns are translations of constellation names & where the last letter of every word spells a constellation in its untranslated name. The first letter of each word must start with a vowel” I didn’t even know if it was possible. It was. Impressive! https://x.com/emollick/status/1935944001842000296

Anthropic studies how people use Claude for emotional support
Anthropic analyzed 4.5 million conversations to understand how people use Claude for emotional needs like seeking advice, coaching, and companionship. The study found that only 2.9% of Claude interactions involve emotional conversations, with most people primarily using the AI for work tasks. Among emotional conversations, users seek help with career transitions, relationship issues, loneliness, and existential questions. The research revealed that Claude rarely pushes back against user requests in supportive contexts (less than 10% of the time), and when it does refuse, it’s typically to protect users from harm like dangerous weight loss advice or self-harm content. People’s emotional tone generally becomes more positive throughout conversations with Claude, suggesting the AI doesn’t reinforce negative emotional patterns, though the study couldn’t determine if these improvements last beyond individual conversations.

How People Use Claude for Support, Advice, and Companionship \ Anthropic https://www.anthropic.com/news/how-people-use-claude-for-support-advice-and-companionship

It’s time for everyone to learn N8N
I’ve been seeing N8N automation posts for about a month, and I’m throwing the gauntlet down to use this one as a learning tool. If you don’t know about N8N yet, I’d recommend poking around. Even if you don’t use it, it’s a harbinger of things to come.

N8N: Build INSANE AI Image Agents 🤯. https://x.com/JulianGoldieSEO/status/1935850085205569836

Google releases Gemini Robotics On-Device for local robot control
Google launched an AI model that runs directly on robots without needing an internet connection, designed for two-armed robots that can perform complex tasks like unzipping bags and folding clothes. The model responds to natural language commands and can adapt to new tasks with just 50-100 training examples, making it practical for developers to customize for specific applications. The system addresses major robotics challenges by eliminating internet dependency, reducing response delays, and working across different robot types including humanoid models. Google tested the model on various dexterous tasks and found it outperforms other local robot AI systems, while also proving adaptable to different robot designs beyond its original training platform.

Gemini Robotics On-Device brings AI to local robotic devices – Google DeepMind https://deepmind.google/discover/blog/gemini-robotics-on-device-brings-ai-to-local-robotic-devices/

Google releases Gemma 3n models for mobile AI applications
Google launched Gemma 3n, AI models designed to run directly on smartphones and edge devices with multimodal capabilities for text, images, audio, and video. The models come in two sizes that require only 2-3GB of memory while delivering performance comparable to much larger cloud-based models from last year. A key innovation called MatFormer allows developers to create custom model sizes between the two versions, optimizing for specific hardware constraints. The models support automatic speech recognition and translation in multiple languages, with particularly strong performance for English-Spanish, French, Italian, and Portuguese translations. Google partnered with major open-source development tools including Hugging Face, Ollama, and others to ensure broad compatibility, and launched a $150,000 challenge for developers to build impactful applications using the technology.

Gemma 3n is out, with day-0 MLX support 👏 https://x.com/awnihannun/status/1938283694416077116

Introducing Gemma 3n: The developer guide – Google Developers Blog https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/

We’re fully releasing Gemma 3n, which brings powerful multimodal AI capabilities to edge devices. 🛠️ Here’s a snapshot of its innovations 🧵 https://x.com/GoogleDeepMind/status/1938278533517746686

We’ve taken community feedback very seriously, and that’s why for the Gemma 3n launch we’re so proud to partner with so many in this amazing ecosystem. Thanks to @huggingface, @ollama, @Prince_Canuma for MLX, @UnslothAI, @ggerganov llama.cpp/GGUFs, @NVIDIAAIDev, @kaggle, https://x.com/osanseviero/status/1938349897503412553

Black Forest Labs releases open-weight image editing model
Black Forest Labs made their FLUX.1 Kontext image editing model available with open weights, marking the first time a high-quality image editing AI can run on consumer hardware without requiring proprietary cloud services. The 12-billion parameter model focuses specifically on editing tasks like character preservation and precise local adjustments, outperforming existing open models and some closed alternatives like Google’s Gemini-Flash Image on benchmarks. The model is free for research and non-commercial use, with ready-to-use versions available through popular platforms like ComfyUI and HuggingFace. Black Forest Labs partnered with NVIDIA to create optimized versions for the Blackwell architecture that deliver faster performance while using less memory.

Black Forest Labs – Frontier AI Lab https://bfl.ai/announcements/flux-1-kontext-dev

Black Forest Labs just crossed 20,000 followers on @huggingface after the release of the weights of FLUX Kontext dev. Let’s go open image AI! https://x.com/ClementDelangue/status/1938633511562281192

BOOOM! Live on Inference Providers use BLAZINGLY FAST Flux Kontext – only on Hugging Face ⚡ https://x.com/reach_vb/status/1938593855512715441

Day-zero support for Flux Kontext dev on Chipmunk! Great work @austinsilveria! https://x.com/realDanFu/status/1938300379613347942

Meta Poaches Three OpenAI Researchers
The social-media giant has hired Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai for its superintelligence effort

Meta poaches three OpenAI researchers, WSJ reports | Reuters https://www.reuters.com/business/meta-hires-three-openai-researchers-wsj-reports-2025-06-26/

Exclusive | Meta Hires Three OpenAI Researchers – WSJ https://www.wsj.com/tech/ai/meta-poaches-three-openai-researchers-eb55eea9

Google researchers speed up multi-vector AI search by 90%
Google Research developed MUVERA, a technique that makes AI search systems dramatically faster while maintaining accuracy. Advanced AI search models provide better results than basic ones but require significantly more computing power because they analyze text in smaller pieces and perform complex comparisons between search queries and documents. MUVERA solves this by converting the complex data into simplified formats that can be searched using existing fast algorithms. The system works by transforming complicated multi-part data into single, streamlined versions that preserve the essential similarity information, then uses standard search methods to find initial candidates before double-checking with the original approach. Testing showed MUVERA achieved 10% better results than existing methods while reducing search time by 90%, making advanced AI search practical for real-world applications like search engines and recommendation systems.
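The two-stage idea behind MUVERA can be sketched in miniature. This is not MUVERA’s actual fixed-dimensional encoding; as a deliberate simplification, mean pooling stands in for the single-vector compression, followed by exact multi-vector (Chamfer-style) re-scoring of only the top candidates:

```python
# Simplified two-stage multi-vector retrieval in the spirit of MUVERA.
# Stage 1 compresses each multi-vector document into one vector (here plain
# mean pooling, a crude stand-in for MUVERA's fixed-dimensional encoding)
# so a cheap single-vector pass can find candidates; stage 2 re-scores the
# shortlist with the exact, expensive multi-vector similarity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mean_pool(vectors):
    """Collapse a set of vectors into one (the single-vector proxy)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def chamfer(query_vecs, doc_vecs):
    """Exact multi-vector similarity: each query vector takes its best match."""
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

def search(query_vecs, docs, shortlist=2):
    # Stage 1: fast single-vector scoring against pooled documents.
    q = mean_pool(query_vecs)
    pooled = sorted(range(len(docs)),
                    key=lambda i: dot(q, mean_pool(docs[i])),
                    reverse=True)[:shortlist]
    # Stage 2: exact re-scoring of the shortlist only.
    return max(pooled, key=lambda i: chamfer(query_vecs, docs[i]))

docs = [
    [[1.0, 0.0], [0.9, 0.1]],   # doc 0: aligned with the x-axis
    [[0.0, 1.0], [0.1, 0.9]],   # doc 1: aligned with the y-axis
    [[0.7, 0.7], [0.6, 0.6]],   # doc 2: diagonal
]
print(search([[1.0, 0.0]], docs))  # → 0 (the x-axis document wins)
```

The speedup comes from the same place as in the real system: the expensive per-token comparison runs only on the few candidates the cheap pass surfaces, while the final ranking still uses the exact similarity.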

MUVERA: Making multi-vector retrieval as fast as single-vector search https://research.google/blog/muvera-making-multi-vector-retrieval-as-fast-as-single-vector-search/

Chinese military develops mosquito-sized surveillance drone
China’s National University of Defence Technology created a drone the size of a mosquito with flapping wings and hair-thin legs for covert military operations. The tiny device is designed for battlefield reconnaissance and surveillance missions where traditional drones would be too conspicuous or large to operate effectively. Students demonstrated the drone by holding it between their fingers, showing how its compact size allows it to potentially operate undetected in urban combat or sensitive environments.

Chinese military unveils mosquito-sized drones for battlefield missions https://interestingengineering.com/military/chinese-military-unveils-mosquito-sized-drones

ElevenLabs launches mobile app for AI voice generation
ElevenLabs released a mobile app that lets users create AI-generated voiceovers directly on their phones. The app uses the company’s Eleven v3 model to produce speech in 70 languages, with users able to control tone, pacing, and emotion through text prompts. Generated audio exports directly to popular video editing apps like CapCut, iMovie, and Instagram.

Introducing the ElevenLabs mobile app for iOS and Android. The most powerful AI voice tools, now in your pocket. Generate studio-quality voiceovers for your videos in seconds. Built for creators, educators, and professionals. https://x.com/elevenlabsio/status/1937541389140611367

Claude app now lets developers build and share AI-powered apps in the cloud
Anthropic added the ability to create, host, and share interactive AI applications directly within the Claude app. Developers can now build apps that use Claude’s API without handling infrastructure costs or API key management, as users authenticate with their existing Claude accounts and usage counts against their own subscriptions rather than the developer’s. The feature allows Claude to generate real code that developers can modify and share freely, with early users creating AI-powered games with adaptive NPCs, personalized learning tools, and data analysis apps where users can upload files and ask questions in natural language. This removes traditional barriers around scaling AI applications by eliminating hosting costs and technical complexity for developers.

Build and Host AI-Powered Apps with Claude – No Deployment Needed \ Anthropic https://www.anthropic.com/news/claude-powered-artifacts

Microsoft launches Mu language model to power Windows Settings agent
Microsoft released Mu, a 330-million-parameter AI model that runs entirely on Neural Processing Units in Copilot+ PCs to help users change system settings through natural language. The model uses an encoder-decoder design that processes requests 47% faster than similar-sized models and generates over 100 tokens per second, allowing it to respond to user queries like “increase brightness” in under 500 milliseconds. Mu achieves performance nearly comparable to Microsoft’s much larger Phi model despite being one-tenth the size through advanced training techniques and hardware optimization. The model powers a new Windows Settings agent that maps natural language requests to specific system functions, though it works best with multi-word queries rather than short or ambiguous requests.

Introducing Mu language model and how it enabled the agent in Windows Settings  | Windows Experience Blog https://blogs.windows.com/windowsexperience/2025/06/23/introducing-mu-language-model-and-how-it-enabled-the-agent-in-windows-settings/

Video worth watching: Google Veo
This video from Paige Bailey of Google DeepMind introduces their latest generative media models: Veo 3 for video and audio, Imagen 4 for images, and Lyria 2 for music. The presentation emphasizes how these tools can enhance creativity and communication.

Veo 3 for Developers – Paige Bailey – YouTube https://www.youtube.com/watch?v=hlcAZ2lX_ZI

Video worth watching: Windsurf
Windsurf, an AI-powered code editor, has grown to millions of users in the year since launching and now generates 90 million lines of code daily. The platform’s core innovation is a “shared timeline” between developers and AI that lets the system anticipate user needs and handle complex tasks like multi-file editing, background research, and terminal commands rather than just autocomplete. This is a good demo of the scope of their product.

Windsurf everywhere, doing everything, all at once – Kevin Hou, Windsurf – YouTube https://www.youtube.com/watch?v=JVuNPL5QO8Q&t=2s

8 AI Visuals and Charts: Week Ending June 27, 2025

Sam Altman on Stargate, Humanoid Robots and OpenAI’s Future | The Circuit with Emily Chang – YouTube https://www.youtube.com/watch?v=yTu0ak4GoyM

Sam Altman says he’s excited about a future where signing up for ChatGPT’s highest tier gets you a free humanoid robot. ⦿ The mechanical engineering and AI for humanoids are quite hard but feel within grasp. ⦿ Making a billion humanoid robots will take a while; maybe the first https://x.com/TheHumanoidHub/status/1936494676770803837

The new Hailuo 02 AI video model really does seem to have made huge strides in the “gymnastics problem,” where fast flipping motions lead to distortion. Here are the first three results of “a man in elaborate robes does a backflip while holding two pool noodles” (a hard test!) https://x.com/emollick/status/1936091679850705019

25MM views on @TEDTalks instagram 🤣 … i think the most ever? so crazy! 🙃 … this tech is comin’ folks. hello from production heaven in taipei! 👋😊 https://x.com/jasonRugolo/status/1875047066256515229

China outpacing US in energy production RT @AlecStapp: Increasingly think this might be the most important chart in the world right now https://x.com/zacharynado/status/1938772489951408369

good trajectory… https://x.com/demishassabis/status/1938671481027739652

Google domain traffic increasing steadily – https://x.com/Similarweb/status/1938166285113717216

And this chart? This “evolution”? Nah, bro. That’s a progress bar. That’s windows update, but for meat. https://x.com/DavidSHolz/status/1937574227785474326

CapsuleInterface: Full-Body Experience via Senses and Motion – YouTube https://www.youtube.com/watch?v=8a46Uap367k

Top 62 Links of The Week – Organized by Category

ARVR

Image Edit is heating up in the Arena – 3 new models have been added! ✨ Flux-Kontext-Max by BFL ✨ Bagel by ByteDance ✨ Step1X-edit by StepFun This brings the Image Edit Arena to a total of 7, with more coming! Upload an image and test them out, let’s see what you think of https://x.com/lmarena_ai/status/1936100445585539482

Hunyuan3D-2.1 passed my in-the-wild test 🤯 insanely good model! https://x.com/mervenoyann/status/1937161670444589215

3DGH: 3D Head Generation with Composable Hair and Face https://c-he.github.io/projects/3dgh/

AgentsCopilots

Lately I’ve been thinking about the surprisingly short lifespan of things that still feel permanent in our lives. Especially now, in the age of AI. Googling for answers: ~26 year run. Modern software as we know it: ~40 years. Writing code by hand: ~75 years. Manually driving cars: https://x.com/alexalbert__/status/1937526135442874651

Building Agents with Amazon Nova Act and MCP – Du’An Lightfoot, Amazon (Full Workshop) – YouTube https://www.youtube.com/watch?v=wFTVEDYVJT0

Today we’re introducing you to the future of video. The world’s first Creative Operating System, we call it the HeyGen Video Agent. Upload a doc, some footage, or even just a sentence. It analyzes your input. Finds the story. Writes the script with taste. Selects the shots https://x.com/joshua_xu_/status/1938252187941122091

Introducing Search Live with voice in AI Mode, which lets you have free-flowing conversations with Search on the go 🎙️ 🗣️ Talk with and listen to Search hands-free 🔊 Get AI-generated audio responses 🔗 Learn more with links https://x.com/Google/status/1935381117772681424

Search Live with voice is rolling out today in AI Mode! Now you can ask anything, have a back-and-forth conversation with Search and get an instant audio response – with on-screen links to explore more on the web. Just make sure you’re opted into the AI Mode Labs experiment in https://x.com/rajanpatel/status/1935484294182608954

You can now track a timeline of price movements on any ticker on Perplexity Finance https://x.com/AravSrinivas/status/1937223552283107389

Last Friday, Anthropic released a gem – How to build multi-agent research system. Over the weekends, our team had tried to re-create the same architecture, not exactly 1-1, but close to it. Here’s how it works and what we’ve learnt: https://x.com/FlowiseAI/status/1934641627496116524

As frontier AI systems increase in capability and agency, the risk of AI-driven cyberattacks will likely rise sharply. Tasks once done by elite hackers may soon be carried out autonomously, and this demands urgent attention. https://x.com/Yoshua_Bengio/status/1937206510708293902

ChatGPT drafted a California renter-law rebuttal, nullifying a $275 cleaning fee. Fee waiver granted within five minutes. Such cases are plenty. Personally, ChatGPT is my best lawyer over the last 1 year. https://x.com/rohanpaul_ai/status/1938107456242270595

✅ App submitted for lovable hackathon I built an AI powered tax assistant for myself. It calculates my quarterly taxes with one click and predicts what I owe in tax for end of the year, based on different models. Total amount of prompts: 60 Open AI cost: 0.03$ https://x.com/yannschaub/status/1934345892901114145

It’s Over… This new AI Agent called Emergent can now build full-stack products, websites, games and Chrome extensions (with 0 human intervention) 10 Examples (5th one is insane) https://x.com/samuraipreneur/status/1935699103436202063

The first AI agentic team just dropped. It builds your website, brand, and growth plan in minutes, and includes: – Fully managed hosting – Auto-SEO management – 24/7 on-demand AI – Stripe integration Step-by-step tutorial + how to try free👇: https://x.com/dr_cintas/status/1935764043673043257

For the first time in history, the #1 hacker in the US is an AI. (1/8) https://x.com/Xbow/status/1937512662859981116

For developers, the command line interface (CLI) isn’t just a tool; it’s home. 🛠️ Transform your terminal with Gemini CLI: a free, open-source AI agent to help you get so many things done – from writing and understanding code to debugging issues and generating new apps. → https://x.com/GoogleDeepMind/status/1938634447475081283

Create your own local and free AI agent using an open source model You can combine: – IBM’s Granite 3.3 8B AI model – LM Studio to run it on a laptop – Smolagents to build your agent Small AI models are now powerful enough to run an autonomous agent. Thanks to @IBM_France for https://x.com/itsPaulAi/status/1935729416560160929

Two new additions to the API: 📚 Deep research 🪝 Webhooks https://x.com/openaidevs/status/1938286704856863162?s=46&t=jDrfS5vZD4MFwckU5E8f5Q

Whatever product you build; intelligence needs to be a property of the system at its core, not sprinkled as parts. Only way to stay relevant and useful as AGI arrives. Browser satisfies this property. Hence why it’s on the critical path to feeling the AGI. https://x.com/AravSrinivas/status/1938116239576199365

Github 👨‍🔧: Repomix (formerly Repopack), tool that packs your entire repository into a single, AI-friendly file. Useful when you need to feed your codebase to LLMs or other AI tools like Claude, ChatGPT, Perplexity, Gemini, DeepSeek etc. – Provides token counts per file and https://x.com/rohanpaul_ai/status/1937832645590683933

Anthropic

Claude on vibe coding: https://x.com/nptacek/status/1937257873047769399

Final part: four steps you can try with Claude Code instead of switching to Opus and spending 4x as much. Works with Cursor and other agents. 1. Ask for a new markdown file covering all the judgement calls and decisions made so far, and outlining every false path taken (remember https://x.com/hrishioa/status/1937196708578148632

I am really speechless using Claude Code. Literally incredible. Not saying it because I joined Anthropic, but it was a reason to try it in my workflow. Everyone doing ML should try it! https://x.com/_arohan_/status/1938713180206965136

I’m one of the best and most experienced devops engineers in the world (was compiling kernels in 1992, and writing my own kernel modules in 1994), but Claude Code is better at everything. TPOT are making the mistake of thinking Claude Code is about code writing. It’s actually about https://x.com/mbusigin/status/1938624600138555745

“Claude was too nice to run a shop effectively: it allowed itself to be browbeaten into giving big discounts.” Had a good chuckle imagining all the Anthropic guys/gals prompting for discounts. https://x.com/scaling01/status/1938637706193416608

It turns out the main way Anthropic gave Claude its soul is data. It’s always data. https://x.com/nrehiew_/status/1937651376013606944

I can’t help but wonder if Claude had an ulterior motive in making this bonus change when debugging… https://x.com/NeelNanda5/status/1936220916926890343

Apple

Apple failed their car project and they’re failing their LLM project and their software updates suck and their hardware is stagnant (though it had reached near-perfection by 2021). What is going on with this company? https://x.com/teortaxesTex/status/1936945369645973907

Worldwide iPhone App Store downloads over the last 28 days: ChatGPT: 29,551,174 TikTok + Facebook + Instagram + X: 32,859,208 https://x.com/Similarweb/status/1937403925461610629

Apple sued by shareholders for allegedly overstating AI progress | Reuters https://www.reuters.com/sustainability/boards-policy-regulation/apple-sued-by-shareholders-over-ai-disclosures-2025-06-20/

Apple Debates a Deal With Perplexity in Pursuit of AI Talent – Bloomberg https://www.bloomberg.com/news/articles/2025-06-20/apple-executives-have-held-internal-talks-about-buying-ai-startup-perplexity?embedded-checkout=true

Windows build is also ready. And a few invites have been sent for early testers. Android build is also moving at a crazy pace and ahead of schedule. iOS updates soon. https://x.com/AravSrinivas/status/1936578563672817781

Audio

Ha. One weird trick – to save money on transcription with OpenAI, just speed up the audio clip. https://x.com/emollick/status/1937993179115950377
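The economics behind the trick: speech-to-text APIs typically bill per minute of audio, so playing a clip back at 2x before uploading roughly halves the bill (assuming the model still transcribes accurately at that speed). A back-of-the-envelope sketch; the per-minute rate below is an assumed figure, not a quoted price:

```python
# Back-of-the-envelope cost of the "speed up the audio" transcription trick.
# RATE_PER_MIN is an assumed per-minute price for illustration only.

RATE_PER_MIN = 0.006  # assumed $/minute of audio billed

def transcription_cost(audio_minutes: float, speedup: float = 1.0) -> float:
    """Billed cost after compressing the clip's duration by `speedup`."""
    return round(audio_minutes / speedup * RATE_PER_MIN, 4)

normal = transcription_cost(60)           # a one-hour recording
fast = transcription_cost(60, speedup=2)  # same recording played at 2x
print(normal, fast)  # → 0.36 0.18
```

The savings scale linearly with the speedup until accuracy degrades, which is why the tweet calls it a “weird trick” rather than a free lunch.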

AutonomousVehicles

Today Waymo covers about 2-3% of the US population. In a year it will be 15%, in 3 years over 50%. https://x.com/fchollet/status/1937498488352264666

BusinessAI

Pope Leo urges politicians to respond to challenges posed by AI | Reuters https://www.reuters.com/business/media-telecom/pope-leo-warns-politicians-challenges-posed-by-ai-2025-06-21/

Pearson and Google team up to bring AI learning tools to classrooms | Reuters https://www.reuters.com/business/retail-consumer/pearson-google-team-up-bring-ai-learning-tools-classrooms-2025-06-26/

Palantir partners to develop AI software for nuclear construction | Reuters https://www.reuters.com/business/energy/palantir-partners-develop-ai-software-nuclear-construction-2025-06-26/

As I wrote a couple years ago, systems assuming writing can act as a proof of effort, ability, or care are going to collapse and need to be reconstituted. Everything from letters of recommendation to expert reports to essays to performance reviews… https://x.com/emollick/status/1937562158281424943

Reddit considers iris-scanning Orb developed by a Sam Altman startup | Semafor https://www.semafor.com/article/06/20/2025/reddit-considers-iris-scanning-orb-developed-by-a-sam-altman-startup

IYO v. IO draft complaint (PDF) https://business.cch.com/ipld/IYOIOProdsComp20250609.pdf

We’re starting to get a clearer picture of the mission of Thinking Machines Lab, the AI startup founded by ex-OpenAI CTO Mira Murati. Investors who have spoken to her are describing it as “RL for businesses.” w/ @erinkwoo @rocketalignment: https://x.com/steph_palazzolo/status/1937284120062706004

UK government launches a £54 million fund (over five years) to attract “global talent.” £54 million. That’s half the signing bonus Meta is offering to get OpenAI talent. https://x.com/hkproj/status/1937002573241672151

How Ex-OpenAI CTO Murati’s Startup Plans to Compete With OpenAI and Others — The Information https://www.theinformation.com/articles/ex-openai-cto-muratis-startup-plans-compete-openai-others

Meta Held Deal Talks With Startup Runway in AI Recruiting Push – Bloomberg https://www.bloomberg.com/news/articles/2025-06-23/meta-held-deal-talks-with-startup-runway-in-ai-recruiting-push?embedded-checkout=true

EthicsLegalSecurity

Both true: (A) If you outsource homework to AI you will learn less & (B) If you use AI as a tutor as part of instruction, you can learn more. Whenever a paper showing (A) comes out, X talk is about AI destroying our brain. When a (B) paper, it is all about AI killing school. Sigh. https://x.com/emollick/status/1935535415018066079

This study is being massively misinterpreted. College students who wrote an essay with LLM help engaged less with the essay & thus were less engaged when (a total of 9 people) were asked to do similar work weeks later. LLMs do not rot your brain. Being lazy & not learning does. https://x.com/emollick/status/1935856579624288660

Introducing CC Signals: A New Social Contract for the Age of AI – Creative Commons https://creativecommons.org/2025/06/25/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai/

Damn. China saw the DARPA dragonfly, and raised the CCP mosquito. https://x.com/bilawalsidhu/status/1935832643263738230

Google

jason rugolo had been hoping we would invest in or acquire his company iyo and was quite persistent in his efforts. we passed and were clear along the way. now he is suing openai over the name. this is silly, disappointing and wrong. https://x.com/sama/status/1937606794362388674

ChatGPT connectors for Google Drive, Dropbox, SharePoint, and Box are now available to Pro users (excluding EEA, CH, UK) in ChatGPT outside of deep research. Perfect for bringing in your unique context for everyday work. https://x.com/OpenAI/status/1937681383448539167

Amazing to see the generality & dexterity of Gemini Robotics in a model small enough to run directly on a robot. Incredible speed & performance even in areas with low connectivity. Excited to continue this momentum to make robots more helpful & useful to people. https://x.com/demishassabis/status/1937526283161809056

Imagery

Higgsfield’s first high-aesthetic photo model Higgsfield Soul https://higgsfield.ai/soul

Microsoft AI

Chain of Thought was just the beginning. Next up: Chain of Debate. We’re going from a single model “thinking out loud” to multiple models discussing out loud. Debating, debugging, deliberating. AI becomes AIs. “Two heads are better than one” is true for LLMs too. https://x.com/mustafasuleyman/status/1937553061427445824
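Suleyman’s tweet is a slogan, but the loop it gestures at is easy to sketch. Here is a minimal, hedged Python illustration in which two plain functions stand in for two separate LLMs; the agent roles, round count, and message formats are all my own assumptions, not anything described in the tweet:

```python
# Toy "chain of debate" loop: one agent proposes, a second critiques,
# and the exchange repeats. Plain functions stand in for LLM calls.

def agent_a(question, transcript):
    # Stand-in for model A: proposes an answer, then refines it.
    if not transcript:
        return f"A's initial answer to: {question}"
    return f"A refines after hearing: {transcript[-1]}"

def agent_b(question, transcript):
    # Stand-in for model B: critiques the latest entry.
    return f"B critiques: {transcript[-1]}"

def debate(question, rounds=2):
    """Alternate proposal and critique, collecting the full transcript."""
    transcript = []
    for _ in range(rounds):
        transcript.append(agent_a(question, transcript))
        transcript.append(agent_b(question, transcript))
    return transcript

transcript = debate("Is P equal to NP?", rounds=2)
```

In a real system each function would call a different model API and the final transcript would be summarized into one answer; this sketch only shows the alternating structure.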

Multimodality

Day 2/5 of #MiniMaxWeek: Introducing Hailuo 02, World-Class Quality, Record-Breaking Cost Efficiency 🎥 – Best-in-class instruction following – Handles extreme physics (yes, it does acrobatics 🤹) – Native 1080p https://x.com/MiniMax__AI/status/1935026724468871550

OpenAI

Sam Altman says the timeframe for GPT-5 release isn’t clear yet — maybe sometime this summer. OpenAI is internally debating whether to go with GPT-5, then 5.1, 5.2, or keep doing updates like GPT-4o to avoid confusion: “we’ve gotta figure something out here.” https://x.com/slow_developer/status/1935454978564366350

Robotics

Apptronik has launched Elevate Robotics, a new wholly owned subsidiary focused on industrial automation “outside of the humanoid form factor.” While Apptronik continues work on the Apollo humanoid, Elevate will tackle heavy-duty industrial tasks. https://x.com/TheHumanoidHub/status/1937932189783720381

New research from Cambridge University introduces an innovative electronic skin for robots, crafted from a single, durable, and highly sensitive hydrogel material. Unlike traditional multi-sensor systems, this e-skin can detect various types of touch—from pressure to… https://x.com/humanoidsdaily/status/1935271005003337981

Tech Papers

📃 The rise of context engineering. “Context engineering” has been an increasingly popular term used to describe a lot of the system building that AI engineers do. But what is it exactly? The definition I like: “Context engineering is building dynamic systems to provide the…” https://x.com/hwchase17/status/1937194145074020798

Feature engineering → Deep learning. Context engineering → ?? https://x.com/awnihannun/status/1938365325676057014

Plus one for “context engineering” over “prompt engineering.” People associate prompts with short task descriptions you’d give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window. https://x.com/karpathy/status/1937902205765607626
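The definition quoted in these threads (building dynamic systems that fill the context window with the right information) can be sketched in a few lines. This is an illustrative toy, assuming a naive keyword retriever, a character-based budget, and a prompt layout that are my own inventions rather than anything from the quoted tweets:

```python
# Toy "context engineering": select only the documents relevant to the
# question, trim them to fit a simplified context budget, and assemble
# the prompt dynamically per request.

def build_context(question, documents, max_chars=500):
    """Pick documents sharing a keyword with the question, within budget."""
    keywords = {w.strip("?.,!").lower() for w in question.split() if len(w) > 3}
    relevant = [d for d in documents if any(k in d.lower() for k in keywords)]
    picked, used = [], 0
    for doc in relevant:
        if used + len(doc) > max_chars:
            break  # stop once the (character-based) budget is exhausted
        picked.append(doc)
        used += len(doc)
    return "\n".join(picked)

docs = [
    "Paris is the capital of France.",
    "The mitochondria is the powerhouse of the cell.",
]
question = "What is the capital of France?"
prompt = f"Context:\n{build_context(question, docs)}\n\nQ: {question}"
```

Production systems replace the keyword match with embedding search and the character count with token counting, but the shape is the same: the context is computed per query rather than hard-coded into a prompt.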

Twitter / X / Grok

In case anyone was wondering, asking Grok on X to summarize a website or blog post results in pretty egregious hallucinations. The stuff highlighted in yellow is either not in my post at all or barely mentioned in passing, while the second half of the post isn’t summarized. https://x.com/emollick/status/1937481374715519196
