Header image: a 16:9 cinematic split-screen poster (generated with gemini-2.5-flash-image; prompt drafted with claude-sonnet-4-5) — a video-production tabletop with storyboard frames against a turquoise data-fiber field on the left, and a forest-green panel labeled "Video" / "2025/10/10" on the right.
The nerfing of Sora was inevitable; IP holders just needed time to respond. But now the job is done: Sora is #1 on the iOS App Store, flying past Gemini. On the plus side, AI cameos with your social graph are still a lot of fun. https://x.com/bilawalsidhu/status/1974459828308467866
3 quick thoughts on the Sora app: 1. OpenAI was smart to feature their team in the Sora launch video memes. High-fidelity output + showing that you buy what you sell + having a sense of humor deflects some slop-dealer hate. 2. The "Cameo" feature does what the Ghibli template did: https://x.com/anuatluru/status/1973125101047451830
Huh, Sora 2 knows a lot of things: "Ethan Mollick parachuting into a volcano, explaining the three forms of legitimation from DiMaggio, Paul; Powell, Walter (April 1983), 'The iron cage revisited: institutional isomorphism and collective rationality in organizational fields'" https://x.com/emollick/status/1974203274342641880
Sora 2 Pro model reference – OpenAI API https://platform.openai.com/docs/models/sora-2-pro
New cameo controls are rolling out in today’s Sora update: the restrictions section gives you agency in blocking generations that you don’t want 🍅 https://x.com/turtlesoupy/status/1974969525566415295
openai-cookbook/examples/sora/sora2_prompting_guide.ipynb at 16686d05abf16db88aef8815ebde5c46c9a1282a · openai/openai-cookbook https://github.com/openai/openai-cookbook/blob/16686d05abf16db88aef8815ebde5c46c9a1282a/examples/sora/sora2_prompting_guide.ipynb#L7
People have been asking me “Jake why are there all these AI videos of you on the internet?!” There’s a method to the madness… I’m a proud OpenAI investor @antifundvc @geoffreywoo and have been advising Sora team @markchen90 @billpeeb this year and agreed to become the first https://x.com/jakepaul/status/1976411343025487977
Sam presenting the grand plan for Sora 2 https://x.com/bilawalsidhu/status/1974547020640919952
Sora 2 cameos feel like worldcoin lite https://x.com/bilawalsidhu/status/1974172685350678962
Sora 2 in the API. Used by Mattel for instant sketch to toy concept. https://x.com/gdb/status/1975262920931217552
Sora 2 is incredibly impressive as a video generator but pushed into a narrow niche: 1) Optimized for viral short form video, both in UX & output 2) Built to be one-and-done, when most video gen is selecting among variants 3) Makes fun stuff the first time, at the cost of control https://x.com/emollick/status/1973939293803733074
sora hit 1M app downloads in <5 days, even faster than chatgpt did (despite the invite flow and only targeting north america)! team working hard to keep up with surging growth. more features and fixes to overmoderation on the way! https://x.com/billpeeb/status/1976099194407616641
Sora hit 1M downloads faster than ChatGPT | TechCrunch https://techcrunch.com/2025/10/09/sora-hit-1m-downloads-faster-than-chatgpt/
Sora update #1 – Sam Altman https://blog.samaltman.com/sora-update-number-1
Video generation with Sora – OpenAI API https://platform.openai.com/docs/guides/video-generation
I think people are still unprepared for a world where you cannot trust any video content, despite years of warning. Even when Google & OpenAI include watermarks, those can be easily removed, and open weights AI video models without guardrails are coming.. https://x.com/emollick/status/1976004133296685165
This seems like a pretty big finding: If you train an AI model on enough video, it seems to gain the ability to reason about images in ways it was never trained to do, including solving mazes & puzzles. The bigger the model, the better it does at these out-of-distribution tasks. https://x.com/emollick/status/1974096724445503827
The challenge: create the most over-the-top Hallmark movie clip that can fit into 10 seconds. I managed to cram in a humble neighborhood baker, a prince, a cruel rival princess, and the holiday season in this one. https://x.com/emollick/status/1974589833520771146
🚨 Passive video is dead. Welcome to real-time AI video. With Synthesia 3.0, your videos don’t just play — they engage, respond, and act. Add your content → and create fully interactive AI-powered experiences in minutes. ✅ Video Agents ✅ Realistic Avatars ✅ Express Voice https://x.com/lax97981/status/1974742019420696588
Launch your ADK project in minutes! 🚀 In the latest Compose for Agents video, Reynald Adolphe shows how to set up a multi-agent fact checker with Docker Compose, whether local or in the cloud. 👉 Watch: https://x.com/Docker/status/1973810365835231462
I made a Sora MCP 🎬 The server can generate, remix, check video status, and even download videos to a folder of your choice Repo on GitHub 👇 https://x.com/skirano/status/1975972309291946392
[2509.17803] Effect of Appearance and Animation Realism on the Perception of Emotionally Expressive Virtual Humans https://arxiv.org/abs/2509.17803
WAN 2.2 Animate does some cool stuff with lighting and flame behavior🔥 Check out the workflow and tutorial below 👇 https://x.com/heyglif/status/1976259706214592747
felixtaubner/cap4d: Official repository for the paper "CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models" https://github.com/felixtaubner/cap4d
[2509.17748] "I don't like my avatar": Investigating Human Digital Doubles https://arxiv.org/abs/2509.17748
HuMo https://phantom-video.github.io/HuMo/
Self-Forcing++ for minute-scale video generation ByteDance’s new method generates high-quality videos up to 4 min 15 sec! It scales diffusion models without long-video teachers or retraining, preserving fidelity and consistency. https://x.com/HuggingPapers/status/1974688371340648857
OpenAI Dev Day – API updates: • GPT-5 Pro now in API • gpt-realtime-mini, lightweight voice model for live use • Sora 2 preview API now available https://x.com/SRKDAN/status/1975260951776530679
OpenAI is betting that your real creative copilot is actually your social graph. https://x.com/bilawalsidhu/status/1973879654105952429
A busy week for OpenAI’s social video machine. | The Verge https://www.theverge.com/news/795908/a-busy-week-for-openais-social-video-machine
Meta, Google, and TikTok all had the pieces. OpenAI just beat them to the punch. OpenAI launched Sora 2, an AI-native TikTok competitor that’s already #1 on the App Store and triggered $20B in social media stock losses. Here’s what makes it addictive (and dangerous): ✅ https://x.com/bilawalsidhu/status/1975754926040256875
Scrolling the Sora 2 video stream just reinforces this truth from AI image generators – when given a tool that can make anything, people basically just make videos of cats, celebrities & anime characters (also Sam Altman). A feed that highlighted creativity might look different. https://x.com/emollick/status/1973986569595089346
Sora 2 model reference – OpenAI API https://platform.openai.com/docs/models/sora-2
Sora 2 and 2 Pro are now available in the API: Sora 2 Pricing: $0.10/s @ 720p Sora 2 Pro Pricing: $0.30/s @ 720p $0.50/s @ 1024p https://x.com/scaling01/status/1975260371226362292
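The per-second rates above translate directly into per-clip costs. As a minimal sketch (rates taken from the tweet; the lookup table and function name are illustrative, not part of any official SDK):

```python
# Back-of-the-envelope cost helper for the per-second Sora API rates
# quoted above. Rates come from the tweet; keys and names are illustrative.
SORA_RATES = {
    ("sora-2", "720p"): 0.10,
    ("sora-2-pro", "720p"): 0.30,
    ("sora-2-pro", "1024p"): 0.50,
}

def estimate_cost(model: str, resolution: str, seconds: float) -> float:
    """Estimated USD cost for a single generation at the quoted rates."""
    return SORA_RATES[(model, resolution)] * seconds

# A 10-second 720p clip on Sora 2 Pro: 10 * $0.30 = $3.00
print(f"${estimate_cost('sora-2-pro', '720p', 10):.2f}")
```

At these rates a minute of 1024p Sora 2 Pro output runs about $30, which is why most API workflows keep clips short.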
Sora update #1: https://x.com/sama/status/1974272833875329113
🎬 New video models in the Arena! ☁️ Sora 2 and Sora 2 Pro by @OpenAI are now available in the Video Arena. Learn more about how to access these and all the best AI video models in thread 🧵 https://x.com/arena/status/1975618056106995944
For a limited time, you can use Sora 2 text-to-video generation for free on Hugging Face https://x.com/_akhaliq/status/1976096764781646028
New addiction: opening a long YouTube video (podcast, interview) in Comet, not listening to it linearly, banging question after question into Comet Assistant (Option + A), and only listening to the parts I really want to hear (which Comet can link to the exact timestamp). E.g.: https://x.com/AravSrinivas/status/1975056122433446265
Alibaba has released Qwen3 Omni and Qwen3 Omni Realtime – two natively end-to-end "omni"-modal models that process text, images, audio, and video in a single unified architecture. Artificial Analysis benchmarking shows competitive Speech to Speech performance, as well as https://x.com/ArtificialAnlys/status/1975904190061834602
Most popular local models in Cline are qwen3-coder & GLM-4.5-Air (guide on how to use them is linked below) https://x.com/cline/status/1976101061753700400
Qwen3-VL secured 2nd place in the vision leaderboard and became the first open-source model to rank first in both the pure text and visual leaderboards. https://x.com/Alibaba_Qwen/status/1975360868092420345
More generally: if all of your experiments are "RL on math with Qwen", I'm not interested in any outlandish claims you want to make. Qwen's base models have been (appropriately) aggressively mid-trained for math for a long time. Stop drawing conclusions purely from this. https://x.com/lateinteraction/status/1976761442842849598
Qwen3-30B-A3B-Instruct-2507-4bit generation on MLX: 473 tokens per sec on M3 Ultra! 🚀 https://x.com/ivanfioravanti/status/1976153645658898453
Thank you @ArtificialAnlys ! 🙏 Qwen Image Edit 2509 ranks #3 overall and leads all open-weight models — enabling multi-image editing with precise control. Try it now: https://x.com/Alibaba_Qwen/status/1976119224339955803
Intelligence performance: The Qwen3 Omni 30B reasoning variant achieves an Artificial Analysis Intelligence Index score of 40, surpassing similarly-sized models like Qwen3 30B, but still trailing Alibaba’s flagship LLM, Qwen3 235B 2507, which scored 57. The Qwen3 Omni 30B https://x.com/ArtificialAnlys/status/1975904195426537596
Z.ai's updated GLM 4.6 (Reasoning) is one of the most intelligent open weights models, with near DeepSeek V3.1 (Reasoning) and Qwen3 235B 2507 (Reasoning) level intelligence 🧠 Key intelligence benchmarking takeaways: ➤ Reasoning Model Performance: GLM 4.6 (Reasoning) scores 56 https://x.com/ArtificialAnlys/status/1975425594679496979
HF demo: https://x.com/Alibaba_Qwen/status/1974290412602040532
📢 VideoRAG: Redefining Long-Context Video Comprehension. In this week's deep dive, we explore another interesting approach for performing RAG on videos: VideoRAG, a groundbreaking framework that brings RAG to the world of extremely long videos. Unlike traditional LVLMs that https://x.com/LearnOpenCV/status/1975593558523715921
Flexible generation times for Runway video models are now available via the Runway API. Choose any duration from 2-10 seconds using Gen-4 Turbo. Pay only for what you generate. https://x.com/RunwayMLDevs/status/1975999491049463972
sora update: cameo and safety improvements inbound! 1. cameo restrictions: we've heard from lots of folks who want to make their cameos available to everyone but retain control over how they're used. starting today, you can now give instructions to sora that restrict the type https://x.com/billpeeb/status/1974969638300901817
New benchmark? https://x.com/emollick/status/1974547693184995429
today, we're launching Mosaic, the agentic video editor. in a world tending towards AI slop, create something real. no waitlist — public beta is now live at mosaic [dot] so. comment "MOSAIC" to get 1,000 free credits dropped into your account. this release comes with 7 key https://x.com/_adishj/status/1973432845436854418