Image created with OpenAI GPT-Image-1. Image prompt: rich crimson, bright ivory, deep navy Independence-Day palette, vibrant, celebratory, wholesome, authentic, photorealistic mountain hike reaching peak with flag scene featuring a camcorder on tripod filming fireworks in the valley; natural lighting, subtle film grain, high detail
The race for LLM “cognitive core” – a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.
Its features are slowly crystallizing:
– Natively multimodal text/vision/audio at both input and output.
– Matryoshka-style architecture allowing a dial of capability up and down at test time.
– Reasoning, also with a dial (System 2).
– Aggressively tool-using.
– On-device finetuning LoRA slots for test-time training, personalization and customization.
– Delegates and double checks just the right parts with the oracles in the cloud if internet is available.
It doesn’t know that William the Conqueror’s reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can’t recite the SHA-256 of the empty string as e3b0c442…, but it can calculate it quickly should you really want it.
What LLM personal computing lacks in broad world knowledge and top-tier problem-solving capability it will make up in super low interaction latency (especially as multimodal matures), direct/private access to data and state, offline continuity, and sovereignty (“not your weights, not your brain”), i.e. many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so. https://x.com/karpathy/status/1938626382248149433
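The hash example above is easy to make concrete: a small model has no need to memorize the digest, because one standard-library call recomputes it on demand. A minimal Python sketch:

```python
import hashlib

# The SHA-256 of the empty string -- the exact constant the model
# need not memorize, since one stdlib call derives it instantly.
empty_digest = hashlib.sha256(b"").hexdigest()
print(empty_digest)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

This is the tool-use trade in miniature: swap stored trivia for a cheap, exact computation at inference time.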
Neuralink has now implanted chips in 7 individuals. The implantation intervals drop sharply: from 6 months to just a week : r/singularity https://www.reddit.com/r/singularity/comments/1lm2vnv/neuralink_now_implanted_chips_on_7_individuals/
Woah! Two people playing Call of Duty using their minds. Neuralink is getting good. https://x.com/bilawalsidhu/status/1938662975226654933
GaVS: 3D-Grounded Video Stabilization via Temporally-Consistent Local Reconstruction and Rendering TL;DR: Video stabilization task with feed-forward 3DGS reconstruction, ensuring robustness to diverse motions, full-frame rendering and high geometry consistency. https://x.com/Almorgand/status/1940449877001183717
A fun little automation/MCP one-two punch I’ve rigged up. Turn YouTube videos into blog posts and publish them to your custom-built site. No CMS, no context switching to other apps, all in Claude. All written in your voice. Here’s a step-by-step walkthrough on how to do https://x.com/per_simmons_/status/1933552285696610383
Runway now has its sights on the video game industry with its new generative AI platform https://www.engadget.com/ai/runway-now-has-its-sights-on-the-video-game-industry-with-its-new-generative-ai-platform-192350294.html
how we accidentally solved robotics by watching 1 million hours of YouTube | atharva’s blog https://ksagar.bearblog.dev/vjepa/
🚀New from Meta FAIR: today we’re introducing Seamless Interaction, a research project dedicated to modeling interpersonal dynamics. The project features a family of audiovisual behavioral models, developed in collaboration with Meta’s Codec Avatars lab + Core AI lab, that https://x.com/AIatMeta/status/1938641490512851290
🚨 NEW LABS EXPERIMENT 🚨 Introducing Doppl, a new mobile app that lets you upload a photo or screenshot of an outfit and then creates a video of you wearing the clothes to help you find your ✨aesthetic ✨ Available on iOS and Android in the US to users 18+, download the https://x.com/GoogleLabs/status/1938284886277951916
Try on looks and discover your style with Doppl
https://blog.google/technology/google-labs/doppl/
Exciting update: our state-of-the-art video generation model Veo 3 is now shipping globally for all @GeminiApp Pro users, instructions on how to access in the thread below. https://x.com/demishassabis/status/1940616072304251152
🎬 VEO 3 videos are getting TENS OF MILLIONS of views. But paying Google $250/month for access? Nah. I just built a complete automation system using n8n. Here’s how to build your own video factory 🧵 https://x.com/xzinft/status/1932442248412569995
Introducing Higgsfield Soul Inpaint.
Same Soul-style high aesthetic, now with pixel-perfect control.
Inpaint anything you want: clothes, hair, objects, and keep the Soul. https://x.com/higgsfield_ai/status/1940835284104761454
Blockbench MCP is here! 🔥 Let’s create a sci-fi sniper with animations in under 3 minutes using AI. Get ready for HYTOPIA -the future is now. https://x.com/PhaxyHytopian/status/1936293530101575756
Okay it works! 🤯 I built an MCP server for Premiere Pro. It helps you edit videos using just prompts! Here’s a demo of me editing a TikTok video in just a few sentences 👇. https://x.com/itstundealao/status/1940098517394932109
I built a YouTube strategist AI system using no code in n8n. It finds trending patterns, analyzes my comments, and tells me what videos to make next. Full breakdown + free template: https://x.com/nateherk/status/1937510643227328549
SnapMoGen: Human Motion Generation from Expressive Texts https://snap-research.github.io/SnapMoGen/
now wouldn’t that be something… https://x.com/demishassabis/status/1940248521111961988
Neuralink + Optimus ⦿ Alex, a patient who lost the ability to write or draw after a spinal cord injury, was able to write again using Neuralink and a robotic arm. ⦿ The team also successfully decoded Alex’s brain signals to control each finger in real time, allowing him to https://x.com/TheHumanoidHub/status/1939194406562881960
RT @joshwoodward: The wait is over. @GeminiApp is now shipping Veo 3 *globally* for all Pro members! That means India, Indonesia, all of E… https://x.com/GoogleDeepMind/status/1940702321287299541
Veo 3 coming shortly to Max users https://x.com/AravSrinivas/status/1940507473095623068
Nano is a depth-aware atmospheric haze plugin that uses ML depth estimation to add physically accurate fog and light scattering to your footage. Works *best* on log footage with visible light sources – it analyzes scene highlights then creates airlight (atmospheric scatter) and https://x.com/bilawalsidhu/status/1938421841753772434
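The airlight idea described above follows the standard atmospheric scattering (Koschmieder) model: transmittance decays exponentially with depth, and scattered light fills in as it falls. A minimal NumPy sketch, where `beta` and `airlight` are hypothetical parameters (the plugin’s actual implementation is not public):

```python
import numpy as np

def apply_haze(image, depth, beta=0.8, airlight=0.9):
    """Composite depth-dependent haze onto an image.

    image:    H x W x 3 float array in [0, 1]
    depth:    H x W float array (larger = farther away)
    beta:     scattering coefficient -- denser haze as it grows
    airlight: color/intensity of the scattered atmospheric light
    """
    # Koschmieder model: transmittance t = exp(-beta * depth)
    t = np.exp(-beta * depth)[..., None]
    # Near pixels (t ~ 1) keep their color; far pixels fade to airlight.
    return image * t + airlight * (1.0 - t)
```

Pixels at zero depth pass through unchanged, while distant pixels converge to the airlight value, which is why the effect reads as physically plausible fog rather than a flat overlay.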
Strange cities. (I find working with Midjourney video to be really interesting, the ability to develop weird styles especially) https://x.com/emollick/status/1940163669276729380
FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution TL;DR: Depth estimation on streaming video (2044×1148) at 24 FPS; with careful modifications of pretrained single-image depth models, these capabilities are enabled with relatively little data and training. https://x.com/Almorgand/status/1939724839004037617
SynMotion https://lucaria-academy.github.io/SynMotion/
AI generations are getting insanely realistic : r/singularity https://www.reddit.com/r/singularity/comments/1ll5k3d/ai_generations_are_getting_insanely_realistic/