Exclusive interview: Inside Meta’s AI glasses master plan https://www.therundown.ai/p/inside-metas-ai-glasses-master-plan
Happy to see a failed live demo 100/100 times rather than a BS scripted demo. Making new technology is hard. Having to demo it live takes balls. Big props to Meta for giving it a shot 👏 https://x.com/mrdbourke/status/1968506328613347797
Meta AI’s live demo failed for the entire minute 😢 https://x.com/nearcyan/status/1968468841786126476
Surface EMG is magical. Meta managed to squeeze this tech into a svelte wristband to pair with their Ray-Ban AR glasses. Here’s how it works. Imagine walking around your home controlling any display / device with Vision Pro-like touch gestures. https://x.com/bilawalsidhu/status/1967805086379413821
The Meta Raybans thing is very cool regardless of live demo failures https://x.com/aidangomez/status/1968609969848164641
The new Meta Rayban glasses w/ an in-lens display are cool, but the EMG wristband is INSANE. Signals through the wrist are so clear that surface EMG can understand finger motion of a millimeter. Literal sci-fi gesture input is here. https://x.com/bilawalsidhu/status/1967690327751528451
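Meta hasn’t published the neural band’s decoder, but the classic surface-EMG recipe it presumably builds on is simple in outline: window the multichannel wrist signal, extract amplitude features, and classify. A toy sketch of that recipe (channel count, window size, labels, and data are all made up for illustration):

```python
# Toy surface-EMG gesture classifier: NOT Meta's pipeline, just the
# textbook window -> feature -> classify approach for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

RNG = np.random.default_rng(0)
N_CHANNELS, WIN = 16, 200  # assumed: 16 electrodes, 200-sample windows

def rms_features(windows: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per channel, a standard sEMG feature."""
    return np.sqrt((windows ** 2).mean(axis=-1))  # (n_windows, n_channels)

def fake_windows(n: int, active: list[int]) -> np.ndarray:
    """Synthetic windows with stronger 'muscle activity' on some channels."""
    x = RNG.normal(0.0, 1.0, size=(n, N_CHANNELS, WIN))
    x[:, active, :] *= 3.0
    return x

# Two made-up gestures that activate different electrode groups.
X = np.concatenate([fake_windows(200, [2, 3]), fake_windows(200, [9, 10])])
y = np.array([0] * 200 + [1] * 200)  # 0 = pinch, 1 = swipe (hypothetical)

clf = LogisticRegression(max_iter=1000).fit(rms_features(X), y)
print("train accuracy:", clf.score(rms_features(X), y))
```

The real decoder is surely a learned model over raw waveforms at much higher sample rates; the point is only that per-channel amplitude already separates gestures cleanly, which is why millimeter-scale finger motion is readable at the wrist.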
This is an insanely large world created using our 3D world generation model. It blew my mind! https://x.com/drfeifei/status/1968027077820682598
This is a really cool glimpse into the future — reskinning reality in 3D w/ Nano Banana (to restyle the living room), World Labs (img to 3D), and VPS (to anchor it to the real world) https://x.com/bilawalsidhu/status/1968376656579674126
Abyss meets t-1000 but make it 🤪 https://x.com/bilawalsidhu/status/1966876256906994011
Introducing SpatialVID: A massive new video dataset for 3D spatial intelligence Crucial for training next-gen models, it features over 7,000 hours of diverse, in-the-wild video with dense annotations like camera poses, depth maps, and dynamic masks. https://x.com/HuggingPapers/status/1967260292569845885
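The per-frame annotations listed (camera poses, depth maps, dynamic masks) compose naturally: depth plus intrinsics lift pixels into 3D, the pose places them in the world, and the dynamic mask says which pixels to trust as static geometry. A hypothetical sketch of that composition (SpatialVID’s actual file layout and field names may differ):

```python
# Hypothetical container for one SpatialVID-style annotated frame; the
# dataset's real schema may differ. This only shows how the annotation
# types named in the announcement fit together.
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameAnnotation:
    pose_w2c: np.ndarray      # (4, 4) world-to-camera extrinsics
    intrinsics: np.ndarray    # (3, 3) pinhole K
    depth: np.ndarray         # (H, W) depth map
    dynamic_mask: np.ndarray  # (H, W) bool, True where content is moving

def backproject_static(frame: FrameAnnotation) -> np.ndarray:
    """Lift static pixels to 3D camera-space points via depth and K."""
    h, w = frame.depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(frame.intrinsics) @ np.stack(
        [u.ravel(), v.ravel(), np.ones(h * w)]
    )
    pts = (rays * frame.depth.ravel()).T     # (N, 3) camera-space points
    return pts[~frame.dynamic_mask.ravel()]  # keep static geometry only

# Smoke test on dummy data.
K = np.array([[100.0, 0, 2], [0, 100.0, 2], [0, 0, 1]])
frame = FrameAnnotation(np.eye(4), K, np.ones((4, 4)), np.zeros((4, 4), bool))
print(backproject_static(frame).shape)  # (16, 3)
```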
“How a rock learns to think.” These STEM reels are amazing! Is this the antidote to short form brainrot? https://x.com/bilawalsidhu/status/1966073881103606133
Is there any doubt Apple will make the best augmented reality glasses? Google/Samsung or Meta/Luxottica could be a close second w/ their combined software + hardware prowess. But Cupertino is the king of atoms as the iPhone Air illustrates. And the strongest brand by far. https://x.com/bilawalsidhu/status/1966763356699980218
Introducing: Hyperscape Capture 📷 Last year we showed the world’s highest quality Gaussian Splatting, and the first time GS was viewable in VR. Now, capture your own Hyperscapes, directly from your Quest headset in only 5 minutes of walking around. https://x.com/JonathonLuiten/status/1968474776793403734
Now this is what you call large-scale 3D generation. You can use World Labs’ Marble to generate 3D scenes and stitch them together to create expansive 3D environments (a toy sketch of that stitching idea follows after the World Labs links below). Makes me wanna take a stroll through this world in a VR headset. https://x.com/bilawalsidhu/status/1968027838982001092
Building the spatial OS for the real world. https://x.com/bilawalsidhu/status/1967666838957068448
create… explore… repeat https://x.com/theworldlabs/status/1968023354918736350
Generating Bigger and Better Worlds https://www.worldlabs.ai/blog/bigger-better-worlds
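The stitching trick referenced a few links up reduces, at its core, to placing independently generated chunks into a shared world frame with rigid SE(3) transforms. A minimal sketch under that assumption, with random point clouds standing in for whatever representation (e.g. Gaussian splats) the model actually emits:

```python
# Minimal sketch of "stitching" generated scene chunks into one world
# frame. World Labs' actual pipeline is not public; these point clouds
# and poses are stand-ins for illustration.
import numpy as np

def se3(yaw_deg: float, t: np.ndarray) -> np.ndarray:
    """4x4 rigid transform: rotation about +Y (up) plus translation."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    T[:3, 3] = t
    return T

def place(chunk_pts: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a world-from-chunk transform to (N, 3) points."""
    homo = np.c_[chunk_pts, np.ones(len(chunk_pts))]
    return (homo @ T.T)[:, :3]

chunks = [np.random.rand(1000, 3) for _ in range(3)]  # stand-in scenes
layout = [se3(0, np.array([0, 0, 0])),                # origin chunk
          se3(90, np.array([10, 0, 0])),              # rotated, offset east
          se3(180, np.array([10, 0, 10]))]            # far corner
world = np.concatenate([place(c, T) for c, T in zip(chunks, layout)])
print(world.shape)  # one merged environment: (3000, 3)
```

The hard parts the sketch skips are choosing the layout so chunk boundaries line up and blending appearance across seams; the transform bookkeeping itself is this simple.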
tbh if i find it’s easy to integrate this with my own software i will buy it instantly https://x.com/nearcyan/status/1968538685147889765
We’re thrilled to launch our new Hunyuan3D 3.0! It features 3x higher precision, 1536³ geometric resolution, and 3.6B voxel ultra-HD modeling for stunning detail.🔥🔥🔥 🌟Highlights: ✅Creates faces with lifelike facial contours and natural poses, creating truly realistic… https://x.com/TencentHunyuan/status/1967873084960260470
what do you guys think ppl will do with this? https://x.com/nearcyan/status/1968502999854235864
Meta scrapping Unity to build their own game engine (Horizon Engine) is really interesting. I doubt it has much to do with the Unity tax; more likely it’s to let them vertically integrate with all their own layers of ~SOTA AI, starting with gaussian splatting https://x.com/nearcyan/status/1968475789021852075
i found a ‘real’ recording (rare because difficult to capture with a camera). one thing i underestimated was that you can do the gestures behind your back, under your covers laying in bed, etc (as this is rarely done in a demo). very cool https://x.com/nearcyan/status/1968581348706189726
the bracelet is ON lets go https://x.com/nearcyan/status/1968467271694549111
feeling really bad for the Meta OS team https://x.com/nearcyan/status/1968473003592990847
wow, a live demo of silently writing a message with Meta neural band on the Meta Ray-Ban Display, pretty cool https://x.com/iScienceLuvr/status/1968471538350583993
NavFoM: Embodied Navigation Foundation Model • Trained on 8M samples across quadrupeds, drones, wheeled robots & vehicles • Handles VLN, ObjNav, tracking, & driving in one unified model • Outperforms on each domain + real-world deployment https://x.com/arankomatsuzaki/status/1967806725387588069
We can generate some pretty big worlds now – all in persistent 3D that you can explore for as long as you want https://x.com/jcjohnss/status/1968043646923768307
As an 11-year-old learning After Effects I could’ve never imagined how many ways we have to edit and remix reality on demand. Now Reve enters the fold. Increasingly, the only limit we have is our imagination to put these primitives to work. https://x.com/bilawalsidhu/status/1967866994965069912
PractiLight: Practical Light Control Using Foundational Diffusion Models. TL;DR: large diffusion models understand light transport well; no need to finetune for plausible relighting; carefully choose the layers and timesteps at which to add guidance. https://x.com/Almorgand/status/1966445130736443619
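Taking that TL;DR at face value (frozen model, guidance injected only at chosen timesteps), the sampling loop might look roughly like the toy below. To be clear, this is not PractiLight’s code: the denoiser, loss, and update rule are stand-ins showing where timestep-selective guidance slots in, and the paper’s layer-level targeting is omitted here.

```python
# Toy timestep-selective guidance loop. The denoiser, loss, and update
# rule are made-up stand-ins, not PractiLight's actual method.
import torch

def relight_loss(x0_pred: torch.Tensor, target_shading: torch.Tensor) -> torch.Tensor:
    # Assumption: compare predicted-image brightness to a target shading map.
    return ((x0_pred.mean(dim=1) - target_shading) ** 2).mean()

@torch.no_grad()
def sample(denoiser, target_shading, steps=50, guide_steps=range(10, 30), scale=5.0):
    x = torch.randn(1, 3, 64, 64)
    for t in reversed(range(steps)):
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            x0_pred = denoiser(x_in, t)      # frozen model predicts clean image
            if t in guide_steps:             # guidance only at chosen timesteps
                g = torch.autograd.grad(
                    relight_loss(x0_pred, target_shading), x_in
                )[0]
                x0_pred = x0_pred - scale * g  # nudge toward the lighting target
        x = 0.9 * x + 0.1 * x0_pred           # crude DDIM-like update
    return x

# Usage with a dummy "denoiser" just to show the call shape.
dummy = lambda x, t: x * 0.95
print(sample(dummy, torch.zeros(1, 64, 64)).shape)
```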
I unironically think this is good for the Meta team. They managed to: * prove that their live demos generally aren’t fake * lower expectations for Meta products, so the next time they deliver a banger it will look like a massive improvement https://x.com/cloneofsimo/status/1968484339416453344
The Hyperscape Capture on Quest 3 is as impressive as it looks in the demo, although maybe a bit of blur with very fast head movements? I’ve just downloaded the software (thanks US VPN!) and have had a look around Gordon Ramsay’s kitchen. Very cool. I’ll be capturing my own when… https://x.com/TomLikesRobots/status/1968647034589585686
Sergey Levine says making a robotics foundation model is more like the Apollo program than it is like a science experiment. https://x.com/TheHumanoidHub/status/1967960666217566694
Unitree’s first open-source world model on @huggingface! UnifoLM-WMA-0 is Unitree’s first open-source world-model–action architecture spanning multiple types of robotic embodiments, designed specifically for general-purpose robot learning. Its core component is a world model… https://x.com/ClementDelangue/status/1968001710770520135
What if humanoid robots could learn new tasks just by watching people play? ❗️That’s the idea: Instead of relying on expensive teleoperation data, MimicDroid trains robots using regular human play videos. This makes it cheaper, faster, and more scalable for robots to adapt to… https://x.com/IlirAliu_/status/1968216390155841582
(1/N) How close are we to enabling robots to solve the long-horizon, complex tasks that matter in everyday life? 🚨 We are thrilled to invite you to join the 1st BEHAVIOR Challenge @NeurIPS 2025, submission deadline: 11/15. 🏆 Prizes: 🥇 $1,000 🥈 $500 🥉 $300 https://x.com/drfeifei/status/1962971299246178664
Kling-Avatar https://klingavatar.github.io/
Also a fan of ominous civilian spheres https://x.com/bilawalsidhu/status/1966585032106983746
Playing with Reve. Genuinely feels like the best conversational editing experience. Going back/forth is exceedingly fun. The AI suggested edits are actually useful — like working with a creative partner. Wish the character consistency was better, though. Still, impressive given… https://x.com/bilawalsidhu/status/1967970254350585961