Image created with OpenAI GPT-Image-1. Image prompt: 1966 Kodachrome photo-look, thin white frame, forest-green title band in upper left with stacked yellow/white serif text reading “AR / VR”, tight portrait, chestnut goat nuzzling camera lens scene featuring a pair of 60s-style VR goggles held by a band member; gentle film grain, overcast daylight
The beta 3D photo integration with Instagram is very well done! Every static photo becomes an AI-generated stereoscopic 3D photo, and there is a “3D” button that lets you toggle the feature on and off for comparison. Every photo I looked at “just worked”, with no glaring… https://x.com/ID_AA_Carmack/status/1933199948759146810
The update to Personas looks so impressive that they should’ve dedicated a few more minutes on this alone. This deserved more. https://x.com/chrisoffner3d/status/1932158766314893719
Voice cloning is now trivially easy with open source tools, while live avatar videos of real people are easy with proprietary tools & a variety of open source tools are getting there. Very limited time to adjust legal & financial safeguards to new ways of authenticating people. https://x.com/emollick/status/1931364236304830675
We just launched our biggest update yet. Meet Higgsfield Speak — the fastest way to make motion-driven talking videos. Pick a style, choose an avatar, type a script. We do the rest — cinematic motion, voice, emotion. Comment Speak to get the full guide + promo code in the DM. https://x.com/higgsfield_ai/status/1930686472845455417
Used gemini 2.5 pro to build a shot counter for myself + write an after effects script to create this AR style HUD overlay. Footage captured w/ my meta rayban glasses. Insane how much better 2.5 pro is at media understanding vs 1.5 pro (when I last tried this). Using both video https://x.com/bilawalsidhu/status/1931030893772017703
Apple’s “spatial scenes” remind me of Facebook 3d photos from 2018. Take any photo and use AI to give it real depth and parallax. Glad Apple is starting to think beyond stereo photo/video for the Vision Pro; 6dof media needs to be a first class citizen. https://x.com/bilawalsidhu/status/1932286185285750791
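The “real depth and parallax” effect described above boils down to warping a photo by a per-pixel depth map. A minimal NumPy sketch of that idea (my own toy illustration, not Apple’s or Facebook’s actual pipeline): shift each pixel horizontally by a disparity inversely proportional to its depth, so nearer surfaces move more as the virtual camera moves.

```python
import numpy as np

def parallax_shift(image, depth, camera_offset, focal=1.0):
    """Toy depth-based parallax: forward-warp pixels horizontally by a
    disparity proportional to camera_offset * focal / depth, so that
    nearer pixels (small depth) shift further than distant ones."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (camera_offset * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]  # forward splat; holes stay zero
    return out
```

A real 6DoF renderer would inpaint the disocclusion holes this forward warp leaves behind; this sketch only shows where the parallax comes from.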
Apple’s Visual Intelligence was showcased with a familiar demo for anyone following recent developer conferences: More ways to buy stuff, more swiftly, powered by AI. https://x.com/TechCrunch/status/1932147112164069608
I am a graphics programmer, and here’s my feedback on Apple’s Liquid Glass beta. The idea is cool, but it’s difficult to work with from a UX perspective. Let’s start with the main problems: 1 – Low Contrast: It’s clearly not readable, but there are many different ways to fix it. https://x.com/XorDev/status/1932429551256101328
Interesting to see Apple double down on conventional UIs while ignoring AI when the goal of the big AI firms is to make it so that you just talk to AI to get whatever you want done, without touching a UI. https://x.com/emollick/status/1932225668487463374
lmfao Apple models sound so 2010ish https://x.com/cto_junior/status/1932128352036605962
New iOS feels like a junior designer discovered the gradient tool and is now using it EVERYWHERE. I’ve been there, that was me once. https://x.com/dzhng/status/1932135452569714863
RT @fkasummer: apple is about to have their windows vista moment https://x.com/zacharynado/status/1932259455368102098
Updates to Apple’s On-Device and Server Foundation Language Models – Apple Machine Learning Research https://machinelearning.apple.com/research/apple-foundation-models-2025-updates
What could happen at Apple’s WWDC 2025? See latest rumors https://www.usatoday.com/story/tech/2025/06/04/apple-wwdc-2025-rumors/84017268007/
Windows Vista walked so iOS 26 could run. https://x.com/skirano/status/1932145646963704199
WWDC: Apple opens its AI to developers but keeps its broader ambitions modest | Reuters https://www.reuters.com/business/wwdc-apple-faces-ai-regulatory-challenges-it-woos-software-developers-2025-06-09/
CVPR 2025: GASP: Gaussian Avatars with Synthetic Priors – YouTube https://www.youtube.com/watch?v=KP0CpSb6bE4&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=24&t=13s
Exceedingly crispy multi-view 3D captures reconstructed beautifully (try it in browser below). Instead of the usual “deform one master model over time” approach (which breaks w/ fast motion), this FreeTimeGS paper spawns ephemeral Gaussian particles as needed. They live briefly… https://x.com/bilawalsidhu/status/1931356216694882319
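The core idea, spawning short-lived Gaussians instead of deforming one persistent model, can be sketched in a few lines. This is a hedged toy illustration (field names and the `active_at` helper are my own; real FreeTimeGS primitives also carry covariance, color, and per-Gaussian velocity): each particle gets a birth time and a temporal spread, and its effective opacity falls off as a Gaussian in time.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TemporalGaussian:
    position: np.ndarray  # 3D center
    opacity: float        # base opacity
    birth: float          # time the particle appears
    lifespan: float       # temporal std-dev: how long it stays visible

def temporal_weight(g, t):
    """Gaussian falloff in time: the particle fades in and out around
    its birth time instead of persisting through the whole capture."""
    return np.exp(-0.5 * ((t - g.birth) / g.lifespan) ** 2)

def active_at(gaussians, t, threshold=0.05):
    """Keep only particles whose time-weighted opacity matters at t,
    mimicking 'spawn as needed, live briefly'."""
    return [g for g in gaussians if g.opacity * temporal_weight(g, t) > threshold]
```

Because fast-moving content is represented by fresh particles rather than a deformation of one master model, rapid motion doesn’t have to be explained by large, hard-to-optimize warps.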
In-browser full scale AI generated 3D worlds as immersive VR environments. An advanced 3D Gaussian Splatting. Give the Gaussians a high-five as you vibe your way through other realities. https://x.com/rohanpaul_ai/status/1933104009054749018
Edge AI Innovation: Real-Time Pose Detection | Dell https://www.dell.com/en-us/blog/edge-ai-innovation-real-time-pose-detection/
1X CEO Bernt Bornich explained what a world model is on the latest ‘NVIDIA AI Podcast.’ https://x.com/TheHumanoidHub/status/1930762366775693338
alignhuman.github.io https://alignhuman.github.io/
Seedance https://seed.bytedance.com/en/seedance
SyncTalk++ https://ziqiaopeng.github.io/synctalk++/
$321,500 for 30 seconds of this was a hell of a deal. (Star Wars budget as AI comparison cost) https://x.com/imPatrickT/status/1931161880258662671
CVPR 2025: Exclusive Talk with Paul E. (CEO / CTO @ 3LC.AI) – YouTube https://www.youtube.com/watch?v=msbPP5td0cA&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=13&t=1s
CVPR 2025: Exclusive Talk with Tom Bishop (Chief Technology Officer at GLASS Imaging) – YouTube https://www.youtube.com/watch?v=97flahWCRGA&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=15&t=1s
CVPR 2025: Exclusive Talk with Yotam Azriel (CEO & CTO & Co-Founder at TensorLeap) – YouTube https://www.youtube.com/watch?v=x0csNvsdrvA&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=12&t=107s
CVPR 2025: Graph Neural Network Combining Event Stream and Periodic Aggregation… – YouTube https://www.youtube.com/watch?v=8_AkywWE9GE&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=8&t=3s
CVPR 2025: Motion Prompting: Controlling Video Generation with Motion Trajectories – YouTube https://www.youtube.com/watch?v=LpnOZv4ziGA&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=22
CVPR 2025: SoundVista: Novel-View Ambient Sound Synthesis via Visual-Acoustic Binding – YouTube https://www.youtube.com/watch?v=uqjDXUgjR3c&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=17&t=8s
CVPR 2025: The PanAf-FGBG Dataset: Understanding the Impact of Backgrounds in Wildlife Behaviour… – YouTube https://www.youtube.com/watch?v=HGMjyhWGx-4&list=PLaU7MWI8yG9Uy8P_3K5R4_H6HZ5AsYsxD&index=16&t=43s
As part of the partnership, we’re excited to share that @UnitreeRobotics is deploying Roboverse as the infrastructure to develop robot policies efficiently. Key highlights include: Roboverse Deployment: Unitree is deploying Roboverse, our comprehensive framework that combines a https://x.com/reborn_agi/status/1929534043709665480
GALBOT announced OpenWBT – an open-source, whole-body humanoid VR teleoperation system using Apple Vision Pro. It supports Unitree G1 and H1 robots, enabling operators to control movements like walking, squatting, bending, grasping, and lifting. https://x.com/TheHumanoidHub/status/1932121215604523290