“Woah. Snapchat just launched an AI video generator! Plus, AI portrait generation that needs just one photo of you. Snaps are about to get a lot more expressive.
“A little experiment I worked on using LivePortrait, Viggle, and ComfyUI. Will be doing a breakdown soon!
“CogVideoX image-to-video is really good for timelapse videos
“BTS to my latest video.
“Introducing AI Product Commercials on @flairAI_ 🔥 1. Upload your product to Flair 2. Generate an AI product photo 3. Click “Animate” to turn your photo into a video Rolling out access to all pro and pro+ users in a few days. RT and comment to try for free
“Pudu Robotics announced their 1st-generation ‘semi-humanoid.’ I wouldn’t consider this a good form factor for a humanoid, but Pudu is a global leader in delivery robots that are already commercially available
AnySkin: Plug-and-play Skin Sensing for Robotic Touch
“tl;dr: building humanoid robots is harder than you think, and for whatever use case you have in mind, it’s probably a mistake to make it humanoid in the first place
“Every time I see one of these bots tackle just one part of a long construction supply chain I wonder when humanoid robots that can handle the entire chain will arrive.
“Disney is turning data into dance moves. The team from Disney Research has made it possible for robots to learn how to move from random motion data. They have achieved this by using a machine learning technique to break down short pieces of motion data and create a simpler
Adobe
“Adobe has just previewed its new Firefly Video Model, set to revolutionize video editing in software like Premiere Pro. Available in beta later this year, this tool promises enhanced workflows, allowing editors to experiment, fill gaps, and even add new elements seamlessly.
Google (Veo)
“Our most advanced generative video model Veo is coming to @YouTube Shorts to help creators bring their ideas to life. 🪄
Kling
“Kling AI def nailed it with their motion brush implementation — a good way to exert control and tame the chaos / slot machine nature of video diffusion models. The UX reminds me of DragGAN, but better thanks to the segmentation. Hope all other AI video tools learn from this.”
“Prompt: [Soldier stands up and walk to the left] This is insane!
Luma
“🚀 Introducing the Dream Machine API. Developers can now build and scale creative products with the world’s most popular and intuitive video generation model without building complex tools in their apps. Start today
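For a rough sense of what integrating a text-to-video API like this involves, here is a minimal sketch of assembling a generation request. The endpoint URL and field names below are illustrative placeholders, not Luma's documented schema — consult the actual Dream Machine API reference before building against it.

```python
import json

# Placeholder endpoint — NOT Luma's real API URL.
API_URL = "https://api.example.com/v1/generations"

def build_generation_request(prompt: str, aspect_ratio: str = "16:9",
                             loop: bool = False) -> dict:
    """Assemble the JSON body for a hypothetical video generation call.

    Field names (prompt, aspect_ratio, loop) are assumptions chosen for
    illustration; a real integration would follow the provider's schema.
    """
    return {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "loop": loop,
    }

payload = build_generation_request("a timelapse of clouds over a city")
print(json.dumps(payload))
```

In practice you would POST this body to the provider's endpoint with an API key header and receive back a generation id to poll.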
Runway
“Runway’s new video-to-video AI is amazing for reskinning classic video game cut scenes. Here’s N64 GoldenEye remastered. Wish we could pass in an object id / semantic segmentation pass to get the finer details right e.g. Bond’s gun NOT catching fire 😂
“Gen-3 Alpha Video to Video is now available on web for all paid plans. Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction
Runway Partners with Lionsgate
“Today we’re announcing the Runway API, allowing developers to easily integrate Gen-3 Alpha Turbo into their apps and products. Already in use by trusted strategic partners like @Omnicom, you can now learn more and sign up for the waitlist:
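Video generation APIs like this are typically asynchronous: you submit a job, get back a task id, and poll until it finishes. The sketch below shows that submit-then-poll pattern with a stand-in client; the class and method names are hypothetical, not the actual Runway SDK.

```python
import time

class FakeClient:
    """Stand-in for a real HTTP client; names here are illustrative only."""
    def __init__(self):
        self._polls = 0

    def submit(self, prompt: str) -> str:
        # A real API returns a task/generation id after accepting the job.
        return "task-123"

    def status(self, task_id: str) -> str:
        # Simulate a job that finishes after a few polls.
        self._polls += 1
        return "succeeded" if self._polls >= 3 else "running"

def wait_for_video(client, prompt: str, interval: float = 0.01) -> str:
    """Submit a generation job and poll until it leaves the 'running' state."""
    task_id = client.submit(prompt)
    while (state := client.status(task_id)) == "running":
        time.sleep(interval)  # a real integration would back off and time out
    return state

print(wait_for_video(FakeClient(), "N64 cutscene, photoreal reskin"))
# prints "succeeded"
```

A production integration would also handle a "failed" terminal state and cap the total polling time.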
“Today we are excited to announce that we have entered into a first-of-its-kind partnership with @Lionsgate to bring our next generation of storytelling tools into the hands of the world’s greatest storytellers. Learn more: