Javi Lopez, founder of Magnific AI (my favorite upscaler), fed his vacation videos into an array of four AI tools and pushed them to stitch it all together into a wild hallucination (first video below). The tools are consumer-grade and available to anyone: Magnific AI, Luma AI Dream Machine, Runway Gen-3, and Kling (by Kuaishou Technology).
Even if you’re not technical, you can follow the trends and see patterns and clusters of tools: object segmentation, generative video stitching (like this example), video-driven image animation (Viggle, LivePortrait), Gaussian splatting and NeRFs, context windows vs. RAG… agents, multimodality, embodiment.
This is an incredibly fun time to be a curious person. Now that we’re into year two of public discourse, these tools are beginning to converge. I’m giving it two years until things get weird, not from deepfakes, but from embodied machines and agents. I’m ready to get rid of keyboards, personally.
I’m on week 42 of my weekly newsletter. It’s paying off, because this field is moving too fast for laypeople to catch up. Instead of saying, “Oh, what a trippy video,” you could be pondering, “I bet he used Luma? Runway? Maybe Kling. Oh, all three!”
After seeing the first video, my friend Billy asked me, “What is the best consumer text-to-video software to make a short video of a panda frying an egg in a modern kitchen with a Dodgers hat on?” My instinct was MidJourney -> Runway.
On its own, Runway wouldn’t generate the video from text, returning an error that the prompt violated its community guidelines, even without the Dodgers hat.
I gave MidJourney the prompt: “A panda frying an egg in a modern kitchen, wearing a Dodgers hat --chaos 20 --ar 4:3 --style raw --personalize r1jz2lj --v 6.1.” (The double-hyphen flags are MidJourney parameters: --chaos 20 adds variety across the generated grid, --ar 4:3 sets the aspect ratio, --style raw tones down MidJourney’s default aesthetic, --personalize applies a personalization profile, and --v 6.1 pins the model version.)
I cleaned up the logo on the hat in Photoshop, then ran the image through Runway. Other than the “LA” on the hat, these were all one-shot results: no edits or second attempts. (second video)
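If you wanted to script that text-to-image-then-image-to-video handoff instead of clicking through the apps, the shape of the pipeline looks like this. A minimal sketch, assuming two generic hosted endpoints: the URLs, parameter names, and response formats below are hypothetical placeholders, not the real MidJourney or Runway APIs (MidJourney, notably, has no official public API).

```python
# Sketch of the two-step pipeline: text -> still image, then still -> video.
# NOTE: the endpoint URLs, parameters, and response shapes are hypothetical
# placeholders; neither MidJourney nor Runway exposes this exact interface.
import requests

TEXT_TO_IMAGE_URL = "https://example.com/v1/text-to-image"    # hypothetical
IMAGE_TO_VIDEO_URL = "https://example.com/v1/image-to-video"  # hypothetical


def generate_still(prompt: str) -> bytes:
    """Step 1: render a still from a text prompt (MidJourney's role here)."""
    resp = requests.post(TEXT_TO_IMAGE_URL, json={"prompt": prompt}, timeout=120)
    resp.raise_for_status()
    return resp.content  # assume the service returns raw PNG bytes


def animate_still(image: bytes, motion_prompt: str) -> bytes:
    """Step 2: animate the still into a short clip (Runway's role here)."""
    resp = requests.post(
        IMAGE_TO_VIDEO_URL,
        files={"image": ("frame.png", image, "image/png")},
        data={"prompt": motion_prompt},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.content  # assume raw MP4 bytes come back


if __name__ == "__main__":
    still = generate_still(
        "A panda frying an egg in a modern kitchen, wearing a Dodgers hat"
    )
    # A manual cleanup pass (like fixing the hat logo in Photoshop) slots in here.
    clip = animate_still(still, "the panda flips the egg in the pan")
    with open("panda.mp4", "wb") as f:
        f.write(clip)
```

Splitting it this way mirrors why the manual workflow works: the still image is a checkpoint you can inspect and touch up before committing to the slower, more expensive video step.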
I tried to pick relevant instrumental music; “Fire Emojii” is Dodgers pitcher Blake Treinen’s walk-up song.