This was my third time (!) presenting to the Monument Sotheby’s team, and as usual, we had a blast. It’s neat that the same audience has been attending for almost a year now, because we have continuity and we’re becoming friends!
Here’s a link to the first presentation from February 2025, and here’s a link to the second presentation from September 2025.
I can’t believe we’re now in January 2026… Round three!
If you missed it, here are the presentation files from January 14, 2026: Keynote version | PowerPoint version | Google Slides
This time, we hit the ground running and jumped right into the headlines, with recaps of the major categories in artificial intelligence.
I’m not including the recaps on this page, but they are available in-depth in the slides (linked above).
Here’s the automated podcast if you have 12 minutes in the car or at the gym. I gave NotebookLM my slides, and it did a pretty great job recapping everything (with a few exceptions). You can interact with the Notebook and ask it questions about the slides (try it!) at this link.
Here are the presentation highlights, along with links and demos that we saw, if you’re looking to go back and re-watch or share them!

We started with generative AI (chat, imagery, and video), moved into multimodal AI, and then focused on agents and the future of the internet.

We talked a lot about how the length of tasks that AI can do doubles every seven months, and how agents will change the world over the next year or two.
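To make that doubling concrete, here’s a quick back-of-the-envelope calculation in Python. The seven-month doubling time is from the presentation; the one-hour starting task length is just my assumption for illustration:

```python
# Back-of-the-envelope: if the length of tasks AI can handle doubles every
# 7 months, how long a task could an agent manage two years from now?
# Assumption (for illustration only): today's agents manage ~1-hour tasks.
months_ahead = 24
doublings = months_ahead / 7        # ~3.4 doublings in two years
task_hours = 1 * 2 ** doublings     # ~10.8-hour tasks
print(f"{doublings:.1f} doublings -> ~{task_hours:.1f}-hour tasks")
```

In other words, if the trend holds, a one-hour agent today becomes a roughly ten-hour agent two years from now.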

Zillow in ChatGPT
We demoed the Zillow app inside of ChatGPT, where you can search listings and refine the results simply by replying in the chat.
https://chatgpt.com/apps/zillow/connector_68d579f7b0948191a7da3124a3b560f7

We reviewed Google’s Nano Banana, also known as Gemini 2.5 Flash Image. We looked at Adobe’s Project Light Touch. We saw signs of emerging intelligence in image models, and just how much that compounds when you look at video models.
Google Nano Banana (Gemini 2.5 Flash Image)

https://gemini.google/overview/image-generation
https://blog.google/innovation-and-ai/products/nano-banana-pro/
https://deepmind.google/models/gemini-image/pro/
Adobe Project Light Touch
NotebookLM
We also demoed NotebookLM, which was a surprise hit, and I’m glad I brought it up. It’s been out for a little while, and sometimes I’m too afraid to talk about things people may already know. In fact, I was really impressed to learn that someone had already made a NotebookLM podcast about the panel that ran before my presentation.
https://notebooklm.google.com
Here is a Google NotebookLM video recap of the introduction portion of the presentation… it’s a good barometer of what NotebookLM does well vs. what it misses.
Segmentation
We did a deep dive into object segmentation, which is my favorite topic, and I was caught off guard by how excited the audience was about it. (If you want to tinker with segmentation yourself, there’s a small code sketch after the links below.)
It’s also worth noting that one week after my presentation, I came across the latest segmentation model from Moondream. I hope everyone checks it out, since we did not see it in the presentation.
Here’s Moondream!
https://moondream.ai/skills/segment

https://ai.meta.com/blog/segment-anything-model-3/
https://ai.meta.com/blog/sam-3d/
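Here’s that promised sketch. It uses Meta’s original segment-anything Python package (the first-generation SAM, not the new SAM 3 linked above), and the checkpoint and photo paths are placeholders you’d swap for your own:

```python
# Point-prompted segmentation with Meta's original Segment Anything (SAM 1).
# pip install segment-anything opencv-python torch
# Assumptions: a downloaded ViT-H checkpoint and a local listing photo
# (both file paths below are placeholders).
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("listing_photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One click on the object you care about (pixel x=500, y=375); label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # SAM returns three candidate masks
)
best = masks[scores.argmax()]  # boolean mask of the segmented object
print(f"Best mask covers {int(best.sum())} pixels")
```

One click in, one object mask out: that’s the core idea behind everything we saw in the segmentation demos.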
We then did some video demonstrations and saw even more examples of emerging intelligence through multimodal models.
Video Demos
This is a fun paper to skim: “Video models are zero-shot learners and reasoners.”
https://video-zero-shot.github.io/
3D Mapping For Remote Viewing
We looked at the Meta Quest mapping out a room and, without really saying it out loud, we learned what a Gaussian splat is: a way of capturing a real space as millions of soft, colored 3D blobs that blend together into a photorealistic, walkable model.
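For the mathematically curious, here’s the one-equation version (my notation, not from the slides): each splat i is a 3D Gaussian with center \mu_i and covariance \Sigma_i, and a pixel’s color blends the splats front to back by opacity \alpha_i:

```latex
G_i(x) = \exp\!\left(-\tfrac{1}{2}(x-\mu_i)^\top \Sigma_i^{-1}(x-\mu_i)\right)
\qquad
C = \sum_i c_i\,\alpha_i \prod_{j<i}\bigl(1-\alpha_j\bigr)
```

Training a splat just nudges those centers, shapes, colors, and opacities until the rendered views match the photos.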
We then demonstrated agentic browsing with the Claude extension for Chrome: it found a waterfront home with a pool, and it actually emailed the listing agent. Here’s the email!

And finally, we enjoyed an encore screening of our fun HeyGen demos from February. Here they are, if you need them.
Bonus content: Robots (see my slides at the end)
I can’t wait to see everyone again in a few months. You are all officially AI experts!