Image created with gemini-3.1-flash-image-preview with claude-sonnet-4-5. Image prompt: Using the provided reference image, keep the exact left-anchored close-crop composition, deep blue-purple cinematic lighting, wispy atmospheric smoke bleeding rightward, and emotional gravity, but replace the central subject with professional studio headphones in dramatic profile with glitter particles catching light on the black matte surface, maintaining the same melancholic post-session stillness and replacing the title text with ‘audio’ in thin lowercase white sans-serif on the right two-thirds.
Gemma 4 E2B on iPhone 17 Pro Max in AI Edge Gallery! Using skills to query wikipedia. 🔥 App link below. [cr: @mweinbach]
https://x.com/_philschmid/status/2041171039598543064
Insane. I’m running Gemma 4 on my iPhone 16 Pro Max. Vibe coded the app in under 1h. Singularity is here.
https://x.com/enjojoyy/status/2040563245925151229
Gemma 4 E4B is impressive for an on-device LLM. GPT-4ish quality, but expect hallucinations. Here is: “List five sociological theories starting with u and what they are. Then describe them in a rhyming verse.” It’s in real time; the last one is a little bit of a stretch, but not bad!
https://x.com/emollick/status/2040851723774808310
Conversations tend to go better with a face and a voice. That’s why we’re thrilled to release the beta version of the first video chat skill for ANY agent, powered by our new real-time model, PikaStream1.0. The skill preserves memory and personality, and enables real-time
https://x.com/pika_labs/status/2039804583862796345?s=20
Cool to see pika reinventing itself. Now I kinda wanna embody my open claw agent and jump into a real time video call.
https://x.com/bilawalsidhu/status/2039892706508333305
You’ve probably heard a Mist voice already. We’re powering some of the leading brands’ voice agents. Today we’re launching Mist v3 at @rimelabs. Same voices. New everything underneath. ~40ms TTFB. Pronunciation control that doesn’t guess on brand names. Throughput built for
https://x.com/lilyjclifford/status/2041545072265543736
There were some exceptionally cool demos from @ollama and omlx using MLX to run Qwen 3.5 and Gemma 4 on Apple silicon. The capabilities of local LLMs and the surrounding ecosystem have come a long way in the past couple years.
https://x.com/awnihannun/status/2042456446122803275
Gemma-4 finetuning: 2B, 4B, 26B, and 31B all work in Unsloth! We also fixed a few issues: 1. Gradient accumulation no longer causes losses to explode. 2. IndexError for 26B and 31B inference. 3. use_cache=False produced gibberish for E2B and E4B. 4. float16 audio: -1e9 overflows in float16.
https://x.com/danielhanchen/status/2041516671119327590
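The gradient-accumulation fix (item 1 above) refers to a known pitfall: averaging each micro-batch's mean loss and then averaging those means is not the same as taking one mean over all non-padded tokens, once micro-batches have different lengths. A minimal numeric sketch of the discrepancy, with made-up loss values (this is not Unsloth code):

```python
# Two micro-batches in one accumulation window, with different
# numbers of non-padded tokens. Values are per-token losses.
micro_batches = [[2.0, 2.0, 2.0, 2.0], [8.0]]

# Naive: mean loss per micro-batch, then mean of the means.
# This over-weights short micro-batches (here, the lone 8.0 token).
naive = sum(sum(b) / len(b) for b in micro_batches) / len(micro_batches)

# Correct: one mean over all non-padded tokens in the window,
# matching what a single large batch would compute.
total_loss = sum(sum(b) for b in micro_batches)
total_tokens = sum(len(b) for b in micro_batches)
correct = total_loss / total_tokens

print(naive, correct)  # 5.0 vs 3.2
```

The fix in practice is to scale each micro-batch's summed loss by the total non-padded token count across the whole accumulation window, not by its own length.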
Introducing Gemma 4, our series of open weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MoE and a 31B dense model!
https://x.com/OfficialLoganK/status/2039735606268314071
People underestimate the level of collaboration that needs to happen for a model such as Gemma 4 to land. Before the launch, we worked with HF, vLLM, llama.cpp, Ollama, NVIDIA, Unsloth, Cactus, SGLang, Docker, Cloudflare, and so many others. This ecosystem is amazing 🔥
https://x.com/osanseviero/status/2041154555530932578
Gemma 4 31B, quantized and evaluated. Instruction following evals are live on our NVFP4 and FP8-block model cards. Results look great. Reasoning and vision evals coming later this week. NVFP4:
https://t.co/GIc7y1Abkc FP8:
https://x.com/RedHat_AI/status/2040766645480628589
Gemma 4 is #1 on @huggingface!
https://x.com/ClementDelangue/status/2040911131108069692
Gemma 4 is a beast.
https://x.com/Yampeleg/status/2040495537598648357
Speculative decoding for Gemma 4 31B (EAGLE-3) A 2B draft model predicts tokens ahead; the 31B verifier validates them. Same output, faster inference. Early release. vLLM main branch support is in progress (PR #39450). Reasoning support coming soon.
https://x.com/RedHat_AI/status/2042660544797110649
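The draft-and-verify loop described above can be sketched with toy deterministic models. The key property is that output is unchanged: every emitted token is still the target model's greedy choice. In real EAGLE-3 the target scores all k draft tokens in one batched forward pass, which is where the speedup comes from; the `target`/`draft` callables below are stand-ins, not vLLM APIs.

```python
def greedy_decode(model, prompt, max_new):
    """Baseline: the target model decodes every token itself."""
    seq = list(prompt)
    for _ in range(max_new):
        seq.append(model(seq))
    return seq

def speculative_decode(target, draft, prompt, k=4, max_new=12):
    """Draft proposes up to k tokens; target verifies them in order.
    Matching tokens are accepted; the first mismatch is replaced by
    the target's own token and the rest of the draft is discarded."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        budget = max_new - (len(seq) - len(prompt))
        # Draft proposes tokens autoregressively (cheap model).
        ctx, proposal = list(seq), []
        for _ in range(min(k, budget)):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Verify: in a real system this is one batched target pass.
        for t in proposal:
            correct = target(seq)
            seq.append(correct)      # always emit the target's token
            if t != correct:
                break                # reject the remaining draft tokens
    return seq

# Toy deterministic "models": the draft agrees with the target
# except when the context length is a multiple of 5.
def target(seq):
    return sum(seq) % 7

def draft(seq):
    return (sum(seq) + (len(seq) % 5 == 0)) % 7

fast = speculative_decode(target, draft, [1, 2], k=4, max_new=10)
slow = greedy_decode(target, [1, 2], max_new=10)
```

Here `fast == slow` by construction: acceptance only skips target calls when the draft was already right, so quality is identical and only latency changes.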
Gemma 4 is the #1 trending model on @huggingface 🤗
https://x.com/GlennCameronjr/status/2040529333794824456
Ace Step 1.5 XL is here. Suno 5+ quality at home, open source and fine-tuneable.
https://x.com/multimodalart/status/2041563576876327048
etn. & @ElevenLabs at 10 Downing Street
https://x.com/lukeknight/status/2042221068425785526?s=20
v5.5 is the best music model on the planet. Here’s why.
https://x.com/suno/status/2041541160015937995
ChatGPT is now available in CarPlay. The voice mode you know, now available on-the-go. Rolling out to iPhone users running iOS 26.4+ where CarPlay is supported.
https://x.com/OpenAI/status/2039748699350532097?s=20
OpenClaw 2026.4.5 🦞
🎬 Built-in video + music generation
🧠 /dreaming is now real
🔀 Structured task progress
⚡ Better prompt-cache reuse
🌍 Control UI + Docs now speak 12 more languages
Anthropic cut us off. GPT-5.4 got better. We moved on.
https://x.com/openclaw/status/2040998570317197607
Hooked up GPT-5.4 to auto-translate our docs via an automatic GitHub trigger; way better than what Google Translate gives you. (It also takes significantly longer, so it happens once a day.)
https://x.com/steipete/status/2040831898620932397
Avatar V: Scaling Video-Reference Avatar Generation
https://www.heygen.com/research/avatar-v-model




