Image created with gemini-3.1-flash-image-preview with claude-opus-4.7. Image prompt: Using the provided reference image, preserve every detail of the warm marigold-orange studio backdrop, the seated woman with closed eyes and faint smile in her purple-and-white windbreaker, and the tattooed beanie-wearing singer leaning into her, but replace only the black handheld microphone with a glossy three-dimensional Facebook-style thumbs-up ‘Like’ icon held in the exact same grip and position to his mouth, rendered with photographic realism and matching studio lighting so it reads as a seamless microphone stand-in. After generating the image, overlay the text “Meta” in the upper-left corner of the frame in large, bold, all-caps ITC Avant Garde Gothic Pro Medium (or a near-identical geometric sans-serif if unavailable), pure white (#FFFFFF), with no date, subtitle, drop shadow, or outline. The text should be substantial in scale — taking up a meaningful portion of the upper-left area — with comfortable margin from the top and left edges, set against the negative space of the orange backdrop so it does not overlap or obscure the singer, the seated woman, or the replaced object.
Ravid Shwartz Ziv on X: “I took the new Muse Spark to the ultimate test: filing my taxes – 3 different workplaces, consulting, stocks, foreign bank accounts and assets, and kids. One hour later, I had everything done. AGI is here… cc: @alexandr_wang”
https://x.com/ziv_ravid/status/2044237898351030538
this is not investment or tax advice… but very cool!
https://x.com/alexandr_wang/status/2044269086771921326
OpenAI Stargate Execs to Join Meta’s New Compute Unit — The Information
https://www.theinformation.com/briefings/openai-stargate-execs-join-metas-new-compute-unit
OpenAI StarGate People Move To Meta Amid Data Center Boom
https://www.forbes.com/sites/johnwerner/2026/04/15/openai-stargate-people-move-to-meta-amid-data-center-boom/
Ollama
https://ollama.com/
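Since most of the models in the list below are run through Ollama, here is a minimal sketch of querying a locally running Ollama server through its REST API (`POST /api/generate` on the default port 11434). The model tag `"mythomax"` is a placeholder, not a confirmed tag; substitute whichever model you actually have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming /api/generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the request to a local Ollama server and return the completion."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama daemon and an installed model):
#   print(generate("mythomax", "Say hello in one sentence."))
```

With `"stream": False` the server returns one JSON object with the full completion; omit it and you get newline-delimited JSON chunks instead.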
r/localLlama + r/localLLM + r/sillytavernAI preferred models list – apr 2026
| Model | Size/Class | Format | Hosted Provider | Best Local Path | Notes |
|---|---|---|---|---|---|
| Huihui Gemma 4 E2B Abliterated v2 | E2B | GGUF | No | Ollama / llama.cpp | Gemma 4 MoE with ~2B active params. Multimodal (image+text in, text out). Abliterated for reduced refusal. Lightweight enough to run fast, but MoE active-param sizing means quality punches above its weight class. |
| Huihui Gemma 4 E4B Abliterated | E4B | GGUF | No | Ollama / llama.cpp | Same Gemma 4 MoE family as E2B but with ~4B active params. Multimodal. Better quality ceiling than E2B at the cost of more compute per token. |
| SultrySilicon V2 | 7B | GGUF | No | Ollama / llama.cpp | Roleplay-focused 7B model. Smallest in the set. Good for quick creative/RP sanity checks, not for reasoning or instruction-following benchmarks. |
| Huihui-GLM-4.6V-Flash-Abliterated | 9B | GGUF | No | Ollama / llama.cpp | Based on Z.ai GLM-4.6V-Flash. Vision-language model (image+text). Abliterated. Bilingual Chinese/English. Fast inference variant of the GLM-4.6V family. |
| Gemma-2-Ataraxy-9B | 9B | GGUF | No | Ollama / llama.cpp | Merge of Gemma-2-9B-SimPO and Gemma-2-Gutenberg-9B. Creative writing and roleplay oriented. Scored well on EQ-Bench. Good balance of instruction-following and literary quality at 9B. |
| MythoMax-L2-13B | 13B | GGUF | No | Ollama / llama.cpp | By Gryphe. Llama 2 merge of MythoLogic-L2 and Huginn using experimental per-tensor gradient merging. One of the most downloaded RP/creative models ever (~59k GGUF downloads). Strong at both roleplay and storywriting. Alpaca format. The OG. |
| Dan’s PersonalityEngine V1.3.0 | 24B | GGUF | No | Ollama / llama.cpp | Fine-tuned from Mistral Small 3.1 24B Base. Trained on a massive mix: roleplay, storywriting, tool use, math, reasoning, code, medical, legal, and survival topics. Multilingual (EN, AR, DE, FR, ES, HI, PT, JA, KO). A genuine generalist with personality. |
| SuperGemma4 26B Abliterated Multimodal | 26B multimodal | GGUF | No | custom multimodal stack | Based on Gemma 4 26B-A4B. Multimodal (image-text-to-text). Abliterated with low refusal. Optimized for Apple Silicon (MLX). Supports Korean + English. Tool use and coding tags. |
| Gemma 3 27B Abliterated | 27B | GGUF | No | Ollama / llama.cpp | Abliterated version of Google’s Gemma 3 27B instruct. Multimodal (image-text-to-text). Reduced refusal behavior while preserving instruction-following quality. |
| Huihui Gemma 4 31B Abliterated | 31B | GGUF | No | Ollama / llama.cpp | Abliterated Gemma 4 31B instruct. Multimodal (any-to-any pipeline tag). Dense 31B, not MoE. Strongest Gemma 4 dense abliterated option. |
| Gemma 4 31B Abliterated | 31B | GGUF + safetensors | No | Ollama / llama.cpp | Same base as above (Gemma 4 31B-it) but different abliteration method using mlabonne’s harmful_behaviors + harmless_alpaca datasets. Both formats in one repo. |
| Huihui-Qwen3.5-35B-A3B-Claude-4.6-Opus-Abliterated | 35B A3B | GGUF | No | Ollama / llama.cpp | Qwen 3.5 MoE (35B total, ~3B active). Distilled from Claude 4.6 Opus reasoning. Chain-of-thought and reasoning-focused. Abliterated. Multimodal. Punches well above its active param count on reasoning tasks. |
| Midnight Rose 70B v2.0.3 | 70B | GGUF | No | Ollama / llama.cpp | By sophosympatheia. Complex multi-stage SLERP/DARE-TIES merge of WizardLM, Tulu-2-DPO, Dolphin, and earlier Midnight Rose versions. Uncensored. Designed for roleplay and storytelling. Scored surprisingly high on EQ-Bench even at low quants. ~6k context sweet spot. |
| Midnight Miqu 70B v1.5 | 70B | GGUF | No | Ollama / llama.cpp | Llama-family merge of Midnight-Miqu v1.0 and Tess-70B. Creative writing and roleplay focused. 32k context. Known for strong prose quality and character consistency at 70B scale. |
| Midnight Rose 103B v2.0.3 | 103B | GGUF | No | heavy self-host | Same lineage as the 70B but scaled up. Importance-matrix GGUF by mradermacher. Firmly in the “need real hardware” category. |
| DeepSeek V3 | 671B A37B | safetensors | Yes: DeepInfra, Novita | Hosted preferred | Massive MoE. 671B total, 37B active. Strong on code, math, and instruction-following. Pre-trained on ~15T tokens. Use via OpenRouter, not locally. |
| DeepSeek V3.2 | 685B A37B | safetensors | No confirmed provider yet | Hosted preferred | Successor to V3. Same general architecture class. Not a local play. |
| Behemoth-123B-v1 | 123B | GGUF | No | heavy self-host | Mistral-family 123B. Creative/RP community model. Massive parameter count makes it impractical for casual local use but prized for output quality in the r/LocalLLM community. |
| Monstral-123B | 123B | GGUF | No | heavy self-host | Mistral-family 123B. Text generation and chat focused. Same weight class as Behemoth, different training mix and community lineage. |
| BlackSheep-Large | ~27B | GGUF | No | Ollama / llama.cpp | By TroyDoesAI. Canonical repo is gated. Q8_0 is ~29.5 GB, placing it in the 27B-class. Community RP/creative model. |
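The BlackSheep-Large row infers a weight class from file size. As a rough sanity check, here is that arithmetic, assuming llama.cpp's Q8_0 layout (each block of 32 weights stored as 32 int8 values plus one fp16 scale, i.e. 34 bytes per 32 weights, about 1.0625 bytes per parameter) and ignoring small non-quantized tensors:

```python
# Rough parameter-count estimate from a GGUF Q8_0 file size.
# Assumption: 34 bytes per block of 32 weights (~1.0625 bytes/param);
# embeddings and norm tensors are ignored, so this is only a ballpark.

Q8_0_BYTES_PER_PARAM = 34 / 32  # 1.0625

def estimate_params_from_q8_0(file_size_gb: float) -> float:
    """Approximate parameter count, in billions, for a Q8_0 GGUF file."""
    return file_size_gb / Q8_0_BYTES_PER_PARAM

print(round(estimate_params_from_q8_0(29.5), 1))  # prints 27.8
```

A ~29.5 GB Q8_0 file working out to roughly 27.8B parameters is consistent with calling it "27B-class."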
After playing with it a bit, Meta’s Muse Spark Thinking is fine so far, but really doesn’t match the current Big Three models. It’s also a bit… weird: some strange language and tone, a little loose with facts, etc. And here is how it does on the neo-gothic shader test.
https://x.com/emollick/status/2042040840554451286
I think Muse Spark came in far better than most were expecting as the first new model attempt from Meta, especially given the fact that it has been a year since Llama 4 with no models at all (and that Llama 4 was generally considered a dead end).
https://x.com/emollick/status/2043209068890763334
Mark Zuckerberg is reportedly building an AI clone to replace him in meetings | The Verge
https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
As we develop more capable models at the frontier, MSL is committed to safety and preparedness for AI. To demonstrate this commitment, we will be publishing preparedness reports for our models, in line with our new Advanced AI Scaling Framework. See our Muse Spark report below:
https://x.com/alexandr_wang/status/2044454230614999441
check out Contemplating mode for your most complex reasoning queries!
https://x.com/alexandr_wang/status/2043177308803215811
cool to see people finding new emergent capabilities within Muse Spark!
https://x.com/alexandr_wang/status/2042360886195581330
honestly I didn’t even know our model could do some of these
https://x.com/alexandr_wang/status/2042805863979626574
i find muse spark is very good at data analysis–both finding relevant open-source data and analyzing it. for example, here’s my results for analyzing global share of GDP over past century:
https://x.com/alexandr_wang/status/2043432483006615806
Meta AI is up to #6 in the App Store overnight, and still growing 🙂 Also who knew the 7-Eleven app was so popular
https://x.com/alexandr_wang/status/2042254047244398978
MSL *really does* run like a startup 🙂 join us if that sounds exciting to you!
https://x.com/alexandr_wang/status/2043176328170705036
muse spark is impressively multimodal!
https://x.com/alexandr_wang/status/2042362366784881011
muse spark is the best model I’ve personally used for Design & UI. great to hear the community experience it as well!
https://x.com/alexandr_wang/status/2042610847520809295
okay this is too exciting 🙂 meta AI is now #2 in the app store, top AI app! we are so back!
https://x.com/alexandr_wang/status/2043016694910587228
people are finding all the cool things we built into muse spark 🙂
https://x.com/alexandr_wang/status/2043175802578346466
the muse spark API will be coming soon! we have been thrilled with the amount of excitement amongst developers who want to try muse spark inside their agentic harnesses. stay tuned!
https://x.com/alexandr_wang/status/2042614906059387211
up to #3, coming for the crown 👑 that being said, MONOPOLY GO!Chat is now #1, so i’m learning a lot about the App Store
https://x.com/alexandr_wang/status/2042808439852630073
we are excited for people to try muse spark!
https://x.com/alexandr_wang/status/2042142866697548189
Meta commits to 1 GW with Broadcom, Hock Tan to leave board
https://www.cnbc.com/2026/04/14/meta-commits-to-one-gigawatt-of-custom-chips-with-broadcom-as-hock-tan-agrees-to-leave-board.html
Meta’s new AI can predict your brain better than a brain scan. TRIBE v2 is a foundation model trained on 1,000+ hours of brain imaging data from 720 people. You feed it a video, sound clip, or text, and it predicts: > Which brain regions light up > How strongly > And in what
https://x.com/rowancheung/status/2042260621274861756