Image created with Ideogram V2. Image prompt: A vibrant spring meadow with exaggerated blooming flowers in bright colors. Hidden comically in the middle is a United Nations-style meeting hall with flags from many nations sticking out like strange plants. Globe sculptures try to disguise themselves as unusual flowers. Translation headsets hang from tree branches. Woodland animals from different continents (pandas, koalas, foxes) gather in diplomatic circles. International currency symbols are embedded in the landscape. The whole scene is bathed in golden sunshine with lens flares. Vibrant colors and high detail. The word “INTERNATIONAL” integrated into the scene.
“Transforming government service delivery in Abu Dhabi with LangGraph The Abu Dhabi government’s AI Assistant, TAMM 3.0, now delivers 940+ services across all platforms with personalized, seamless interactions. Built on LangGraph, their key workflows include: 🔍 Fast, accurate” / X https://x.com/LangChainAI/status/1912207364448743797
“one of my favorite benchmarks: WeirdML GPT-4.1-mini outperforms GPT-4.1 it ranks 6th overall, ahead of Grok-3, DeepSeek-R1 and Sonnet 3.5 I have seen GPT-4.1-mini overperforming relative to GPT-4.1 in other benchmarks too. GPT-4.1-mini might be the hidden gem of this” / X https://x.com/scaling01/status/1912117156751229268
AI code suggestions sabotage software supply chain • The Register https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
Canada Has Answer to Energy Needs in AI Race, Ex-Google CEO Says | Financial Post https://financialpost.com/pmn/business-pmn/canada-has-answer-to-energy-needs-in-ai-race-ex-google-ceo-says
“384 Huawei Ascend 910Cs > GB300NVL72. 300 PFLOPS/server? I guess they compare to NVL72’s 180, that’s TF32, naively means 600 PFLOPS FP16, and 1 910C being 3.2x slower than 1 Blackwell Ultra. Or… 1.6x? should be possible to make 2000 such units with TSMC loot as reported by CSIS.” / X https://x.com/teortaxesTex/status/1911683572953493750
“Huawei’s new AI server is insanely good People need to reset their priors This is why banning H20 without banning tools and sub components is idiotic because Huawei is not far behind H20. The admin needs to act fast to slow down Huawei’s ramp or the H20 ban will be useless” / X https://x.com/dylan522p/status/1912373100668137883
“DeepSeek has incredible market penetration. This is more like the dynamics of messenger apps or something than normal software/service adoption. Give them compute and they become unkillable.” / X https://x.com/teortaxesTex/status/1912211514162831864
NATO inks deal with Palantir for Maven AI system | DefenseScoop https://defensescoop.com/2025/04/14/nato-palantir-maven-smart-system-contract/
“This may be the biggest speed bump in the progress of AI. 😨 China just halted exports of heavy rare earth metals and magnets, targeting industries like autos, semiconductors, aerospace, and defense. → And in this sector, China’s dominance is absolute: 99% of global heavy” / X https://x.com/rohanpaul_ai/status/1911555680730820862
“Vietnam caved before everyone. They, unlike China, were existentially threatened by tariffs. People still don’t get how Xi has flipped the board with his retaliation. This is no longer about China vs. USA, this is USA vs the world where China can say “this crazy mfer isn’t tough”” / X https://x.com/teortaxesTex/status/1911709475284582673
Jack Ma Advocates for AI to Serve Humanity, Not Dominate | AI News https://opentools.ai/news/jack-ma-advocates-for-ai-to-serve-humanity-not-dominate
“The Mogao Reveal: Congratulations to ByteDance Seed on launching Seedream 3.0, the new leading model on the Artificial Analysis Image Leaderboard, beating out GPT-4o, HiDream-I1-Dev, and Recraft V3 Seedream 3.0 is the latest in the Seedream family of bilingual image diffusion” / X https://x.com/ArtificialAnlys/status/1912122278722379903
Making AI Work Harder for Europeans | Meta https://about.fb.com/news/2025/04/making-ai-work-harder-for-europeans/
“meow :3” / X https://x.com/qtnx_/status/1912588116252057873
“ByteDance is killing it across all multimodal paradigms. Absurd. This is just a Transformer. In fact this is *any* Transformer. Slap a VQGAN tokenizer, add 8192 embeddings, post-train, you get cross-modal transfer, understanding and generation have synergy, it just werks.” / X https://x.com/teortaxesTex/status/1912239801463341097
“GPT-4.1 still underperforming DeepSeekV3 in Coding but 8x more expensive” / X https://x.com/scaling01/status/1911830809679368248
“GPT-4.1 underperforming DeepSeek-V3-0324 by over 10% on AIME (also slightly underperforming on GPQA) at 8x the price” / X https://x.com/scaling01/status/1911831700964872531
“What goes on inside the mind of a reasoning model? Today we’re releasing the first open-source sparse autoencoders (SAEs) trained on DeepSeek’s 671B parameter reasoning model, R1—giving us new tools to understand and steer model thinking. Why does this matter?” / X https://x.com/GoodfireAI/status/1912217312566137335
“🙏 @deepseek_ai’s highly performant inference engine is built on top of vLLM. Now they are open-sourcing the engine the right way: instead of a separate repo, they are bringing changes to the open source community so everyone can immediately benefit!” / X https://x.com/vllm_project/status/1911669255428542913
“Long mistral, their models are great. Their latest 24B model is very competitive” / X https://x.com/casper_hansen_/status/1911382474640220546
Classifier Factory | Mistral AI Large Language Models https://docs.mistral.ai/capabilities/finetuning/classifier_factory/