Image created with OpenAI GPT-Image-1. Image prompt: Cheesy late-night infomercial freeze-frame—rotating pedestal packed with gadgets under starburst “TECH TOTAL-TREASURE™”; silver rim, NTSC halos, high-resolution
If you are beginning a software engineering career, you MUST be in the top 1%, or it will be very difficult. OpenAI’s SWE AI coding agent Codex merged 352K+ Pull Requests with an 85.5% success rate. And that number covers just the previous 35 days. This repo tracks the opened https://x.com/rohanpaul_ai/status/1936041618554954142
Lots of interesting stuff in this paper, plus, as of the end of 2024: “the annual value of AI-assisted coding in the United States at $9.6–14.4 billion, rising to $64–96 billion if we assume higher estimates of productivity effects reported by randomized control trials” https://x.com/emollick/status/1933206622483939491
II-Medical-8B-1706 is our latest state of the art open medical model 💡 Outperforms the latest @Google MedGemma 27b model with 70% fewer parameters 🤏 Quantised GGUF weights, works on <8 GB RAM 🚀 One more step to the universal health knowledge access that everyone deserves ⚕️ https://x.com/ii_posts/status/1934959488710094990
Intelligent Internet introduced II-Medical-8B-1706, an updated version of its open medical model. Capable of running on <8GB of RAM, it outperformed Google’s MedGemma 27B across benchmarks despite having 70% fewer parameters https://x.com/rowancheung/status/1935247303524114645
🚀 Autonomous Agents That Think, Remember, and Evolve An incredible project by Aaron Brown (from AWS team), showcasing how autonomous agents can move beyond simple tasks to reason, remember, and adapt – powered by Mem0, @awscloud Bedrock, and Strands Agents SDK 💡 From https://x.com/mem0ai/status/1932098203706613888
New Insights for Scaling Laws in Autonomous Driving https://waymo.com/blog/2025/06/scaling-laws-in-autonomous-driving
Waymo shows that scaling data and compute lifts autonomous-vehicle forecasting and planning in a predictable way. • Similar to LLMs, motion forecasting quality also follows a power-law as a function of training compute. • Expanding data volume elevates model performance. • https://x.com/rohanpaul_ai/status/1934564747078508994
II-Medical-8B-1706, an 8B param health model reached GPT-4o/4.1/4.5 benchmarks, surpassing physicians. Intelligence is gradually ceasing to be human-exclusive. https://x.com/rohanpaul_ai/status/1935309527832056065
in the last 35 days, @OpenAI codex has merged 345,000 PRs on github. 345,000. AI is eating software engineering https://x.com/AnjneyMidha/status/1935865723328590229
Who did it best? Simple svg prompt, one-shot : r/singularity https://www.reddit.com/r/singularity/comments/1l486ji/who_did_it_best_simple_svg_prompt_oneshot/
The progress of Gemini over the last year + https://x.com/OfficialLoganK/status/1935136191927501235
RT @rohanpaul_ai: This is really BAD news for LLMs’ coding skills. ☹️ The best Frontier LLM models achieve 0% on hard real-life Programming… https://x.com/sainingxie/status/1934994111536251361
A useful piece on criticizing AI: “it’s all PR” and “they are just parrots” are increasingly dead ends in a world where AI clearly can do effectively novel & important tasks. AI calls for robust criticism, but that criticism needs to be more grounded in the current state of LLMs. https://x.com/emollick/status/1935020901172387979
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task https://arxiv.org/pdf/2506.08872v1
There’s a handful of personal use cases that make me really bullish on hyper-personalized utility. One of them is using ChatGPT as a personal running coach. Fed it all my run stats going back years, and said hey I have a race on this date, I wanna run x pace and keep my HR https://x.com/raizamrtn/status/1935781113513091107
RT @NielsRogge: “Hugging Face is basically the equivalent of GitHub in the era of Software 2.0” – Karpathy, 2025, colorized https://x.com/reach_vb/status/1935970251004313788
🛠️ @AnthropicAI researchers just found out a way to teach language models to fine-tune themselves. So basically, models now fine-tune themselves by rating their own answers. i.e. self-grading frees models from human bottlenecks. Internal Coherence Maximization lets the model https://x.com/rohanpaul_ai/status/1933881685256335815
this repo is gold! a collection of LLM apps with multi-agents, MCP, RAG and so much more. the best way to learn is by building, and this repo provides the blueprint. https://x.com/Hesamation/status/1932742625511289058
[2506.11763] DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents https://arxiv.org/abs/2506.11763
Introducing ALE-Bench, ALE-Agent! Towards Automating Long-Horizon Algorithm Engineering for Hard Optimization Problems Blog: https://x.com/SakanaAILabs/status/1934767254715117812
One challenge no AI model has been able to do well: “create a coherent, thematic puzzle for a D&D game. The puzzle should be challenging, but solvable” The current big models are much more on theme than older ones, but still are either too easy or hard (And love similar puzzles) https://x.com/emollick/status/1934854293649006837
Sakana AI developed a new coding agent, ALE-Agent, trained to solve NP-hard optimization problems. Our agent participated in a live coding competition, the challenging AtCoder Heuristic Contest, and ranked #21 out of 1,000 human participants! Learn more: https://x.com/hardmaru/status/1934767617895747862
Switching LLMs wastes tokens. That’s why I built an n8n AI Agent that picks the best LLM for best performance & cost. • n8n JSON template • Prompts included • Setup guide It’s 100% FREE! Just: • Follow • Like • Reply “ROUTER” I’ll DM you. https://x.com/TheVeller/status/1922023956980101205
Why did the new coding LM Kimi-Dev-72B have a 43% accuracy drop when used in a different harness? The reason lies in the difference between agentic and agentless approaches to doing bug-fixing on repos, as explained in the thread! https://x.com/gneubig/status/1935028296565309807
Error checking is a great application of generative AI capabilities and there are low-hanging fruits in just about every domain: – Software: automatic detection of security vulnerabilities – Writing: identifying logical gaps, unclear structure, and weak arguments (can be seen as… https://x.com/random_walker/status/1935311882857947507
Tracing + Evals w/o LangChain/Graph How to get the benefits of LangSmith (evals + tracing) + Studio (testing) w/o using LangChain or LangGraph? Here, we walk through it from scratch, using a non-LangChain/Graph agent as an example! 📽️: https://x.com/LangChainAI/status/1935706402896707657
RT @googleaidevs: 🩺 Get started with MedGemma, a collection of Gemma 3 variants built for medical text and image comprehension. Choose betw… https://x.com/osanseviero/status/1936096973691539652
Fully autonomous AI agents encounter issues with reliability, transparency, and understanding human needs. This paper proposes LLM-based Human-Agent Systems (LLM-HAS), integrating AI with human guidance and control. This collaboration enhances trustworthiness and adaptability https://x.com/rohanpaul_ai/status/1934409086193574174
Sakana AI just introduced Text-to-LoRA, a hypernetwork that generates task-specific LLM adapters It is like a magic wand that, when you describe what you want the AI to do, it quickly adjusts itself to do that task without needing extra data and time https://x.com/rowancheung/status/1933072197116993853
Self-Adapting Language Models https://jyopari.github.io/posts/seal
We’re excited to introduce Text-to-LoRA: a Hypernetwork that generates task-specific LLM adapters (LoRAs) based on a text description of the task. Catch our presentation at #ICML2025! Paper: https://x.com/SakanaAILabs/status/1932972420522230214
Investment analyst Mary Meeker has released “Trends — Artificial Intelligence (May ’25),” her first tech market survey since 2019. 👉 The 340-page, data-rich report argues that AI’s breakneck adoption and escalating capital spending are fueling both record opportunities and https://x.com/DeepLearningAI/status/1934823324183396517
Inside the Secret Meeting Where Mathematicians Struggled to Outsmart AI | Scientific American https://archive.md/tom60
RT @lmarena_ai: 🚨Breaking: New DeepSeek-r1 (0528) just tied for #1 in WebDev Arena, matching Claude Opus 4! More highlights: 💠 #6 Overall… https://x.com/ClementDelangue/status/1934714392693588415
New test can help driverless cars make ‘moral’ decisions https://techxplore.com/news/2025-06-driverless-cars-moral-decisions.html
📚 We just enhanced YourBench with support for cross-document question generation! Now you can create evaluation datasets where questions span across multiple documents, not just one Just add a cross_document section to your config and set `enable: true` • you’ll get a https://x.com/ailozovskaya/status/1934962439247851889
60.4% on SWE-bench Verified in a 72B package? https://x.com/scaling01/status/1934746243286319435
98.5th percentile for 10 cents is now considered “BAD NEWS for LLMs” https://x.com/scaling01/status/1935058896806457427
Almost all randomized controlled trials on the impacts of AI on innovation, productivity, & job performance pre-date reasoner models. The ones we do have (a couple of tests of o1-preview in law & medicine) suggest they may lead to a large jump in many fields, but we don’t know https://x.com/emollick/status/1933282450949689851
stop using VLMs blindly ✋🏻 compare different VLM outputs on a huge variety of inputs (from reasoning to OCR!) 🔥 > has support for multiple VLMs: Gemma 3, Qwen2.5VL, Llama4 > recommend us new models or inputs, we’ll add 🫡 https://x.com/mervenoyann/status/1935708014645784713
o3-pro does by far the best so far at my benchmark (scroll quote tweet thread for others): “create a visually interesting shader that can run in twigl app make it like the ocean in a storm” It did take 21 minutes for o3-pro to think (and another 19 to fix a small shader error) https://x.com/emollick/status/1932995067091800066
For years, we’ve been saying that bigger isn’t always better for AI and that smaller specialized models are usually faster, cheaper and more accurate for your specific constraints. So super happy to release the long-overdue capability of finding the best model based on size on https://x.com/ClementDelangue/status/1934672721066991908
Here’s a downloadable preview of the first chapter of our book on AI Evals written by @sh_reya and I, with a full table of contents. We are currently using this in our course and plan on eventually expanding it into a book. Feedback on TOC welcome! https://x.com/HamelHusain/status/1933912566910378384
It seems like benchmarking papers are increasingly being discussed as if they were proofs of AI limitations in the long run. Benchmarks are not useful if they are already saturated (and the fact that AI is at the 98.5th percentile of human coders on a task set seems to be important as well) https://x.com/emollick/status/1935107962835444080
Part 2 of this mystery. Spotted on reddit. In my test not 100% reproducible but still quite reproducible. 🤔 https://x.com/karpathy/status/1935404600653492484
Ten years ago today, @OriolVinyalsML and I published this paper on arXiv. Back then, we didn’t know how to evaluate chatbots so we chatted with the model and showed the samples in the paper. Glad that 10 years later, chatbots are still cool – and vibe-checking is going strong :-) https://x.com/quocleix/status/1936170043332825164
By surveying workers and AI experts, this paper gets at a key issue: there is both overlap and substantial mismatches between what workers want AI to do & what AI is likely to do. AI is going to change work. It is critical that we take an active role in shaping how it plays out. https://x.com/emollick/status/1933533198014660803
This paper shows the same effect as other studies of “cheating” with AI – if you use AI to do the work (as opposed to using it as a tutor), you don’t learn as much. But note: the results are specific to the essay task – not a generalized statement about LLMs making people dumb. https://x.com/emollick/status/1934798590737588294
Excited to release AbstentionBench — our paper and benchmark on evaluating LLMs’ *abstention*: the skill of knowing when NOT to answer! Key finding: reasoning LLMs struggle with unanswerable questions and hallucinate! Details and links to paper & open source code below! https://x.com/polkirichenko/status/1934730967446638644
There’s a tech report for gemini 2.5 now. If i don’t see this citation on the first page of any LLM paper i’m not gonna read it. 😃 https://x.com/agihippo/status/1935015620250305018
With just 5 nodes in n8n + Apify, I’ve automated Instagram market research. Here’s exactly how I built it: https://x.com/samruddhi_mokal/status/1924379123096420366
It is intuitively obvious that reasoning in continuous embedding space is dramatically more powerful than reasoning in discrete token space. This paper from @tydsh and team shows that this is the case theoretically. https://x.com/ylecun/status/1935253043676868640
RT @arankomatsuzaki: From Bytes to Ideas: Language Modeling with Autoregressive U-Nets Presents an autoregressive U-Net that processes raw… https://x.com/ylecun/status/1935481174673223717
Human-like object concept representations emerge naturally in multimodal large language models https://arxiv.org/pdf/2407.01067
o3-pro is rolling out now for all chatgpt pro users and in the api. it is really smart! i didnt believe the win rates relative to o3 the first time i saw them. https://x.com/sama/status/1932532561080975797
RT @MilesKWang: We found it surprising that training GPT-4o to write insecure code triggers broad misalignment, so we studied it more We f… https://x.com/OpenAI/status/1935385627085914437
Redditor says ChatGPT saved his wife’s life by correcting a doctor’s fatal misdiagnosis. Comments are filled with people sharing their own stories. I don’t understand the AI haters at all. This technology saves lives. https://x.com/deedydas/status/1933370776264323164
Sam says Zuck🦎 is luring OpenAI researchers with $100M signing bonuses and $100M+ yearly salaries : r/ChatGPT https://www.reddit.com/r/ChatGPT/comments/1leciub/sam_says_zuck_is_luring_openai_researchers_with/
Back-of-the-envelope it seems like each ChatGPT-4o query costs less than a cent, given the 0.34 Wh energy use per average prompt & a billion prompts a day & public GPU costs per hour (training costs of $100M+ are basically meaningless per query). Pretty profitable at $20/month. https://x.com/emollick/status/1933020534498865574
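The arithmetic sketches out as follows; the electricity price and the 100x overhead multiplier below are illustrative assumptions, not figures from the post:

```python
# Back-of-the-envelope cost per ChatGPT query, using the 0.34 Wh/prompt
# figure from the post. The $/kWh price and the overhead multiplier are
# assumptions for illustration.
WH_PER_PROMPT = 0.34
USD_PER_KWH = 0.10  # assumed electricity price

energy_cost = (WH_PER_PROMPT / 1000) * USD_PER_KWH  # USD per prompt
print(f"energy cost per prompt: ${energy_cost:.8f}")  # $0.00003400

# Even if full serving cost (GPU amortization, overhead) were 100x the raw
# energy cost, a query would still land well under one cent.
total_cost = energy_cost * 100
assert total_cost < 0.01
```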
OpenAI Is Phasing Out Its Work With Scale AI After Meta Deal – Bloomberg https://www.bloomberg.com/news/articles/2025-06-18/openai-is-phasing-out-its-work-with-scale-ai-after-meta-deal?embedded-checkout=true
It’s both surprising and worrisome that broad misalignment emerges simply from training models on insecure code. Great to see @OpenAI publishing research investigating how this happens and how to mitigate it! https://x.com/polynoamial/status/1935411224281534756
Moonshot AI launched Kimi-Dev-72B, a new open-source coding model for software engineering tasks It achieves SOTA results on SWE-bench Verified software tasks, surpassing open-source rivals like DeepSeek R1, V3, and Devstral https://x.com/rowancheung/status/1934881573490331768
Thrilled to introduce Kimi-Dev-72B, our new open-source coding LLM for software engineering tasks. Kimi-Dev-72B achieves 60.4% resolve rate on SWE-bench Verified, setting a new SoTA result among open-source models. (1/5) https://x.com/yang_zonghan/status/1934652763985838585
if you have done novel AI research, and considered publishing it on arXiv, but the idea makes you nervous: over 2000 papers about AI were posted last week. the worst case is no one notices. best case is you contribute something meaningful to the field you should just do it https://x.com/jxmnop/status/1934981357010030991
UPDATE: This video was from last Saturday – robot speed was 4.05 seconds/package Yesterday, I saw it running at 3.54 seconds/package That’s a 13% speed-up in just 6 days 🤯 https://x.com/adcock_brett/status/1933970257028530514
Biomedical and clinical encoders have limited domain adaptation and long-context support. This paper develops BioClinical ModernBERT, pre-trained on a large, diverse corpus, offering long context and state-of-the-art performance. Methods 🔧: → Training resumed from https://x.com/rohanpaul_ai/status/1934080672500502794
24 TRILLION tokens of ultra high quality dataset! 💥 https://x.com/reach_vb/status/1935444297966604539
“A Neural Conversational Model” is 10 years old, w/ @quocleix. TL;DR you can train a chatbot with a large neural network (~500M params!). Samples 👇 This paper was received with mixed reviews, but I’m glad all the critics are now riding the LLM wave 🌊 https://x.com/OriolVinyalsML/status/1936157090164187285
Backpropagation in Neural Network – GeeksforGeeks https://www.geeksforgeeks.org/machine-learning/backpropagation-in-neural-network/
Does it require reasoning to answer this question, given I require the first letter of each word in your response to spell in words the count of letters in its own SHA1 hash? https://x.com/goodside/status/1934833254726521169
Everyone knows uv makes it easy to handle Python dependencies. But did you know that if you declare dependencies in a script header, you can then `uv run` https://x.com/nrehiew_/status/1933932062198636700
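The header in question is PEP 723 inline script metadata, which `uv run` reads to build an ephemeral environment for the script. A minimal sketch; the `httpx` dependency is just an illustrative placeholder:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["httpx"]  # hypothetical example dep; uv installs it on `uv run`
# ///
# `uv run script.py` reads the PEP 723 header above, builds an ephemeral
# environment with the listed dependencies, and executes the script.
# No manual venv or `pip install` step is needed.
import json

payload = json.dumps({"tool": "uv", "pep": 723})
print(payload)
```

Since the header is plain comments, the script also stays runnable under ordinary `python` when the dependencies are already available.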
Fuller, robust control of memory is absolutely critical for all business AI use cases. Context rot definitely accrues over time in black-box systems. This is why Embra’s AI memory looks more like a CRM than a black box. Users explore and begin work inside the AI brain. https://x.com/zachtratar/status/1935491439028531293
GRPO quirk that contradicted my intuition: If you train on a group with rewards [0, 0, 0, 1] And then you train on another group with rewards [0.99, 0.99, 0.99, 1] Because of how GRPO normalizes within groups, the last trajectory will be equally reinforced in both cases! https://x.com/corbtt/status/1935810380850511945
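The quirk is easy to verify numerically: GRPO's group-relative advantage is the within-group z-score, (reward minus group mean) divided by group std, so only the pattern of rewards inside a group matters, not their scale. A minimal sketch of just the advantage computation, not the full GRPO update:

```python
from statistics import mean, pstdev

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: z-score within the group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / sigma for r in rewards]

a = grpo_advantages([0, 0, 0, 1])
b = grpo_advantages([0.99, 0.99, 0.99, 1])

# The winning trajectory gets the same advantage (about +1.732) in both
# groups, even though in the second group every trajectory was nearly perfect.
print(a[-1], b[-1])
assert abs(a[-1] - b[-1]) < 1e-6
```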
Here’s my growing list of writing guidelines in my prompts. I feel that the RLHF isn’t working well on writing? I have to fight slop aggressively. Rule of thumb: delete 50% of what AI writes because at least that much is low value fluff (but AI is still helpful) 1. Do not https://x.com/HamelHusain/status/1934029394391228818
Hot take: at current token prices you should *always* ask your LLM-as-judge to explain its CoT first before answering. Makes them way easier to debug when it inevitably makes a judgement you disagree with. https://x.com/corbtt/status/1935061614149128616
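A sketch of how that ordering can be enforced; the prompt wording and the canned response are hypothetical, the point is only that the reasoning field comes before the verdict so it can be inspected later:

```python
# Illustrative judge prompt: reasoning is requested BEFORE the verdict,
# so a disagreement can be debugged by reading the explanation.
JUDGE_PROMPT = """Compare the two answers below.
First write your step-by-step reasoning under "Reasoning:".
Then, on the final line, write "Verdict: A" or "Verdict: B".

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
"""

def parse_judgement(raw):
    """Split the judge's output into (reasoning, verdict)."""
    reasoning, _, verdict_line = raw.rpartition("Verdict:")
    return reasoning.strip(), verdict_line.strip()

# Canned response standing in for a real model call:
raw = "Reasoning: A cites the spec, B guesses.\nVerdict: A"
reasoning, verdict = parse_judgement(raw)
print(verdict)  # A
```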
How Human-in-the-loop (HITL) grounds synthetic data: ▪️ Validating and curating synthetic data: HITL here is a continuous loop: generation → human review → correction → curation. Human experts vet the generated datasets, discarding unrealistic examples, correcting factual https://x.com/TheTuringPost/status/1934748339792474140
Large language models lack mechanisms to adapt weights based on new data or tasks. This paper presents Self-Adapting Language Models (SEAL), allowing models to generate their own finetuning data and updates. SEAL improved knowledge recall accuracy to 47.0% on SQuAD with https://x.com/rohanpaul_ai/status/1934476530991972637
Many people have been asking for an interface to OpenHands that is: 1. easy to install (no docker) 2. can be used in your standard development environment This new CLI checks both of these boxes, and is fun to use! https://x.com/gneubig/status/1934990765119410225
Optimizing the data mixture for training LLMs is a slow, computation-intensive process. This paper presents DOMAIN2VEC which converts datasets into vectors, enabling efficient, training-free identification of the optimal data mixture by aligning training and validation dataset https://x.com/rohanpaul_ai/status/1934096023368221087
Overemphasis on simplicity gives you a big bag of tricks that don’t add up. Parsimony is a better goal. Find the smallest number of pieces that cohesively fix a whole range of problems. Put differently, you need to invent Unix first before it makes sense to create lots of small https://x.com/lateinteraction/status/1935525945806590425
RT @arcee_ai: Today, we’re thrilled to unveil the @arcee_ai Foundation Models, a new family of GenAI models designed from the ground up for… https://x.com/code_star/status/1935439879506424295
RT @ashVaswani: Check out our latest research on data. We’re releasing 24T tokens of richly labelled web data. We found it very useful for… https://x.com/eliebakouch/status/1935137555923493257
RT @charliermarsh: The Python Steering Council has voted to remove the “experimental” label from the free-threaded (“nogil”) builds for Pyt… https://x.com/jeremyphoward/status/1934837032079274446
RT @dianaabagyan: 🚨New pretraining paper on multilingual tokenizers 🚨 Super excited to share my work with @Cohere_Labs: One Tokenizer To R… https://x.com/sarahookr/status/1934739893906964684
RT @eliebakouch: 24T token dataset but the best part is that it’s labelled with a 12-category taxonomy covering subject, reasoning depth, e… https://x.com/code_star/status/1935203602903207963
RT @essential_ai: [1/5] 🚀 Meet Essential-Web v1.0, a 24-trillion-token pre-training dataset with rich metadata built to effortlessly curat…”” / X https://x.com/ClementDelangue/status/1935146797229555857
RT @mathusmassias: New paper on the generalization of Flow Matching https://x.com/jeremyphoward/status/1935826496297615483
RT @MelMitchell1: New paper: “Large Language Models & Emergence: A Complex Systems Perspective” (D. Krakauer, J. Krakauer, M. Mitchell). W… https://x.com/ecsquendor/status/1934818424372531434
RT @percyliang: Wrapped up Stanford CS336 (Language Models from Scratch), taught with an amazing team @tatsu_hashimoto @marcelroed @neilbba… https://x.com/NandoDF/status/1935833111889133597
RT @tenderizzation: the FP8 values in your model after 50 layers of quantize/dequantize operations https://x.com/TheZachMueller/status/1935434078435819925
RT @tobi: I really like the term “context engineering” over prompt engineering. It describes the core skill better: the art of providing… https://x.com/imjaredz/status/1936099226104004866
Someone thought it would be useful to quickly write up a note on my thoughts on scalable oversight research, e.g., research into techniques like debate or generally improving the quality of human oversight using AI assistance or other methods. Broadly, my view is that this is a https://x.com/RyanPGreenblatt/status/1935407345888280938
The Illusion of the Illusion of Thinking A Comment on Shojaee et al. (2025) https://arxiv.org/html/2506.09250v1
the model keeps on training even when the underlying infra keeps failing… out-of-the-box PyTorch https://x.com/soumithchintala/status/1936136796963823848
There have been *thousands* of optimizer papers published. But the SOTA has only improved *once* (Adam -> AdamW; others are just better implementations of those 2, e.g. FSDP). Therefore, we should stop writing those papers. No need to cite AdamW. Everyone knows where it’s from. https://x.com/hyhieu226/status/1934290217516793947
This work uncovers a profound connection between continuous and discrete (non-absorbing) diffusion models, allowing transfer of advanced techniques such as consistency distillation to the discrete setting! Also: amazing title, no notes! 🧑🍳😙🤌 https://x.com/sedielem/status/1934730362476712043
Time series forecasting methods lack explicit reasoning, relying on fast pattern matching and struggling with complex temporal logic. Current LLM methods also lack deep, step-by-step reasoning and generalization. Time-R1 trains LLMs with a two-stage reinforcement fine-tuning https://x.com/rohanpaul_ai/status/1934172779500142827
Training LLMs is prohibitively expensive. Insights from small-scale experiments do not transfer to large systems, hindering innovation. Farseer, a refined scaling law, offers enhanced predictive accuracy across scales to bridge this gap. It achieves better fits to empirical https://x.com/rohanpaul_ai/status/1934461179835117692
Understanding and Coding KV Caching From Scratch — The Extended Edition https://x.com/rasbt/status/1935328683113464169
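The core idea behind the linked writeup can be compressed into a few lines: during autoregressive decoding, each step's key/value projections are appended to a cache so earlier tokens are never re-projected. A toy illustration, with plain Python lists standing in for tensors:

```python
class KVCache:
    """Toy per-layer KV cache: append once per decoded token, reuse forever."""

    def __init__(self):
        self.keys, self.values = [], []

    def update(self, k, v):
        # O(1) append instead of recomputing K/V for the whole prefix.
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values

cache = KVCache()
for step in range(3):  # pretend each iteration decodes one token
    k, v = [float(step)], [float(step) * 2]  # stand-ins for projection outputs
    keys, values = cache.update(k, v)

print(len(keys))  # 3: attention at step t sees cached keys for tokens 0..t
```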
Would you also like the strongest pretraining data in the world? We can help @datologyai https://x.com/code_star/status/1935462275906945428
3 things that make Reasoning Models (RLMs) different: ▪️ Post-training with Reinforcement Learning (RL): PPO, GRPO, RAFT and other RL methods help RLMs develop skills by exploring reasoning strategies through trial and error, rewarding correct steps and answers. ▪️ https://x.com/TheTuringPost/status/1935736112515080314
We now have a better dataset for chain of thought unfaithfulness, on the kinds of prompts a user might actually give! This is a really important area, and I hope this can support further research. And it’s great to see our results hold up (though weaker) https://x.com/NeelNanda5/status/1935411492146368559
What Does “Attention” Mean in AI? The term attention gets thrown around a lot in conversations about AI, but it often carries assumptions from human psychology that don’t fully apply to how models work. So let’s unpack it — and look at where human and AI attention overlap, and https://x.com/TheTuringPost/status/1935814653210509507
From Bytes to Ideas 💡 @AIatMeta’s new paper. The model invents its own tokens while it learns, by having one network pool bytes into words on the fly. LLMs rely on a fixed split chosen before training, which limits how far they can look ahead and how well they transfer across https://x.com/rohanpaul_ai/status/1935540383171174495
Fine-tuning LLMs on social media data makes the generated text much more realistic and substantially decreases detection accuracy. Methods 🔧: → Researchers created a large dataset of 505,159 AI social media texts from multiple LLMs. → They fine-tuned models using small, https://x.com/rohanpaul_ai/status/1934188382084624795
Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix | by Netflix Technology Blog | Jun, 2025 | Netflix TechBlog https://netflixtechblog.com/uda-unified-data-architecture-6a6aee261d8d
Our models need to run in real time on real robots, but inference with big VLAs takes a long time. We developed Real-Time Action Chunking (RTC) to enable real-time inference with flow matching for the π0 and π0.5 VLAs! More in the thread👇 https://x.com/physical_int/status/1932113398961201245