I decided to try a theme with this week’s cover imagery to see how creative MidJourney could be with simple prompts. Each category cover image is a name tag plus an art style. It was pretty neat to see the variations. The goal is not perfection: by posting the mistakes, we’ll get to see how the imagery improves over time. Here is the prompt for the cover:

a Bauhaus name tag that reads “Tech” --ar 5:3 --style raw

“We’ve trained a 70b model that achieves >1000 tokens/s using a custom inference technique called speculative edits. It beats GPT-4o performance on an important task in Cursor called “fast apply”. We explain in detail how we do this in our blog: 
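To give a feel for why speculative edits are so fast, here is my own toy sketch of the idea (illustrative only, not Cursor's actual implementation): when rewriting a file, most tokens are unchanged, so the original file itself serves as a near-perfect draft. Verified draft tokens are cheap to accept in bulk; the model only needs an expensive generation step where the edit diverges.

```python
def speculative_edit(draft_tokens, target_tokens):
    """Toy simulation of speculative editing: produce target_tokens,
    accepting tokens from the draft (the original file) when they match
    and counting how many expensive model generations remain."""
    output = []
    generate_calls = 0  # expensive full-model generation steps
    i = 0               # position in the draft
    for j, tok in enumerate(target_tokens):
        if i < len(draft_tokens) and draft_tokens[i] == tok:
            # Draft token verified -- accepted without generating.
            output.append(draft_tokens[i])
        else:
            # Mismatch: the model must generate this token itself.
            output.append(tok)
            generate_calls += 1
        i += 1  # advance the draft pointer in lockstep
    return output, generate_calls
```

For a one-token rename in a six-token function signature, only one of six tokens requires a real generation step; the rest are verified copies, which is where the throughput comes from.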

“As art evolves, it continues to push the boundaries of creativity. Leading global talents will compete at the Global Prompt Engineering Championship, turning innovative prompts into art, literature and coding masterpieces. Join us at the Global Prompt Engineering Championship 

“Arvind is a great rational skeptic on AI, and this thread is worth reading. I don’t know if we are technically plateauing on LLMs (many insiders I talk to feel that we aren’t), but even if LLMs stop scaling up, we have 5-10 years of massive changes from ancillary tech & adoption.”

“A great illustration of a failure mode of LLMs that many people don’t know. Because the AI is trained on patterns, if a particular piece of text is everywhere on the web, the AI will overfit to that pattern. It is why it is more likely to give “42” when asked for a random number.”
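A minimal sketch of the failure mode described above (the corpus counts here are made up for illustration): an LLM "picks a random number" by sampling from the distribution of numbers it saw in training text, not from a uniform distribution, so a culturally overrepresented number like 42 dominates.

```python
# Hypothetical counts of how often each number appears in web text.
# 42 is a meme number (Hitchhiker's Guide), so it is wildly overrepresented.
corpus_counts = {"7": 120, "17": 40, "23": 35, "42": 300, "68": 5}

total = sum(corpus_counts.values())
probs = {n: c / total for n, c in corpus_counts.items()}

# Greedy decoding picks the single most probable token -- not uniform at all.
most_likely = max(probs, key=probs.get)
```

Under greedy decoding the model would answer "42" every time; even with temperature sampling, the answer distribution stays heavily skewed toward the frequent patterns.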

“Our paper about reliably finding under-trained or ‘glitch’ tokens is out! We find up to thousands of these tokens in some #LLMs, and give examples for most popular models. 

Falcon LLM

“We need to improve our benchmarks the same way we improve our models. Super interesting work from TIGER-Lab with an upgraded version of MMLU with 12k complex questions (vs. 16k for MMLU) and additional reasoning problems. – It is more discriminative among frontier models (see”

Unlock enterprise knowledge with Atlassian Rovo – Work Life by Atlassian

Near-Instant Full-File Edits Editing Files at 1000 Tokens/second

Perturbed-Attention Guidance

“How does online iterative RLHF improve LLMs?🤔 A team at @salesforce released a paper with a reproducible recipe for online iterative RLHF, showing online RLHF methods, such as online iterative DPO, outperform offline methods. 😍 Implementation 0️⃣ Train or select a Reward Model 
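The loop structure behind online iterative DPO can be sketched with a toy simulation (everything below is an illustrative stub of my own, not the Salesforce recipe: the real method trains neural policy and reward models, while this toy just shows the iterate-sample-rank-update cycle that distinguishes online from offline RLHF):

```python
import random

def reward_model(response: str) -> float:
    # Stand-in reward: prefer longer responses.
    return len(response)

class ToyPolicy:
    """Toy 'policy' that emits responses from a shrinking candidate pool."""
    def __init__(self, vocab):
        self.vocab = list(vocab)

    def sample(self, n):
        return random.sample(self.vocab, n)

    def dpo_update(self, chosen, rejected):
        # Crude stand-in for a DPO gradient step: drop the rejected
        # response so the policy shifts toward preferred outputs.
        if rejected != chosen and rejected in self.vocab:
            self.vocab.remove(rejected)

def online_iterative_loop(policy, iterations=5):
    for _ in range(iterations):
        # 1. Sample candidates from the *current* policy (the "online" part).
        candidates = policy.sample(min(2, len(policy.vocab)))
        # 2. Rank with the reward model to form a fresh preference pair.
        ranked = sorted(candidates, key=reward_model, reverse=True)
        chosen, rejected = ranked[0], ranked[-1]
        # 3. Update the policy on the fresh pair, then repeat.
        policy.dpo_update(chosen, rejected)
    return policy
```

The key point the paper makes is step 1: offline methods train on a fixed preference dataset, while the online variant keeps generating new pairs from the updated policy each round, which is why it outperforms offline DPO.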

Heads up! You’ve reached the end of this category. It may have contained only one or two links, so scroll back up to make sure you didn’t miss them.

Be Sure To Read This Week’s Main Post:

This week’s executive overview and top links are here:

AI News #33: Week Ending 05/17/2024 with Executive Summary and Top 58 Links

The post you just read is a deep-dive extension of my weekly newsletter, This Week In AI, an executive summary of the top things to know in AI. Each week, I create an accessible overview so laypeople can feel confident they are conversant with the week’s AI developments. I include a curated list of must-click links of the week, offering everyone a hands-on opportunity to explore the most intriguing updates in artificial intelligence across various categories, including robotics, imagery, video, AR/VR, science, ethics, and more. Beyond the overview, I post these topic-based deeper dives. If you haven’t read this week’s overview, I recommend starting there.

Credits/Sources

Most of these weekly links come from just a few prolific oversharing sources. Please follow them, as they work hard to find the news each week and they make it a lot easier for me to compile.

For previous issues, please visit the archives!

Thanks for reading!
