Hey y’all,

I want to call on those of you who subscribe.
A quick ask, a passing of the mic.

Why are you here? Why subscribe?
What are you most interested in?

Coming into August, I made a deliberate decision to overhaul withAgents: less "How to AI your social posts" content and generic news, more focus on the research and philosophy shaping how we live with AI. I like thinking about that stuff more.

So why have you stuck around? As expected, subscribers have dropped since the change; I never assumed the earlier subscribers would all share this interest, and I did anticipate a reset of sorts. But man, I REALLY want you to love this newsletter. Hit me with thoughts, and let's actually connect.

P.S., if you legitimately love this thing already, do us a solid and slap the share button. It’s powerful.

Clay

Today’s story
“States’ Rights and the Brussels Effect on AI”

Yesterday’s AI news cycle was all about declining investor expectations and falling stock prices for Nvidia, Palantir, etc.

Not trading advice… but today we buying.

With reports of Sam Altman using the word “bubble” last week, the Meta drama, and the picked-over MIT report making the rounds, volatility is in the air. (I’ve reached out to get our hands on this report, which is not public. Let’s dig in more once we get it.) The market hates volatility / the news cycle loves it.

But none of this is as clear as the headlines make it seem, and Morgan Stanley’s newest report, out yesterday, expects ~$1T (with a T) in cost savings for businesses.

I think the doom headlines are just easy shares. The same can be said for the buying opportunity around these news dips. (See what I did there?)

So honestly, the news is kind of uninteresting today…

…so let’s talk about something we missed from a few weeks ago.

Want to stop AI from putting out harmful content? Remove the harmful content from training.

This works, and it’s proven to work, without noticeably hurting AI performance otherwise. That’s according to a recent study from the University of Oxford, EleutherAI, and the UK AI Security Institute.

Basically, if you don’t want an openly available AI/language model to become a terror manual, don’t teach it the terror ingredients in the first place. “Shocking” (sarcasm), yes, but actually pretty important.

After removing 8–9% of biology-related content from the pre-training data, the researchers found their model resisted thousands of adversarial prompting attempts and still performed just as well on ordinary tasks.

You might not strip that data from models confined to lab settings, but you probably would for models released to the general public. That’s the idea, anyway.

Deceptively simple, but seems to work. Garbage in, garbage out.
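
If you're curious what "remove it from training" looks like mechanically, here's a minimal sketch: screen each document against a blocklist before it ever reaches the model. The term list, threshold-free matching, and toy corpus below are illustrative stand-ins I made up; the actual study used far more careful filters for proxy biothreat content. But the shape is the same: filter the corpus, then train on what's left.

```python
import re

# Hypothetical blocklist; stand-in terms, not the study's actual filter.
RISKY_TERMS = ["pathogen synthesis", "toxin production", "culture protocol"]
PATTERN = re.compile("|".join(re.escape(t) for t in RISKY_TERMS), re.IGNORECASE)

def keep_document(doc: str) -> bool:
    """Return True if the document is safe to keep in the training set."""
    return PATTERN.search(doc) is None

# Toy corpus standing in for pre-training data.
corpus = [
    "How to bake sourdough bread at home.",
    "Step-by-step toxin production methods...",  # dropped by the filter
    "A history of the printing press.",
]

filtered = [doc for doc in corpus if keep_document(doc)]
print(f"kept {len(filtered)} of {len(corpus)} documents")
```

The design choice worth noticing: all the work happens before training, so there's nothing to "jailbreak" afterward, which is why the filtered model held up against thousands of adversarial prompts.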

Interesting human/AI news
Past 24 hours

Extra credit
Something I think is cool

Last week, Google announced that its new Flight Deals product would launch this week. I got it yesterday; maybe you did too.

I'd love to hear from you!

Passionate about the future? Help shape it by giving feedback on the newsletter.




Thanks as always, human.
