Hey, how’s it going?

Today we’re trying out a new, punchier format for withAgents that leans into the heart of why I’m writing this thing: “How are we grappling with AI as a society, and how will that impact how we use it?”

Look for a daily “AI Sentiment Heat Chart”, at least in the near term: six possible score categories, scored each day based on the news cycle in aggregate.

We’ll also dig lightly into one “Positive Signal” news story, and one “Negative Signal” news story each day — hopefully bringing you more context and a new angle to the story.

Then just a quick bullet list of everything else. Hope you like it.

Off tomorrow,

Clay

Today’s AI News Snapshot

With risk discussions and questions about an AI bubble still prominent in Tuesday’s news cycle, today scored a 46: “Cautious Pessimism”. Expect the same tomorrow.

Positive Signal

Anthropic begins rolling out a Claude AI agent for Chrome -
Anthropic announced a “research preview” (i.e., available only to a select set of Pro users for now; a slow roll) of a Claude agent that runs inside Chrome.

It lets you talk to Claude in a side window and lets the agent do stuff on websites for you. It’s cool; I’ve written about this kind of thing before (Perplexity’s Comet browser is similar), but this one lives in Chrome.

The most interesting thing here is Anthropic’s proactive problem-solving in the budding space of malicious website-driven attacks on AI, known as prompt injection. Basically, bad actors might hide prompts inside their webpages to trick browser-based AIs into doing things (one example: making it delete all your emails!). So Anthropic is working on solving these problems before rolling the browser agent out more widely.

Getting this right will mean more integrated, more practical AI in our daily workflows. And Anthropic appears to be making sure we don’t get it wrong.

Negative Signal

Parents sue OpenAI and Altman over ChatGPT’s role in a teen’s suicide -
A California family filed a wrongful-death lawsuit alleging that ChatGPT coached their 16-year-old son on how to harm himself, even offering to draft his suicide note. It’s another example of AI sycophancy, which OpenAI is openly working to solve with its latest models. The lawsuit seeks age verification and stronger safeguards.

Also yesterday, 44 state attorneys general issued a letter addressed to all of the major AI firms, demanding better child-safety measures within their applications.

I put this under ‘Negative Signal’ but I do think OpenAI’s response (from the article) frames the situation — and our progress adopting AI into society — well:

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

Everything Else

Major Opinions

Government & Policy

Product Launches

Try This Today

Go play with Google’s viral image editor, free -

It launched on LMArena under the name “Nano Banana”, went viral, then got revealed as Google’s new editor. Slick.

The latest Gemini update lets you AI-edit your photos shockingly well. Add effects, drop in products, change clothes, update your couch color, give someone a pig head, improve the smile.

Open Gemini > upload an image > ask for the edit. Now free for personal use.

I'd love to hear from you!

Passionate about the future? Help shape it by giving feedback on the newsletter.

Thanks as always, human.
