
Hey you,
Another day, another news cycle. There was a ton of it, honestly, and it can be hard to sift through all of the opinion pieces flying around right now to find the golden nuggets in AI worth sharing here. But mission accomplished, I think.
The big thing I’m seeing right now is that the conversation in AI is shifting from the tech to the use case: “emergent” needs (often unexpected), side effects, and product design decisions and how well they align with actual human behavior (again, often unexpected). You’ll see it here in today’s writeup.
As always,
Clay

Today’s story
“Show me 370,000 private diaries.”
I really expected this to be some kind of security leak or hack.
Nope. Just good old-fashioned product negligence.
Yesterday, Forbes reporters discovered that xAI’s Grok chatbot quietly publishes user conversations to the open internet when users ‘share’ those conversations privately with friends. Roughly 370,000 of them, so far.
Basically, if somebody wants to send a link to their Grok chat to a friend, they can hit the share button. Normal stuff. That also, without any heads-up, makes the conversation crawlable by search engines. Google’able. Alexa’able, even.
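For the curious, whether a page like this lands in Google comes down to boring mechanics: if a crawler can reach the page and it carries no “noindex” signal (a robots meta tag or an X-Robots-Tag response header), it’s fair game for indexing. Here’s a minimal sketch, stdlib only, that checks a URL for those two signals — the shared-chat URL is hypothetical, and the meta-tag check is a crude string match rather than real HTML parsing.

```python
# Minimal indexability check: a page a crawler can reach, with no
# "noindex" signal, is eligible to show up in search results.
import urllib.request

def is_indexable(url: str) -> bool:
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        # Signal 1: an X-Robots-Tag response header containing "noindex".
        if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
            return False
        body = resp.read(65536).decode("utf-8", errors="ignore").lower()
    # Signal 2: a robots meta tag in the HTML, e.g.
    # <meta name="robots" content="noindex"> (crude substring check).
    if 'name="robots"' in body and "noindex" in body:
        return False
    return True

# Hypothetical shared-chat URL, for illustration only.
print(is_indexable("https://example.com/share/abc123"))
```

If the share pages come back True from a check like this, every one of them is a candidate search result. That’s the whole “leak.”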
So what’s exposed? Basically what you’d expect. Tons of boring conversations, cute relationship banter, assassination guidance, bioterrorism stuff, the usual.
I thought Sam Altman’s comments on Theo Von’s podcast in late July were pretty interesting here. He suggested that AI chats should get the same privacy protections as attorney-client and doctor-patient conversations.
What do you think?
This episode continues the growing convo about how AI devs handle consent, and whether chat platforms should be treated like private diaries or public forums.
Key points:
Over 370,000 indexed chats ranged from everyday questions to sensitive data; marketers have already exploited this for SEO (i.e., creating and sharing chats to spam new content online)
The share button generated public links without explicit warnings or user consent, a design choice that could violate privacy laws
xAI hasn’t said whether the shared conversations fed training data or how it will protect users moving forward. OpenAI pulled a similar discoverability feature from ChatGPT just weeks earlier after its own bad press
People increasingly treat chatbots as therapists and journals, making transparent data policies and easy deletion rights essential

Interesting human/AI news
Past 24 hours
Meta pauses headline-grabbing AI hiring binge
Don’t buy into the hype headline: this doesn’t read to me like Meta turning cynical on AI. If anything, they’re getting focused. After spending billions and poaching Scale AI’s Alexandr Wang, Meta reorganized its AI efforts into four teams under the new “Superintelligence Labs” and temporarily halted recruiting yesterday. Wang’s words: “Superintelligence is coming, and in order to take it seriously, we need to organize around the key areas that will be critical to reach it.”
Always-on AI glasses hit the market
I guess I thought Meta glasses already did this. Turns out, the Halo X team is explicitly going after Meta because they recognize Meta can’t play as fast and loose as they can, given recent controversy. Basically, Halo X glasses record and transcribe everything you see and hear, without an indicator light, and whisper real-time prompts to the wearer, targeting a generation of AI users who care less and less about privacy and surveillance.
OpenAI’s biggest problem is hardware, not cash
With July revenue topping $1 billion, OpenAI’s CFO says the company still can’t get enough GPUs to meet demand. Altman plans to spend “trillions” on data centers while courting Oracle and CoreWeave. Compute scarcity is a new geopolitical choke point.
New NSF AI Materials Institute for next-gen materials science
From the announcement: “A major investment in AI-driven materials discovery by the federal government… In partnership with the technology company Intel, the National Science Foundation’s new Institute creates a coalition of scientists, materials researchers and data scientists from Cornell, Princeton, the City College of the City University of New York and Boston University.” Having read The Three-Body Problem, I find this one especially interesting.
Education report shows AI fluency gap
Microsoft’s survey finds 86% of education organizations now use generative AI, with student use up 26%. Less than half of students and educators feel confident in their AI knowledge and skills, despite high adoption. There’s a gap in formal training.
Extra credit
Something I think is cool
Google’s upcoming Pixel 10 phones will let you talk to your photos. They’re also bringing that feature to iOS and other devices. You can ask, “restore this old photo” or “put a party hat on grandma,” and the Gemini‑powered editor does it. The feature includes C2PA content credentials baked into the camera and Google Photos so anyone can see when AI touched an image.
That’s the bit that most struck me. Verbally editing photos is neat, but it’s not that big of a breakthrough. Making AI editing more accessible + baking provenance credentials into AI-edited content, though? Good progress.
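If you want to inspect those credentials yourself, tools like the C2PA project’s c2patool can dump an image’s full manifest. Just as a toy illustration, here’s a rough Python sketch that detects whether a JPEG carries a C2PA manifest at all, leaning on the spec’s packaging of manifests as JUMBF boxes inside JPEG APP11 segments. It’s a presence check only — no signature verification, which is the part that actually makes the credentials trustworthy.

```python
# Rough check for C2PA Content Credentials in a JPEG: per the C2PA spec,
# the manifest store travels in APP11 (0xFFEB) segments as a JUMBF box.
import struct
import sys

def has_c2pa(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                        # lost marker sync; bail out
        marker = data[i + 1]
        if marker == 0xDA:               # SOS: entropy-coded data follows
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        # APP11 segment whose JUMBF payload mentions the c2pa label.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa(sys.argv[1]))
```

Run it against a Pixel-camera JPEG versus a random download and you get a quick yes/no on whether provenance metadata is even present — the kind of check I’d expect third-party “was this AI?” tooling to start doing routinely.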

I'd love to hear from you!
Thanks as always, human.