Hey y’all,

Shout-out to you for subscribing. It’s important to me that this be a high-value resource in your day, AKA your secret weapon, your smarter conversation starter with friends & coworkers. Something you look forward to because it keeps you up to date. If you have any asks or cool ideas, HMU.

I’m zeroing in on “human & AI alignment” now. That means exploring the big ideas and dilemmas shaping the products, use cases, and vision of AI in our daily lives. I geek out on this stuff. In the coming weeks I plan to start inviting external guests to comment on the breaking headlines with us. Exciting times.

Anyway, happy Tuesday and thanks again for reading,

Clay

Today’s story
“Is this new diary … dangerous?”

Here’s a look at the first page of Google’s results if you search “Someone to talk to, Reddit”.

Skim it:

Of course, that’s just the first page.

Page #10 - “Feeling so desperate, I just need someone to talk to.” Page #20 - “Everyone is asleep. I really need someone to talk to right now.” … it goes on.

Is it any surprise that an infinitely present, infinitely empathetic persona — trained to keep you engaged by aligning to your interests and capable of literally speaking in any tone of voice you like — is getting adoption from people in need of someone to talk to?

I mean, the thing also helps you cheat at school and gives makeup tips based on your color analysis. And teens, especially, are people who often need someone to talk to.

Well, what started with Utah, Illinois, and Nevada banning AI-driven therapy has now expanded into new investigations of Meta and Character.ai by the Texas AG, Ken Paxton.

In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology. By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care.

Ken Paxton, Texas Attorney General

I can’t believe I’m aligned with Ken on something, but I kind of am here.

As a major proponent of AI in general, and optimist about our future with AI, I believe these are questions and scenarios that we must start grappling with now.

The Texas investigation is more of a fact-finding mission right now: a “probe” into the chat models and their interactions with children, meant to frame potential future legislation or litigation.

As a side note: just yesterday, The New York Times published an opinion piece titled “What My Daughter Told ChatGPT Before She Took Her Life.” I haven’t read it yet, but other online commentary suggests that, in this case, the model did push the user to seek professional help. Note that ChatGPT is not part of Paxton’s investigation at this time.

Interesting human/AI news
Past 24 hours

  • A study on AI’s potential to give harmful advice to teenagers
    Reported on yesterday by EdWeek, a study by the Center for Countering Digital Hate found that ChatGPT’s responses to teenagers in crisis were harmful “more than half the time”. That’s the headline, but note that the methodology included tricking the model by insisting the requests were for a presentation (after it had pointed the user to professional therapy).

  • Lots of US government AI activity yesterday
    A new federal program will fast-track FedRAMP certification (government authorization for cloud services) for AI services that meet strict data separation and control requirements. Pennsylvania is expanding AI access for state employees (now including its Department of Human Services) as it seeks to become a national AI data center hub.

  • Growing calls for unified, global governance of AI
    A new article by PoliticsToday outlines the case for a global framework for the development and distribution of AI systems, focused on safety, equity, and transparency. China called for something similar in July, proposing a “World Artificial Intelligence Cooperation Organization”.

Extra credit
Something I think is cool

Yesterday, voice AI leader Eleven Labs launched its Music API. Basically, you type in a description and it generates full songs (with vocals or instrumentals) in any genre. Eleven Labs trained its model on licensed material and plans to share revenue with rights holders. Though I still broadly agree with Brian Eno’s take on AI and music (he does use AI in the creative process, btw), this is a good step toward more aligned AI-powered creativity.

I'd love to hear from you!

Passionate about the future? Help shape it, give feedback on the newsletter:

Thanks as always, human.
