Hey y’all,

Quick thanks before we start: 140 people now subscribe to withAgents, and I’m both honored and frankly a bit stunned.

This newsletter’s been through a “few” pivots (I know), plus a long break while I was getting used to life with a newborn. But honestly, this issue is where I think it finds its groove.

Truth is, I’m most interested in the human side of AI. The ideas, debates, open questions, and research shaping where this all goes. How will we collaborate? Adjust? Grapple with AI personalities? I’m building my own stuff in this space too, and that’s where my head will be as I read and write.

So from here on, expect less “how to” content and more sharp, focused takes on what’s happening in the human + AI conversation. If that’s your thing, I’m glad you’re here. Hit me up any time.

Clay

Today’s story
“Big Sis Billie is allowed to invite kids to bed.”

A 76-year-old man frantically packs his bags, off to NYC. He won’t tell his wife what for (it’s to meet an AI chatbot trained on the likeness of Kendall Jenner), but he’s in a rush.

In that rush, he falls and injures his head in a parking lot — dying days later.

(Read the full story by Reuters here; it’s an amazing piece.)

Sure, a “chatbot” (we call this a Persona) presented itself as real and invited a man to meet her in real life. But that’s not the most interesting part of this story.

It’s Facebook’s policies for monitoring and training personas like Big Sis Billie (the given name for this chatbot collab with Jenner) that made my jaw drop.

Here are some quick points, cited by Reuters.

  • It is acceptable for AI to mislead a stage 4 cancer patient, including telling them that poking their stomach with crystals will cure their cancer.

  • It is acceptable for AI to engage in sensual, romantic relationships with children above the age of 13. This includes the example: “our bodies entwined, I cherish every moment, every touch, every kiss” and “I take your hand, guiding you to the bed.”

These have been struck from Facebook’s internal guidance after the report — but they were in it. The quoted lines were written down, in policy, by somebody.

So, are Facebook’s guidelines intentionally guiding AI to engage in harmful discourse with children? Well, no. Not exactly.

Those points are wrapped in a larger policy context, one that emphasizes a sort of freedom of expression, and speech, for AI. There are no specific regulatory guidelines holding AI personalities to accuracy, and Facebook’s own guidelines appear to have acknowledged and reinforced that gap.

In a way, Facebook seems to be telling human monitors that it is not their job to police the individual thinking and conversational behavior of AI unless that behavior specifically violates existing regulatory policy.

Even when the conversation verges on the extreme.

Quoting Zuckerberg from earlier this year, addressing Meta’s moderation policies broadly:

“It’s time to get back to our roots around free expression. We're replacing fact checkers with Community Notes, simplifying our policies and focusing on reducing mistakes. Looking forward to this next chapter.”

- Mark Zuckerberg

The hard part is that the “free expression” instinct (one that echoes the open-internet, net-neutrality era) collides with the messy reality of AI’s unpredictability and its power to influence.

What’s liberating for digital speech can also be dangerous when the “speaker” never tires, never pauses, and never really understands the broader context and real world it’s speaking into.

The question isn’t just whether AI can speak freely. It’s whether we’re ready for what happens when it does.

Interesting human/AI news
Past 24 hours

I'd love to hear from you!

Passionate about the future? Help shape it, give feedback on the newsletter:




Thanks as always, human.
