Hey There,
I’m right in the middle of a few weeks of travel. Feeding giraffes and petting rhinos last week, in Mexico this week for work, and off to San Diego next week for fun. Good times.
But the AI news doesn’t stop. And, with that, our GAIN format for this newsletter continues (something to Go do with AI, Alerts on the week’s top AI news, an Idea to chew on and a few new thought-leaders to Network with for the week).
Here’s what I’m seeing overall:
Frequent one-upping with surprisingly little reaction - OpenAI’s new Sora 2 looks awesome (see the Alerts section): put yourself in videos, with sound, in clips longer than Google’s Veo 3. But somehow these releases don’t seem to capture much attention outside of the AI-enthusiast cohort.
First movers leaning on feature creep to maintain growth - Granola announced “recipes” and I’m a bit underwhelmed. My first reaction (happy to be proven wrong): it looks like a costly, influencer-led product extension that nobody really needs.
The rise of complete solutions - See the Lovable Cloud/AI release for an example. They understood the assignment, focusing on the features that make Vibe Coding truly end-to-end and accessible to anybody.
See you next week,
Clay
Guess the number
According to OpenAI’s recent study on user behaviors, across the user-base, what percent of ChatGPT usage is work-related today?

(answer in signature)
Go (do this)
Today, we look at a common problem: Making AI-assisted content not sound like that... through editor prompt chains.
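If you’re wondering what an “editor prompt chain” even looks like, here’s a minimal sketch in Python using the OpenAI SDK: one call drafts, a second call plays the editor and strips the AI-sounding tics. The model name and both prompts are just illustrative placeholders, not the exact chain from the full piece, so treat this as a starting point.

```python
# Minimal sketch of a two-pass "editor prompt chain": a draft pass, then an
# editor pass that removes AI-sounding tells. Model name and prompts are
# illustrative assumptions, not a prescribed setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run(system_prompt: str, user_content: str) -> str:
    """Send one chat completion and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you use
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content


# Pass 1: draft the content.
draft = run(
    "You write first drafts of newsletter sections in a casual, direct voice.",
    "Draft 150 words on why new AI video models get little mainstream attention.",
)

# Pass 2: an 'editor' persona rewrites the draft to cut the AI tells.
edited = run(
    "You are a blunt human editor. Cut filler, hedging phrases, and generic "
    "openers. Keep the author's voice and facts; change nothing else.",
    draft,
)

print(edited)
```

In practice, you’d usually split that editor pass into a couple of narrower passes (tone, structure, fact-check) rather than one catch-all prompt.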
Other things that launched this week, for you to check out:
Lovable launched Lovable Cloud and Lovable AI. You really need to watch the video, so I’m putting it below. - Go read more about the new Lovable.
Ambient looks like a cool solution for daily meeting prep, bringing attendee dossiers, “last meeting” summaries for recurring meetings, and deeper context to prep you for your calls. Worth a look. More on Ambient here.
Alerts
An AI actress was signed to a talent agency – Yep. The future. The AI division of Particle6 created Tilly Norwood. She’s AI. Her creator, Eline Van der Velden, says she could become “the next Scarlett Johansson or Natalie Portman” and claims AI actors could cut production costs by 90%… which actually feels low? Some actors and SAG-AFTRA members have threatened to boycott agencies repping her. Read more here.
OpenAI’s Sora 2 & TikTok-like app – OpenAI launched a standalone Sora 2 app with a vertical video feed resembling TikTok. I downloaded it (you can too), but you’ll run into the invite-only wall once you jump in. Eventually, users will be able to generate 10-second clips using the new video model, though they won’t be able to upload their own footage. The app includes an identity-verification feature that lets users use their likeness and tag friends. Sora 2 may just be how AI-generated video gets democratized, the way ChatGPT did for text. Read more here.
Meta to use AI chat data for targeting ads – New reporting that Meta will use conversations with its AI assistants to personalize ads on Facebook and Instagram. TBH, who uses that? Read more here.
Workslop study – 40% of U.S. workers encountered AI-generated fluff (“workslop”) last month, costing roughly $186 per employee in lost productivity. Recipients report annoyance and distrust. The most interesting point I’ve heard on this, by the way, is that productivity gains upstream, for those using AI to write stuff, often cause productivity harm downstream, for those who have to put that insufficient content into practical use. Read more here.
Ideas
Figure out your incubator.
Just because you (or your team) can use an AI tool doesn’t mean it’s simple to completely reorient your workflows around AI. If you’re trying to take your work-life AI use cases beyond ChatGPT, I bet you’re feeling this.
So how do you rebuild entire workflows around AI (AI-native) without disrupting BAU? How are you supposed to evolve the team and their work without missing a beat?
In my experience, you can’t.
At least, in a sense, you can’t. Let me break it down for you.
Last week I wrote about the challenges of defining AI-driven workflows, which often come from not having well-defined workflows without AI in the first place.
This week I want to share how I’m actually dealing with this exact problem, both personally and professionally. Call it an “incubator.”
An incubator for AI workflows is your fully enabled capacity for exploring AI against real use cases, building workflows around AI from the ground up. When I say “fully enabled,” I’m talking Time, Tools, Budget (actually paying for licenses), Integrations, and Ownership.
For example, this newsletter is an incubator! I’ve spent months tinkering with AI tools and platforms to operationalize it as a personal project that doesn’t eat too much of my overall free time. It’s an ongoing challenge.
To do that, I’ve assigned myself early-morning work time to play with producing this newsletter (often using AI tools), I’ve gotten the licenses and API keys to make that possible, I have specific workflows in mind that I’m comfortable fully overhauling, and I’m the owner. Does it disrupt my BAU? Totally. It cuts my daily gym time in half. But it’s a worthwhile R&D program in my life.
At work, things look similar. In my own professional experience, and that of peers leading AI in their roles, the same pattern keeps showing up.
It goes something like this: teams have gone from “Everybody adopt AI, okay?” to “Everybody try to adopt AI, but we also have a small group of adoption leads trialing new methods, rebuilding workflows in high-effort areas, and leading training for wider company adoption.” The latter is where impact happens.
The point is, an incubator is more a concept than an exact structure. It’s a standalone, protected way to enable safe exploration that can then be brought to a wider team.
QQ: How are you enabling AI in your org?
Take this forward:
Audit your incubation capacity. Who in your org has explicit permission to experiment with AI and the budget to do it? If the answer is "nobody," that's your actual problem.
Create a small, protected team. 2-3 people, clear mandate, budget. Their job is to test use cases, document what works, and bring learnings back to the business. Not to maintain anything.
Measure them differently. Don't evaluate incubator teams on efficiency or ROI in month one. Measure experiments run, lessons documented, and prototypes that get adopted by other teams.
Make it safe to fail. If your incubator hasn't had any failed experiments, they're not really incubating.
(☝️ Yes, this means someone gets to "play with AI" while others keep the lights on. That asymmetry is the whole point.)
Network
This week’s list was pretty easy. Some more giants.
Reply to their next post: Allie K. Miller – An AI leader, advisor and investor who believes “AI isn’t the future, it’s now.” She has worked with hundreds of companies on AI product development and previously led Amazon Web Services’ machine-learning business for startups and venture capital. As the most-followed voice in AI business (1.5 million followers) and a former IBM AI product lead, she shares practical resources on building AI-first companies.
Learn from their thinking: Andrej Karpathy – A Slovak-Canadian computer scientist and deep-learning specialist. He was a founding member of OpenAI, served as Tesla’s director of AI and Autopilot Vision, and is now a prominent voice on LLM capabilities and safety. His posts blend technical insight with accessible explanations.
Thanks
So, how much of our ChatGPT usage is work-related?
According to the study, just 27%. And that share seems to have decreased over time, with more and more usage being personal.
Have a great week,
- Clay G [LinkedIn]