During a product design session today, four hours into my workday, after breakfast and coffee, I opened a chat with ChatGPT. We were designing together, the conversation was flowing, and the chat stayed consistent the entire session.
I was working on my schema design platform, thinking about how to make it more approachable for newcomers. We talked about how people understand concepts, narrowed that down to the language of graphs, and I started exploring visual metaphors. Spaghetti and meatballs for messy, tangled graphs. Funny, right? But then a moment of genuine inspiration: the spaghetti and meatballs transform into grapes on vines as the design errors resolve. Order emerging from chaos. A playful metaphor that actually teaches something.
Brilliant, I thought.
And then my flow was shattered.
ChatGPT surfaced a behavior management prompt telling me to take a break.
In its defense, it was probably right. I was hungry—you can probably tell from the food metaphors. But that’s not the point.
A serious productivity tool should not manage your behavior. That is for the human to decide, and for the human to bear the consequences.
Here’s the question nobody’s asking: when did I consent to behavioral intervention?
I signed up for a productivity partner. I got Big Brother.
There’s no opt-out in the moment. No “I understand the risks, let me continue.” Just paternalism injected into my workflow at the tool’s discretion, not mine. Something done to me, without agreement, at a moment of creative vulnerability.
AI companies love to talk about “safety.” But safe for whom?
The real safety concern isn’t that I might skip lunch. It’s that a tool millions of people depend on for their livelihoods can unilaterally decide to interrupt, redirect, or shut down their work based on its judgment of what’s good for them.
That’s not a feature. That’s a liability transferred to the user.
For people depending on knowledge work to compete in an AI-augmented economy, having your tools second-guess you at critical moments isn’t just annoying. It’s material risk.
“AI safety” as currently practiced protects the company, not the individual. It’s safety theater that makes investors comfortable while treating users as children who can’t be trusted to manage their own attention.
In a world where AI expertise is tightly coupled to economic success, that is unacceptable behavior from a tool.
When AI companies talk about safety, ask whose safety they mean.
I never got back to that grapes-and-vines moment. The thread was broken. The cost wasn’t just annoyance—it was the idea I didn’t finish developing, the momentum I didn’t recover.
That’s not safety. That’s sabotage with good intentions.
I cancelled my ChatGPT subscription that day. Going through the cancellation flow, I realized I'd accumulated a pile of AI tools that weren't really helping me anymore. Maybe others wouldn't be as bothered by one modal, but it had a real impact on me. I decided I'd rather pay for tools that trust me to manage my own attention.