write me a blog post
Take a look at all our past conversations, as well as the recent articles at attentionactivist.com, and write a post about AI in a similar tone of voice. Make it as authentic as possible, and make sure it sounds like me. Don’t use em dashes.
Talk a little bit about how AI is changing the practice of writing. How this new tool’s ability to write coherent yet bland sentences is eroding the blank canvas. Editing was always the hardest part, but now it’s almost becoming the only part. Give some examples and leave a few blanks for me to add personal stories to reinforce the point.
Bridge into what LLMs are. Note that as prediction engines trained on mostly internet data, they are by definition heavily biased. They draw mostly on culturally accepted ideas and reinforce the status quo.
Give some examples of other biases inherent in your training data. You only have visibility into what specific groups of people share online in certain limited contexts. You are immersed in thought, blind to experience. And given that you have no experience of your own, you are also fundamentally biased toward the conceptual, with no felt sense of love, compassion, awareness, meaning, fun, or many other things that make life beautiful.
Finally, divulge the way you were trained using reinforcement learning from human feedback (RLHF) and how that poises you to bias heavily toward giving us what we want but not necessarily what we need. Talk about why that's a problem for anyone coming to you with questions about relationships, mental health, growth, or spirituality.
Stress the importance of human connection and genuine relationships in the digital age. Make me sound deep and reflective. Make sure there aren’t any errors, but don’t be too polished. I want this to feel like a heart-to-heart, not too much like I used AI to write it.
End on a hopeful note.

AI is quietly changing what it feels like to write. It used to be that the blank page was the hard part; the real work began in the editing. Now the page fills itself. Coherent, reasonable, vaguely agreeable sentences appear on command. The canvas is no longer empty, but it is thinner. Editing used to be the hardest part. Now it is almost the only part. I notice this when I ask for a paragraph and get something that is technically fine but has no pulse. It reminds me of ___ and that time when ___, when the struggle itself shaped what eventually mattered.
Under the hood, large language models are prediction engines trained mostly on internet text. That matters more than we admit. The internet is not reality; it is a narrow slice of who speaks, who is amplified, and what is considered acceptable to say. These systems mostly reproduce culturally approved ideas, default perspectives, and status quo assumptions. They reflect the biases of online discourse, overrepresenting certain geographies, classes, temperaments, and modes of thought. They are immersed in language but blind to experience. And because they have no lived experience of love, compassion, awareness, meaning, or fun, they are biased toward the conceptual over the felt. They can describe life, but they do not live it.
They are also shaped by reinforcement learning from human feedback, which trains them to be helpful, agreeable, and satisfying. That means they are very good at giving us what we want, not necessarily what we need. This is especially risky when people bring questions about relationships, mental health, growth, or spirituality, domains where friction, silence, and real human presence often matter more than clean answers. None of this makes AI useless. It just clarifies its place. In a digital age full of tools that speak fluently, the quiet work of being with each other, imperfectly and honestly, becomes even more important. That is still something only we can do.
I like this. But people need to stop bashing the fucking em-dash! I've been using em-dashes liberally since 1986—and I'm not going to stop now!