when *not* to use AI
Thriving in the next few years is not just about learning to use AI; it’s also about learning when *not* to use it.
ICYMI, I had a blast on the Mind-Bod Adventure Pod with Jeff and Tasha, looked through a magical lens on The Inner Frontier with Jonny, and helped launch a Job Board for the spiritually inclined. -JV
—
AI is shuffling the deck. I know, I know, the headlines have been telling us for a while now, but I’m really starting to feel it as a designer, entrepreneur, and, well, as a human being. I already reach for LLMs for anything more complicated than a simple search, and I can’t remember the last time I looked at Wikipedia. I’m also not shy to admit that I’ve benefited from more than a few conversations with AI on surprisingly personal topics.
It’s only been a few years since LLMs came on the scene, but at work, I already can’t escape the proliferation of generative AI tools that handle every step of my team’s design process (or at least, claim to). As I reemerged from my summer sabbatical into a new season of working life, I knew I needed to make a choice: either lean hard into mastering new generative AI tools and position our expertise on the vanguard, or pivot away entirely before LLMs eat my lunch.
I’ve been looking into this deeply over the past few months, and I’m starting to see more clearly: thriving in the next few years is not just about learning how to use generative AI; it’s also about learning when not to use it.
These tools can’t replace us, but they aren’t useless either. There are a lot of reasons to be concerned about AI, but dismissing it outright is a surefire way to get left behind. On the other hand, going “AI-first” on everything treats AI as more than it currently is.
The reality is unpredictable. Nobody knows how this technology is going to evolve in the next decade. But in the short term, not only do we need to know how to prompt it to get exactly what we need, but we also need to be able to articulate its limitations.
When I was writing my book, I experimented with ChatGPT and Claude only to find out they were terrible writers. Anyone who thinks otherwise is probably not an avid reader. I found myself taking just as much time editing an AI-generated paragraph as it would have taken me to write it from scratch, usually with worse results. So I continued to write the old-fashioned way.
One day, I was hitting writer’s block on a specific topic, so I went for a walk and hit record. I gave voice to all my thoughts about the issue, transcribed it, and fed it to an LLM. It tried to clean things up and completely sucked all the life out of my ideas. Fail.
I tried again, this time asking it to keep my exact vocabulary, stories, examples, and phraseology, but to simply clean things up into a bullet-list outline. It created a well-organized structure for me to expand upon, maintaining my humanity and voice while helping me get organized. Nice! I’ve been using this AI-voice-transcription-to-outline process to great effect ever since (including on this article).
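For the curious, here’s roughly what that pipeline looks like in code. This is a minimal sketch using the OpenAI Python SDK; the file name, model choices, and prompt wording are my own illustration of the approach, not a prescription:

```python
# Minimal sketch of the voice-memo-to-outline workflow described above.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; file and model names here are illustrative.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the raw voice memo from the walk.
with open("walk_memo.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    ).text

# 2. Ask the LLM to organize, not rewrite: keep the speaker's exact
#    vocabulary, stories, and examples, and only restructure them.
outline = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Organize this transcript into a bullet-list outline. "
                "Keep my exact vocabulary, stories, examples, and "
                "phraseology. Do not rewrite or polish the language."
            ),
        },
        {"role": "user", "content": transcript},
    ],
).choices[0].message.content

print(outline)
```

The key is in the second prompt: the model is confined to structuring work, so the ideas and voice on the page stay mine.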
These tools can generate paragraphs of text, sure, but that doesn’t mean they will produce anything worth reading. Where this new tech shines is in more bounded roles: helping you get unstuck, asking better questions, or organizing your messy thoughts into a high-level draft outline. It’s all about learning the relay handoff—knowing when to pass the machine the baton, and when to take it back.
I’m seeing a similar pattern in design. Generative UI design tools like Figma Make and UXPilot are far from replacing my design team, but they’re not useless. At this stage, they’re very powerful sketchpads for exploring ideas quickly. They’ll spit out fully fleshed-out interface variations faster than a human ever could. Yet the minute you try to shape the details or bridge them into higher-fidelity production work, they become more trouble than they’re worth.
The engineers I work with cite similar patterns with AI coding tools like Cursor and Copilot. They’re incredibly powerful when you understand how they fit into the software development process, but they quickly spin into chaos if you try to use them without the discernment of an experienced human.
If you want to create value in this next era, along with new skills like “prompt engineering” or “AI-first product design,” you also need to be able to accurately answer the question everyone’s asking: “Can AI do this?” When you’re on a deadline, it really helps to know the answer conclusively, without hours of in-the-moment experimentation.
Things are changing very quickly, but at least for 2025 and probably 2026, it’s important to see our new tools clearly. We need to know what AI can help with and what it absolutely cannot do (yet). Only then can we get the best of both worlds.
This is true for personal use too. Many people are already using AI for companionship, therapy, healthcare, and more. That can feel scary, but I’d say these tools’ value in intimate areas depends on our ability to see them clearly for what they are.
If we truly believe an LLM is a romantic partner, friend, therapist, or doctor, we’ll take it way too seriously. But if we see it as a connectionist prediction engine trained to generate unpredictable responses based on an internet full of data, we’ll treat it with an appropriate level of discernment.
Knowing these tools intimately gives their responses the appropriate level of weight in your mind: not useless, but not necessarily truth, either.