The first time AI really impressed me, it wasn’t because it spit out a perfect answer in 30 seconds to a problem that would’ve taken me an hour to figure out.
It was because it spit out an answer in 30 seconds to that same kind of problem, an answer that looked perfect… and then quietly betrayed me three hours later.
It was dangerously convincing on the surface, but underneath it was obvious there was no true understanding behind it.
You know the moment. You ask for a website. “Make no mistakes.” You paste the code in or let Cursor run wild for an hour. It compiles. The demo works. You feel like you just discovered fire.
Then you change one tiny thing. A harmless little tweak. And the whole thing collapses.
That’s when you learn a harsh reality. The AI didn’t give you a solution. It gave you a mystery. And now you’re the detective… with none of the clues… because you didn’t actually understand the case when you accepted it.
That experience taught me a rule I wish more people cared about. If you can’t evaluate the output, you can’t trust the output.
That’s the deal AI offers. Speed in exchange for certainty. If you don’t notice the trade, you pay later.
There’s even a name for the most extreme version of this. Andrej Karpathy described “vibe coding” as a mode where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
That can be fun for throwaway weekend projects. But it becomes a problem when you confuse it for building something you have to maintain, debug, secure, and defend.
One missing ingredient in almost every AI take is stakes. The rules change depending on what you are doing.
When you’re experimenting, speed matters. When you’re learning, understanding matters. When you’re shipping, correctness matters.
Most people use AI the wrong way, not because they’re lazy in some moral, Victorian sense, but because AI makes it incredibly easy to outsource the hardest part of being human: thinking clearly.
They treat it like a vending machine for intelligence. Insert prompt. Receive result. Move on. No thinking required.
And sure, sometimes that’s fine. But when it becomes your default relationship with the tool, you don’t just save time. You start losing skills. You stop building a mental model. You stop doing the internal work that turns “information” into “understanding.”
You don’t grow. You actually start to rot.
The most common mistake: asking AI to do what you can’t already do
There is one hugely important qualification that gets ignored in almost every “AI will make you 10x” conversation.
The danger isn’t using AI to do things for you. The danger is using AI to do things for you that you couldn’t otherwise do, or couldn’t meaningfully check.
Because if you can’t do the thing (or at least understand it), you can’t catch when the AI is confidently wrong, subtly wrong, or wrong in a way that only shows up later when the stakes are higher.
In an interview with Lex Fridman, DHH (David Heinemeier Hansson) said, “The capacity to be a good editor is the reward you get from being a good doer. You have to be a doer first.”
If you skip the doer part, you get:
- Laziness disguised as productivity.
- A shrinking attention span for hard problems.
- Bugs you can’t debug.
- “Success” that isn’t repeatable.
AI can feel like a cheat code. But cheat codes don’t make you better at the game. They just get you to the next level… where you immediately die because you never learned the mechanics.
So what’s the alternative?
A better framework: Tutor, Playground, Intern
In my experience, AI is at its best when it does one of three jobs.
1) AI as a Tutor
Make AI teach you, not replace you.
This is the highest-leverage use case, and the one most people skip because it’s less exciting than “done.”
Going back to DHH, he recognizes this despite not using AI for everyday coding. In the same interview, he said, “I’m getting smarter every day because of AI because I’m using AI to have it explain things to me.”
Ask the AI to help you understand something and explain its reasoning in a way that upgrades your mental model.
Not just “what to do,” but:
- Why this approach works
- What assumptions it depends on
- Where it usually fails
- How to test it
- What common beginner misunderstandings look like
- What an expert would watch for
This is especially powerful in coding.
A program is an argument you make to reality: if these conditions are true, then this outcome should follow. When it doesn’t, you don’t get to blame the universe. You interrogate your premises, follow the evidence, revise the claim, and try again.
AI can speed that process up dramatically… but the goal isn’t always to get answers faster. The goal is to become the kind of person who needs fewer answers. And that doesn’t happen when you’re just a copy-paste monkey.
Once AI is teaching you instead of replacing you, the next win is obvious.
2) AI as a Playground
Make expensive iteration cheaper.
This is the most misunderstood part of the conversation, because people jump from “AI can generate images and code” to “AI will replace all artists and developers,” as if those are the only two options.
But there’s a third option, and it’s the most interesting one.
Use AI to lower the cost of production so more people can create.
James Cameron has talked publicly about using AI to speed up effects-heavy workflows so big films can be made for less. Not by cutting crews, but by increasing throughput so artists can move faster from shot to shot. That’s the version of AI in creative work that actually excites me. Not replacing the artist, but removing some of the brutal cost barriers that keep “expensive” art locked behind rich budgets.
It’s the difference between letting the robot paint and building better brushes so more humans can paint.
This is the same in almost any field. Iteration can be expensive. AI makes it cheaper.
Cheaper iteration means you try more things. More experimentation is almost always a win, and it’s particularly great when each attempt hardly costs anything.
Ever hear the saying “you can just do things”? It’s true with AI. You can just do things, even things that would otherwise take you years of practice.
Developers can generate images with Nano Banana that are plenty good enough for a prototype. Designers can generate code with Codex that brings their designs to life.
When creatives are just doing things, everyone wins. Use AI like a playground and have some fun.
But eventually you stop playing around. You ship. And the relationship has to change again.
3) AI as an Intern
Offload drudgery you already know how to do.
This is the “time saver” everyone talks about, and it’s real… when you use it correctly.
Let AI handle the trivial but time-consuming stuff you already understand:
- Rename a pile of files using a consistent convention (see the sketch after this list)
- Summarize meeting notes you also attended
- Convert a CSV into a different format
- Generate unit test scaffolding (that you review)
- Draft the first version of documentation (that you edit)
- Produce ten variations of marketing copy (that you choose from)
- Create boilerplate code you could write yourself, just slower
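To make “trivial but time-consuming” concrete, here’s a minimal sketch of the first item on that list: renaming a pile of files to a consistent convention. The folder name and the convention here are made up for illustration; the point is that this is a task you could write yourself, read in one sitting, and verify before you trust it.

```python
# Minimal sketch: rename files in a folder to a lowercase, hyphenated
# convention. The folder name and the convention are hypothetical examples.
from pathlib import Path


def normalize(name: str) -> str:
    """Lowercase the name and replace spaces with hyphens, keeping the extension."""
    stem, suffix = Path(name).stem, Path(name).suffix
    return stem.strip().lower().replace(" ", "-") + suffix.lower()


def rename_all(folder: str, dry_run: bool = True) -> None:
    """Print every proposed rename, and only apply it when dry_run is False."""
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        target = path.with_name(normalize(path.name))
        if target.name != path.name:
            print(f"{path.name} -> {target.name}")
            if not dry_run:
                path.rename(target)


if __name__ == "__main__":
    # Dry run first so you can review every rename before committing to it.
    rename_all("downloads", dry_run=True)
```

That dry-run flag is the whole point: whether you typed the script or an AI drafted it, you can check every rename before anything changes.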
This is where AI is like an intern you’re supervising. Fast, enthusiastic, and fully capable of making a mess if you don’t check the work.
The key is that you remain the person responsible for correctness.
Bad prompts and good prompts
A fast way to detect whether you’re using AI like a tool or like a crutch is asking, “Could I defend the result to a skeptical expert without the AI in the room?”
If the answer is no, you’re not using AI. You’re hiding behind it.
Here are some examples.
Examples of the wrong way to prompt AI
These prompts have one thing in common. They ask the AI to be the brain.
- “Make me the perfect app for my business.”
- “Write my entire business plan.”
- “Build a complete SaaS with authentication, billing, and analytics.”
- “Create the marketing strategy that will guarantee I hit $10k/month.”
- “Tell me exactly what to invest in this year.”
- “Write my personal statement for college.”
- “Fix this bug in my code” (with zero context, and you don’t read the fix).
- “Design my workout plan and diet so I get shredded in 30 days.”
- “Negotiate this contract for me.”
- “Tell me what to believe about [political / moral / philosophical issue].”
Notice what’s missing: ownership, constraints, tradeoffs, context, and a way to verify the output.
These prompts don’t create a collaboration. They create dependency.
Examples of the right way to prompt AI
Good prompts treat AI like a thinking partner. They force clarity. They invite tradeoffs. They demand explanations.
Here’s a strong template (and you can reuse it for almost anything):
“Help me think through X. Ask me the questions you need first. Then propose 3–5 options, list pros/cons and risks, and recommend one based on my constraints. After that, help me implement it step-by-step, explaining each step so I understand it and can maintain it later. Include ways I can test/verify the result.”
That prompt keeps you in the loop as the owner of the idea.
The exact wording isn’t what matters. Write whatever prompt you want. But make sure you’re learning, playing, or delegating something you already know how to do.
A practical “AI ruleset” you can actually follow
When I’m trying to stay honest with myself, I use these rules:
- If it matters, I don’t ship what I can’t explain.
- If I’m learning, I make the AI teach, not just answer.
- If I’m experimenting, I use AI to expand iteration, not replace taste.
- If I’m saving time, I delegate only what I could do myself.
- If I’m stuck, I ask for options and tradeoffs, not certainty.
AI is powerful, but it’s also very good at sounding right even when it’s wrong. It’s fluent. It’s confident. It’s eager to please.
Your job isn’t to fear the tool. Your job is to stay epistemically awake to what you know, what you don’t know, and what you’re tempted to outsource.
Because the real risk isn’t that AI becomes smarter than you.
The real risk is that you stop becoming smarter than you were yesterday.
The human part we can’t outsource (without paying for it later)
There’s something uniquely human about the struggle of building.
Not the suffering-for-suffering’s-sake version. The useful version. The kind where you wrestle with the problem long enough that you come out the other side with a stronger internal compass.
That’s where taste comes from.
That’s where judgment comes from.
That’s where creativity stops being “output” and becomes “voice.”
When we hand that over completely, and let AI do the thinking, choosing, and composing, we don’t just lose skills.
We lose ownership.
And ownership is a weirdly underrated ingredient in a meaningful life. There’s a difference between consuming a result and crafting a result.
So yes, use AI. But use it in a way that makes you more you, not less.
The future belongs to builders, not button-pressers
I don’t think the question is “Should we use AI?”
We already are. That ship has sailed.
The better question is: what kind of people will AI turn us into?
If we use it like a shortcut around thinking, we’ll become more dependent, more fragile, and more easily fooled.
If we use it like a tutor, a playground, and an intern, and we insist on understanding, on iteration, on taste, we get something better.
We get to move faster and get sharper. We get to create more and stay human. We get to build things we can actually defend.
That’s the version of “AI-powered” I’m interested in.