
AI as a Tool, Not a Crutch

A reflection on using AI tools responsibly as a new developer—building understanding, not just shipping code.

1. The Reality

Let's be honest—AI is everywhere now. ChatGPT, Copilot, Claude—these tools have become part of the development landscape. I use them. Most developers I know use them. They genuinely speed things up, help unstick thinking, and make certain tasks more efficient. Pretending otherwise would be disingenuous, and I think it's important to start from a place of honesty.

2. The Risk for New Developers

That said, I've noticed some patterns that concern me—especially for those of us still early in our careers. It's easy to copy-paste generated code without really understanding what it does. When something breaks, debugging becomes harder because you didn't write it, so you don't know where to look. There's a kind of false confidence that comes from shipping working code you can't fully explain. Over time, this can lead to shallow mental models—you know how to get things done, but not why they work. I'm not judging anyone for this. I've caught myself doing it too. It's just something worth being aware of.

3. My Personal Rule: Accountability

I've developed a simple rule for myself: if I can't explain what the code is doing, I don't consider it finished. This sounds basic, but it changes how I work. It means reading AI-generated code line by line, asking myself what each part does and why. It means refactoring output to match my understanding and the project's patterns. Sometimes it means scrapping the AI suggestion entirely and writing it from scratch—not because the AI was wrong, but because I need to internalize the logic. And it always means testing my assumptions, not just whether the code runs.

4. How I Actually Use AI

I find AI most useful for brainstorming approaches to problems I haven't seen before. It's great for explaining unfamiliar concepts—like when I needed to understand how Ollama handles local model hosting. It helps me spot bugs I've been staring at too long. And honestly, it's a solid rubber duck—talking through logic with it often clarifies my own thinking. What I try not to do is blindly generate full solutions. If I ask for a complete implementation, I treat it as a starting point, not a finish line.

5. A Real Example

When building my AI Todo App, I used Claude to help me understand how to structure the API calls to Ollama. The initial suggestions worked, but I didn't fully grasp the request/response flow. So I spent time tracing through the code, adding console logs, breaking things intentionally to see what happened. I ended up rewriting the fetch logic myself—not because the AI version was broken, but because I needed to own it. That rewrite taught me more than the working code ever would have.
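
For readers curious what that fetch logic involves, here is a minimal sketch of a non-streaming call to Ollama's local /api/generate endpoint. It is not the code from my app; the model name, the generateSuggestion helper, and the GenerateResponse shape are illustrative placeholders, and it assumes Ollama is running on its default port with a model already pulled.

```typescript
// Minimal sketch of a non-streaming request to a locally running Ollama server.
// Assumes the default port (11434) and an already-pulled model; names are illustrative.

interface GenerateResponse {
  response: string; // the generated text
  done: boolean;    // true when generation has finished
}

async function generateSuggestion(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // placeholder model name
      prompt,
      stream: false,   // ask for a single JSON response instead of a token stream
    }),
  });

  if (!res.ok) {
    // Pointing at the wrong port or a missing model surfaces here,
    // which makes the error path easy to exercise on purpose.
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }

  const data = (await res.json()) as GenerateResponse;
  return data.response;
}
```

Adding a console.log on the request body and on the parsed response, or deliberately breaking the URL, is exactly the kind of tracing exercise described above; it makes the request/response flow visible instead of leaving it as a black box.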

6. What I'm Still Working On

I'm not claiming to have this figured out. I'm still building deeper fundamentals—the kind of intuition that comes from years of practice. My debugging speed isn't where I want it to be yet. And I'm actively learning how to make better architecture decisions, understanding not just what works but what scales and what's maintainable. These are ongoing goals, not boxes I've checked. I think the developers who thrive with AI will be the ones who stay curious about what's happening under the hood—who use these tools to accelerate learning, not replace it.

"The goal isn't to avoid AI—it's to use it in a way that makes me a better developer, not a more dependent one."
