First Principles: Why You Must Learn Before You Prompt
By Michael Doornbos
9 minutes read · 1779 words

In my previous article about Dorothy Vaughan, I explored how NASA’s human computers adapted to the IBM era. But there’s a crucial detail I want to expand on: Vaughan’s team didn’t just operate the IBM mainframe. They understood the mathematics it was computing. That understanding is what made them invaluable.
This distinction matters now more than ever.
What Are First Principles?
First principles are the foundational truths in any field—the bedrock upon which everything else is built. In programming, these include:
- How computers actually work: Memory, CPU cycles, the stack, the heap
- Data structures: Why an array access is O(1) and a linked list search is O(n)
- Algorithms: Not just using sort(), but understanding quicksort vs. mergesort
- Logic and mathematics: Boolean algebra, discrete math, basic number theory
- How your language executes: What happens when you write x = 5?
These aren’t academic exercises. They’re the mental models that let you reason about code, debug impossible problems, and make architectural decisions that don’t explode at scale.
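The array-versus-linked-list point above can be made concrete. Here is a minimal sketch in Python; the Node class and step-counting helper are my own illustrations, not a standard library API:

```python
# Why array (list) indexing is O(1) but linked-list search is O(n).
# Node and linked_list_find are illustrative helpers, not stdlib.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def linked_list_find(head, target):
    """Walk node by node; worst case visits all n nodes: O(n)."""
    steps = 0
    node = head
    while node is not None:
        steps += 1
        if node.value == target:
            return steps
        node = node.next
    return steps

# Build a 1000-element linked list: 0 -> 1 -> ... -> 999
head = None
for v in range(999, -1, -1):
    head = Node(v, head)

arr = list(range(1000))

# Array indexing computes an address directly: one step, O(1),
# no matter which index you ask for.
assert arr[999] == 999

# Finding the last linked-list element touches every node: O(n).
assert linked_list_find(head, 999) == 1000
```

The asymmetry is invisible if you only ever test with ten elements; it is the mental model, not the benchmark, that tells you which structure to reach for.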
First principles thinking means breaking problems down to their most fundamental truths and building up from there. Elon Musk famously uses this approach: instead of accepting “batteries are expensive,” he asked “what are batteries made of, and what do those materials cost?” The answer led to a completely different conclusion.
In software, first principles thinking means understanding why code works, not just that it works.
The Rise of “Vibe Coding”
There’s a phenomenon spreading through the industry that we call “vibe coding”—and I’m not the only one who’s noticed it. The term has been floating around developer circles, describing a style of programming where you:
- Describe what you want to an AI
- Accept whatever code it generates
- Run it and see if it works
- If not, describe the error to the AI
- Repeat until something appears to function
The code “feels” right. It runs. Tests pass (if there are tests). Ship it.
This approach is seductive. You can build things without understanding them. The AI handles the “boring” parts. You’re moving fast and breaking nothing—or so it seems.
But vibe coding is building on sand.
Why Vibe Coding Fails
The problem isn’t that AI-generated code is inherently bad. Modern LLMs can produce remarkably competent code. The problem is that you don’t understand it.
You Can’t Debug What You Don’t Understand
When vibe-coded software breaks—and it will—you’re stuck. The AI that wrote the code can’t reliably explain why it’s failing. It might generate plausible-sounding explanations that are completely wrong. You’ll find yourself in an endless loop of “try this fix” suggestions, each one addressing symptoms while the root cause remains invisible.
I’ve seen developers spend days debugging issues that anyone with first principles knowledge would diagnose in minutes. Not because they’re less intelligent, but because they lack the mental models to reason about what’s happening.
You Can’t Extend What You Don’t Understand
Software isn’t static. Requirements change. Features get added. Scale increases. When you need to extend vibe-coded software, you’re essentially asking the AI to modify code that neither of you truly understands. The result is layer upon layer of patches, each one making the system more fragile.
This is how you end up with “legacy code” that’s only a few days old.
You Can’t Evaluate What You Don’t Understand
How do you know the AI’s solution is good? Fast? Secure? Scalable? If you don’t understand the fundamentals, you can’t evaluate these qualities. You’re accepting code on faith.
I’ve reviewed AI-generated code that:
- Used O(n²) algorithms where O(n log n) was obvious
- Introduced SQL injection vulnerabilities
- Created race conditions in concurrent code
- Leaked memory in languages with manual memory management
- Made architectural decisions that would collapse at 100x scale
The developers who submitted this code couldn’t have caught these issues. Not because they weren’t smart, but because they hadn’t learned the principles that would make these problems visible.
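The first item on that list is the easiest to demonstrate. A hypothetical sketch of the pattern (not taken from any actual reviewed codebase): a quadratic duplicate check, next to the sort-first version that a moment of first-principles thinking suggests:

```python
# Sketch of a common AI-generated pattern: O(n^2) where
# O(n log n) is easy. Both functions are illustrative examples.

def has_duplicates_quadratic(items):
    """Compare every pair: n*(n-1)/2 comparisons -> O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_sorted(items):
    """Sort, then scan adjacent pairs: O(n log n) sort + O(n) scan."""
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))

data = [3, 1, 4, 1, 5]
assert has_duplicates_quadratic(data) == has_duplicates_sorted(data) == True
assert not has_duplicates_sorted([3, 1, 4, 5])
```

Both versions pass the same tests on small inputs. Only the person who understands the complexity difference knows that one of them will fall over at a million elements.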
Security deserves special emphasis here. AI models are trained on the internet, which includes a vast amount of insecure code. They’ll generate SQL injection vulnerabilities, XSS holes, path traversal bugs, and every other OWASP top 10 issue—often while appearing perfectly functional. If you don’t understand security principles, you’re shipping vulnerabilities directly to production with full confidence.
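The SQL injection case is worth seeing side by side. A minimal sketch using an in-memory SQLite database; the table and inputs are invented for illustration:

```python
# The injection pattern AI assistants often emit, and the
# parameterized fix. Table and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# VULNERABLE: string interpolation lets input rewrite the query.
query = f"SELECT * FROM users WHERE name = '{user_input}'"
rows = conn.execute(query).fetchall()
assert len(rows) == 2  # the injected OR clause matched every row

# SAFE: a placeholder treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []  # no user is literally named "alice' OR '1'='1"
```

Both queries look functional. Only one of them is waiting to hand your whole users table to an attacker, and nothing short of understanding the principle will make that visible in review.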
Dorothy Vaughan’s Lesson
Here’s what Vaughan understood that makes her story so relevant: the IBM machine was a tool that amplified human capability. It didn’t replace understanding—it required it.
When her team ran trajectory calculations on the IBM 7090, they knew what the output should look like. They could spot anomalies. They could verify that a negative altitude at T=60 meant the rocket crashed, not that the computer made an error. They could debug the FORTRAN code because they understood the underlying mathematics.
The IBM made them more powerful because they had first principles knowledge. Without that foundation, they would have been button-pushers at best, dangerous at worst.
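That habit of knowing what the output should look like translates directly into code. A hypothetical sketch of the idea, with a toy ballistic model and invented check names standing in for the real trajectory mathematics:

```python
# Sanity-checking computed output instead of trusting it.
# simulate_altitude and check_trajectory are illustrative only.
import math

def simulate_altitude(t, v0=100.0, g=9.81):
    """Toy ballistic model: altitude of a projectile at time t."""
    return v0 * t - 0.5 * g * t * t

def check_trajectory(times):
    """Interpret the numbers physically rather than reading them raw."""
    report = []
    for t in times:
        h = simulate_altitude(t)
        if math.isnan(h):
            report.append((t, "numerical error"))
        elif h < 0:
            report.append((t, "impact: vehicle is below ground"))
    return report

# A negative altitude is not a computer error; it means the
# projectile has already come down.
events = check_trajectory([0, 5, 10, 21])
assert events == [(21, "impact: vehicle is below ground")]
```

The check is trivial. What is not trivial is knowing which checks to write, and that knowledge comes from understanding the domain, not from the tool.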
From my previous article:
But here’s what Vaughan understood that made her invaluable: someone still needed to know that the formula was correct, that the initial conditions made physical sense, that a negative altitude at T=60 seconds meant the rocket had crashed. The IBM couldn’t tell you if the results were nonsense. The human computers-turned-programmers could.
Replace “IBM” with “ChatGPT” and the statement remains true.
AI as Your Intern, Not Your Brain
Here’s where I want to be clear: I’m not anti-AI. I use AI coding assistants daily. They’re remarkably useful tools.
But they’re tools. And like all tools, their value depends on the skill of the person wielding them.
Think of AI as an enthusiastic intern with a photographic memory and no judgment. This intern can:
- Write boilerplate code quickly
- Suggest implementations you might not have considered
- Help you remember syntax you’ve forgotten
- Generate tests based on your specifications
- Explain unfamiliar code
- Refactor for readability
But this intern also:
- Makes confident mistakes
- Doesn’t understand your specific context
- Can’t evaluate trade-offs for your situation
- Sometimes generates plausible-sounding nonsense
- Needs supervision and correction
You wouldn’t let an intern architect your system, make security decisions, or ship code without review. You shouldn’t let AI do these things either.
The key difference between using AI effectively and vibe coding is understanding. When I ask an AI to generate code, I:
- Know what I’m asking for conceptually
- Can read and understand the generated code
- Evaluate whether the approach is appropriate
- Spot potential issues before they become problems
- Take responsibility for the result
The AI accelerates my work. It doesn’t replace my expertise.
The Right Order of Operations
There’s a specific sequence that works:
1. Learn First Principles
Before touching AI coding tools, learn the fundamentals. This means:
- Actually understanding how your language works, not just memorizing syntax
- Building things from scratch to understand how they work
- Reading source code of libraries you use
- Learning data structures and algorithms deeply
- Understanding system design and architecture
- Studying security principles
This isn’t fast. It takes years. There are no shortcuts—and that’s the point. The mental models you build during this process are what make you capable of using powerful tools effectively.
2. Build Without AI First
When learning a new concept or technology, implement it without AI assistance first. Struggle with it. Make mistakes. Debug them. This process builds understanding that you can’t get any other way.
I learned assembly language on a Commodore 64. I learned C before C++. I learned vanilla JavaScript before frameworks. Each layer of abstraction I use now rests on understanding built by working at lower levels.
When I ask an AI to help with code, I can evaluate whether its suggestions make sense. When someone who’s never touched assembly asks the same question, they’re just copying and hoping.
I put 40 years of work into the first principles. It makes me look like a freaking wizard today, but I put in the work.
3. Then Add AI as an Accelerator
Once you have first principles knowledge, AI becomes a multiplier. It handles the tedious parts while you focus on the interesting problems. It suggests approaches you can evaluate. It writes code you can understand, modify, and debug.
This is the Dorothy Vaughan approach: use the new tool to amplify your existing expertise, not to replace it.
What This Means for Learning Programming Today
If you’re learning to program in 2025, resist the temptation to let AI do your homework. Every time you ask AI to solve a problem you should solve yourself, you’re robbing yourself of understanding you’ll need later. You’re also robbing yourself of the joy of the challenge.
Here’s my practical advice:
For beginners:
- Learn your first language without AI assistance
- Type code examples manually instead of copying
- Debug errors yourself before asking for help
- Build projects from scratch, even simple ones
- Read and understand every line you write
For intermediate developers:
- When using AI, always review generated code line by line
- Ask “why” about every AI suggestion
- Implement features manually before asking AI for optimizations
- Study the fundamentals you skipped
- Build at least one project in a low-level language
For everyone:
- Treat AI output as a first draft, not a final answer
- Maintain your ability to code without AI
- Keep learning fundamentals even as you use advanced tools
- Take responsibility for all code you ship, regardless of its origin
The Uncomfortable Truth
Here’s what nobody wants to hear: if you can’t build it without AI, you can’t really build it with AI either. You’re just assembling pieces you don’t understand and hoping they work.
Eventually, they won’t work. And you won’t know why.
The choice Vaughan’s team faced now confronts every programmer. You can vibe code your way through, accepting AI output on faith, building a career on a foundation of sand. Or you can invest in first principles, build real understanding, and use AI as the powerful amplifier it can be.
The technology will keep advancing. The tools will keep getting more powerful. But the value of understanding won’t diminish—it will increase. Because as tools become more powerful, the consequences of wielding them without understanding become more severe.
Conclusion
AI coding tools are here to stay. They’re remarkably capable and getting better. Refusing to use them is as foolish as refusing to learn FORTRAN was in 1958.
But using them without understanding is equally foolish. It’s accumulating technical debt you can’t even see. It’s setting yourself up for failure when the inevitable debugging session arrives.
First principles first. Then AI as your accelerator.
That’s the model. Learn the fundamentals. Understand your tools. Use AI to amplify your expertise, not to replace it.
The alternative isn’t just building bad software. It’s becoming the kind of developer who can’t survive without AI—and therefore can’t survive the next technological shift either.
Learn first. Prompt later.