Intellectual Debt

Jun 5, 2025

I think that, like many developers, I find myself with mixed feelings towards this new trend of delegating more coding responsibilities to artificial intelligence. One part of me--likely the part that's trying to stick to what's known and what I've grown comfortable with--feels (and hopes) that this is just a passing fad, that somehow the AI bubble (if it even exists) will pop, and we will all go back to copy-pasting from StackOverflow and everything will be fine. The other part of me, however, tries to be more realistic and embrace the reality taking shape in real time: the landscape of software engineering is getting a massive revamp, and the way software developers write code is rapidly changing. I hesitate to say that it's changing fundamentally, though, and I'll get to why I wouldn't use a term quite that strong--but the landscape is changing, and obstinately clinging to the old way of doing things is counterproductive and, really, unrealistic.

Artificial intelligence is not a replacement for humans. Coding agents and LLMs are not replacements for software engineers; they're a replacement for code monkeys, a role increasingly relegated to obsolescence as the industry rightfully prioritizes the capacity to reason about and design an efficient solution over blindly writing code to a design. Even though our reasoning as humans is sometimes hole-ridden and faulty, we at least retain the capacity to reason--artificial intelligence does not. LLMs are great at giving the illusion of reasoning, and if you carry on a back-and-forth conversation with your model of choice you might even feel like you're talking to another engineer, but in reality you're talking to a complex amalgamation of many engineers, or at least of their ideas and reasoning as they were fed into the model during training. It's an incredibly advanced pattern-matching system, adept at reassembling existing knowledge in novel-seeming ways, but it doesn't originate understanding, nor does it engineer solutions from first principles the way a human can. That core capacity for genuine, adaptable reasoning remains distinctly human.

For many tasks, this is the perfect tool for the job. If you're starting up a web app in a common language like Go or Python, there's probably plenty of relevant data available: different implementations and their corresponding pros and cons. If you tell your coding agent to generate a simple starter template for a CRUD application, perhaps specifying which database you're using, what kind of data you're working with, and some basic functionality you're looking for from the get-go, you'll likely get something that works pretty well. Changing the CSS on your web page is also a perfect use case, as is implementing a small widget, reorganizing your website, writing a few unit tests, or implementing a specific data structure. You might not get the task done perfectly on the first try, but you iterate and refine your prompt, telling the model that it did X correctly but Y still needs to happen, or that it took action A to refactor your code but now the compiler is giving you error E--you just keep iterating, telling the agent what's going on and what you want to happen. The real problem surfaces when software engineers get careless and, for the sake of speed and efficiency, take the generated response and use it directly, without scrutiny or review. This approach, while seemingly boosting immediate output, comes with a significant, often unacknowledged, downside.
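To make that concrete, here's a minimal sketch of the kind of CRUD starter an agent might hand back for such a prompt: an in-memory "tasks" service in Go, using only the standard library. The Task type, its fields, and the /tasks routes are illustrative assumptions on my part, not output from any particular model--it's exactly the sort of code you could accept in seconds, and exactly the sort you should still read and understand.

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
        "strconv"
        "sync"
    )

    // Task is a hypothetical record type; a real project would define its own.
    type Task struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
        Done bool   `json:"done"`
    }

    // store keeps tasks in memory; a fuller starter would swap in a real database.
    type store struct {
        mu     sync.Mutex
        nextID int
        tasks  map[int]Task
    }

    // handleTasks serves POST /tasks (create) and GET /tasks (list).
    func (s *store) handleTasks(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodPost:
            var t Task
            if err := json.NewDecoder(r.Body).Decode(&t); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            s.mu.Lock()
            t.ID = s.nextID
            s.nextID++
            s.tasks[t.ID] = t
            s.mu.Unlock()
            w.WriteHeader(http.StatusCreated)
            json.NewEncoder(w).Encode(t)
        case http.MethodGet:
            s.mu.Lock()
            list := make([]Task, 0, len(s.tasks))
            for _, t := range s.tasks {
                list = append(list, t)
            }
            s.mu.Unlock()
            json.NewEncoder(w).Encode(list)
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    // handleTask serves GET and DELETE on /tasks/{id}.
    func (s *store) handleTask(w http.ResponseWriter, r *http.Request) {
        id, err := strconv.Atoi(r.URL.Path[len("/tasks/"):])
        if err != nil {
            http.Error(w, "invalid id", http.StatusBadRequest)
            return
        }
        s.mu.Lock()
        defer s.mu.Unlock()
        t, ok := s.tasks[id]
        if !ok {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        switch r.Method {
        case http.MethodGet:
            json.NewEncoder(w).Encode(t)
        case http.MethodDelete:
            delete(s.tasks, id)
            w.WriteHeader(http.StatusNoContent)
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    func main() {
        s := &store{nextID: 1, tasks: make(map[int]Task)}
        http.HandleFunc("/tasks", s.handleTasks)
        http.HandleFunc("/tasks/", s.handleTask)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Generated code like this usually works on the happy path; the questions worth asking before merging it (is a mutex the right concurrency story, what happens to the data on restart, is path parsing by hand really what you want) are exactly the ones the next paragraph is about.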

Engineers often strive to maximize code output, spurred on by flawed metrics such as lines of code or number of pull requests, but with the advent of LLMs that give convincing answers, they do so at an unperceived cost to their own intellectual growth. Companies that give entry-level developers simpler tasks, like implementing a basic API endpoint or UI component, will find that these tasks get completed quickly; but if the developer never looks into the solution--never investigates, say, an approach suggested by the LLM that they don't fully understand or haven't seen before--then the codebase is fundamentally being built on a weak foundation. Companies that give more complex, multifaceted tasks to the wrong kind of developer, one who relies too heavily on AI, will sooner or later see issues in reliability, performance, and maintainability, and the engineer--as the main point of contact and subject matter expert on the feature--will be unable to provide a fix without first working through the AI's solution, paying the knowledge debt they accumulated at the worst possible moment. Needless to say, "Copilot suggested this code to me" is not a great answer when asked how something works or, even worse, why something doesn't.

Intellectual debt is perhaps a better name for this concept. Technical debt is accrued by opting for a less optimal solution in order to complete a task quickly, postponing the supposedly eventual implementation of the better one; the more time that passes, the less you remember what was wrong and what solution you had in mind. Intellectual debt works the same way: you opt for the quick, readily available LLM solution in order to complete the task, postponing the fact that eventually you'll need to understand what the LLM did and why. Both kinds of debt will, naturally, be collected one way or another--either voluntarily, when you take time out of your day to pay it down, or involuntarily, when your production deployment crashes and burns on a weird edge case the LLM wasn't aware of, and thousands or even millions of angry users turn debt collectors and demand a fix.

The point is, you should know what you're doing. If you write code yourself, you should be able to explain why you made certain decisions, what certain structures and patterns do, and which trade-offs you accepted in your solution. If you have AI assist you by prompting it to do a task for you, you should be able to evaluate its solution as if it were your own, and to explain the decisions, patterns, and trade-offs as if you had written it yourself.