Real Engineers Are Hitting the Same Wall with AI Coding Tools
The Thread That Said the Quiet Part Out Loud
A Hacker News thread recently lit up with over 500 comments from working engineers describing their day-to-day experience with AI coding tools. No marketing. No hype. Just people describing what actually happens when you point an LLM at a real codebase.
The frustrations are strikingly consistent. And they all point to the same root cause: AI coding tools don't understand your codebase.
They can write code. They can follow instructions. But they don't know what calls what, what depends on what, or what will break when something changes. Every engineer quoted below described some version of the same problem.
Here's what they said, in their own words.
"It got completely lost and broke everything"
robbbbbbbbbbbb works at a 5-person software company with a mature SaaS product:
"For most real day to day problems it's hopelessly confused by the large codebase full of state, external dependency on chunks of Unity, implicit hardware-dependent behaviours, etc. It has no idea how to work meaningfully with Unity's scene graph or component model. I tried using MCP to empower it here: on a trivial test project it was fine. In a real project it got completely lost and broke everything after eating 30k tokens and 40 minutes of my time, mostly because it couldn't understand the various (documented) patterns that straddled code files and scene structure."
The LLM works on trivial projects. It falls apart on real ones. Not because the code is bad, but because understanding a real codebase means understanding relationships across files, and context windows can't hold that.
SuperSmart Coder gives the LLM a pre-computed understanding of every relationship in the codebase. It doesn't need to read 30,000 tokens of context to understand how your code fits together. It already knows.
"700K lines, zero documentation, the Wild West"
nerptastic describes a scenario many engineers will recognize:
"We're working on a 700KLOC legacy monolithic CRUD app with 0 documentation, it's essentially the Wild West. We've found it very difficult to apply AI in a meaningful way. For a small team with lots to do on what is essentially a 'keep the lights on' project, we're in an interesting place, as it feels the infrastructure / codebase isn't set up to handle newer tools."
This is the reality for most engineering teams. You don't have the luxury of a clean, well-documented codebase. You have legacy code, implicit patterns, and tribal knowledge. AI tools today assume you can feed them enough context to understand the system. You can't.
With SuperSmart Coder, your 700K-line monolith gets the same treatment as a greenfield project. Every function, every dependency, every caller is indexed. Claude Code can ask "what depends on this?" and get a complete answer in milliseconds, with no documentation required.
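To make the idea concrete, here is a minimal sketch of what a pre-computed caller index looks like. This is an illustration, not SuperSmart Coder's actual implementation, and every function name in it is invented. The point is that once the call graph is inverted ahead of time, "what depends on this?" becomes a lookup rather than a scan of the source.

```python
from collections import defaultdict

# Hypothetical call graph: each function mapped to the functions it calls.
# All names here are made up for illustration.
deps = {
    "billing.charge": ["db.write", "auth.check"],
    "api.checkout":   ["billing.charge", "cart.total"],
    "admin.refund":   ["billing.charge", "db.write"],
}

# Invert the graph once, ahead of time, into a reverse-dependency index.
callers = defaultdict(set)
for fn, callees in deps.items():
    for callee in callees:
        callers[callee].add(fn)

def who_depends_on(name: str) -> set[str]:
    """Direct and transitive callers of `name`, via the precomputed index."""
    result, stack = set(), [name]
    while stack:
        for caller in callers[stack.pop()]:
            if caller not in result:
                result.add(caller)
                stack.append(caller)
    return result

# Renaming db.write would affect billing.charge, admin.refund,
# and (transitively) api.checkout.
print(who_depends_on("db.write"))
```

Building the index is a one-time cost at sync; every query afterward walks only the precomputed edges, which is why the answer can come back in milliseconds regardless of codebase size.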
"Circular dependencies, and the developer has no idea"
fizzyfizz describes what happens when engineers trust LLM output without understanding it:
"I get PRs that include custom circular dependency breakers because the LLM introduced a circular dependency, and decided that was the best solution. The ostensible developer has no idea this happened and doesn't even know what a circular dependency breaker is. Another colleague does an experiment to prove that something is possible and I am tasked to implement it. The experiment consists of thousands of lines of code. After I dig into it I realize the code is assuming that something magically happened."
The LLM doesn't know about circular dependencies because it can't see the dependency graph. It can only see the files in its context window. It creates the cycle, then invents a workaround for its own mistake.
SuperSmart Coder's find_cycles tool detects every circular dependency across the entire codebase in one query. And its impact tool shows the blast radius of any change before the code is written, not after. The LLM stops introducing problems it can't see.
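For readers who haven't run into them, a dependency cycle is easy to state and easy to detect once you can see the whole graph. The sketch below shows the general technique (a depth-first search that tracks the current path); the module names are invented, and this is not a claim about how the product's find_cycles tool is built.

```python
# Hypothetical module dependency graph; "notify" importing "orders"
# closes a cycle that no single file reveals on its own.
graph = {
    "orders":   ["payments"],
    "payments": ["notify"],
    "notify":   ["orders"],
    "utils":    [],
}

def find_cycles(graph):
    """Report every dependency cycle via DFS with path tracking."""
    cycles, path, on_path, done = [], [], set(), set()

    def dfs(node):
        path.append(node)
        on_path.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_path:
                # Back edge: the slice of the current path from nxt
                # onward is a cycle.
                cycles.append(path[path.index(nxt):] + [nxt])
            elif nxt not in done:
                dfs(nxt)
        on_path.discard(path.pop())
        done.add(node)

    for node in graph:
        if node not in done:
            dfs(node)
    return cycles

print(find_cycles(graph))  # [['orders', 'payments', 'notify', 'orders']]
```

An LLM that can only see one file at a time never gets to run this check; a tool holding the whole graph runs it trivially.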
"They duplicate logic, defer imports, introduce new abstractions"
yojo articulates the slow decay that AI-assisted codebases experience:
"LLMs rarely if ever proactively identify cleanup refactors that reduce the complexity of a codebase. They do, however, still happily duplicate logic or large blocks of markup, defer imports rather than fixing dependency cycles, introduce new abstractions for minimal logic, and freely accumulate a plethora of little papercuts and speed bumps. These same LLMs will then get lost in the intricacies of the maze they created on subsequent tasks, until they are unable to make forward progress without introducing regressions."
This is the death spiral. The LLM creates complexity because it can't see enough of the codebase to reuse what already exists. Then it gets confused by the complexity it created. Then it creates more complexity to work around its own confusion.
SuperSmart Coder's who_uses tool answers "does this logic already exist?" across the entire codebase. Its check_consistency tool catches inconsistencies and violations before they ship. The LLM stops duplicating because it can see what's already there.
"Context degradation gets much worse with multiple agents"
David-Brug-Ai describes the fundamental problem with context-window-based AI:
"The context degradation problem gets much worse when you have multiple agents or models touching the same project. One agent compacts, loses what it knew, and now the human is the only source of truth for what actually happened vs what was reported done. If that human isn't a coder, they can't verify by reading the source either."
And later:
"I'm running Codex on a Raspberry Pi, and Claude Code CLI, Gemini CLI, and Claude in Chrome all touching the same project across both machines. The drift is constant. One agent commits, the others don't know about it, and now you've got diverged realities."
Context windows are ephemeral. They fill up and get compacted. Knowledge is lost. SuperSmart Coder stores the understanding of your codebase on a server that never forgets and never compacts. Every agent, every session, every team member queries the same source of truth.
"Horrible security issues that no one seemed to see"
VoidWhisperer points to something that should alarm every engineering leader:
"I've also seen cases in my work codebases where code was obviously AI generated before and ends up with gaping security or compliance issues that no one seemed to see at the time."
When the LLM can't see the full picture, it can't enforce security constraints. It doesn't know that every endpoint touching billing data must go through the MFA middleware. It doesn't know that user input from the mobile app flows through three services before hitting the database.
SuperSmart Coder's review tool checks changes against the full architectural context. Its trace tool follows data flow across function and service boundaries. "Are all billing endpoints behind MFA?" becomes a single query with a definitive answer.
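The underlying question a data-flow trace answers is simple reachability over edges that cross file and service boundaries. Here is a minimal sketch of that idea, with every node name invented for illustration; it is not the product's trace implementation.

```python
# Hypothetical data-flow edges: each entry says "data entering A
# next reaches B". In a real system these edges cross services.
flows = {
    "mobile.input":   ["gateway.parse"],
    "gateway.parse":  ["billing.update", "audit.log"],
    "billing.update": ["db.users"],
}

def trace(source: str) -> list[str]:
    """Every node reachable from `source`: everywhere the data can flow."""
    seen, queue = [], [source]
    while queue:
        node = queue.pop(0)
        if node not in seen:
            seen.append(node)
            queue.extend(flows.get(node, []))
    return seen

# Untrusted mobile input reaches the database through two hops,
# plus a side path into the audit log.
print(trace("mobile.input"))
```

A security question like "does user input reach billing without passing through validation?" is just this traversal with a membership check on the path, which is why it can be a single query instead of a manual audit.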
"I won't commit code I don't understand"
fizzyfizz draws a line that more engineers should draw:
"The main difference between me and my current team is that I won't commit code I don't understand. While sometimes my colleagues are creating 500-line methods. Meanwhile our leaders are working on the problem of code review because they feel it's the bottleneck."
And mberning agrees:
"I absolutely cannot stand reviewing the mostly insane PRs that other people generate with it."
Code review becomes the bottleneck because the AI generates code faster than humans can verify it. But the verification burden only exists because the AI doesn't understand the codebase well enough to generate correct code in the first place.
SuperSmart Coder's explain tool gives Claude the full context of how any function fits into the architecture. Its impact tool shows exactly what a change affects. Claude generates code that fits because it understands how everything connects.
"I legitimately have no idea what I am doing"
mc-0 says what many people are thinking but afraid to say:
"I legitimately have no idea what the fuck I am doing. I'm making PRs I don't know shit about, I don't understand how it works because there is an emphasis on speed, so instead of ramping up in languages / technologies I've never used, I'm just shipping a ton of code I didn't write and have no real way to vet."
This is the logical conclusion of deploying AI coding tools without codebase understanding. Engineers become approval machines for code they can't evaluate. Speed goes up. Understanding goes to zero. And eventually, something breaks that nobody knows how to fix.
SuperSmart Coder doesn't replace understanding. It creates it. When Claude can explain exactly why a change is safe, exactly what it affects, and exactly how it fits into the system, the engineer can make an informed decision instead of a blind one.
"It'll often run ahead with an over-engineered lump"
robbbbbbbbbbbb again:
"My one criticism would be the 'junior developer' effect where it'll often run ahead with an over-engineered lump of machinery without spotting a simpler more coherent pattern."
And robeym:
"It is obviously not great at system design, so I still need to know exactly what I'm trying to do. If I don't it'll often make things overly complicated or focus on edge cases that don't really exist."
The LLM over-engineers because it can't see the existing patterns. It doesn't know that your codebase already has a clean way to handle this case. It doesn't know that 90% of the infrastructure it's building already exists two directories over.
SuperSmart Coder's ask tool answers "how does the existing codebase handle this?" Its health tool identifies which modules are most coupled and where the architecture is clean. The LLM builds with the existing patterns instead of inventing new ones.
"The AI tools won't know the rules"
abcde666777 explains why they avoid AI entirely for certain work:
"I work on a lot of legacy code for a crusty old CRM package (Saleslogix/Infor), and a lot of SQL integration code between legacy systems. So far I've avoided using AI generated code here simply because the AI tools won't know the rules and internal functions of these sets of software, so the time wrangling them into an understanding would mitigate any benefits."
And olvy0 describes the exhausting reality of hand-holding:
"Hand-holding it is a chore. It's like coaching a junior dev. This is on top of me having 4 actual real-life junior devs sending me PRs to review each week. It's mentally exhausting."
The entire value proposition of AI coding tools collapses when you spend more time teaching the AI about your codebase than you would spend writing the code yourself.

SuperSmart Coder eliminates that teaching. One sync command, and the AI knows every function, every dependency, every convention in your codebase. No hand-holding required.
"You have to think about managing context instead of the task"
konaraddi summarizes the overhead that nobody talks about:
"My complaints: 1. Degraded quality over longer context window usage. I have to think about managing context and agents instead of focusing solely on the task."
And CharlesW recommends a workaround that shouldn't be necessary:
"You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself."
Engineers are spending their cognitive budget managing the AI's memory instead of solving problems. Context windows are implementation details that should be invisible.

SuperSmart Coder makes them invisible. The codebase understanding lives on the server, outside any context window. It doesn't degrade. It doesn't need to be managed. It's always there.
"After several months, the signalling is getting spaghetti-like"
wrs is doing everything right and still hitting the wall:
"To be clear, this is not vibecoding. I have a strong sense of the architecture I want, and explicitly keep Claude on the desired path much like I would a junior programmer. I also insist on sensible unit and E2E test coverage with every incremental commit. I will say that after several months of this the signalling between UI components is getting a bit spaghetti-like."
Even disciplined engineers with strong architecture instincts experience drift over time. The LLM makes small concessions to expedience that compound into architectural decay. Without persistent visibility into how the system connects, entropy wins.
SuperSmart Coder's health tool measures coupling and architectural complexity. Its check_consistency tool enforces patterns. The system doesn't just write code. It protects the architecture from gradual erosion.
The Common Thread
Every comment in that thread points to the same root cause. It's not that LLMs are bad at writing code. They're remarkably good at it. The problem is that writing code is the easy part.
The hard part is knowing what your change will break. Knowing what already exists. Knowing how services connect. Knowing which paths bypass authentication. Knowing that a field rename in one service cascades through three message queues into a mobile app on the other side of the world.
Context windows will never solve this. You can't fit a 5-million-line codebase into 200K tokens. You can't fit it into 2 million tokens. The approach itself is wrong.
SuperSmart Coder takes a different approach. It gives the LLM a complete, pre-computed understanding of the entire codebase. Not the source code. The relationships. The dependencies. The blast radii. The data flows. All queryable in under 300 milliseconds.
The engineers in that thread aren't wrong. The tools they're using are genuinely limited. But the limitation isn't the LLM. It's the context window. Remove that limitation, and everything changes.