Defaults Nobody Revisits

Best practices are borrowed conclusions. They compress someone else's context into a rule you can follow without thinking. That is their value, and that is their cost.

The rule says: keep functions short. So you split a coherent operation into five pieces, each with a name that explains what it does but not why it exists. The code is clean. It follows the linter. It also resists change, because changing the behavior now means understanding the relationship between fragments that were never designed to be separate.
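The splitting described above can be sketched in a few lines. This is a hypothetical example with invented names, not code from any real system; both versions behave identically, which is exactly the point:

```python
# The coherent version: one function, one pass, the steps visible together.
def normalize_order(order: dict) -> dict:
    items = [i for i in order["items"] if i["qty"] > 0]   # drop empty lines
    total = sum(i["qty"] * i["price"] for i in items)     # compute total
    return {"items": items, "total": total}


# The rule-compliant version: each fragment is short and well named,
# but the relationship between the fragments lives nowhere in the code.
def _drop_empty_lines(items: list) -> list:
    return [i for i in items if i["qty"] > 0]


def _compute_total(items: list) -> float:
    return sum(i["qty"] * i["price"] for i in items)


def normalize_order_fragmented(order: dict) -> dict:
    items = _drop_empty_lines(order["items"])
    return {"items": items, "total": _compute_total(items)}
```

The fragmented version passes any length-based lint rule. But changing the behavior now requires knowing that `_compute_total` must run on the already-filtered list, an ordering constraint the names do not express.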

This is what blind rule-following produces. Not bad code — compliant code. Code that passes review but quietly increases the cost of the next decision.

Best practices originate in specific environments. Extract a service when your deployment boundaries require it. Write unit tests when your domain logic is complex enough to justify the maintenance cost. These are reasonable conclusions — inside their original constraints. Detach them from those constraints, and they become defaults nobody revisits.

The problem is not that best practices are wrong. It is that they are incomplete. They encode the what without the why. And when the why changes — a new team, a different scale, a shifted business model — the what keeps running on inertia. Nobody questions it, because questioning a best practice feels like questioning competence.

This is authority-based decision-making. The practice carries weight not because it fits the current situation, but because it was endorsed somewhere credible. A conference talk. A well-known post. A team lead who used it at their last company. The reasoning gets replaced by the reference.

Senior engineers are not the ones who know more best practices. They are the ones who know when to stop following them. Judgment is not the rejection of rules. It is the ability to evaluate whether a rule still applies in the current context — and to accept the cost if it does not.

There is a specific failure mode I have seen repeatedly. A team adopts clean architecture and applies it uniformly. Every service gets the same layered structure. Simple CRUD endpoints get use-case classes, repository interfaces, domain models, and DTOs — a five-layer ceremony around a single database insert. The architecture is consistent. It is also expensive. New features take longer than they should because every change must pass through layers that exist for structural purity, not because the domain demands them.
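The ceremony is easier to see than to describe. Below is a minimal sketch of the layered shape, with every name (`CreateUserDTO`, `UserRepository`, `CreateUserUseCase`) invented for illustration; the in-memory "database" is a plain list so the example stays self-contained:

```python
from dataclasses import dataclass


@dataclass
class CreateUserDTO:       # transport-layer shape
    email: str


@dataclass
class User:                # domain model, identical to the DTO here
    email: str


class UserRepository:      # repository wrapping the storage call
    def __init__(self, db: list):
        self.db = db

    def add(self, user: User) -> None:
        self.db.append({"email": user.email})


class CreateUserUseCase:   # use-case class wrapping one repository call
    def __init__(self, repo: UserRepository):
        self.repo = repo

    def execute(self, dto: CreateUserDTO) -> None:
        self.repo.add(User(email=dto.email))


# The same behavior without the ceremony:
def create_user(db: list, email: str) -> None:
    db.append({"email": email})
```

Four classes and one function produce identical writes. In a domain with real invariants the layers earn their keep; wrapped around a single insert, they are structure for its own sake.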

The cost is not visible in any single file. It is distributed across the entire system as friction. And friction does not trigger alerts. It shows up as slower velocity, longer onboarding, a growing sense that something is heavier than it needs to be.


Clean code that resists change is not clean. It is rigid. Cleanliness is not a property of the code alone — it includes how the code responds to the next requirement. If every modification requires reshuffling abstractions, the structure is serving itself, not the system.

The alternative is not chaos. It is context-sensitive reasoning. You evaluate a practice against the actual constraints: team size, expected lifetime, rate of change, domain complexity. Sometimes the best practice applies perfectly. Sometimes it applies partially. Sometimes it actively gets in the way.

This requires something harder than knowledge. It requires the willingness to make a judgment call and own the consequences. Best practices offer safety — if the outcome is bad, the decision was defensible. Judgment offers no such cover. You chose, and the result is yours.

That is why organizations default to best practices. They reduce variance. They make decisions reviewable. They create a shared vocabulary. These are real benefits. But they come at a cost: they suppress the reasoning that distinguishes systems that work from systems that last.

Judgment has its own failure mode. Without shared constraints, it becomes preference. I have seen senior engineers override team conventions not because the context demanded it, but because they had the authority to do so. The resulting system is legible only to its author. This is not judgment — it is taste mistaken for reasoning. The difference matters: judgment accounts for the team that inherits the decision. Taste accounts only for the person making it.

Medicine has a useful model here. Clinical guidelines are best practices — standardized treatment protocols backed by research. Experienced clinicians deviate from them routinely, based on patient-specific context. But the field does not treat deviation as rebellion. It treats it as a skill that must be taught, practiced, and documented. The guideline provides the default. Clinical judgment handles the exception. And when a clinician deviates, the reasoning goes in the chart. Software engineering rarely has an equivalent. The decision is made, but the reasoning stays in someone's head, or in a Slack thread that no one will search six months later.

This points to what is actually missing. Not more best practices, and not more individual judgment, but a culture that treats the override as a first-class event. When someone deviates from a convention, the deviation should be visible and the reasoning should be recorded. Not as bureaucracy — as design context for the next person who encounters the same decision point.
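What that record looks like matters less than that it exists next to the code. As one possible shape, here is a sketch of a deviation note kept as a structured constant beside the code it explains. The fields and wording are invented here, not an established convention:

```python
# A recorded override: the convention being set aside, the reasoning,
# and the condition under which the decision should be revisited.
DEVIATION_NOTE = {
    "convention": "repository layer per the team's clean-architecture template",
    "decision": "call the database directly from this endpoint",
    "context": "single table, no second data source planned, read-heavy",
    "revisit_when": "a second storage backend or a caching layer appears",
}
```

The same information could live in an architecture decision record or a commit message. The point is the `revisit_when` field: it turns a one-time override into a decision the next person can re-evaluate instead of inherit.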

The systems that hold up over time are not the ones built with the most discipline. They are the ones where someone understood the constraints well enough to know which rules to follow and which to set aside. That understanding cannot be codified. It is earned through decisions that went wrong and the willingness to examine why.

Judgment is not a replacement for best practices. It is the thing that makes them useful.

