Inherited Complexity
Inherited complexity rarely announces itself as complexity. It usually arrives looking like a working system.
Something that’s in production. Something people are using. Something that, at a glance, appears to do what it’s supposed to do. There are users. There are workflows. There is just enough evidence of intent that nobody questions whether the underlying model still holds.
Then you touch it. And the gap appears.
The pattern is consistent, even if the systems aren’t.
- A small ASP site that’s grown beyond the assumptions it was built on, where early customisations no longer behave the way the platform expects.
- A telephony environment with physical infrastructure intact but no map, no ownership, and no clear understanding of how the pieces connect.
- A SharePoint deployment where the technology works exactly as designed, but the permission model conflicts with how people are actually using it.
- A disaster recovery setup that has links, hardware, and configuration — but no agreed definition of what “recovery” means.
- A platform with hundreds of portals, no ownership records, and just enough history embedded in it to make every decision feel deliberate, even when it no longer is.
What these systems have in common isn’t the technology. It’s the inheritance.
They were:
- built at a point in time
- shaped by constraints that no longer exist
- modified by people who understood their part of the system
- and then handed over without the context that made those decisions make sense
What remains is something that works — but only within boundaries nobody has clearly defined.
That’s where the real work starts.
Not in fixing things. Not immediately. First, in understanding what the system thinks it is.
Because inherited systems tend to have two versions of themselves:
- the version people describe
- and the version that actually runs
Closing that gap is rarely straightforward. It means going further back than expected. Following decisions through layers of implementation. Reconstructing intent from behaviour. Accepting that some parts of the system only make sense if you assume a set of constraints that no longer apply.
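One concrete way to start closing that gap is to diff the system people describe against the system that actually runs. A minimal sketch, assuming the documented and observed settings can be flattened into key-value pairs; every key and value below is a hypothetical placeholder, not taken from any real system:

```python
# Sketch: surface drift between the described system and the running one.
# All settings and values here are hypothetical placeholders.

def find_drift(documented: dict, observed: dict) -> dict:
    """Return settings where the documentation and the live system disagree."""
    drift = {}
    for key in documented.keys() | observed.keys():
        doc_val = documented.get(key, "<undocumented>")
        obs_val = observed.get(key, "<missing>")
        if doc_val != obs_val:
            drift[key] = (doc_val, obs_val)
    return drift

documented = {"auth": "ldap", "backup_target": "dr-site", "session_timeout": 30}
observed = {"auth": "local", "backup_target": "dr-site", "session_timeout": 30,
            "legacy_proxy": "enabled"}

for key, (doc, obs) in sorted(find_drift(documented, observed).items()):
    print(f"{key}: documented={doc!r} observed={obs!r}")
```

The undocumented keys are usually the interesting ones: each is either debris or a decision nobody wrote down, and the diff tells you where to start asking.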
The instinct is often to simplify.
But simplifying too early is dangerous.
Because what looks like unnecessary complexity is often carrying something:
- a workaround for a limitation that still exists
- a dependency nobody documented
- a decision that solved a problem you haven’t discovered yet
Remove it without understanding it, and the system stops being complex. It becomes unstable.
So the approach becomes more deliberate.
- Map what exists
- Test assumptions
- Build a mental model that matches reality, not documentation
- Change one thing at a time
- Watch how the system responds
It’s slower than rewriting. Less satisfying, in some ways. But it works.
Over time, the system becomes more predictable.
Not because it’s simpler, but because:
- its behaviour matches its design
- its design matches its intent
- and that intent is finally explicit
That’s usually the point where people start describing it as “well understood”.
But that’s not quite right. It’s not that the system was unknowable. It’s that the knowledge didn’t transfer with the system. And inheriting complexity is, more often than not, the process of rebuilding that knowledge from scratch — carefully enough that the system keeps running while you do it.