6 October 2025
Adapted and summarized from research by Barry Keepence & Mike Mannion (Napier University, 1997, IEEE).
Understanding Complex Systems in Software Engineering
Introduction
Modern organizations — from global enterprises to small firms — face a recurring challenge: large-scale system projects often fail. Cost overruns, missed deadlines, security incidents, and even safety risks are common. Keepence and Mannion (1997) examined why so many computer-based systems collapse under their own weight and argued that complexity itself lies at the heart of these failures.
Their work, published through the IEEE Computer Society, remains one of the early, pragmatic looks at why even well-managed projects stumble when systems grow too complex to understand, test, or control.
What Makes a System “Complex”?
A complex system is defined not merely by its size but by its interconnectedness.
A small system can be highly complex if its components interact in unpredictable ways, while a massive system might remain simple if its architecture is modular and loosely coupled.
Typical symptoms of complexity-driven project failure include:
Late or incomplete delivery
Unmet specifications or failed functionality
Excessive cost overruns
Fragility and instability during change
Unverifiable safety and reliability
High operational or maintenance risk
As Keepence and Mannion emphasized, complexity increases as we integrate multiple technologies (software, ASICs, programmable logic, AI subsystems) and attempt to push the boundaries of performance and flexibility.
Why Complex Systems Are So Hard to Build
1. Intellectual Tractability
In small systems, one person can understand the entire architecture. Once systems grow, knowledge fragments across teams.
When interactions between modules are too dense, decomposition fails — the interfaces themselves become the problem.
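The pressure on interfaces can be seen in simple arithmetic: the number of potential pairwise interfaces grows quadratically with the number of modules, so decomposing a system into more pieces can multiply the very surfaces where details slip through. A small illustration (the module counts are arbitrary examples, not from the paper):

```python
def max_interfaces(n_modules: int) -> int:
    """Maximum number of distinct pairwise interfaces between n modules:
    n * (n - 1) / 2, i.e. every module potentially talking to every other."""
    return n_modules * (n_modules - 1) // 2

for n in (5, 10, 50, 200):
    print(f"{n:>4} modules -> up to {max_interfaces(n):>6} interfaces")
# 5 modules allow up to 10 interfaces; 200 modules allow up to 19,900.
```

The point is not that every interface exists in practice, but that the budget for misunderstanding grows far faster than the system itself.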
2. Details Slipping Through the Net
Postmortems of failed systems often reveal small oversights that cause massive chain reactions — anything from a minor coding bug to an untested system-level assumption.
Keepence and Mannion observed that documentation sprawl, poor communication, and inadequate testing frequently allow these issues to escape detection.
3. The “Limit-Pushing” Culture
The engineering mindset of “higher, faster, cheaper” pushes teams to add new features or performance goals with every release.
While this drives innovation, it often sacrifices reliability and verification.
Aviation software, for instance, illustrates this paradox — modern aircraft systems (e.g., Boeing 777, Airbus A320) are safer than ever but exhibit increasingly complex failure modes due to human–computer interaction dynamics.
4. The Perfection Culture
There’s a pervasive belief that anything is buildable with the “right” tools or methods.
Keepence and Mannion challenged this assumption, arguing that some system complexities might be inherently unmanageable — not just poorly engineered.
Moving Forward: Coping Strategies
The authors proposed that software and systems engineering must evolve not only in methods but in mindset. Their suggested approaches remain highly relevant today:
1. Abandon the Blame Culture
Failures should not be reduced to individual mistakes. Instead, we must recognize that complex systems inherently fail because they exceed human comprehension and interaction capacity.
2. Design for Failure
Accept that all systems will eventually fail — and build mechanisms for safe degradation, redundancy, and resilience.
Hardware engineers have long practiced this; software engineers must do the same.
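In software, "design for failure" often takes the shape of explicit fallbacks and safe defaults rather than hardware-style redundancy. A minimal sketch of safe degradation, assuming a service with a primary and a fallback data source (the function and source names are invented for illustration):

```python
import logging

def fetch_exchange_rate(primary, fallback, default=1.0):
    """Try the primary source, degrade to the fallback, and finally
    return a safe default rather than failing outright."""
    for source in (primary, fallback):
        try:
            return source()
        except Exception as exc:  # broad on purpose: any source failure degrades
            logging.warning("rate source failed: %s", exc)
    return default  # safe degradation: the caller keeps running

def broken_source():
    raise TimeoutError("source unavailable")

# Both sources fail, so the safe default is returned instead of an exception.
rate = fetch_exchange_rate(broken_source, broken_source)
```

The design choice here mirrors the authors' argument: the failure path is a first-class part of the design, not an afterthought left to exception handlers added during debugging.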
3. Make Complexity Reduction a Core Goal
Fewer features, simpler interfaces, and encapsulation often produce better results than adding “smart” subsystems.
Keepence and Mannion advocated for trading unnecessary performance gains for maintainability and predictability.
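One concrete form of complexity reduction is encapsulation behind a deliberately narrow interface, so callers cannot couple to subsystem internals. A minimal sketch of the idea (the class and method names are hypothetical, not taken from the paper):

```python
class _Parser:
    def parse(self, raw):
        return raw.strip().split(",")

class _Validator:
    def check(self, fields):
        return all(fields)  # every field must be non-empty

class _Store:
    def __init__(self):
        self.rows = []
    def save(self, fields):
        self.rows.append(fields)

class RecordImporter:
    """Facade: callers see one method instead of three internal classes,
    so the internals can change without rippling through the system."""
    def __init__(self):
        self._parser = _Parser()
        self._validator = _Validator()
        self._store = _Store()

    def import_line(self, raw):
        fields = self._parser.parse(raw)
        if not self._validator.check(fields):
            return False
        self._store.save(fields)
        return True

importer = RecordImporter()
ok = importer.import_line("alice,admin")  # parsed, validated, stored
```

Every interface the facade hides is one fewer coupling point where a detail can slip through the net.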
4. Measure Complexity Early
Most complexity metrics (e.g., KLOC, function points) apply too late — after coding begins.
The authors called for early-stage complexity measurement and classification, integrating risk, safety, and hazard analysis from the outset of design.
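Early measurement can be approximated before any code exists by scoring a proposed design's module-interaction structure. A hedged sketch, assuming the design is captured as a map of planned module dependencies (the module names, the density score, and the 0.5 review threshold are all invented for illustration, not from the paper):

```python
def early_complexity_score(interactions):
    """Score a design from its planned module-to-module interactions:
    the fraction of possible pairwise links that the design actually uses."""
    modules = set(interactions)
    # Treat each planned dependency as an undirected edge between modules.
    edges = {frozenset((a, b)) for a, deps in interactions.items() for b in deps}
    n = len(modules)
    max_edges = n * (n - 1) / 2
    return len(edges) / max_edges if max_edges else 0.0

design = {
    "ui": {"api"},
    "api": {"db", "auth"},
    "auth": {"db"},
    "db": set(),
}
score = early_complexity_score(design)
needs_review = score > 0.5  # flag dense designs before coding begins
```

A score like this is crude, but it is available at design time, which is exactly when the authors argue complexity should be confronted.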
Lessons for Modern Engineering
Nearly three decades later, the themes from Keepence and Mannion’s work echo across AI safety, cybersecurity, and autonomous systems development.
We now know that system failure rarely stems from a single bug — it emerges from the interaction between components, humans, and evolving environments.
Key takeaways for today’s engineers and researchers include:
Prioritize simplicity over novelty.
Embed resilience and recovery, not perfection.
Use iterative and spiral development to surface risks early.
Treat complexity as a measurable, controllable property.
Recognize that ethical and legal dimensions (e.g., safety and liability) are intertwined with technical design.
References and Further Reading
While the original IEEE paper remains under copyright, readers can explore related concepts and sources that extend its ideas:
Keepence, B., & Mannion, M. (1997). Complex Systems. Proceedings of the IEEE International Conference on Requirements Engineering.
Leveson, N. (1995). Safeware: System Safety and Computers. Addison-Wesley.
Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. Princeton University Press.
Brooks, F. P. (1995). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
Senge, P. M. (1990). The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday.
IEEE Software Engineering Body of Knowledge (SWEBOK v3.0), Systems Engineering Section.
INCOSE Systems Engineering Handbook, 5th Edition.
Citation Notice
This webpage provides a summary and interpretation of “Complex Systems” by Barry Keepence and Mike Mannion, published by IEEE in 1997.
Original publication © IEEE. Content here is paraphrased and contextualized for educational and informational use, under fair academic reference.