6 Oct 2025

Understanding Complex Systems in Software Engineering

Adapted and summarized from research by Barry Keepence & Mike Mannion (Napier University, 1997, IEEE).

Introduction

Modern organizations — from global enterprises to small firms — face a recurring challenge: large-scale system projects often fail. Cost overruns, missed deadlines, security incidents, and even safety risks are common. Keepence and Mannion (1997) examined why so many computer-based systems collapse under their own weight and argued that complexity itself lies at the heart of these failures.

Their work, published through the IEEE Computer Society, remains one of the early, pragmatic looks at why even well-managed projects stumble when systems grow too complex to understand, test, or control.

What Makes a System “Complex”?

A complex system isn’t defined merely by size, but by interconnectedness.
A small system can be highly complex if its components interact in unpredictable ways, while a massive system might remain simple if its architecture is modular and loosely coupled.

Typical symptoms of complexity-driven project failure include:

  • Late or incomplete delivery

  • Unmet specifications or failed functionality

  • Excessive cost overruns

  • Fragility and instability during change

  • Unverifiable safety and reliability

  • High operational or maintenance risk

As Keepence and Mannion emphasized, complexity increases as we integrate multiple technologies (software, ASICs, programmable logic, AI subsystems) and attempt to push the boundaries of performance and flexibility.

Why Complex Systems Are So Hard to Build

1. Intellectual Tractability

In small systems, one person can understand the entire architecture. Once systems grow, knowledge fragments across teams.
When interactions between modules are too dense, decomposition fails — the interfaces themselves become the problem.
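
To make the interface problem concrete, here is a minimal Python sketch (illustrative only, not taken from the paper): in a fully interconnected design, the number of potential module-to-module interfaces grows quadratically with the number of modules, while a layered, loosely coupled design keeps the count linear.

```python
# Illustrative sketch (not from the original paper): with n fully
# interconnected modules, potential pairwise interfaces grow quadratically,
# so the interfaces themselves dominate the design effort. A layered,
# loosely coupled design keeps the count linear.

def pairwise_interfaces(n_modules: int) -> int:
    """Potential interfaces when every module may talk to every other."""
    return n_modules * (n_modules - 1) // 2

def layered_interfaces(n_modules: int) -> int:
    """Interfaces when each module only talks to its immediate neighbour."""
    return max(n_modules - 1, 0)

for n in (5, 10, 20, 40):
    print(f"{n:>3} modules: fully coupled = {pairwise_interfaces(n):>4}, "
          f"layered = {layered_interfaces(n):>3}")
```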

2. Details Slipping Through the Net

Postmortems of failed systems often reveal small oversights that cause massive chain reactions — anything from a minor coding bug to an untested system-level assumption.
Keepence and Mannion observed that documentation sprawl, poor communication, and inadequate testing frequently allow these issues to escape detection.

3. The “Limit-Pushing” Culture

The engineering mindset of “higher, faster, cheaper” pushes teams to add new features or performance goals with every release.
While this drives innovation, it often sacrifices reliability and verification.
Aviation software, for instance, illustrates this paradox — modern aircraft systems (e.g., Boeing 777, Airbus A320) are safer than ever but exhibit increasingly complex failure modes due to human–computer interaction dynamics.

4. The Perfection Culture

There’s a pervasive belief that anything is buildable with the “right” tools or methods.
Keepence and Mannion challenged this assumption, arguing that some system complexities might be inherently unmanageable — not just poorly engineered.

Moving Forward: Coping Strategies

The authors proposed that software and systems engineering must evolve not only in methods but in mindset. Their suggested approaches remain highly relevant today:

1. Abandon the Blame Culture

Failures should not be reduced to individual mistakes. Instead, we must recognize that complex systems inherently fail because they exceed human comprehension and interaction capacity.

2. Design for Failure

Accept that all systems will eventually fail — and build mechanisms for safe degradation, redundancy, and resilience.
Hardware engineers have long practiced this; software engineers must do the same.
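
As a minimal illustration of this mindset, the Python sketch below (the function names are hypothetical placeholders, not any real API) degrades to a cached, read-only configuration when the live backend fails, rather than letting the error propagate.

```python
import logging

def fetch_live_config() -> dict:
    """Hypothetical call to a remote service that may time out or fail."""
    raise TimeoutError("backend unreachable")   # simulate a failure

def read_cached_config() -> dict:
    """Hypothetical local fallback: the last known-good configuration."""
    return {"mode": "read-only", "source": "cache"}

def get_config() -> dict:
    """Prefer live data, but degrade safely to the cache rather than crash."""
    try:
        return fetch_live_config()
    except (TimeoutError, ConnectionError) as exc:
        logging.warning("Live fetch failed (%s); using cached config", exc)
        return read_cached_config()

print(get_config())   # {'mode': 'read-only', 'source': 'cache'}
```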

3. Make Complexity Reduction a Core Goal

Fewer features, simpler interfaces, and encapsulation often produce better results than adding “smart” subsystems.
Keepence and Mannion advocated for trading unnecessary performance gains for maintainability and predictability.
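
A small illustrative sketch of that trade-off, with hypothetical names throughout: expose one narrow, well-defined operation and keep the tuning knobs hidden inside, so they can change without breaking any caller.

```python
class BackupScheduler:
    """Encapsulates scheduling policy behind a single, narrow method.

    Callers cannot reach into retry counts, thread pools, or priority
    queues; those details stay internal and can change without breaking
    any caller.
    """

    def __init__(self, interval_hours: int = 24):
        self._interval_hours = interval_hours   # internal detail, not exposed

    def schedule(self, device_name: str) -> str:
        # One operation with one obvious outcome: easy to test and reason about.
        return f"{device_name}: backup every {self._interval_hours}h"

print(BackupScheduler().schedule("core-switch-01"))
```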

4. Measure Complexity Early

Most complexity metrics (e.g., KLOC, function points) apply too late — after coding begins.
The authors called for early-stage complexity measurement and classification, integrating risk, safety, and hazard analysis from the outset of design.
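
One way to act on this before any code exists, sketched below with made-up module names and thresholds, is to score a proposed architecture from its design diagram by counting each module's outgoing dependencies and flagging high fan-out or cycles.

```python
# Illustrative design-stage check: the module names, dependencies, and
# threshold below are made-up examples, not figures from the paper.
design = {                       # module -> modules it depends on
    "ui":        ["api"],
    "api":       ["auth", "db", "scheduler"],
    "auth":      ["db"],
    "db":        [],
    "scheduler": ["db", "api"],  # note the api <-> scheduler cycle
}

THRESHOLD = 2   # flag modules with more than this many outgoing dependencies

for module, deps in design.items():
    fan_out = len(deps)
    flag = "  <-- review: high coupling" if fan_out > THRESHOLD else ""
    print(f"{module:<10} fan-out = {fan_out}{flag}")

# Cyclic dependencies (here api -> scheduler -> api) are another early warning.
```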

Lessons for Modern Engineering

Nearly three decades later, the themes from Keepence and Mannion’s work echo across AI safety, cybersecurity, and autonomous systems development.
We now know that system failure rarely stems from a single bug — it emerges from the interaction between components, humans, and evolving environments.

Key takeaways for today’s engineers and researchers include:

  • Prioritize simplicity over novelty.

  • Embed resilience and recovery, not perfection.

  • Use iterative and spiral development to surface risks early.

  • Treat complexity as a measurable, controllable property.

  • Recognize that ethical and legal dimensions (e.g., safety and liability) are intertwined with technical design.

References and Further Reading

While the original IEEE paper remains under copyright, readers can explore related concepts and sources that extend its ideas:

  • Keepence, B., & Mannion, M. (1997). Complex Systems. Proceedings of the IEEE International Conference on Requirements Engineering.

  • Leveson, N. (1995). Safeware: System Safety and Computers. Addison-Wesley.

  • Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. Princeton University Press.

  • Brooks, F. P. (1995). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.

  • Senge, P. M. (1990). The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday.

  • IEEE Software Engineering Body of Knowledge (SWEBOK v3.0), Systems Engineering Section.

  • INCOSE Systems Engineering Handbook, 5th Edition.

Citation Notice

This webpage provides a summary and interpretation of “Complex Systems” by Barry Keepence and Mike Mannion, published by IEEE in 1997.
Original publication © IEEE. Content here is paraphrased and contextualized for educational and informational use, under fair academic reference.

Trusted by Leading Enterprises

Want to see how rConfig can transform your network management?

Contact us today to discuss your specific use case and get expert guidance on securing and optimizing your infrastructure.
