
Five core concepts for understanding systems
By Andrei Savu. Originally published on the Integration and Implementation Insights blog.
What concepts are key to understanding systems?
A system is a set of interdependent elements whose coordinated interactions give rise to an outcome none of the pieces can deliver alone. The key word is relationship: change the relationships and the behavior of the whole shifts, even if every component remains identical.
Five core concepts for systems thinking are: purpose, boundary, feedback, leverage and emergence.
Purpose and boundary
Every system exists to fulfill a purpose, defined by boundaries that separate internal elements from external factors. These two fundamental concepts—purpose and boundary—determine how we understand, analyze, and influence systems of all types.
Systems are more than collections of parts – they’re purposeful arrangements that work together to achieve specific outcomes. For example, a heap of bicycle parts scattered across a garage floor is just a collection – random, unorganized, inert. But assemble those same parts with intention—connect the chain to the gears, the handlebars to the frame, the wheels to the axles—and suddenly you have a system: a bicycle that can transport a person from one place to another.
The difference isn’t in the components themselves, but in how they’re arranged and connected. The bicycle’s purpose emerges from the specific relationships between its parts, creating capabilities that no individual component possesses on its own.
Purpose emerges from behavior
A system’s true purpose is revealed by what it actually does, not what it claims to do. Consider two healthcare systems:
- System A optimizes for hospital occupancy rates and procedure volumes.
- System B optimizes for patient wellness outcomes and prevention.
Though both might claim “health” as their purpose, their behaviors reveal different priorities. System A’s metrics and incentives create a purpose focused on treatment volume, while System B’s behaviors align with maintaining wellness.
When analyzing any system, look beyond stated missions to observe what the system actually optimizes for – that’s its true purpose.
Drawing boundaries
Every system analysis begins with a critical decision: where to draw the boundary between system and environment. This choice determines what’s considered part of the system (inside the boundary) versus what’s treated as external (outside the boundary).
Consider a caffè latte’s carbon footprint. Draw a narrow boundary around just the coffee shop, and you’ll count the electricity for the espresso machine and the gas for heating milk. Expand the boundary to include supply chains, and suddenly you’re accounting for coffee bean farming, dairy production, and global shipping networks.
Neither boundary is inherently “correct” – each serves different analytical purposes. A narrow boundary helps optimize local operations; a wider boundary reveals systemic impacts.
Inputs and outputs
Boundaries define what counts as inputs (crossing from environment into system) and outputs (crossing from system into environment). Shifting a boundary changes what we consider within our control versus what we treat as external constraints.
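The latte example above can be made concrete in a few lines. This is a minimal sketch with made-up emission categories and figures (none of the numbers come from the article); it shows how the same accounting function gives different answers depending purely on where the boundary is drawn:

```python
# Hypothetical emission sources for a caffè latte (kg CO2e).
# All categories and numbers are illustrative, not real data.
EMISSIONS = {
    "espresso_machine_electricity": 0.02,
    "milk_heating_gas": 0.01,
    "coffee_farming": 0.28,
    "dairy_production": 0.55,
    "global_shipping": 0.07,
}

# Narrow boundary: just the coffee shop's own operations.
NARROW_BOUNDARY = {"espresso_machine_electricity", "milk_heating_gas"}

# Wide boundary: the full supply chain.
WIDE_BOUNDARY = set(EMISSIONS)

def footprint(boundary: set[str]) -> float:
    """Total emissions for the sources placed inside the chosen boundary."""
    return sum(EMISSIONS[source] for source in boundary)

print(footprint(NARROW_BOUNDARY))  # small total: useful for optimizing local operations
print(footprint(WIDE_BOUNDARY))    # much larger total: reveals systemic impacts
```

The components never change; only the boundary set does, which is the point: the boundary decision determines what counts as an input to the analysis at all.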
Feedback loops
Feedback loops are the engines that power system behavior, creating either stability or dramatic change. These circular causal relationships determine whether a system maintains equilibrium, grows exponentially, or oscillates – making them essential leverage points for intervention.
Understanding these loops is essential for analyzing how systems maintain stability or generate change over time. Three patterns are worth distinguishing:
- Reinforcing loops: amplify change in one direction, creating virtuous or vicious cycles that accelerate over time. They generate exponential patterns until external constraints eventually limit their growth.
- Balancing loops: sense deviation from a target and trigger corrective actions that push the system back toward equilibrium. They create stability when functioning properly, but generate oscillations when hampered by delays or constraints.
- Mixed loops in the wild: real systems contain intertwined reinforcing and balancing loops that compete for dominance, creating complex dynamics. The behavior we observe emerges from this competition, often shifting dramatically when one loop overtakes another.
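The first two loop types above can be sketched in a few lines of simulation. This is an illustrative toy model, not taken from the article: the growth rate, target, and adjustment fraction are arbitrary, chosen only to show the characteristic shapes (exponential amplification versus convergence to equilibrium):

```python
# Toy simulations of the two basic feedback-loop patterns.
# Rates and targets are arbitrary illustrative values.

def reinforcing(stock: float, growth_rate: float, steps: int) -> list[float]:
    """Reinforcing loop: change is proportional to the stock itself,
    so the stock grows (or shrinks) exponentially."""
    history = [stock]
    for _ in range(steps):
        stock += stock * growth_rate
        history.append(stock)
    return history

def balancing(stock: float, target: float, adjustment: float, steps: int) -> list[float]:
    """Balancing loop: each step corrects a fraction of the gap to a target,
    so the stock converges toward equilibrium."""
    history = [stock]
    for _ in range(steps):
        stock += (target - stock) * adjustment
        history.append(stock)
    return history

r = reinforcing(100.0, 0.10, 10)  # accelerates away from the start
b = balancing(100.0, 20.0, 0.5, 10)  # settles toward the target of 20
```

Adding a delay to the balancing loop (correcting based on a stale reading of the stock) is what produces the oscillations mentioned above, and coupling the two functions gives the mixed-loop dynamics seen in real systems.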
Leverage points
Not all interventions in a system are created equal. Leverage points are places where small, well-focused actions create disproportionate impact, allowing you to achieve transformative change with minimal resources when you target the right system elements.
The counter-intuitive nature of leverage
Most interventions target what’s visible and measurable – tweaking parameters, adjusting flows, or adding resources. Yet these surface-level changes often produce disappointing results. The highest-impact leverage points typically lie deeper in the system’s architecture, where they’re less obvious but far more powerful.
This counter-intuitive reality explains why doubling a department’s budget might achieve less than rewriting its incentive structure, or why a new information technology system fails while a shift in organizational purpose succeeds. The deeper the leverage point, the more resistance you’ll encounter – and the more transformative the eventual change.
The leverage ladder: Shallow to deep
Systems theorist Donella Meadows (2008) identified a hierarchy of leverage points, arranged from least to most powerful:
- Parameters – Numbers, thresholds, and constants (e.g., prices, quotas, standards)
- Buffers – Sizes of stabilizing stocks relative to flows (e.g., inventory levels, reserve funds)
- Structure – Physical arrangements and connections between system elements
- Delays – Lengths of time between actions and consequences
- Balancing Feedback – Strength of stabilizing mechanisms (e.g., thermostats, market corrections)
- Reinforcing Feedback – Strength of amplifying or accelerating loops
- Information Flows – Who does and doesn’t have access to what information
- Rules – Policies, incentives, punishments, and constraints
- Self-Organization – Power to add, change, or evolve system structure
- Goals – Purpose or function of the system
- Paradigms – Mindsets out of which goals, rules, and structures arise.
As you descend this list, leverage increases dramatically. Changing paradigms and goals can transform entire systems with minimal resource investment, while parameter adjustments typically yield only incremental improvements.
Emergence
Some of the most fascinating system properties cannot be found in any individual component. Emergence explains how interactions between parts create entirely new behaviors and capabilities that transcend the sum of their parts – a phenomenon that challenges our reductionist instincts.
In systems thinking, emergence describes how interactions between parts can create properties, patterns, and capabilities that none of the individual components possess alone.
Emergence is about qualitative novelty – the appearance of something genuinely different from what existed before. When hydrogen and oxygen atoms bond to form water, wetness emerges. Nothing about individual hydrogen or oxygen atoms is wet, yet water flows, splashes, and hydrates in ways neither element can alone.
True emergence has this defining characteristic: the behavior of the whole cannot be predicted or explained by dissecting the parts. You cannot find “liquidity” by examining hydrogen, nor “consciousness” by examining individual neurons. The emergent property exists only at the level of the whole system.
Why reductionism fails here
We’re trained to solve problems by breaking them into smaller pieces. This reductionist approach works beautifully for mechanical systems with linear interactions – take apart a clock, fix the broken gear, reassemble. But emergent behaviors arise from nonlinear interactions between components. These relationships, not the components themselves, generate the system’s behavior.
When we try to “fix” emergent problems by optimizing isolated parts, we often make things worse. A traffic jam isn’t solved by making each car faster; urban housing shortages aren’t fixed by just building more units; ecosystem collapse isn’t prevented by saving single species. Each requires understanding interconnected patterns that live between the components.
Conclusion
The aim in presenting these core concepts is to provide the basics for thinking about systems. Do they resonate with you? Do they help you better understand your own research and approach to problems? Are there examples that come to mind that highlight these concepts in action?
To find out more:
Savu, A. (2025). Teach Yourself Systems. Teach Yourself Systems website. (Online): https://teachyourselfsystems.com/
This interactive learning resource also provides examples, models and quizzes. Much of this i2Insights contribution is taken verbatim from this resource.
Reference:
Meadows, D. H. (author), Wright, D. (editor). (2008). Thinking in systems: A primer. Chelsea Green Publishing: Vermont, United States of America.
Use of Generative Artificial Intelligence (AI) Statement: Teach Yourself Systems (TYS) was built with a lot of artificial intelligence assistance – both content wise and from a coding perspective. Most of the code has been written by OAI Codex with some help from Devin early on. A lot of brainstorming on various topics was done with o3 Pro. (For i2Insights policy on generative artificial intelligence please see https://i2insights.org/contributing-to-i2insights/guidelines-for-authors/#artificial-intelligence.)
Biography:
Andrei Savu builds data and artificial intelligence (AI) systems and created Teach Yourself Systems (TYS), an interactive site that helps practitioners learn systems thinking and system dynamics through hands-on models and examples. He believes that in a world of abundant intelligence, systems thinking is becoming more important than ever. His interests include AI agents, data platforms, and turning systems concepts into practical tools people can use every day. He is based in Menlo Park, California, USA.
Article source: Five core concepts for understanding systems. Republished by permission.
Header image source: Created by Bruce Boyes with Microsoft Designer Image Creator.




