This week’s newsletter brings episode 20 of the Cyb3rSyn Labs Podcast, featuring Adam Walls and John Flach, authors of the book “Do Systems Exist?”
The episode challenges conventional notions of “systems”. The authors argue that systems are not objective realities but rather mental models that individuals use to make sense of their world. Adam and John advocate a synthesis-first approach, emphasizing conversations, humility, and the acceptance of multiple perspectives to understand complex phenomena, much like the parable of the blind men and the elephant.
They also highlight that in complex, unpredictable environments, overconfidence in models and the pursuit of root causes are detrimental, instead recommending a cautious, iterative approach of testing hypotheses through action and maintaining flexibility in the face of uncertainty.
Join us for a compelling conversation filled with valuable insights and practical wisdom.
“Systems are an expression of an observer’s viewpoint and do not have independent, verifiable existence.”
Podcast Video
Members of the Cyb3rSyn Community can watch/discuss the podcast episode on the www.cyb3rsynlabs.com portal or the mobile app (iOS and Android).
Key Insights and My Reflections
The book and our discussion were packed with nuggets of insight that truly resonated with me, especially concerning how we approach complex challenges in business and beyond. Here are some of the useful insights from our chat…
Systems: Not Real, Just Mental Models
Adam and John open the book with a provocative question: What if systems don't exist? It sounds counterintuitive, right? Especially for those of us in business, where we often talk about "business capabilities" as defined systems. However, as they clarified, systems only exist as mental models we use to make sense of our world. The core idea is that thinking of systems as "real" can be unhelpful because it leads us to believe that what we see is THE reality (vs “a reality”).
Adam shared a perfect example from his experience with enterprise architects. When architects define a "business capability model" with hundreds of capabilities like "invoice management" and present it to the business, the business often responds with, "We don't care". Why? Because to the people doing the work, it's not an abstract "invoice management capability"; it's "John" or "Jack" who manages invoices. This is where reductionist, mechanistic thinking, the impulse to define things, often goes wrong. Once you define something, it limits discussion and ignores the "fuzzy edges" of reality. Describing what you observe, by contrast, leaves room for a shared understanding.
The Danger of Reductionist Thinking
This brings us to a crucial point about the dangers of reductionist thinking. Rory Sutherland calls it the "Doorman Fallacy," which I like to refer to as the "Silicon Valley syndrome". The idea is simple: if you ask what a doorman does, you might just say, "They open doors." You then think you can automate it with a revolving door, saving money on salary and benefits.
However, a doorman is much more than that – they provide security, hail cabs, carry luggage, engage in small talk. Without them, a hotel might not even be able to charge its premium rates, potentially wrecking the entire business. This is because reductionist thinking ignores the broader context and the nuanced value of human interaction and emergent roles.
This concept is beautifully illustrated by the old Indian tale of the blind men and the elephant. Each blind man touches a different part of the elephant and defines it based on their limited perspective. The problem isn't their partial signal, but their tendency to define what they feel rather than describe it, which shuts down conversation. As John pointed out, everyone has a different perspective – the CEO, CTO, doctor, nurse, janitor all see different parts of a healthcare system.
The only way to differentiate signal from noise is through honest, humble conversations across the organization. We must acknowledge that our representations are not always objective and are influenced by our background and biases. The blind men can move around the elephant, exploring it over time, and if they had eyes, they could step back and see relationships they couldn't perceive by touch alone. The key is to have quality conversations that embrace "both/and" thinking, accepting the validity of different perspectives, and respecting the expertise of everyone, even a doorman.
The Limits of Prediction
One of the big critiques John and Adam had for traditional Silicon Valley approaches is the obsession with quickly solving problems by breaking them down into parts and seeking a single "root cause". This approach, ingrained in our education, often ignores environmental influences and human complexity.
Adam likened complexity to a "swarm of bees". When you're in the middle of it, it's chaotic, but if you step back, you can see the edges. His approach to complexity is to "run away as far away as I can and then walk towards it," allowing important things to emerge and make sense. The rush to analysis and root cause often "precludes making sense of the thing you're looking at". It's about moving beyond "I've got pain, what will solve it?" to "is this really the problem, or a symptom?".
John further elaborated, explaining that traditional thinking often assumes a "clockwork universe" where we can predict the future if we know the initial conditions. However, nature is more like "weather systems" where noise and uncertainty are endemic, and we can only forecast so far into the future. Root cause analysis presumes a controllable future through managing parts, but a complex system requires humility about our ability to predict. The solution? Your models will always be uncertain, so they must be tested through action. We have to move forward, making decisions cautiously, accepting that we might be wrong, and building in flexibility. The problem with root cause thinking is "overconfidence in your models". As Adam pointed out, the higher the consequences of being wrong, such as in business transformations, the more likely you are to be wrong, and the statistics on business transformation failures are "absolutely horrible".
The Power of Synthesis
This leads us to the crucial distinction between the analysis mindset and the synthesis mindset. While our education, especially in STEM fields, often trains us as analytical thinkers, we need both.
Adam highlighted how businesses view change as something "over there," leading to "change projects" and eventually "transformation projects" that are like tsunamis destroying everything. This happens because organizations lack the "requisite variety" to adapt to environmental changes. The synthesis approach, by contrast, starts with "I don't know what the problem is" – turning it into a question rather than a statement. We gather information, form hypotheses, and then apply analysis. The core difference between system thinking and normal thinking is starting with "we don't know" versus "we know".
John provided excellent examples of synthesis in action. Consider text messaging. Early cell phones had the capability, but nobody imagined people would prefer texting over talking. Similarly, in military organizations, warfighters would duct-tape computers to equipment for chat capabilities because they valued it, even though researchers often found voice communication "more effective". The value of chat emerged because it protects the voice channel for truly important things and allows for non-invasive communication, like responding to a spouse about picking up bread without interrupting a meeting. This capability allows users more control.
Education, John argued, is primarily built around analysis, with synthesis often relegated to a "capstone design course" at the very end. He suggests inverting this, giving freshmen a complex problem on day one, like "imagine the potential of digital healthcare". This provides a context that makes learning analytical tools more meaningful, as students actively seek information to solve their problem, fostering creativity.
As Adam shared from his R&D experience, he would be handed a problem, and it was up to him to figure out how to solve it, starting with context and gathering information. John even cited a medical school experiment where students given patients from day one outperformed conventionally educated students on exams, suggesting that context-driven learning leads to better retention and application of knowledge.
Embracing Uncertainty & Adaptability
The "butterfly effect" beautifully illustrates the inherent unpredictability in complex systems – small differences can lead to vastly different outcomes. This directly challenges the traditional Silicon Valley approach that relies on modeling predictable returns and blindly chasing efficiency and "eternal growth".
Adam pointed out that "everything shifts around all the time," and the idea that we can predict the future, especially based on someone who "predicted right last time," is fundamentally flawed. He argues that much of what we attribute to foresight is often "luck and chance". Our economic models, based on perpetual GDP growth, are built on a "false assumption". Relying on those who profit from specific narratives, whether it's the promise of AI or the fear of it, means we're often sold stories. For organizations to plan effectively in the face of this uncertainty, they must move away from the belief that they can perfectly predict and control the future.
My conversation with Adam and John underscored that navigating complex situations requires a fundamental shift in mindset. We need to acknowledge that our "systems" are merely mental models, embrace humility in the face of unpredictable futures, and prioritize continuous conversation to build shared understanding. Rather than rushing to define and reduce, we should strive to describe and synthesize, allowing for fuzzy edges and emergent possibilities. This means fostering adaptability, testing hypotheses through action, and building in flexibility to course-correct when our models inevitably prove incomplete.
That’s it for this week. Stay tuned for more multidisciplinary insights in Part 2.