When Jos de Blok looked at Dutch home care, he saw a management model that had become part of the problem.
Home care nursing is not a tidy production process. Every patient brings a different mix of medical needs, family dynamics, living conditions, emotional realities, and sudden changes. Small signals matter and context changes fast. The people closest to the patient often hold crucial tacit knowledge that cannot be captured fully in a procedure manual or escalated up a chain of command in time to matter.
In Cynefin terms, this is a complex environment: too many interdependent variables interact for any centralized process to capture efficiently. But the system that de Blok had known as a nurse and later as a leader was built as if home care were merely complicated. It leaned on specialization, managerial oversight, and layers of coordination designed to create control. For this complex problem, those layers only compounded it.
Harvard Business School’s account of Buurtzorg notes that de Blok had seen “counterproductive layers of management” undermine care quality and frontline discretion. So he made a radical wager: if the work itself was complex, the answer was not more hierarchy. It was a different theory of leadership. Buurtzorg organized care around small self-managing neighborhood teams, with minimal middle management and a lean support structure. The center stopped trying to out-think the edges and started enabling them.
That story matters far beyond healthcare. It captures the mistake many organizations now risk making with AI: applying a top-down management model to challenges that are, at least in part, complex.
AI Is Not One Leadership Problem. It Is Two.
Most executive conversations about AI still assume a single challenge: implementation. Buy the tools, train the workforce, hire the experts, and move fast. But AI is creating at least two very different leadership problems.
Some AI problems are complicated. They require expertise, analysis, and disciplined systems. Think data architecture, cybersecurity, privacy, model evaluation, legal compliance, workflow redesign, and technical governance. These are not simple issues, but they are tractable. The right response is rigorous diagnosis, strong standards, and clear accountability. In Cynefin terms, leaders in this domain must sense, analyze, and respond.
Other AI problems are complex. How will customers behave when AI becomes embedded in products and services? Which use cases will create durable value rather than just attention? How should judgment be divided between humans and machines? What happens to culture when some employees trust AI deeply, others distrust it, and many use it informally out of management’s sight? Those are not problems that yield to a leadership memo. They require leaders to probe, sense, and respond.
This distinction sounds abstract until you see its consequences. If leaders treat a complicated problem as complex, they can drift into improvisation where rigor is required. But if they treat a complex problem as complicated, they over-centralize, over-standardize, and under-learn. That second mistake may be the defining leadership failure of the AI era.
The Shift From Answer-Giver to Context-Setter
For decades, many leaders rose by being decisive, analytical, and visibly in control. Those traits still matter. But in complex conditions, they are not enough. The leader who insists on having the answer too early can shut down the very learning the organization most needs.
This is where Buurtzorg offers such a powerful lesson. De Blok did not just become a more empathetic leader. He changed his model of what leadership is for. In a complex system, the leader’s job is to create the conditions in which good judgment can emerge throughout the system. That requires adopting a different mindset about authority.
In the complicated parts of AI, leaders should tighten standards, elevate expertise, and demand rigor. In the complex parts, they should widen participation, encourage small experiments, protect dissent, and reward learning. The critical leadership skill is knowing when to switch.
Why Swarm Intelligence Matters More Than Executive Certainty
Business leaders often talk about “empowering employees,” but complex problems demand something more precise: they demand systems that let intelligence emerge from many places.
Research by Anita Woolley and colleagues found evidence for a general collective intelligence factor in groups. Strikingly, group performance was not tied to the highest individual intelligence in the room. It was more closely associated with social sensitivity and with more equal conversational turn-taking. In practical terms, groups get smarter when more people can meaningfully contribute and when interaction patterns allow insight to surface, not just status to dominate.
That should provoke an uncomfortable question for senior leaders: what if your organization is full of intelligence that your culture cannot hear?
In complex AI environments, breakthrough insights often begin at the edges. A sales manager notices where customers actually trust the tool. A service employee spots a subtle failure mode. A product designer sees that the real opportunity is not automating the old workflow, but redesigning it entirely. A junior analyst challenges the executive team’s favorite use case and turns out to be right. In a complex environment, these become the raw material of strategy.
The organizations that learn fastest from AI will not be those with the most polished top-down vision. They will be those with the richest lateral sensing mechanisms: more experimentation, more challenge, more idea collisions, and more pathways for weak signals to travel upward and sideways.
Culture Is Your Operating Infrastructure
That is why culture cannot be treated as a side topic in AI transformation. Culture determines how well an organization learns.
Amy Edmondson’s research on psychological safety showed that teams learn more effectively when people believe the environment is safe for interpersonal risk-taking. In safe cultures, people speak up more and admit mistakes sooner. They raise concerns before problems metastasize. Psychological safety is associated with learning behavior because it lowers the social cost of candor.
Why does that matter in AI? Because AI adoption is full of ambiguity. Employees are constantly making judgment calls: when to trust the tool, when to override it, when to disclose its use, when to question the workflow, and when to challenge leadership’s assumptions. In a fearful culture, they will hide uncertainty, perform confidence, and quietly work around the system. In a learning culture, they will surface anomalies, share experiments, and improve the system in public.
Many organizations say they want innovation, but their incentives still reward obedience. They say they want initiative, but punish failed experiments. They say they want challenge, but subtly penalize people who question senior leaders. The result is not an innovation culture but a compliance culture.
Buurtzorg worked because the shift was structural, not rhetorical. Frontline teams did not merely get permission to speak up. They got real discretion. The system was redesigned around the reality that those closest to the patient were best positioned to respond to complexity.
What Leadership Looks Like in the AI Era
So what should leaders actually do?
First, diagnose the domain. Ask: is this AI challenge primarily complicated, complex, or a blend of both? That question should come before the org chart, the governance model, or the training plan.
Second, match the leadership response to the problem. In complicated domains, clarify ownership, concentrate expertise, and build strong review mechanisms. In complex domains, run more small experiments, widen participation, shorten feedback loops, and let the people closest to the work challenge assumptions early.
Third, redesign incentives around learning. You cannot build collective intelligence in a culture where dissent is risky and failure is career-limiting. If leaders want employees to behave like owners, the system must make it safe to notice, question, and improve.
Finally, rethink the role of middle management. In too many organizations, middle layers still function mainly as transmission belts for approval and control. But in a complex environment, the best middle managers help signals travel. They turn the organization into a smarter sensing system rather than a slower permission system.
The Leadership Advantage That Will Matter Most
The AI era will reward many familiar strengths: technical fluency, strategic clarity, disciplined execution. But over time, the most valuable advantage may be more subtle.
It will belong to leaders who can tell when expertise should dominate and when emergence should. Leaders who know when to act like engineers and when to act like gardeners. Leaders who understand that hierarchy is still useful, but not universally wise. Leaders who stop asking, “How do I get the organization to execute my answer?” and start asking, “How do I build an organization capable of discovering better answers than I could alone?”
That is the deeper lesson of Buurtzorg. Jos de Blok did not save a struggling system by becoming a more forceful commander. He succeeded because he recognized that in a complex human system, the smartest move is to increase the system’s capacity to learn.
AI now puts that same choice in front of every executive team. Some problems will still require experts, precision, and control. But many of the most consequential ones will require humility, experimentation, and trust in intelligence distributed throughout the organization. The companies that thrive will not just deploy better tools. They will build cultures where insight can rise from anywhere, where leadership adapts to the problem at hand, and where the search for the right answer matters more than protecting the illusion that it already lives at the top.
