Boosting AI’s Intelligence with Metacognitive Primitives

Over the past year or so, AI experts such as Ilya Sutskever, in his NeurIPS 2024 talk, have been raising concerns that AI reasoning might be hitting a wall. Simply throwing more data and computing power at the problem is yielding diminishing returns, and models are still struggling with complex reasoning tasks. Maybe it’s time to explore other facets of human reasoning and intelligence, rather than relying on sheer computational force.

At its core, a key part of human intelligence is our ability to pick out just the right information from our memories to help us solve the problem at hand. For instance, imagine a toddler seeing a puppy in a park. If they’ve never encountered a puppy before, they might feel a bit scared or unsure. But if they’ve seen their friend playing with their puppy, or watched their neighbors’ dogs, they can draw on those experiences and decide to go ahead and pet the new puppy. As we get older, we start doing this for much more intricate situations – we take ideas from one area and apply them to another when the patterns fit. In essence, we have a vast collection of knowledge (made up of information and experiences), and to solve a problem, we first need to identify the useful subset of that knowledge.

Think of current large language models (LLMs) as having absorbed the entire knowledge base of human-created artifacts: text, images, code, and even elements of audio and video through transcripts. Because they are essentially predictive engines trained to forecast the next word or “token,” the basic reasoning they exhibit emerges from statistical structures in the data rather than from deliberate thought. What has been truly remarkable about LLMs is just how far this extensive “knowledge layer” can go on statistical prediction alone.

Beyond this statistical stage of reasoning, prompting techniques, like assigning a specific role to the LLM, improve reasoning abilities even more. Intuitively speaking, they work because they help the LLM focus on the more relevant parts of its network or data, which in turn enhances the quality of the information it uses. More advanced strategies, such as Chain-of-Thought or Tree-of-Thoughts prompting, mirror human reasoning by guiding the LLM to use a more structured, multi-step approach to traverse its knowledge bank in more efficient ways. One way to think about these strategies is as higher-level approaches that dictate how to proceed. A fitting name for this level might be the Executive Strategy Layer – this is where the planning, exploration, self-checking, and control policies reside, much like the executive network in human brains.

However, it seems current research might be missing another layer: a middle layer of metacognitive primitives. Think of these as simple, reusable patterns of thought that can be called upon and combined to boost reasoning, no matter the topic. You could imagine it this way: while the executive strategy layer helps an AI break down a task into smaller steps, the metacognitive primitive layer makes sure each of those mini-steps is solved in the smartest way possible. This layer might involve asking the AI to find similarities or differences between two ideas, move between different levels of abstraction, connect distant concepts, or even look for counter-examples. These strategies go beyond just statistical prediction and offer new ways of thinking that act as building blocks for more complex reasoning. It’s quite likely that building this layer of thinking will significantly improve what the Executive Strategy Layer can achieve.
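
One way to make this layer concrete is to treat each primitive as a reusable prompt fragment that an executive strategy can select and compose per sub-step. The sketch below is purely illustrative: the primitive names, wording, and `compose_prompt` helper are assumptions for the sake of example, not an established framework or API.

```python
# Sketch: metacognitive primitives as reusable prompt fragments.
# All names and wording here are illustrative assumptions.

PRIMITIVES = {
    "compare_contrast": "Compare and contrast the two candidate ideas on their key dimensions.",
    "shift_abstraction": "Restate the problem one level more abstract, then one level more concrete.",
    "find_analogy": "Find an analogy from an unrelated domain whose structure matches this problem.",
    "counter_example": "Actively search for a counter-example that would falsify the current answer.",
}

def compose_prompt(task: str, primitive_names: list[str]) -> str:
    """Wrap a task with the selected metacognitive primitives.

    In the three-layer picture, the executive layer would choose which
    primitives to invoke for each sub-step; here we simply concatenate
    them as numbered instructions.
    """
    steps = [f"{i + 1}. {PRIMITIVES[name]}" for i, name in enumerate(primitive_names)]
    return f"Task: {task}\n\nBefore answering, work through:\n" + "\n".join(steps)

prompt = compose_prompt(
    "Design a bike-sharing system for a hilly city.",
    ["shift_abstraction", "find_analogy", "counter_example"],
)
print(prompt)
```

In practice, the selection of primitives would itself be a decision the executive layer makes, rather than a hard-coded list.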

To understand what these core metacognitive ideas might look like, it’s helpful to consider how we teach human intelligence. In schools, we don’t just teach facts; we also help students develop ways of thinking that they can use across many different subjects. For instance, Bloom’s revised taxonomy outlines levels of thinking, from simply remembering and understanding, all the way up to analyzing, evaluating, and creating. Similarly, Sternberg’s theory of successful intelligence combines analytical, creative, and practical abilities. Within each of these categories, there are simpler thought patterns. For example, smaller cognitive actions like “compare and contrast,” “change the level of abstraction,” or “find an analogy” play an important role in analytical and creative thinking.

The exact position of these thought patterns in a taxonomy is less important than making sure learners acquire these modes of thinking and can combine them in adaptable ways.

As an example, one primitive that is central to creative thinking is associative thinking — connecting two distant or unrelated concepts. In a study last year, we showed that by simply asking an LLM to incorporate a random concept, we could measurably increase the originality of its outputs across tasks like product design, storytelling, and marketing. In other words, by turning on a single primitive, we can actually change the kinds of ideas the model explores and make it more creative. We can make a similar argument for compare–contrast as a primitive that works across different subjects: by looking at important aspects and finding “surprising similarities or differences,” we might get better, more reasoned responses. As we standardize these kinds of primitives, we can combine them within higher-order strategies to achieve reasoning that is both more reliable and easier to understand.
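
The associative-thinking intervention described above is simple enough to sketch in a few lines: draw a random, unrelated concept and require the model to incorporate it. The concept list and prompt wording below are illustrative assumptions, not the exact materials from the study.

```python
import random

# Sketch of the associative-thinking primitive: inject a randomly
# drawn, unrelated concept into a generation prompt. The concept list
# and wording are illustrative assumptions.

CONCEPTS = ["lighthouse", "origami", "coral reef", "metronome", "glacier"]

def associative_prompt(task: str, rng=None) -> str:
    rng = rng or random.Random()
    concept = rng.choice(CONCEPTS)
    return (
        f"{task}\n\n"
        f"Constraint: meaningfully incorporate the concept '{concept}' "
        f"into your answer, even though it seems unrelated."
    )

print(associative_prompt("Propose a new feature for a note-taking app."))
```

The resulting prompt would then be passed to any LLM; the single added constraint is what nudges the model toward less obvious regions of its idea space.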

In summary, giving today’s AI systems a metacognitive-primitives layer—positioned between the knowledge base and the Executive Strategy Layer—might provide a practical way to achieve stronger reasoning. The knowledge layer provides the content; the primitives layer supplies the cognitive moves; and the executive layer plans, sequences, and monitors those moves. This three-part structure mirrors how human expertise develops: it’s not just about knowing more, or only planning better, but about having the right units of thought to analyze, evaluate, and create across various situations. If we give LLMs explicit access to these units, we can expect improvements in their ability to generalize, self-correct, be creative, and be more transparent, moving them from simply predicting text toward truly adaptive intelligence.

What Bees, Ants, and Fish Can Teach Us About Teaming

In today’s complex and rapidly evolving world, traditional hierarchical leadership models often fall short. What if we could learn from nature’s most efficient problem-solvers? Swarm intelligence, a fascinating area of study in biology and computer science, demonstrates how decentralized systems can achieve complex and effective decision-making without the need for a central authority or “CEO.” By observing the behaviors of social insects like honeybees and ants, and even schools of fish, we can uncover profound principles for fostering more agile, innovative, and resilient teams.

Nature’s Masterclasses in Collective Intelligence

Honeybee Swarms: The Art of Collective Deliberation

Imagine a bustling city of honeybees, thousands strong, looking for a new home. How they settle on a decision is a fascinating collective “debate.” First, individual scout bees explore potential nest sites and, when they find one, return to the swarm. There they perform a “waggle dance,” whose intensity, duration, and direction indicate the desirability of the location. But how do the bees choose between different locations?

To prevent premature consensus and ensure a thorough evaluation, honeybee swarms employ quorum thresholds. A significant number of scout bees must independently agree on a site before the swarm commits. Furthermore, the system incorporates “stop signals”—a form of cross-inhibition. If two equally attractive options emerge, scouts from one site might use stop signals to interrupt the waggle dances of those promoting the other. This intricate interplay of positive feedback (more waggle dances for a good site) and negative feedback (stop signals to resolve conflicts) allows for smarter, more robust decision-making.
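
The quorum mechanism can be sketched as a toy simulation: scouts are recruited to a site in proportion to its quality and the dancing already supporting it (positive feedback), and the swarm commits once any site crosses a quorum threshold. This is an illustrative caricature, not a biological model, and all the numbers are assumptions.

```python
import random

# Toy sketch of honeybee-style quorum sensing (illustrative, not a
# biological model). Recruitment probability grows with both site
# quality and existing support; commitment happens at a quorum.

def choose_site(qualities, quorum=20, seed=0):
    rng = random.Random(seed)
    supporters = [0] * len(qualities)
    while True:
        # An uncommitted scout is recruited to a site in proportion
        # to how strongly it is being danced for (quality * support).
        weights = [q * (1 + s) for q, s in zip(qualities, supporters)]
        site = rng.choices(range(len(qualities)), weights=weights)[0]
        supporters[site] += 1
        if supporters[site] >= quorum:
            return site

print(choose_site([0.9, 0.5, 0.3]))  # the best site usually wins
```

A fuller model would also include the cross-inhibitory “stop signals,” which subtract support from competing sites and break deadlocks between equally good options.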

Ant Colonies: The Power of Pheromone Trails

Ant colonies also demonstrate swarm intelligence in their foraging strategies. They navigate their environment and locate food sources using chemical communication through pheromone trails. When an ant discovers food, it lays down a pheromone trail on its return journey. Other ants encountering this trail are more likely to follow it, reinforcing the chemical signal in the process. This mechanism acts as a powerful form of positive feedback, amplifying promising paths to food sources. The more ants that use a particular trail, the stronger the pheromone concentration becomes, attracting even more ants.

But what about mistakes? The system also incorporates negative feedback through evaporation. Pheromones are volatile and naturally dissipate over time. If a trail leads to a dead end or a depleted food source, fewer ants will use it, and the pheromone will evaporate, effectively “pruning” mistakes. This constant amplification of successful paths and the gradual decay of inefficient ones allows ant colonies to efficiently explore their environment and adapt to changing conditions.
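
The two feedback mechanisms described above, deposition and evaporation, are exactly what ant-colony-optimization algorithms borrow. A minimal sketch of the pheromone update rule (not a full ACO implementation; the rate constants are arbitrary assumptions):

```python
# Toy sketch of the pheromone dynamics described above: deposits
# reinforce trails ants actually use, while evaporation prunes the
# rest. Evaporation and deposit rates are arbitrary assumptions.

def update_trails(pheromone, ants_per_trail, evaporation=0.2, deposit=1.0):
    """One time step: evaporate every trail, then add deposit for
    each ant that used it."""
    return {
        trail: (1 - evaporation) * level + deposit * ants_per_trail.get(trail, 0)
        for trail, level in pheromone.items()
    }

trails = {"short_path": 1.0, "dead_end": 1.0}
for _ in range(10):
    # The short path keeps attracting ants; the dead end gets none.
    trails = update_trails(trails, {"short_path": 3})

print(trails)  # short_path is amplified; dead_end decays toward zero
```

Iterating this rule is what lets the colony converge on efficient routes without any ant holding a map: the trail network itself is the shared memory.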

Fish Schools: The Wisdom of the “Uninformed”

In their groundbreaking research on animal collectives, Iain Couzin’s group found a fascinating paradox: while a small, informed minority can indeed guide an entire group, this influence is remarkably susceptible to the presence of “uninformed” individuals. These individuals, who lack strong pre-existing biases or firm convictions, play a crucial role in dampening polarization and fostering a return to democratic consensus. Essentially, their unbiased perspective acts as a counterweight, preventing the group from being unduly swayed or “captured” by the vocal and sometimes extreme views of a passionate minority.

Leadership Principles for Harnessing Collective Intelligence

So, how can leaders apply these natural phenomena to cultivate high-performing teams?

  • Decentralized Control: Unlike traditional hierarchical structures, swarm intelligence thrives on the absence of a single, central command. Decisions and actions are distributed among individual agents, empowering them to respond dynamically to local conditions.
    • For Leaders: Foster autonomy and push decision-making authority closer to the point of action. Trust teams and individuals to self-organize and adapt. This reduces bottlenecks, increases responsiveness, and leverages diverse perspectives.
  • Self-Organization: Swarm systems spontaneously form coherent structures and exhibit complex behaviors through simple interactions between individual agents. There’s no master plan dictated from above; rather, patterns emerge from the bottom up.
    • For Leaders: Clearly articulate overarching goals and boundaries, then step back to allow teams to define their own processes and solutions. This encourages emergent creativity and a sense of ownership.
  • Communication: Effective communication, often indirect and localized, is vital for swarm intelligence. Information flows through interactions, allowing agents to adjust their behavior based on the actions of their neighbors.
    • For Leaders: Emphasize transparent information sharing, create channels for open dialogue, and foster a culture where feedback is actively solicited and shared. Enable peer-to-peer interactions and information dissemination that can influence collective behavior.
  • Strategic Use of Positive and Negative Feedback Loops: Swarms leverage both positive feedback to amplify successful behaviors and negative feedback to correct deviations and maintain stability. This continuous learning mechanism allows the system to adapt and optimize.
    • For Leaders: Establish mechanisms that make it easy to amplify promising ideas and just as easy to unwind bad ones. Celebrate successes, recognize and reward innovative approaches, and crucially, create a safe environment where failures are viewed as learning opportunities rather than punitive events.
  • Counteracting Tunnel Vision: Swarms maintain democratic decision-making and prevent a small minority from exerting undue influence by incorporating a few “uninformed” individuals who redirect power to the collective.
    • For Leaders: Introduce individuals who are less invested in existing paradigms or solutions to foster a more robust and truly democratic decision-making process. These “fresh eyes” are unburdened by historical context or emotional attachments and are more likely to ask fundamental questions, spot inconsistencies, and propose truly novel approaches.

By understanding and applying the principles of swarm intelligence, leaders can build teams that are not only more efficient and adaptable but also inherently more innovative and resilient in the face of modern challenges. The answers to complex organizational problems might just be found in the collective wisdom of a bee swarm, an ant colony, or a school of fish.


Five Traits That Shape the Entrepreneurial Mindset

“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.” — George Bernard Shaw

In today’s fast-changing world, where AI is not only automating tasks but also reshaping entire industries, innovation has become a necessity. Yet innovation doesn’t emerge from processes alone. It emerges from people. More specifically, it emerges from people who consistently challenge the status quo, spot unseen opportunities, and act boldly to create something new. These are the people who exhibit the entrepreneurial mindset.

Contrary to popular belief, you don’t have to start a company to be entrepreneurial. Within every team, product group, and division of an organization, there are individuals who think like entrepreneurs. They act like internal catalysts — pushing boundaries, taking initiative, and turning ideas into reality. But what exactly distinguishes them? What traits give rise to this mindset?

In this article we look at five specific traits that map closely to the entrepreneurial mindset and fuel innovation, especially in uncertain environments. As AI continues to blur the boundaries between human and machine capabilities, hiring creative and entrepreneurial people creates the differentiating factor for companies.

Openness to Experience: Seeing Possibilities Others Miss

Openness to experience, one of the Big Five personality traits, describes a person’s receptivity to new ideas, perspectives, and experiences. People high in this trait tend to be imaginative, curious, and comfortable with ambiguity.

This openness is a key driver of innovation. It enables individuals to see beyond the obvious, connect dots across disparate domains, and entertain possibilities that others dismiss. A meta-analysis found openness to be one of the strongest personality predictors of entrepreneurial intentions and success. In essence, openness expands your aperture. It increases the range of what you consider possible.

In the workplace, these are the team members who explore emerging technologies, ask “what if?” questions, and propose ideas that seem unconventional at first. They are more likely to perceive AI not as a threat, but as a canvas for experimentation. And in a world being rapidly redefined by AI, such openness is vital for reinventing products, services, and business models.

Proactive Personality: Acting Before the Opportunity Knocks

If openness helps you see opportunities, proactivity helps you act on them. Proactive individuals don’t wait for permission. They initiate change, seek out problems to solve, and constantly look for ways to improve systems around them.

In the context of entrepreneurship, proactivity has been strongly linked with opportunity recognition and business creation. A research study on college students showed that proactive individuals were significantly more likely to identify and pursue entrepreneurial opportunities. 

Within organizations, proactive employees are often the first to spot gaps, propose new initiatives, or pilot AI tools to automate repetitive tasks. They aren’t satisfied with maintaining the status quo. This mindset is especially critical now, as AI is transforming workflows and unlocking new capabilities. 

Willingness to Take (Social) Risks: Daring to Be Different

Risk-taking is often associated with entrepreneurship, but not all risk is created equal. In the workplace, one of the most important forms is social risk-taking: the willingness to propose a controversial idea, speak up against consensus, or pursue a project that might fail publicly.

Entrepreneurs tend to score higher in social risk-taking compared to non-entrepreneurs. Why? Because innovation requires deviation from the norm. It involves challenging established practices, questioning “how things are done,” and putting one’s reputation on the line for a new idea.

In traditional environments, this kind of risk-taking can be seen as troublemaking. But in innovative cultures, it’s a signal of leadership. Especially now, as companies grapple with how to responsibly and creatively integrate AI into their operations, those who are willing to push boundaries and test new approaches are essential. Without this trait, organizations default to caution, and in a fast-moving landscape, caution can become a liability.

Curiosity: The Engine of Discovery

Curiosity is the urge to explore, ask questions, and seek out new information. It’s a cognitive and emotional driver that powers learning and adaptability.

Recent studies have shown that curiosity is a strong predictor of entrepreneurial alertness — the ability to notice opportunities that others miss. Heinemann et al. found that epistemic curiosity (a desire for knowledge) was even more predictive of entrepreneurial outcomes than openness to experience. Curious individuals actively scan the horizon, connect ideas across domains, and pursue learning for its own sake.

In the age of AI, where the pace of technological change can be overwhelming, curiosity serves as an antidote to stagnation. Curious individuals experiment with new tools, explore how machine learning might apply to their field, and continually expand their mental models. 

Resilience: Turning Setbacks Into Fuel

Perhaps no trait is more essential to the entrepreneurial mindset than resilience. Innovation is a messy process. Failure is common. Ideas flop, tools break, people resist. The key is not avoiding failure but rebounding from it.

Resilience is the capacity to absorb stress, recover from setbacks, and maintain focus on long-term goals. Research shows that the three dimensions of resilience (hardiness, resourcefulness and optimism) help to predict entrepreneurial success, with resourcefulness being the most salient. 

This mindset is particularly relevant in a volatile AI-driven environment. As new tools replace old workflows and value chains shift, many teams will face ambiguity, reorganization, and failed experiments. Resilient individuals are more likely to adapt, find new paths, and view challenges as temporary detours rather than dead ends. They persist not because success is guaranteed, but because they believe it’s possible.

Why These Traits Matter More Now Than Ever

These traits — openness, proactivity, risk tolerance, curiosity, and resilience — have long been associated with entrepreneurs. But today, they are no longer limited to founders. In a landscape being redefined by artificial intelligence, every individual contributor and team leader needs to tap into this entrepreneurial mindset.

Why?

Because AI is not just another tool. It’s a fundamental shift in how work gets done. Roles are changing, hierarchies are flattening, and traditional competitive advantages are being eroded. In this new environment, those who can recognize change early, adapt quickly, and innovate boldly will define the future of work.

And here’s the good news: these traits are not fixed. While some people may naturally exhibit them, organizations can cultivate them through intentional design. Encouraging experimentation, rewarding initiative, providing psychological safety, and investing in learning are just a few ways to nurture the entrepreneurial spirit within teams.

As the workplace continues to evolve, the most valuable employees won’t just be the most skilled or the most efficient. They’ll be the most entrepreneurial — the ones with the vision to imagine what’s possible, the courage to pursue it, and the resilience to see it through.

How Generative AI is Reshaping the Future of Tech Work

Every disruptive tool in the history of technology has reshaped not just what we work on, but how we work. The advent of cloud computing didn’t just speed up software delivery—it transformed the entire product mindset. Companies moved from slow, waterfall models to agile, continuous delivery of services. Speed, iteration, and customer responsiveness became the new north stars.

Today, generative AI is prompting a similar reckoning. Its ability to produce code, content, and prototypes at lightning speed forces us to ask: What does meaningful work look like in an era where execution is cheap and near-instant? How do we organize for innovation when the tools themselves are evolving daily?

To answer these questions, we need to rethink how teams are built, how cultures are shaped, and how success is measured. The future of work is less about automating tasks than about reconfiguring human work for a landscape where ideation, experimentation, and adaptability are the new competitive advantages.

To navigate this next frontier, we need to understand the major trends reshaping the future of work in technology.

Trends

The Cost of Execution is Plummeting

Just a few years ago, building a minimum viable product (MVP) required a team of developers, designers, and weeks (if not months) of effort. Today, a capable generalist with access to tools like GitHub Copilot can spin up a working prototype in hours.

This shift is quantifiable. GitHub’s 2024 productivity report showed developers using AI coding tools completed tasks 55% faster, with higher focus and reduced mental fatigue. MIT and Microsoft researchers found similar results: a 56% speed increase when software engineers used AI as a pair programmer.

But as execution becomes commoditized, it ceases to be a differentiator. What matters now is what you build, why it matters, and how quickly you can learn from real users. Competitive advantage is shifting from efficiency to experimentation and product-market fit.

In short: in a world where everyone can build fast, those who explore better will win.

We’re Still in Exploration Mode

Despite the excitement, generative AI is still far from plug-and-play. Integrating these tools into real-world business workflows is messy, expensive, and often unreliable. And while AI is great at generating content or code, it’s still brittle when it comes to reasoning, context, or strategy.

The so-called “killer apps” of generative AI—the ones that will reshape entire industries—haven’t yet arrived. According to McKinsey’s 2024 report on GenAI, only about 10% of organizations report significant value from GenAI, and many pilots are failing to scale due to unclear ROI and integration challenges.

This places us squarely in the exploration phase of innovation. It’s tempting to force AI into existing processes, expecting predictable outputs. But the real opportunity lies in experimenting, probing new use cases, and embracing ambiguity.

Exploration is no longer a “nice to have.” It’s become essential. And organizations must build the capacity to explore without immediate payoff if they want to discover the next big thing.

Pressure to Innovate is Rising

All of this is happening against a backdrop of increased volatility: shifting customer expectations, economic uncertainty, and rapid technology cycles. Leaders are feeling the squeeze—needing to innovate faster while also managing risk.

But here’s the paradox: too much pressure can kill innovation. Research from Teresa Amabile has shown that high-pressure environments oriented around extrinsic rewards tend to suppress creativity. People become more risk-averse, less exploratory, and more focused on pleasing stakeholders than experimenting with new ideas.

To survive and thrive, tech companies must shift their mindset from optimization to experimentation, from managing work to designing conditions for innovation.

This leads us to the second core pillar of the future of work: how we organize and empower people to harness collaborative intelligence.

Building Blocks of Collaborative Intelligence

The old myth of the lone genius persists in tech, but it’s increasingly out of step with today’s reality. Today’s problems, such as ethical AI, climate tech, and platform trust, are inherently complex. Solving them requires multiple perspectives, disciplines, and heuristics. No single individual, no matter how brilliant, can fully grasp the nuance alone.

Scott Page’s research on cognitive diversity shows that heterogeneous teams consistently outperform homogeneous ones when tackling non-routine, complex tasks. Diverse thinkers bring different models, biases, and blind spots, which, when managed well, lead to better problem-solving.

But this collaborative intelligence doesn’t happen by accident. It requires the right mix of people, culture, and incentives. Let’s break that down.

People

In this new world, depth of expertise isn’t enough. What’s needed are T-shaped individuals—people who possess deep expertise in a specific area (the vertical bar of the “T”), but also broad skills and curiosity that allow them to collaborate across domains (the horizontal bar).

These individuals are connectors, translators, and creative synthesizers. They’re engineers who understand user research, product managers who code, designers who analyze data. They can shift gears from deep work to cross-functional problem-solving with ease.

IDEO, which helped popularize the concept, found T-shaped people to be central to high-performing innovation teams. And organizational research confirms this: T-shaped professionals are more adaptable, more comfortable with ambiguity, and better at generating creative solutions in multidisciplinary settings.

Hiring for T-shaped talent builds not just execution capacity, but also resilience and adaptability.

Culture

Culture is the invisible force that either enables or crushes innovation. Yet many companies cling to outdated models of top-down hierarchies, rigid approval systems, and fear-based management.

To foster exploration, cultures must be reengineered around the principles of Self-Determination Theory (SDT), developed by psychologists Edward Deci and Richard Ryan. According to SDT, people are most intrinsically motivated, and therefore most engaged and creative, when three core psychological needs are met:

  • Autonomy: The feeling that one can direct their own work and make meaningful choices.
  • Competence: The sense of being capable and growing in one’s abilities.
  • Relatedness: Feeling connected to others and contributing to something larger.

A culture rooted in SDT doesn’t just produce happier employees. It produces better ideas.

Incentives

Traditional incentive structures—performance bonuses, individual KPIs, stack rankings—are optimized for predictability and efficiency. They reward execution, not experimentation.

But as research from both Deci & Ryan and Amabile shows, extrinsic rewards often undermine intrinsic motivation, particularly for creative work. When people work only for outcomes, they become risk-averse. They choose the safe path, not the inventive one.

To build a future-ready organization, leaders must rethink what they reward:

  • Celebrate collaboration, not just individual brilliance.
  • Reward learning, even when projects fail.
  • Make space for intrinsic goals, like mastery, curiosity, and purpose.

Shifting incentives in this way doesn’t mean abandoning accountability—it means realigning it with innovation.

Final Thoughts

AI is shifting our focus from how we execute to how we explore, learn, and adapt. In this new landscape, competitive advantage will belong not to those who can scale fastest, but to those who can reimagine the way teams think, build, and evolve together.

This requires more than new tech—it requires reconfiguring the foundations of work.

In the next part of this blog, we’ll explore how to redesign teams for this future. The future of work is not a question of whether we change, but how intentionally we do so. 

Leading When You Don’t Know: The Power of Negative Capability 

When New Zealand faced the first wave of COVID-19, Prime Minister Jacinda Ardern didn’t rush to over-promise or posture certainty. Instead, she leaned into transparency, regularly updating citizens with what was known—and, critically, what wasn’t. Her leadership was marked not by decisive bravado, but by a calm willingness to wait, listen, and act when the path became clearer.

This isn’t just a story of pandemic response. It’s an example of leadership at the edge—where accumulated knowledge and traditional decision-making frameworks fall short. In such moments, what matters is not just what a leader knows, but their ability to hold space for not knowing.

This is where the concept of Negative Capability becomes both urgent and transformative.

What Is Negative Capability?

First coined by Romantic poet John Keats, Negative Capability describes the capacity to remain “in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason.” While Keats was reflecting on literary genius, modern scholars and leadership theorists have found in his words a valuable metaphor for navigating complexity and uncertainty.

In leadership, Negative Capability refers to the ability to tolerate ambiguity, suspend judgment, and resist the impulse to impose premature certainty—especially when the stakes are high and the path forward unclear. It is a form of reflective inaction—the deliberate choice to pause, absorb, and wait for the right insight, rather than react defensively or default to what has worked before.

Why Leaders Struggle with Negative Capability

Leadership, especially in Western corporate culture, is often measured by decisiveness, clarity, and confidence. The leader is expected to know, to act, and to inspire trust through their ability to lead from the front. Traditional leadership development prioritizes “positive capabilities”—attributes like visioning, planning, and execution. These are vital in stable environments.

But what happens when the environment is not stable? When the actors are unfamiliar, the rules have changed, and the old playbook no longer applies?

In today’s VUCA world—marked by volatility, uncertainty, complexity, and ambiguity—leadership often unfolds in “radical uncertainty.” Here, the demand to act collides with the reality that we simply don’t yet know the right strategy. Leaders face a paradox: the very qualities that earned them their positions—experience, expertise, confidence—can become liabilities when they keep them from holding back long enough to sense what is really needed.

The Costs of Premature Action

Consider a common scenario: a tech company begins to lose market share to a disruptive competitor. The board demands a turnaround strategy. The CEO, feeling the weight of expectation, announces a reorganization, lays off staff, and pivots the product line. Six months later, nothing has improved. Why?

Because the leader responded with positive capability—decisive action—before taking the time to understand the deeper dynamics at play: shifting customer expectations, employee morale, and the subtleties of emerging technology trends.

In contrast, a leader drawing on Negative Capability would have paused to reflect more deeply. They might have resisted the urge to act immediately, choosing instead to convene diverse voices, sense the complexity of the situation, and consider new possibilities. This is not indecision—it’s discipline.

Negative Capability in Action: Practical Strategies for Leaders

So how can leaders cultivate Negative Capability? Here are a few grounded strategies:

1. Practice the “Pause”

Create structured pauses in your decision-making process. Before responding to a crisis or making a strategic pivot, ask yourself: What if I waited just a little longer? Create a discipline of pausing, not just for analysis, but for reflection—cognitively and emotionally.

“Don’t just do something, stand there.” — White Rabbit in Alice in Wonderland

2. Adopt a Meta-Perspective

When immersed in a high-stakes situation, practice the “balcony view” — observe yourself and the system neutrally, like looking down from above. What patterns emerge? Who’s reacting from fear or habit? What isn’t being said? This neutral observation disrupts automatic responses and allows for deeper insight.

3. Create Containers for Not-Knowing

Establish spaces—retreats, strategy offsites, or peer dialogue groups—where not knowing is acceptable. Frame these sessions as opportunities to explore complexity rather than solve problems. Psychological safety is key here; people must feel free to admit uncertainty without fear of appearing weak.

4. Normalize Ambiguity in Leadership Culture

Shift your team’s expectations. Instead of always seeking “quick wins,” model tolerance for ambiguity. Share your own moments of uncertainty and how you worked through them. This humanizes leadership and builds collective resilience.

5. Balance Positive and Negative Capabilities

Negative Capability is not the absence of action—it is the capacity to wait until the right action reveals itself. Leadership is often about knowing when to hold back, and when to move decisively. Mastery lies in balancing these twin forces.

Final Thoughts: Leading into the Unknown

We live in an era where no amount of experience can guarantee the right answer, and where the illusion of control is constantly being shattered by unpredictable change. In such times, perhaps the most courageous act of leadership is not to speak, but to listen. Not to act, but to reflect. Not to know, but to stay with the not-knowing.

Negative Capability is not a replacement for action-oriented leadership—it’s the precondition for wise action in uncertain times. It invites us to become more attuned to the present moment, more accepting of ambiguity, and more open to emergence.

Because sometimes, the answer doesn’t come from what you do next. It comes from what you don’t do yet.