Using Futures Thinking To Navigate Disruptive Shifts

After the release of the world’s first mass-market ‘personal navigation device’ in 2004, TomTom’s stock soared, reaching a peak of $65/share in 2007. However, the landscape shifted dramatically when Google launched its Maps app in 2007, effectively turning every smartphone into a personal navigation device. Within two years, TomTom’s stock fell below $3. While TomTom eventually pivoted and changed its business model, its initial dismissal of the threat posed by smartphone-based navigation apps was very costly. The failure was not due to a lack of technological capability but rather a lack of strategic foresight to anticipate and adapt to disruptive technology.

Innovation is not just about responding to current needs but also anticipating and preparing for disruptive shifts, like the rise of generative AI. However, preparing for disruptive shifts is a very tricky problem because the mental models we use to make decisions in normal, predictive times don’t work in disruptive times. Product planning in such times has to go beyond predicting the future (which often turns out to be incorrect), to planning for multiple plausible scenarios. This is where futures thinking can be a useful tool. 

Futures thinking is a strategic approach to uncovering multiple possibilities, with the aim of creating a preferred future. It ties closely to innovation – once we identify a desirable (and plausible) future, we have a clearer roadmap of the problems we need to solve in order to reach that future. 

So how do you develop futures thinking? It starts with figuring out (or even deciding on) a vision for the future and then understanding what forces enable or thwart that vision. Depending on how likely certain trends are and how important they are to a particular vision, one can arrive at plausible scenarios of the future. These scenarios can then guide what kinds of ideas and products to invest in.

Future Archetypes

When it comes to our vision of the future, we all hold one or more common archetypes that dominate how we imagine what lies ahead. Below are five common future archetypes viewed through the lens of AI:

  • Progress: A tech-driven world with humans at the center, emphasizing rationality and innovation. AI in this future enhances human productivity, creativity, and decision-making.  
  • Collapse: A darker vision where AI exacerbates inequality, destabilizes jobs, and concentrates power, pushing society to a breaking point.
  • Gaia: A partnership-driven future where AI helps repair damage to the planet and fosters inclusive, harmonious systems between humans, nature, and technology.
  • Globalism: A borderless, interconnected future where AI powers collaboration across economies and cultures, breaking down barriers to knowledge, trade, and innovation.
  • Back to the Future: Nostalgia for simpler times, where AI’s rapid advancements are rejected in favor of human-centered, low-tech solutions to protect societal stability.

Trends

There are several key market, technology and social trends that impact the development of AI in both positive and negative ways. Here are a few sample trends:

  • Technology Improvements: The AI hardware market, encompassing GPUs and specialized AI accelerators, is projected to grow significantly, indicating continued growth in computational power. AI models are expected to keep improving, with enhanced reasoning skills and capabilities.
  • Regulation Focus: The number of AI-related regulations in the U.S. has risen significantly, with 25 AI-related regulations in 2023, up from just one in 2016, reflecting a growing focus on responsible AI development.
  • Computational Costs: Training large AI models is resource-intensive, requiring significant computational power, energy, and financial investment. OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.
  • Public Trust: People and companies may be hesitant to adopt AI due to various reasons like biased algorithms, privacy concerns, or fear of widespread job losses. 
  • AI Investment: Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion, highlighting increased investment in AI technologies.

Each of these trends can either contribute to a future or become a barrier to it. The likelihood of each trend plays a part in determining which futures are more plausible.

Future Scenario Planning

To determine plausible future scenarios, leaders need to evaluate trends against each vision of the future. For example, consider the future archetype of “progress” where AI leads to greater productivity and innovation. Improvements in technology, like better models or better GPUs, clearly push us towards this future. However, issues like algorithmic biases or security concerns can erode trust in the technology and slow down adoption rates. If this is a preferred vision of the future, then actively addressing these barriers during the development process can ensure that we keep marching in the right direction. While this was an overly simplified example, a more thorough analysis that incorporates additional relevant trends can start to reveal the plausibility of different scenarios.  
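To make this kind of analysis concrete, here is a minimal sketch of a trend-versus-archetype scoring matrix. All trend names, archetypes, likelihoods, and impact weights below are illustrative assumptions rather than figures from this chapter; the point is simply that once trends are scored against each archetype, a rough sense of relative plausibility falls out of a weighted sum.

```python
# Minimal sketch of a trend-vs-archetype scoring matrix.
# All trends, archetypes, and weights are hypothetical placeholders.

# Estimated likelihood that each trend materializes (0 = unlikely, 1 = near certain)
trend_likelihood = {
    "better_models_and_gpus": 0.9,
    "stricter_ai_regulation": 0.7,
    "rising_compute_costs": 0.6,
    "low_public_trust": 0.5,
}

# How strongly each trend pushes toward (+) or away from (-) an archetype
impact = {
    "progress": {"better_models_and_gpus": +1.0, "low_public_trust": -0.8,
                 "rising_compute_costs": -0.3, "stricter_ai_regulation": -0.2},
    "collapse": {"low_public_trust": +0.7, "rising_compute_costs": +0.4,
                 "better_models_and_gpus": +0.2, "stricter_ai_regulation": -0.5},
}

def plausibility(archetype: str) -> float:
    """Weighted sum of (trend likelihood x directional impact) for one archetype."""
    return sum(trend_likelihood[t] * w for t, w in impact[archetype].items())

for name in impact:
    print(f"{name}: {plausibility(name):+.2f}")
```

A higher score suggests the trends, taken together, are pushing toward that archetype; a negative score suggests they are working against it. The real value of the exercise is less the number itself than the discussion it forces about which trends matter and in which direction they cut.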

One challenge in future scenario planning is that, given the complex nature of the problem, there is no good way to accurately determine the likelihood of each trend and its contribution to each future archetype. This is where swarm intelligence might be useful. Groups of people are often better at prediction than individual experts. Training employees on futures thinking and tapping their unique individual insights might provide better signals about which scenarios are more likely to play out, as in the simple aggregation sketch below.
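As a rough illustration of the swarm-intelligence idea, the snippet below aggregates individual likelihood estimates for a single trend into a group signal. The numbers are made up, and a median with the spread as a disagreement indicator is only one of many reasonable ways to combine such estimates.

```python
import statistics

# Hypothetical likelihood estimates (0-1) for one trend, collected from
# employees trained in futures thinking; the values are illustrative only.
estimates = [0.4, 0.55, 0.6, 0.35, 0.7, 0.5, 0.45]

# The median is robust to a few extreme opinions; the standard deviation
# hints at how much the group disagrees (high spread = weak signal).
group_estimate = statistics.median(estimates)
disagreement = statistics.stdev(estimates)

print(f"group estimate: {group_estimate:.2f}, disagreement: {disagreement:.2f}")
```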

As artificial intelligence redefines industries, businesses must integrate strategic foresight into their innovation frameworks to thrive in uncertain and fast-changing landscapes. By explicitly using future archetypes and integrating them with current and expected trends, we can start to identify what scenarios are most likely to play out in the future. These scenarios can then help create more efficient product innovation roadmaps.  

Organizational Play: The Unexpected Path to Radical Innovation

In a captivating TED talk, author Steven Johnson illustrates how the invention of a seemingly simple flute by cavemen 40,000 years ago unexpectedly paved the way for the development of the modern computer. A series of inventions—music boxes, toy robots, and the like—initially perceived as mere amusements, laid the groundwork for innovations that would ultimately revolutionize entire industries. As he explains, “Necessity isn’t always the mother of invention. The playful state of mind is fundamentally exploratory, seeking out new possibilities in the world around us. And that seeking is why so many experiences that started with simple delight and amusement eventually led us to profound breakthroughs.”

Yet the power of play as a catalyst for innovation is often overlooked in the corporate world. When most companies think about innovation, their first instinct is to lean heavily on market research, analyze customer feedback, and refine processes to meet immediate needs. However, this approach often leads to incremental improvements rather than groundbreaking advancements.

The Rise of AI and the Need for Transformational Innovation

With the proliferation of generative AI in recent years, most companies are adopting AI in their workflows, leading to more efficient or more capable product offerings (barring current AI challenges like hallucinations). But the more transformational breakthroughs — the killer apps — remain elusive.

The challenge lies in the fact that most company structures and processes are geared toward incremental innovation. Trying to squeeze out transformational innovation from an organizational machinery fine-tuned for incrementalism is a difficult task. The very elements that enable transformative breakthroughs are often the exact opposite of what most companies are structured for.

The Unpredictability of Transformational Ideas

Transformational ideas are inherently difficult to predict, even by the people who might have worked on an earlier, related problem. When the Musa brothers in Baghdad made the first programmable music box, using interchangeable metal cylinders to encode music, they could not have predicted that their invention would later inspire the French inventor, Jacques de Vaucanson, to use the same mechanism to create programmable looms. This unexpected connection highlights the unpredictable nature of innovation.

In other words, companies that rely solely on predicting the next big thing will likely miss out on creating breakthrough ideas. Their existing processes, which often rely heavily on customer feedback and market research, can only lead to predictable improvements on existing products.

Solutions in Search of Problems

Many offbeat ideas are quickly shut down with questions like “What is the customer pain point?” or dismissals like “This looks like a solution in search of a problem.” While this feedback is relevant when the goal is to evolve existing products, it can stifle radical innovation. Most radical ideas are actually the reverse – they are solutions looking for problems.

Consider the invention of the sticky note. Spencer Silver, a scientist at 3M, was working on making a super-strong adhesive but accidentally ended up creating a weak adhesive that could be peeled off easily and was reusable. He was intrigued by this new adhesive and spent several years giving seminars and talking to people within the company to find ways to commercialize it, but couldn’t find a good use. It wasn’t until another colleague, Art Fry, recognized the potential of the adhesive as a bookmark that the sticky note was born. This example illustrates the importance of recognizing the potential of solutions even before a clear problem has been identified.

Instead of relying solely on customer feedback, companies need a different approach to evaluate solutions that don’t yet have a well-defined problem. One effective strategy is to focus on complexity: if a solution is non-trivial and required overcoming significant challenges, it could be a good candidate for creating a future competitive advantage.

The Exploration vs. Exploitation Mindset

To thrive in their environments, most animals toggle between exploration and exploitation. Exploration, akin to play, empowers animals to uncover new possibilities and problem-solving approaches. Research suggests that animals engaging in more play are often better prepared to adapt to environmental challenges, boosting their survival and reproductive prospects.

However, when faced with threats, exploration can be risky. In such situations, we tend to resort to using our existing knowledge to solve the immediate problem. This natural tendency highlights the tension between exploration and exploitation.

This biological function of exploration—stepping away from immediate survival needs to tinker with novel ideas—closely aligns with what companies must do to thrive in an uncertain and competitive landscape. Play allows us to probe beyond the “local optima,” or safe solutions, to discover entirely new paradigms. 

However, the conditions under which successful exploration occurs are very different from the exploitation phase. Exploration works best when people feel safe and supported in their environment. A playful, low-stress environment is essential for serious play to thrive.

A New Path For Radical Innovation

Successfully balancing transformational and incremental innovation is a persistent challenge for organizations. While many companies excel at incremental innovation, they often struggle to cultivate an environment that fosters radical breakthroughs. Existing approaches, such as corporate hackathons or Google’s “20% time,” have often fallen short of their intended goals. These initiatives, while valuable in promoting experimentation, often fail to fully embrace the conditions necessary for deep exploration.

What if we could create a new model, an “innovation sabbatical,” for radical innovation to thrive? Imagine an extended period, lasting 2-3 months each year, where employees get a dedicated space for radical exploration, free from the daily demands of their regular work. This would act as an extended hackathon, but with a crucial difference: it would be designed to foster a distinct culture that prioritizes deep exploration. Within this sabbatical environment, traditional hierarchies would fade, replaced by an emphasis on collaboration and a playful, low-stress atmosphere. Evaluation would shift away from immediate business needs, focusing instead on the ingenuity and complexity of the ideas generated.

The goal with an innovation sabbatical is not for companies to predict the next big idea but to create an additional pathway where transformational ideas get a chance to flourish by ensuring that resources, incentives and motivation are all aligned in the right way.  

Labels and Fables: How Our Brains Learn

One of the most remarkable capabilities of the human brain is its ability to categorize objects, even those that bear little visual resemblance to one another. It is easy to see how visually similar objects, like different kinds of trees, fit into a category, and this is a skill that non-human animals also possess. For example, dogs show distinct behaviors in the presence of other dogs compared to their interactions with humans, demonstrating that they can differentiate the two even if they don’t have names for them.

A fascinating study explored whether infants are able to form categories for dissimilar-looking objects. Researchers presented ten-month-old infants with a variety of dissimilar objects, ranging from animal-like toys to cylinders adorned with colorful beads and rectangles covered in foam flowers, each accompanied by a unique, made-up name like “wug” or “dak.” Despite the objects’ visual diversity, the infants demonstrated an ability to discern patterns. When presented with objects sharing the same made-up name, regardless of their appearance, infants expected a consistent sound. Conversely, objects with different names were expected to produce different sounds. This remarkable cognitive feat in infants highlights the ability of our brains to use words as labels to categorize objects and concepts beyond visual cues.

Our ability to use words as labels comes in very handy for progressively building more abstract concepts. We know that our brains look for certain patterns (that mimic a story structure) when deciding what information is useful to store in memory. Imagine that the brain is like a database table where each row captures a unique experience (let’s call it a fable). By adding additional labels to each row, we make the database more powerful.

As an example, let’s suppose that you read a story to your toddler every night before bed. This time you are reading “The Little Red Hen.” As you read the story, your child’s cortisol level rises a bit as she imagines the challenges that the Little Red Hen faces when no one helps her, and as the situation resolves she feels a sense of relief. This makes it an ideal learning unit to store into her database for future reference. The story ends with the morals of working hard and helping others, so she is now able to add these labels to this row in her database. As she reads more stories, she starts labeling more rows with words like “honesty” or “courage”, abstract concepts that have no basis in physical reality. Over time, with a sufficient number of examples in her database for each concept, she has an “understanding” of what that particular concept means. A few days later, when you are having a conversation with her at breakfast and the concept of “helping others” comes up, she can proudly rattle off the anecdote from the Little Red Hen.

In other words, attaching labels not only allowed her to build a sense of an abstract concept, it also made it more efficient for her brain to search for relevant examples in the database. The figure above shows a conceptual view, as a database table, of how we store useful information in our brains. The rows correspond to a unit of learning — a fable — that captures how a problem was solved in the past (through direct experience or vicariously). A problem doesn’t even have to be big – a simple gap in existing knowledge can trigger a feeling of discomfort that the brain then tries to plug. The columns in the table capture all the data that might be relevant to the situation including context, internal states and of course, labels. 
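As a toy illustration of this database analogy (not a claim about how memory is actually implemented), the sketch below models each fable as a row with context and labels, and shows how a label makes past experiences searchable. The field names and entries are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Fable:
    """One remembered episode: how a problem was resolved, plus metadata."""
    summary: str                                # what happened / how it resolved
    context: dict                               # where, when, from whom
    labels: set = field(default_factory=set)    # abstract concepts attached later

# A tiny "database" of fables; contents are illustrative placeholders.
memory = [
    Fable("The Little Red Hen did all the work herself",
          {"source": "bedtime story"}, {"hard work", "helping others"}),
    Fable("The boy who cried wolf lost everyone's trust",
          {"source": "bedtime story"}, {"honesty"}),
]

def recall(label: str) -> list[Fable]:
    """Labels make the table searchable: fetch every fable tagged with a concept."""
    return [f for f in memory if label in f.labels]

print([f.summary for f in recall("helping others")])
```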

Labels also play a role in emotional regulation. When children are taught more nuanced emotional words, like “annoyed” or “irritated” instead of just “angry”, they regulate their emotional responses better. Research shows that adolescents with low emotional granularity are more prone to mental health issues like depression. One possible reason is that accurately labeled rows let you choose actions that are more appropriate for the situation. If you only have the single label “anger”, your brain might choose an action out of proportion to a situation that is merely annoying.

At a fundamental level, barring any disability, we are very similar to each other – we have the same types of sensors, the same circuitry that allows us to predict incoming information, and the same mechanisms for creating entries in the table. What makes us different from each other is simply our unique set of labels and fables.

The Science Behind Storytelling: Why Our Brains Crave Narratives

“Once upon a time…” These four words have captivated audiences for centuries, signaling the start of a story. But what is it about stories that so powerfully captures our attention and leaves a lasting impression? The answer may lie in the way our brains learn and process information.

How Our Brains Learn: A Baby’s Perspective

A baby constantly faces an influx of sensory information that her underdeveloped brain isn’t yet capable of handling. So how does she make sense of it all? She relies on her adult caretakers to help her understand what is important and what is not. An example can clarify how this learning process works.

  • Say you are going on a walk with your toddler and you see the neighbor’s cat. 
  • You excitedly point to the cat, in the high-pitched and exaggerated voice that only parents use, and say, “Oh look, a kitty cat!”
  • The high-pitched sound stands out from all the other sounds the baby is hearing. At the same time, her body releases chemicals like dopamine (to put her in an alert state) and noradrenaline (to focus attention).
  • You might then tell her how cute the cat looks and the cheery tone of your voice tells her that the cat is a “good” thing and not something to be afraid of. And simultaneously her body releases a bit of dopamine that signals relief. 

Her brain then captures all of the information related to this event — including context like the neighborhood, the name, the image and the emotional state — and stores it as a “searchable rule”. The next time she walks by the neighbor’s house, her brain pulls up this knowledge about the cat, and she gets excited to pet the cat. Suppose that at another time you happen to be on a hike and see a different cat. Now the knowledge that your toddler has about cats doesn’t match perfectly – it’s a different location and a different type of cat. Depending on other existing bits of information (e.g. knowledge about aggressive animals in the wild), her brain might pick a different rule and suggest a more cautionary approach.

The Story-Learning Connection

This learning process has striking similarities to how artificial intelligence (AI) is trained. Both require labeled data and multiple examples to generalize information. However, human brains have a unique ability to learn continuously by integrating discrete “units” of information into our existing knowledge base. Given what we now know about how our brains work, it seems likely that this unit of information corresponds to what lies between the cortisol and dopamine waves. The presence of this emotional signature tells the brain to take a snapshot of the moment and store it with additional metadata. This metadata, like the labels we assign to the information (e.g. “cat”, “neighbor”), helps in searching this database of knowledge at a later time.

This also helps explain why we find stories so compelling. Stories are packaged perfectly in the form our brain needs to process a learning unit. “Once upon a time…” and “…and they lived happily ever after”, which map to the rise and fall of cortisol and dopamine, provide the ideal bookends for this learning unit.

Our affinity for the narrative form explains a lot about learning and how we make meaning. Here are three ways stories play a role for us in society:

  • Bedtime Stories: Bedtime stories, a tradition for many generations, are an ideal medium for communicating cultural values. Most folk tales don’t just tell a story; they also explicitly call out a moral at the end, which is essentially a label for an abstract concept. When children hear different stories for the same moral, they are able to build a deeper understanding of the concept and the different ways it can manifest.
  • Pretend Play: When toddlers engage in pretend play they simulate novel scenarios with all the features of a story – setting, conflict, resolution. The simulation allows the child to vividly experience the emotions in the story and thereby learn from it. Engaging in pretend play with children is a great way for parents to recognize what learning their child is taking away from the situation and reframe it for them if needed.
  • Conspiracy Theories: Unfortunately, our learning mechanism can also be hacked in unhealthy ways. The narrative structure also explains why conspiracy theories, even though untrue and easily debunked, are so effective. Most conspiracies start with an outrageous claim to grab attention, label the story with a moral value, and suggest an action to resolve the situation. When delivered by someone you trust, which is how we started learning in the first place, the conspiracy is easily accepted and integrated into our knowledge base.

Conclusion: The Enduring Power of Storytelling

Stories are not just a form of entertainment; they are fundamental to how we learn, make sense of the world, and connect with others. We are not certain why stories are so powerful, but one possible explanation is that the narrative structure is recognized by our brain as a unit of learning allowing it to be integrated well into existing knowledge structures. By understanding the science behind storytelling, we can harness its power for education, communication, and personal growth. So, the next time you hear “Once upon a time…,” remember that you’re not just embarking on a journey of imagination, but also engaging in a deeply ingrained learning process that has shaped humanity for millennia.

Can AI Have Ethics?

Imagine finding yourself marooned on a deserted island with no other human beings around. You’re not struggling for survival—there’s plenty of food, water, and shelter. Your basic needs are met, and you are, in a sense, free to live out the rest of your days in comfort. Once you settle down and get comfortable, you start to think about all that you have learned since childhood about living a good, principled life. You think about moral values like “one should not steal” or “one should not lie to others” and then it suddenly dawns on you that these principles no longer make sense. What role do morals and ethics play when there is no one else around? 

This thought experiment reveals a profound truth: our moral values are simply social constructs designed to facilitate cooperation among individuals. Without the presence of others, the very fabric of ethical behavior begins to unravel.

This scenario leads us to a critical question in the debate on artificial intelligence: can AI have ethics?

Ethics as a Solution to Cooperation Problems

Human ethics have evolved primarily to solve the problem of cooperation within groups. When people live together, they need a system to guide their interactions to prevent conflicts and promote mutual benefit. This is where ethics come into play. Psychologists like Joshua Greene and Jonathan Haidt have extensively studied how ethical principles have emerged as solutions to the problems that arise from living in a society.

In his book Moral Tribes, Joshua Greene proposes that morality developed as a solution to the “Tragedy of the Commons,” a dilemma faced by all groups. Consider a tribe where people sustain themselves by gathering nuts, berries, and fish. If one person hoards more food than necessary, their family will thrive, even during harsh winters. However, food is a finite resource. The more one person takes, the less remains for others, potentially leading to the tribe’s collapse as members starve. Even if the hoarder’s family survives, the tribe members are likely to react negatively to such selfish behavior, resulting in serious consequences for the hoarder. This example illustrates the fundamental role of morality in ensuring the survival and well-being of the group.

Our innate ability to recognize and respond to certain behaviors forms the bedrock of morality. Haidt defines morality as “a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation.” This perspective helps explain why diverse cultures, despite differences in geography and customs, have evolved strikingly similar core moral values. Principles like fairness, loyalty, and respect for authority are universally recognized, underscoring the fundamental role of cooperation in shaping human morality.

The Evolution of Moral Intuitions

Neuroscience has begun to uncover the biological mechanisms underlying our moral intuitions. These mechanisms are the result of evolutionary processes that have equipped us with the ability to navigate complex social environments. For instance, research has shown that humans are wired to find violence repulsive, a trait that discourages unnecessary harm to others. This aversion to violence is not just a social construct but a deeply ingrained biological response that has helped our species survive by fostering cooperation rather than conflict.

Similarly, humans are naturally inclined to appreciate generosity and fairness. Studies have shown that witnessing acts of generosity activates the reward centers in our brains, reinforcing behaviors that promote social bonds. Fairness, too, is something we are biologically attuned to; when we perceive fairness, our brains release chemicals like oxytocin that enhance trust and cooperation. These responses have been crucial in creating societies where individuals can work together for the common good.

The Limits of AI in Understanding Morality

Now, let’s contrast this with artificial intelligence. AI, by its very nature, does not face the same cooperation problems that humans do. It does not live in a society, it does not have evolutionary pressures, and it does not have a biological basis for moral intuition. AI can be programmed to recognize patterns in data that resemble ethical behavior, but it cannot “understand” morality in the way humans do.

To ask whether AI can have ethics is to misunderstand the nature of ethics itself. Ethics, for humans, is deeply rooted in our evolutionary history, our biology, and our need to cooperate. AI, on the other hand, is a tool—an extremely powerful one—but it does not possess a moral compass. It knows about human moral values strictly from a knowledge perspective, but it’s unlikely to ever create these concepts internally by itself simply because AI has no need to cooperate with others. 

The Implications of AI in Moral Decision-Making

The fact that AI cannot possess ethics in the same way humans do has profound implications for its use in solving human problems, especially those that involve moral issues. When we deploy AI in areas like criminal justice, healthcare, or autonomous driving, we are essentially asking a tool to make decisions that could have significant ethical consequences.

This does not imply that AI should be excluded from these domains. However, we must acknowledge AI’s limitations in moral decision-making. While AI can contribute to more consistent and data-driven decisions, it lacks the nuanced understanding inherent in human morality. It can inadvertently perpetuate existing biases present in training datasets, leading to outcomes that are less than ethical. Moreover, an overreliance on AI for ethical decision-making can hinder our own moral development. Morality is not static; it evolves within individuals and societies.  Without individuals actively challenging prevailing norms and beliefs, many of the freedoms we cherish today would not have been realized.

Conclusion

Ultimately, the question of whether AI can have ethics is not merely unanswerable; it is the wrong question to ask. AI does not have the capacity for moral reasoning because it does not share the evolutionary, biological, and social foundations that underlie human ethics. Instead of asking if AI can be ethical, we should focus on how we can design and use AI in ways that align with human values.

As we continue to integrate AI into various aspects of society, the role of humans in guiding its development becomes more critical. We must ensure that AI is used to complement human judgment rather than replace it, especially in areas where ethical considerations are paramount. By doing so, we can harness the power of AI while maintaining the moral integrity that defines us as human beings.