Beyond the Automation Trap: Why AI Needs Values

In 1997, after Garry Kasparov lost his historic chess match to IBM’s Deep Blue, he didn’t just walk away or rail against the machine. Instead, he started a new kind of competition called “Advanced Chess.” In these matches, a human player and a computer worked together as a team—a “Centaur.”

What happened next was quite unexpected. Amateur players with midrange computers often beat grandmasters and higher-end chess computers. They knew when to listen to the machine and when to override it. They used the computer to explore possibilities, but they used their human judgment to make the final call.

In other words, the most powerful force wasn’t the smartest machine but the best collaboration.

Today, we are at a similar crossroads with Artificial Intelligence. We’ve built the machines, but we haven’t quite figured out how to be Centaurs. And that might be why AI adoption is stalling.

The Diffusion Mystery

If you look at the headlines, AI is taking over the world. But if you look at the data, the picture is more complicated.

Everett Rogers, the legendary sociologist who gave us the “Diffusion of Innovations” theory, taught us that technology doesn’t spread merely because it’s better. It spreads because it fits into our lives, our norms, and our trust networks. Right now, AI has a fit problem.

According to McKinsey’s 2025 global research, while almost every company is playing with AI, very few have successfully scaled it. The problem might be the kinds of problems we are trying to solve with AI. It’s not that the technology is too complex; it’s that we’re trying to use a “tame” solution for a “wicked” world.

Tame Tasks vs. Wicked Problems

In the 1970s, design theorists Horst Rittel and Melvin Webber identified two types of challenges:

  1. Tame Problems: These have a clear goal and a clear stopping rule. Think of a puzzle or a math equation. Coding is often a tame problem. You write the script, you run the test, and it either works or it doesn’t. This is why AI adoption has worked quite well for developers.
  2. Wicked Problems: These are messy. They have no clear definition and no right answer, only “better” or “worse” ones. Moreover, every time you try to solve a wicked problem, the problem changes. Think of education, healthcare, or leading a team.

When we try to use AI to solve a wicked problem through pure automation, we fail because wicked problems require judgment, and good judgment requires something else.

Turbulent Fields

Systems theorist Eric Trist called the environment we live in today a “turbulent field.” Imagine trying to play a game of soccer, but the grass is moving, the goals are shifting, and the other team keeps changing the rules. That’s turbulence. And turbulence creates wicked problems. 

In a stable world, you can rely on data and optimization. But in a turbulent world, more data often leads to more confusion. Instead of more data, you need a North Star that reduces the number of variables you have to optimize for. Trist argued that values are effective North Stars for solving such complex problems. They clarify direction by eliminating options that don’t fit within those values.

This might be one reason why solving problems with AI is so challenging. Without clearly defined values, AI becomes a black box that’s hard to trust.

Designing with Values

If we want AI to actually work for us, we have to stop designing for automation and start designing for human flourishing.

This brings us to one of the most useful frameworks in social science, and one I have seen work in practice: Self-Determination Theory (SDT). For people to be at their best, they need three things:

  • Autonomy: The desire to be the author of our work and lives.
  • Mastery (or Competence): The urge to learn new things and get better at skills that matter.
  • Purpose (or Relatedness): The yearning to do what we do in the service of something larger than ourselves.

The “Automation Trap” kills all three. If an AI writes your entire report, you lose your autonomy (you’re just a spectator). You lose your mastery (your skills begin to atrophy). And eventually, you lose your sense of purpose.

This is the “Irony of Automation.” As researcher Lisanne Bainbridge pointed out, the more we automate, the more we rely on humans to handle the rare, high-stakes crises. But if the human has been sidelined by the automation, they no longer have the skills to save the day when the machine fails.

Nowhere is this tension clearer than in the classroom. If a student uses AI to generate an essay, the task is finished, but the learning never happened.

Learning requires productive struggle. Elizabeth and Robert Bjork’s research on “desirable difficulties” shows that we learn best when the process feels a little bit hard. When we remove the struggle, we remove the growth.

If we want AI to diffuse in education, and for that matter, in any knowledge-work field, we have to move from “Answer Engines” to “Thought Partners.”

A New Blueprint for the AI Collaborator

So, what does a value-driven, human-centered AI look like? It follows a different set of design principles:

1. Values Over Vibes

Wicked problems are resolved by making choices based on what we value most. An AI collaborator shouldn’t hide these choices. It should surface them. Instead of saying “Here is the best strategy,” it should say “If you value speed, do X; if you value employee well-being, do Y.”
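To make this concrete, here is a minimal sketch of what such an assistant might do. All of the option names, value labels, and the `recommend` helper below are invented for illustration; the point is only the shape of the interaction: surface the trade-offs, let the human’s declared values do the steering.

```python
# Hypothetical sketch: instead of returning one "best" answer, surface
# which values each option serves and filter by the user's declared values.
# All names and options here are illustrative, not a real product API.

from dataclasses import dataclass


@dataclass
class Option:
    action: str
    supports: set  # the values this option serves


OPTIONS = [
    Option("Ship the quick fix this week", {"speed"}),
    Option("Pause and redesign the on-call rotation", {"employee_wellbeing"}),
    Option("Staged rollout with extra review", {"speed", "quality"}),
]


def recommend(declared_values: set) -> list:
    """Rank options by fit with the user's values instead of
    collapsing everything into a single 'best' answer."""
    ranked = sorted(
        OPTIONS,
        key=lambda o: len(o.supports & declared_values),
        reverse=True,
    )
    return [
        f"If you value {', '.join(sorted(o.supports))}: {o.action}"
        for o in ranked
        if o.supports & declared_values
    ]


print(recommend({"speed"}))
```

The design choice worth noticing is that `recommend` never hides the value judgment inside the ranking: every line it returns names the value that justifies the option, so the human can see and contest the trade-off.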

2. Design for Mastery

Success shouldn’t just be measured by task completion. It should be measured by capability gained. Does this AI help the user understand the problem better? Does it challenge their assumptions? A great AI should function like a coach, nudging the user to do their best thinking rather than doing the thinking for them.

3. Human Stewardship

In a turbulent field, the “correct” answer is often a conversation. AI can widen our options and test our scenarios, but humans must steward the meaning. We are the ones who decide which values matter and which trade-offs are worth making.

The Question for 2026

As we stare down 2026, we need to stop asking, “What can AI do?” and start asking, “What values should do the steering?”

For the last two years, we’ve been obsessed with technical possibilities. We’ve treated AI like a new engine and spent all our time seeing how fast it can go. But in a turbulent field, speed without a North Star is just a faster way to get lost. If we continue to design simply because a solution is possible, we will keep falling into the Automation Trap.

The truth is, technological possibility should never precede moral clarity. In the era of wicked problems, the right answer doesn’t exist in the data; it exists in our intentions. If we want to move from “Answer Engines” to true Centaur-style collaboration, we have to identify the values we are designing for before we write a single line of code.

The real lesson of Garry Kasparov’s Centaurs wasn’t that they had better computers. It was that they had a better process rooted in human judgment. In the long run, the real competitive advantage won’t be the machine’s speed. It will be our wisdom.

Designing Products to Build Intrinsic Motivation

In a recent study, researchers wanted to explore the relationship between rewards and motivation in education. To understand the impact of gamified elements on student motivation and learning, they ran a long-term study with students enrolled in a semester-long course. Students were divided into two groups: a gamified group that used a reward system aligned with the learning goals, and a control group that received the same instruction without any gamified elements. The researchers looked at student grades at the end of the course along with student surveys, and confirmed what some educators had long suspected.

The researchers found that the non-gamified group not only did better on the end-of-semester exam, they also reported higher levels of motivation and satisfaction at the end of the class! As the researchers explain, “The results suggest that at best, our combination of leaderboards, badges, and competition mechanics do not improve educational outcomes and at worst can harm motivation, satisfaction, and empowerment. Further, in decreasing intrinsic motivation, it can affect students’ final exam scores.”

While typical gaming elements like points and badges can lead to increased engagement in the short term, it is now believed that the initial appeal is due to a novelty effect, and that engagement and motivation decline as the novelty wears off. And this effect is more pronounced for younger age groups, where novelty and interest decline faster.

Educational products routinely employ rewards like badges and scores to generate initial interest and traction among users. However, as research now points out, these elements have negative long-term consequences: they promote extrinsic motivation instead of building intrinsic motivation in students.

So, how can we design educational products that focus on building students’ intrinsic motivation?

Edward Deci and Richard Ryan, professors of psychology, have studied motivation for several decades and developed the Self-Determination Theory (SDT) of motivation. According to their theory, three innate psychological needs play a role in motivation: competence, autonomy, and relatedness. The main premise behind their theory is that humans have an inherent tendency to learn, to have agency in their development, and to connect to others. Their theory has been widely used in many contexts, including gamification.

Based on the underlying theory of self-determination, here are some high-level product approaches that can be used in lieu of rewards to build the right kind of motivation:

Exploration

Creating a playful environment that leads to self-directed exploration taps the underlying needs for autonomy and competence. Games and products should offer the freedom to fail, letting users recover from mistakes without penalty. They should also provide freedom of choice, where users decide what they want to work on or which skill to develop.

Feedback

In a classroom, feedback can be slow and constrained, since a teacher can only respond to one student at a time. Games, where feedback can be immediate, can have a positive impact on the need for competence. Feedback messages that are actionable (guiding the student in the right direction) and that focus on a growth mindset have been found to be effective.
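As a toy illustration of this idea, the sketch below turns a raw quiz result into a message that names the next step instead of just reporting a score. The thresholds, skill names, and wording are invented for this sketch, not drawn from any real product or study.

```python
# Hypothetical sketch: turn a raw quiz result into immediate feedback that
# is actionable and growth-oriented, rather than a bare score or badge.
# Thresholds and messages are invented for illustration.


def feedback(correct: int, total: int, skill: str) -> str:
    ratio = correct / total
    if ratio == 1.0:
        # Mastery: point to the next challenge, not a reward.
        return f"All {total} correct. Try the challenge set for {skill} next."
    if ratio >= 0.6:
        # Near mastery: an actionable next step, with no penalty for retrying.
        return (f"{correct}/{total}. You're close on {skill}. "
                "Review the ones you missed and retry, with no penalty.")
    # Struggling: normalize the struggle and point to a concrete resource.
    return (f"{correct}/{total}. {skill} isn't there yet, and that's normal. "
            "Revisit the worked example, then try again.")


print(feedback(7, 10, "fractions"))
```

Note that every branch ends with a concrete action (“retry,” “revisit the worked example,” “try the challenge set”), which is what makes the feedback actionable rather than merely evaluative.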

Collaboration

A typical classroom environment fosters competition among students instead of collaboration, which in turn reduces intrinsic motivation. Elements like leaderboards have the same effect through social comparison. A better approach is to design products that enable meaningful collaboration among students and tap into the need for relatedness. Social cues that signal working together have been found to boost intrinsic motivation.

Intrinsic motivation has been found to be positively linked to learning outcomes as well as personal wellbeing. Introducing the right kind of gamified elements into products can boost intrinsic motivation among students, but it means walking away from more traditional game elements like badges and points.

How Intrinsic Motivation Can Help Creativity

In 1971, Edward Deci ran an experiment on college students to understand motivation and performance. The students were given puzzles that Deci believed they would be intrinsically motivated to solve. Students in the control group received no money for working on the puzzles, while students in the experimental group were paid, but only on the second day. The experimenter left a break in the middle of each session to see how long students played with the puzzles when left alone.

Deci found that students who were paid on the second day spent longer on the puzzles during the break. However, on the third day, when they were no longer paid, they spent significantly less time playing with the puzzles than the control group did. Deci interpreted this as evidence that an external reward decreases the intrinsic motivation to engage in an activity.

Deci, along with Ryan, expanded on this work to propose Self-Determination Theory (SDT). SDT outlines three universal psychological needs that govern individual motivation: autonomy, competence, and relatedness. The needs for competence and autonomy form the basis of intrinsic motivation.

Monetary rewards have shown some benefit for performance when the task is more manual in nature, or when people have identified with an activity’s value. For complex problems requiring creative problem-solving skills, intrinsic motivation plays a bigger role.

Teresa Amabile, Professor at Harvard Business School and a creativity expert, has found plenty of evidence for what she calls the “Intrinsic Motivation Principle of Creativity,” namely that “people will be most creative when they feel motivated primarily by the interest, satisfaction, and challenge of the work itself, and not by external pressures.”

Given the strong connection between creativity and intrinsic motivation, here are three ways to maintain intrinsic motivation.

Praise, Don’t Reward

Praising works better than giving a monetary reward for improving intrinsic motivation, even though both are forms of external reward. However, for praise to be effective, it should focus on effort rather than ability, should not convey low expectations, and should not convey information about competence solely through social comparison.

Focus on Others

While intrinsic motivation drives creativity, it turns out that it drives the “originality” component of creativity and not the “useful” aspect. Prof. Adam Grant’s research has shown that focusing on solving others’ problems improves creativity in the “useful” aspect as well. As he explains, “perspective taking, as generated by prosocial motivation, strengthens the association between intrinsic motivation and creativity.”

Embrace Failure

Any creative task, by definition, involves a lot of uncertainty, and success isn’t guaranteed. Creating a mindset where failure is appreciated for the knowledge it brings about what doesn’t work can go a long way toward building intrinsic motivation. In Prof. Amabile’s words, “… if people do not perceive any ‘failure value’ for projects that ultimately do not achieve commercial success, they’ll become less and less likely to experiment, explore, and connect with their work on a personal level. Their intrinsic motivation will evaporate.”