Intro
Those who think humans are motivated by incentives are behind such glorious innovations as gamification, carrot-and-stick approaches to education, sales commissions, and golden handcuffs.
While perverse incentives are a pervasive problem, the answer usually ain't better incentives.
I hope to convince you of three things:
- Perverse incentives are inevitable.
- They're mainly a problem when things are already fucked up on another level.
- And even if you could align incentives with values, it's not as great as you'd hope.
Perverse Incentives are Inevitable
In any organization or social structure, a host of concerns arise:
- Your organization and department must continue to exist for you to keep your job, even if they no longer fit the mission.
- You want to rise within the ranks, to be accepted and esteemed by your colleagues. This goal is seldom perfectly aligned even with your own values.
- To get work done, you need things from colleagues. Your work is blocked unless they deliver, even when it'd be best for them to reevaluate timelines or goals.
- A colleague's project might be part of your deliverable. You may benefit if it's finished quickly, even if they do a bad job.
- A colleague's project may compete with yours for resources. You may benefit if it's shut down.
There are incentive-alignment tricks, like bonus structures and stock ownership. There are other ways of resolving these conflicts, like internal court systems, management meetings, or participatory budgeting.
But there's no magic bullet. These perverse incentives arise inevitably, again and again, in any organization.
When People Exploit Perverse Incentives, there's likely a Deeper Problem
If perverse incentives are so widespread, why do we act like they are special cases, giving them names like "Tragedy of the Commons" or "Embedded Growth Obligation"?
In part, it's because perverse incentives are only a problem once things are already fucked.
Most real-world situations have perverse incentives that no one exploits. Take incentives to commit fraud: we already have names for someone who exploits those in everyday social situations. We call that person a sociopath, a conman, a charlatan.
In general, people don't exploit perverse incentives. If we did, we'd all be sociopaths, conmen, and charlatans. If people are exploiting perverse incentives in your system (and on the internet, they sure as hell are!), it means you likely have other problems you need to fix first:
- Your system may need relationship structures that keep it cozier, so fewer people end up in the position of a rootless bad actor.
- It may need rituals, so people have the chance to orient towards the values and norms of the community.
- It may need legitimation structures that keep sociopaths from getting in, or keep them from making important moves.
Even Aligned Incentives can be Counterproductive
The points above suggest that perverse incentives may not be your main problem. But here's worse news:
Getting your incentives perfectly aligned can actually cause problems.
This seems impossible, right? But aligned incentives can cause problems in three ways: two economic, one sociological.
The first is a phenomenon called crowding out (Arrow, Bowles), where explicit rewards make people less likely to do good things. Paying people to donate blood can lead to less blood, and lower-quality blood. Charging fees when parents are late to pick up their kids from day care can make them later still. And so on. (See especially the work of Samuel Bowles.)
The second problem is transaction costs and search costs (Coase). Optimization isn't the natural mode of human beings, so even perfectly aligned incentives impose a cognitive burden: a layer of calculation in what would otherwise be a values-and-norms affair. This is why people are given salaries instead of wages that measure their output directly. Salaries free workers to focus on their job, rather than on prematurely optimizing their output.
And it's why there are firms instead of just individual market contractors. Every time you need help from a colleague, you don't want to find the optimal market price.
People like to be buffered from incentive structures. Endless budgeting and optimizing appeals to just a few of us, and in any case makes it harder for people to do what they came for.
The third problem is sociological: even if you manage to motivate the correct behavior with aligned incentives, you create a problem for your participants. Some of their colleagues will be operating from their values; others will be motivated by the extrinsic incentives. It will be hard for any participant to tell which is which, and that makes it harder for them to form a network of shared values, trust one another, and bond.
What to do?
At the School for Social Design, we get a lot of applicants with this incentives mindset. We help them round out their design imagination: to design new relationalities, rituals, and legitimation structures, not just new incentives.
We also help address where, I think, these incentive mistakes come from: people often misunderstand themselves, reading their own actions as motivated by goals or status when, on a deeper level, they are trying to live by their values.
Of course, it isn't strictly one or the other. Say someone needs a certain amount of money to send their kids to a good school: incentives are wrapped up in values. Situations like this are quite frequent, unfortunately. Even in everyday situations we shift focus between values, others' expectations, goals, and so on.
So incentives can't be entirely ignored in designing social systems. But you shouldn't focus on them without also taking care of relationalities, rituals, and legitimation.