Joe Edelman DRAFT June 2021. Thanks to B. Gabbai, A. Morris, A. Ovadya, J. Stray, & F. Noriega
The customer's always right. The people have spoken. These phrases illustrate the central role of "revealed preference" in our markets and democracies. Revealed preferences guide designers and policymakers. Their work is ultimately judged by what customer-citizens pick from a set of options presented in the marketplace or on the ballot.
When revealed preferences get summed up, they're called "engagement metrics" — most commonly clicks, views, purchases, or votes. The product or policy with the highest engagement is said to serve the user-citizens most, because they choose it most. The people have spoken. The customer's always right.
That has an attractive simplicity. Yet, we sense a tension: what we imagine with "values-based design" isn't exactly "engagement-maximizing design". Likewise, "values-based policy" isn't exactly populism.
But pointing this out courts danger: don't people know best for their own lives? Do I want to impose "better values" from above?
There's a way out of this binary: one can believe people know best, and have wise values, but that their engagements aren't the last word on their values. Supporting this, though, is tough: you need another source of information on people's values. One that is the last word. But: what could be as robust as people's own revealed preferences?
Furthermore, "values" can mean different things:
- Sometimes people mean visions of what's right for everyone, or for a group—what a family should be like, how a father should behave, what a nation should be like, etc. On this definition, values would include things like inclusiveness, freedom, feminine dress-codes, etc.
- Other times, people mean things that feel right and meaningful when you do them—such as being vulnerable, taking stage, being creative, etc.
Let's call the first kind social visions; the second, meaning nuggets.*
I believe we can collect these meaning nuggets, and evolve designs and policies in service of them, as robustly as we can with revealed preference. To show this, I'll build on work by various philosophers and psychologists, and say things about the role of attention in choice-making, the nature of meaningful experience, and the difference between theoretical knowledge and life wisdom.
Designers are supposed to evolve products to better suit users. Similarly, policymakers are supposed to evolve policies to suit citizens. Each, therefore, needs information about users or citizens—information that serves as evidence that a product or policy direction is an improvement.
Various information is collected, but the gold standard is revealed preference in both fields. Why? Because preferences are:
- Verifiable. Engagements leave a trail. Who did you actually vote for? What did you purchase?
- Local. Preferences respect local information and priorities of user-citizens. Most alternatives to preference are paternalistic—they assume designers or other industry experts know better than user-citizens what would help them.
- Hard-boiled. We often say we want things, but don't choose them in the final analysis. Preferences make us figure out our real priorities.
- Fine-grained. Preferences can say subtle things about how a person wants to live.
- Private. Engagements often happen away from temptations to signal allegiances, so they're less influenced by social pressure.
Despite these advantages, we have terms like addiction, soulless consumerism, atomization, and populism that describe when revealed preferences, summed up into engagement metrics, lead us astray. The problem is this: revealed preference omits signals we'd rather collect, and collects noise we'd rather omit:
- Lost signal. Alex wants to move to a different city, but only if his friends also move. Alex and his friends have, unfortunately, revealed a preference for their current city. Ben and his friends want to play tennis more—but they're choosing individually, from a menu of bookable tennis experiences. It looks like a preference to play tennis separately—even a rivalry for the same tennis court.
- Noise. Carla buys a car because there's no local transit. This counts as a preference for cars. Dan does something he later regrets, due to social pressure or a manipulative ad. Preference!
In general, choices made out of thoughtlessness, misinformation, lack of coordination, or external pressure count the same as those from reflection, experience, capacity, and wisdom.
This all stings worse in some parts of life: parts about community and meaning. That's why engagement-optimizing systems—like markets, democracies, and social media—aren't where we turn for meaning and community.
The Nature of Choice
It's worth looking deeper at those false positives and negatives. They happen when the options we have in mind, which we're choosing from, are biased or incomplete. To solve this, we can widen our conception of choice, to include option-set formation.
Imagine I'm with colleagues, and I say something witty. I chose to say that in particular. But this isn't one choice—it's many. At some earlier point, I decided to put some attention on finding witty things to say. Since then, I've been trying witty ideas in my mind, and framing situations around me as opportunities for wittiness. My attentional policy of 'looking for witty quips and reframes' is how I assemble the option set for the latter choice, when I choose from the best I've found.
What I decided on earlier, I claim, is an attentional policy. To get more specific about this: by policy, I mean something like "taking out the trash when it's almost full", "calling mom on Sundays", or "running new contracts past the lawyer"—something done regularly, or in a certain context, without a cost-benefit analysis each time.
Attentional policies (APs), then, are policies about how to think about a thing, what to pay attention to in a context, or what to look for in selecting an action. "Taking out the trash on Tuesdays" is a normal policy, but "experiencing every step and breath while doing my chores" or "looking for kind words when giving feedback"—these are attentional policies.
APs can be about how to treat people (honestly, openly, generously, mercilessly); how to approach things (with reverence, with levity, with skepticism); how to keep things (simple, sensual, rocking, full of surprise); or how to act more generally (boldly, thoughtfully, carefully).
From a preference-replacement standpoint, APs look good:
- APs are fine-grained. To guide our attention, they must be precise. "Be honest" is too vague—it doesn't tell me what to look for. So, a policy like "be honest when ___" is shorthand for a more specific articulation, like "attend to what I feel about each thing we discuss, and let my feeling show", or "attend to any false impressions the listener might get from my statements, and head them off with a disclaimer". To have honesty as a policy, I must first have a substantive interpretation of honesty.
- APs are local. These substantive interpretations differ from person to person. In fact, attentional policies make up much of what we call a person's "personality": When making friends, are they cautious or bold? When considering a purchase, is the focus on price, quality, or durability? When speaking, do they try to be witty, precise, or down to earth? Often these aren't just "character traits" a person is born with, but policies adopted for a reason, which work together for that person's way of life.
- APs are hard-boiled. If I had a galaxy brain, I'd have a million attentional policies, all in the same context. Talking to colleagues at work, I might craft my words to be kind, honest, tactful, humble, and inspiring—and try to be precise in my speech, aware of how each word lands, aware of my own feelings, and transparent with them. Calm and centered, but also passionate. Physically graceful, like a dancer.
- APs are verifiable. If someone says they have an AP in a given context, you can put them in that context and see what they attend to, or relatedly, what options they find. Alternatively, you can test for detailed knowledge of when exactly it makes sense for them to follow the policy. If they really have it, they'll know the context well.
They are also local in another way: they always come with a context, often implicit. In the story above, I have an idea when to try for wittiness: maybe at work, but not in a fight with my wife.
Having a million attentional policies at once is impossible, since policies compete for my attention. Instead, I must choose—often intuitively, unconsciously—what to attend to, in each context.
These are some of the biggest choices we make. A decision to look for witty things to say means I'm not looking for vulnerable things to say, or helpful things, or mysterious things, etc. That's a choice not to be vulnerable, helpful, or mysterious!
Because the opportunity costs are so high, many policies which sound very good (like being completely present, or endlessly compassionate) don't win the battle in the real contexts of life.
Some APs seem like "values". But not all. For instance, this one doesn't:
I've decided to be very careful with my speech at work, because my boss is prone to fire anyone who speaks casually or imprecisely.
But this one does:
While on break at work, I try being honest with a friend about something I'm struggling with. I notice many surprising benefits: the relationship feels more intimate, and stronger; it's easier for me to think about what to say; my friend is unexpectedly helpful. Soon, I can't imagine a good life that doesn't include being honest with friends.
One difference is how they're justified: speaking carefully is justified by a chain of hypotheses about the consequences. If I don't speak carefully, I'll say things my boss doesn't like; then, I'll get fired. We can visualize these reasons as a chain X⟶Y⟶Z.
In the second story, my honesty leads to all sorts of benefits, but they aren't chained together. We can visualize these reasons—which don't depend on one another and point in many directions—as a star ❋.
When the reasons to adopt a policy form a chain ⟶, I'll say it's narrow-justified. When they're a star ❋, I'll say broad-justified. Or equivalently, I'll say that an AP is justified by narrow-benefits (a NAP) or broad-benefits (a BAP).
When I say broad-justified or broad-benefits, that's shorthand for several properties. I mean a BAP's reasons for adoption are bountiful, redundant, and untracked. By bountiful, I mean I haven't listed them all. I've only started listing benefits of honesty and expect to discover new ones. By redundant, I mean I'd continue being honest if any one benefit (such as "my friend is unexpectedly helpful") turned out mistaken. By untracked I mean that, when being honest, I'm not tracking whether the benefits I've named happen in each case. I just focus on being honest.
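To make the chain-vs-star distinction concrete, here is a minimal toy sketch in Python. All the names, and the idea of encoding each reason's dependency on another reason, are my own illustrative assumptions, not part of the essay's framework: a policy counts as narrow-justified (⟶) when its reasons depend on one another, and broad-justified (❋) when they stand independently.

```python
from dataclasses import dataclass, field

@dataclass
class AttentionalPolicy:
    """Toy model: a policy plus its justifying reasons.

    Each reason maps to the reason it depends on (a link in a chain ⟶),
    or to None if it stands alone (a point of the star ❋)."""
    name: str
    reasons: dict = field(default_factory=dict)

    def is_broad_justified(self) -> bool:
        # Approximation: broad-justified (a BAP) when no reason
        # depends on another reason holding.
        return all(dep is None for dep in self.reasons.values())

# Narrow example from the text: careful speech at work (chain X ⟶ Y).
careful_speech = AttentionalPolicy(
    "speak carefully at work",
    {"say nothing the boss dislikes": None,
     "keep my job": "say nothing the boss dislikes"},
)

# Broad example from the text: honesty with friends (star ❋).
honesty = AttentionalPolicy(
    "be honest with friends",
    {"relationship feels stronger": None,
     "easier to think what to say": None,
     "friend is unexpectedly helpful": None},
)

print(careful_speech.is_broad_justified())  # False — a NAP
print(honesty.is_broad_justified())         # True — a BAP
```

Note that this sketch only captures the "redundant and untracked" flavor of broad justification; the "bountiful" property (an open-ended list of benefits) has no finite encoding.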
In the intro, I mentioned two kinds of "values": social visions and meaning nuggets. Here's a social vision:
Andrew believes a pervasive dishonesty is undermining democracy and civil society. For this reason, he tries to spread honesty—denouncing lies, and setting an example of honesty wherever he goes.
Andrew thinks honesty will save democracy, and that he should spread it by being honest himself. This is a chain of hypotheses ⟶. In general, social visions create NAPs, not BAPs.
If you want meaning nuggets, without social visions, select only BAPs. This will also filter out other things—what's done just to keep our jobs, fit in with a friend group, achieve specific goals, or get good sensations—what other philosophers call instrumental goods.
Wisdom and Meaning
This ⟶ vs ❋ thing is powerful. It can separate our knowledge into two piles. Knowledge of narrow benefits is know-how; a well-informed stock of broad justifications is wisdom.
Wisdom, n. Knowing from experience which policies are broadly beneficial.
Narrow justifications are easier to communicate. Broad justifications are made of many data points, usually collected via experience living a certain way. That's why "life wisdom" mostly comes from experience. No matter how dog-eared your Kahlil Gibran book is, you haven't collected all those diffuse benefits.
When wisdom does come quick, it's often via meaningful experience. Like this:
Brenda sips her morning tea, watching a bird on the feeder. Something shifts in her, and she sees the bird shares a great project with her: she and the bird are explorers and representatives of what it is like to be alive. This is profoundly meaningful.
Over time, thinking of herself as "an explorer of what it is to be alive" becomes a new kind of curiosity for Brenda. It comes up when she "does animal things" (in the woods, overcome by emotion, plunging into cold water). She notices more about her environment, and about how she feels.
Brenda's now a more explorative, bolder person. Eventually, her attention shifts: when in nature, etc, she no longer focuses on being an explorer, but on balancing exploration with other factors, like self-care. Being "an explorer of what it is to be alive" is still, in a sense, something she does. It's still important to her. But it's become automatic. It's not where her attention goes.
At the first stage, Brenda might say her bird-moment is a new idea about something broadly-beneficial. But she doesn't see how to repeat it, or further explore it.
At the second stage, Brenda has a new BAP. She probably doesn't have a phrase for it, like "being an explorer of what it is to be alive". But she has (a) adopted a new mode of attention for some contexts, and (b) feels it has many benefits, (c) without tracking them, or (d) holding any one benefit as necessary to justify the BAP.
At the third stage, "greeting the world as an explorer of what it is to be alive" has ceased to be meaningful. It may become meaningful again, if she loses her way—for instance, if she gets too busy with work, or loses touch with her curiosity.
So: at the first stage, it's meaningful. At the second, it's meaningful and wise. At the third, it's wise, but no longer meaningful. In other words, Brenda moves from left to right in this chart.
The meaning part is when things are broadly-beneficial and attentional. It gets distilled into judgments about which BAPs to adopt. We call those judgments wisdom. Wisdom is what we learn from meaningful experiences. Meaning is the part of wisdom we still need attention for.
Or simply: Meaning is the first derivative of a wise, good life.
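Brenda's three stages can be sketched as a toy model. The boolean attributes are my own labels for the essay's categories, a hypothetical encoding rather than anything the author specifies: meaningful = broadly-beneficial and still attention-requiring; wise = broadly-beneficial and distilled into a judgment from experience.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    broadly_beneficial: bool
    needs_attention: bool           # is the mode of attention still active work?
    distilled_into_judgment: bool   # does she know, from experience, that it pays off?

    @property
    def meaningful(self) -> bool:
        # "Meaning is the part of wisdom we still need attention for."
        return self.broadly_beneficial and self.needs_attention

    @property
    def wise(self) -> bool:
        return self.broadly_beneficial and self.distilled_into_judgment

brenda = [
    Stage("bird moment",     True, True,  False),  # meaningful, not yet wise
    Stage("explorer BAP",    True, True,  True),   # meaningful and wise
    Stage("automatic habit", True, False, True),   # wise, no longer meaningful
]

for s in brenda:
    print(f"{s.name}: meaningful={s.meaningful}, wise={s.wise}")
```

Moving down the list is Brenda's move from left to right across the stages: meaning precedes wisdom, then fades as the policy becomes automatic.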
Informing Design and Policy
The customer's always right. The people have spoken.
How about a new one?
People deserve to live by their own wisdom and sense of meaning.
Many current crises are caused by engagement-maximizing systems. Problems as diverse as depression, media clickbait, isolation of the elderly, obesity, over-consumption, political polarization, bullshit jobs—all these stem from the gap between preferences and what people find meaningful and wise.
To address that gap, we might start by using BAPs as a way to collect what's meaningful to people, and their life wisdom. We do this in Quest 1 of
Instead of asking if users engage with a product, we ask if it makes space for them to be vulnerable, to be "explorers of what it is to be alive", or whatever their BAPs are.
This could fix the problems with preferences:
Carla has a vulnerability-BAP, and changes workplaces. Her new "hanging out with colleagues" context is different from the old one. How do they stack up for vulnerability? Say she experiences BAP-losses (without any corresponding BAP-gains) at her new job. And yet, she switched jobs.
Carla is engaging with the new job. Does that mean it serves her? Or is the rent rising, and she cannot afford her old salary anymore? Perhaps she has no way to coordinate the job she'd really like. BAP-information can show this, when preference information wouldn't.
Much of worker/consumer life is about BAP-related things—community, family, adventure, learning, aesthetics, etc. Would we all be better off if transactions which undermine BAPs, like Carla's job switch, were discouraged (e.g., through taxation)?
For now, it's unclear where in the economy to apply this.
Terminology. In the rest of this textbook, I don't use "meaning nuggets"—I just say
values. I occasionally use BAP, to emphasize checking that a supposed value is really
attentional, and really an adopted policy.