Tech Products (that Don't Cause Depression and War)

Picture a typical technologist: she watches how her users scroll, click, and engage with her product. Based on what she sees, she makes changes. She uses user behavior as her "source of truth" about whether her design is working.

While doing this, she shapes her product based on individual preferences (not social ones) and immediate preferences (not long-term ones).

Uh oh!

  • An individually-optimized experience (like hailing an Uber) often has negative social effects.
  • An engaging immediate experience (like an hour spent delighted by cat videos, or arguing on the internet) isn't necessarily good in the long-term.

The problems caused by tech—like clickbait, internet outrage, or the rise in teen depression—are social and they emerge over time. That's why tech founders, despite their optimism, keep making things worse. (And why those working on the "decentralized web" seem likely to travel a similar path.)

Of course: we can't ask our technologist to ignore her users' behavior. We must offer a way for her to broaden her insights into users' lives. We must give her an alternative "source of truth" for judging her design's success: a way to design based on data, but not just about clicks—data about how users want to live, and whether her product helps.

Here's how to do that, in three quick parts.

Understanding How Users Want to Live

To make things concrete, it helps to break down "how someone wants to live" into atoms, which I call values. For instance, someone might want to live honestly, courageously, creatively, etc.

  • Some social technology makes being honest more difficult: for example, it may be harder to be honest on Instagram, if honest posts get fewer likes, or are harder to caption well.
  • It is similar with courage, creativity, and every other way a person wants to act or relate to others. A courageous statement on Twitter might lose you followers, or lead to a pile-on, which makes Twitter a worse place to practice courage.

Because we use platforms that aren't designed for our values, we act and socialize in ways we don’t believe in, and later regret: we procrastinate, avoid our feelings, pander to other people’s opinions, participate in hateful mobs reacting to the news, etc.

Meanwhile, the software looks like it's succeeding (in terms of engagement), but it sucks in terms of what it does to our lives.

One part of the answer is to design and test software according to values. To do this, we need to be more specific than vague words like "honest", "courageous", etc.

In our course, we write them in a special format, and test software against them:

[Image: example values, written in this format]

Those are some of my values, written in this format. Do you recognize any that we share?

Going Deeper than Momentary Preferences

One big difference between values and preferences is this: preferences are expressed in the instant someone clicks, downloads, votes, or purchases something. Values are woven deeper into a life. Our exercise Hard Steps reveals these deeper patterns.

To do this exercise, you start with a story where someone lives by a particular value—in the example below, it is vulnerability. You use that story to surface all the little things a design needs to get right, to make vulnerability possible.

This leads to a deeper understanding of vulnerability, and tends to generate new design ideas. Check it out in this diagram.

[Image: Hard Steps diagram]

Anticipating Social Effects

The Hard Steps exercise above gives designers lots of new ideas. What comes next? Normally, they'd sketch an idea on paper, and maybe get a friend to click through a mock-up.

But these are individual methods for testing. We want to test social effects.

A simple solution is to turn your idea into a group game: something that can be played in a room, where you see social effects with your own eyes.

For instance: to prototype a newsfeed, put some cards up on the wall. People can add their posts and mark other posts with likes. A referee can reorder the wall based on the likes, split it into multiple feeds, etc.
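If you later want to carry the referee's role over into a digital mock-up, the reordering step is easy to sketch in code. This is just a minimal illustration of an engagement-driven ranking rule; the post fields and the sort-by-likes rule are my own assumptions, not part of the exercise:

```python
# Illustrative sketch: the referee's reordering step from the card-wall game.
# Fields ("author", "text", "likes") and the ranking rule are assumptions
# made for this example, not a prescribed design.

def referee_reorder(posts):
    """Reorder the wall the way an engagement-driven feed would:
    most-liked posts first."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

wall = [
    {"author": "A", "text": "honest, unpolished post", "likes": 2},
    {"author": "B", "text": "outrage bait", "likes": 9},
    {"author": "C", "text": "cat video", "likes": 5},
]

feed = referee_reorder(wall)
print([p["author"] for p in feed])  # → ['B', 'C', 'A']
```

Playing the paper version first lets you try out many such rules (rank by likes, by recency, by comments) in minutes, before any of them is fixed in code.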

This kind of prototype shows surprising results right away:

  • You'll see new social norms evolve, as people encourage each other in whatever direction makes the whole system spin faster. If the designers of Instagram had done this, they might have predicted influencer dynamics.
  • You'll see problems you wouldn't see with one test-user at a time, like the self-image problems that plague Instagram today.

Most importantly, you'll develop an intuition of how different rules have social effects, you'll be able to change the rules more rapidly than you could with a software prototype, and you'll see the effects immediately.

A New Kind of Designer

Most designers think primarily in terms of individuals—they imagine individual experiences, user journeys, and (sometimes) incentive structures for those individuals. Their design methods emphasize delightful experiences and efficiency.

But what feels delightful and efficient for one person in one moment is often neither delightful nor efficient over the long term, or for society at large.

I've presented three practices we teach at The School for Social Design: getting specific about values, analyzing hard steps, and social prototyping.

When designers practice these, their imagination goes beyond individual experience: they see how relationships will form; how new social norms will arise; and how new processes will lead to new outcomes.

In other words, they can design for positive social consequences, and a deeper kind of goodness.

We need designers like that. The 20th and early 21st centuries were a downward spiral of virtualization and individualization. It's time for values-based social designers to rise.

📌

We teach this kind of thinking and designing at The School for Social Design.