Posted: June 13th, 2022

Reflection on 4 articles

**PLEASE READ – Reactions for this week should include (1) Is This How Discrimination Ends? (2) The Problem with Implicit Bias Training (3) Two Powerful Ways Managers Can Curb Implicit Biases (4) Outsmart Your Own Biases: 4 ARTICLES in total. No need to react to “I didn’t mean it like that”: Challenging Your Own Biases, which is a short list.**
For accountability, please take notes on assigned readings and videos.
Feel free to choose a quote or phrase to react to. You can also write a summary as your reaction.
1-2 paragraphs or bullet points are sufficient for a completion grade.
Business Reflection
ATTACHED FILE(S)
www.theatlantic.com/science/archive/2017/05/unconscious-bias-training/525405/
Is This How Discrimination Ends?
Jessica Nordell
On a cloudy day in February, Will Cox pointed to a pair of news photos that prompted a room of
University of Wisconsin, Madison, graduate students to shift in their seats. In one image, a young
African American man clutches a carton of soda under his arm. Dark water swirls around his torso; his
yellow shirt is soaked. In the other, a white couple is in water up to their elbows. The woman is tattooed
and frowning, gripping a bag of bread.
Cox read aloud the captions that were published alongside these images of a post-Katrina New Orleans.
For the black man: “A young man walks through chest-deep water after looting a grocery store.” For the
white couple: “Two residents wade through chest-deep water after finding bread and soda.”
Looting. Finding. A murmur spread through the rows of students watching.
Cox, a social psychologist in the university’s Prejudice and Intergroup Relations Lab, turned to his co-
presenter, a compact, 50-something woman standing next to him. As she strode down the rows of
students, her voice was ardent, her movements deliberate. She could have been under a spotlight on
the stage at a tech summit, not at the head of a narrow classroom in the university’s education building.
“There are a lot of people who are very sincere in their renunciation of prejudice,” she said. “Yet they are
vulnerable to habits of mind. Intentions aren’t good enough.”
The woman, Patricia Devine, is a psychology professor and director of the Prejudice Lab. Thirty years
ago, as a graduate student, she conducted a series of experiments that laid out the psychological case
for implicit racial bias—the idea, broadly, is that it’s possible to act in prejudicial ways while sincerely
rejecting prejudiced ideas. She demonstrated that even if people don’t believe racist stereotypes are
true, those stereotypes, once absorbed, can influence people’s behavior without their awareness or
intent.
Now, decades after unraveling this phenomenon, Devine wants to find a way to end it. She’s not alone.
Since the mid-1990s, researchers have been trying to wipe out implicit bias. Over the last several years,
“unconscious-bias trainings” have seized Silicon Valley; they are now de rigueur at organizations across
the tech world.
But whether these efforts have had any meaningful effect is still largely undetermined.
Until, perhaps, now. I traveled to southern Wisconsin, because Devine and a small group of scientists
have developed an approach to bias that actually seems to be working—a two-hour, semi-interactive
presentation they’ve been testing and refining for years. They’ve created versions focused on race and
versions focused on gender. They’ve tried it with students and faculty. Next, they’ll test it with police.
Their goal is to make people act less biased. So far, it’s working.
On July 17, 2014, a 43-year-old former horticulturist on Staten Island named Eric Garner was
approached by police officers who suspected him of selling untaxed cigarettes. One of them put him in a
chokehold, a maneuver the New York City Police Department prohibits. Garner died an hour later—a
homicide, according to the medical examiner. Since Garner’s death, and then Michael Brown’s, and
Tamir Rice’s, and many, many others’, voices condemning discrimination in policing have grown to a
thunder. While police in many cases maintain that they used appropriate measures to protect lives and
their own personal safety, the concept of implicit bias suggests that in these crucial moments, the
officers saw these people not as individuals—a gentle father, an unarmed teenager, a 12-year-old child
—but as members of a group they had learned to associate with fear.
In Silicon Valley, a similar narrative of pervasive bias has unfolded over the last several years. In 2012,
venture capitalist Ellen Pao filed a gender-discrimination lawsuit against her employer, the venture-
capital firm Kleiner Perkins Caufield & Byers, maintaining, for instance, that she was penalized for the
same behaviors her male colleagues were praised for. Her experience wasn’t unique: A 2016 survey of
hundreds of women in technology, titled Elephant in the Valley, revealed that the vast majority
experienced both subtle and overt bias in their careers.
In addition to urgent conversations about race and criminal justice, and employment and gender,
discussions about implicit bias have spread to Hollywood, the sciences, and the presidential election.
What’s more, though solutions are hard to come by, there’s plenty of hard data to validate a very real
problem.
In fact, studies demonstrate bias across nearly every field and for nearly every group of people. If you’re
Latino, you’ll get less pain medication than a white patient. If you’re an elderly woman, you’ll receive
fewer life-saving interventions than an elderly man. If you are a man being evaluated for a job as a lab
manager, you will be given more mentorship, judged as more capable, and offered a higher starting
salary than if you were a woman. If you are an obese child, your teacher is more likely to assume you’re
less intelligent than if you were slim. If you are a black student, you are more likely to be punished than
a white student behaving the same way.
There are thousands of these studies. And they show that at this moment in time, if person A is white
and person B is black, if person X is a woman and person Y is a man, they will be treated differently in
American society for no other reason than that their identities have a cultural meaning. And that
meaning clings to each person like a film that cannot be peeled away.
Findings like these can feel radioactive. Ben Barres, a Stanford neurobiologist I spoke with, once
wondered aloud whether it was wise to even share with women entering science what they are up
against; as James Baldwin wrote, it takes a rare person “to risk madness and heartbreak in an attempt
to achieve the impossible.” For people struggling to grapple with bias, these realities can elicit feelings of
rage and sadness, grief and guilt. Last summer, a man from North Carolina called in to C-SPAN
because he wanted to know, quite simply, how he could become less racially biased. “What can I do to
change?” he asked. “You know, to be a better American?” The video has been watched online more
than 8 million times.
At the same time, there are many people who reject the concept of implicit bias outright. Some
misinterpret it as a charge of plain, old-fashioned bigotry; others just don’t perceive its existence, or they
believe its role in determining outcomes is overstated. Mike Pence, for instance, bristled during the 2016
vice-presidential debate: “Enough of this seeking every opportunity to demean law enforcement broadly
by making the accusation of implicit bias whenever tragedy occurs.” And two days after the first
presidential debate, in which Hillary Clinton proclaimed the need to address implicit bias, Donald Trump
asserted that she was “essentially suggesting that everyone, including our police, are basically racist
and prejudiced.”
Still other people, particularly those who have been the victims of police violence, also reject implicit bias
—on the grounds that there’s nothing implicit about it at all.
One challenge to bridging these perspectives is that real life rarely provides lab-perfect conditions in
which proof of implicit bias can be established. Take Cox’s Hurricane Katrina photos. After they were
published, people began to ask: What if the photographers really did see one person looting and not the
other? When the photographers were asked what they’d seen, the photographer of the “looting” photo
said that he did see that person loot. The other photographer said that he did not see how his subjects
acquired their groceries. There was a plausible explanation other than bias. In debates and jury trials,
doubts like this scatter like seeds.
But there may be an even more fundamental reason for this gulf between people’s perspectives on the
subject of bias. This has to do with the fact that a person’s very circumstances and position in the world
influence what they do and don’t perceive. As Evelyn Carter, a social psychologist at the University of
California, Los Angeles, told me, people in the majority and the minority often see two different realities.
While people in the majority may only see intentional acts of discrimination, people in the minority may
register both those acts and unintended ones. White people, for instance, might only hear a racist
remark, while people of color might register subtler actions, like someone scooting away slightly on a
bus—behaviors the majority may not even be aware they’re doing.
Bias is woven through culture like a silver cord woven through cloth. In some lights, it’s brightly visible.
In others, it’s hard to distinguish. And your position relative to that glinting thread determines whether
you see it at all.
One attempt to get a handle on all this and pin down implicit bias has come by way of a tool known as
the Implicit Association Test. There are many techniques to measure implicit associations, but the IAT, a
reaction-time test that gauges how strongly certain concepts are linked in a person’s mind, has become
the most well-known and widely used.
In an IAT designed to assess anti-gay bias, for instance, you are presented with a list of words (like
“smiling” or “rotten” or “homosexual”) and asked to sort each into a combined category: “gay or bad” or
“straight or good.” Then, you see another list and are asked to sort each word again, this time with the
combinations flipped: “gay or good” or “straight or bad.” If your responses are faster when “gay” is paired
with “bad” than with “good,” this is supposed to demonstrate that the gay/bad association in your mind is
stronger. In a review of more than 2.5 million of these tests, most people showed a preference for
straight people over gay, white people over black, and young people over old.
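To make that reaction-time comparison concrete, here is a minimal sketch in Python with invented numbers. It is not the scoring code researchers use, and the published IAT procedure (the “D score”) adds further refinements; the core logic, though, is simply a comparison of average response speeds across the two pairings.

```python
# Simplified illustration only -- not the official IAT scoring algorithm.
# All reaction times (in milliseconds) are invented for one hypothetical respondent.
from statistics import mean, pstdev

# Trials in which "gay" shared a response key with "bad" (and "straight" with "good").
gay_bad_block = [602, 575, 640, 618, 590]
# Trials in which "gay" shared a response key with "good" (and "straight" with "bad").
gay_good_block = [695, 720, 660, 705, 688]

# If responses in the second block are slower, the gay/bad pairing was easier,
# which the test interprets as a stronger gay/bad association.
difference_ms = mean(gay_good_block) - mean(gay_bad_block)

# Dividing by the respondent's own reaction-time variability (roughly what the
# published "D score" does) makes scores more comparable across people.
pooled_sd = pstdev(gay_bad_block + gay_good_block)
d_like_score = difference_ms / pooled_sd

print(f"Mean difference: {difference_ms:.0f} ms; standardized score: {d_like_score:.2f}")
```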
Ideally, the IAT would provide not only a way to quantify bias, but a clear target in the quest to end it. If
the implicit associations the IAT measures are the cause of biased behavior, then untethering these
negative associations could eliminate it.
But an increasingly vocal group of critics now questions all these assumptions. One issue, according to
detractors, is that people’s IAT scores are not stable. In scientific parlance, the IAT has low “test-retest
reliability”; the same person might end up with different scores at different times. (If a bathroom scale
says you weigh 210 pounds today and 160 tomorrow, you might feel skeptical about the scale.) More
importantly, according to meta-analyses (a synthesis of all available studies on a topic), there’s a weak
relationship between a person’s IAT score and their actual behavior. In other words, people may be
acting in biased ways, but it’s not clear this is due to the mental construct measured by the IAT.
While this debate could threaten decades of IAT-based work on implicit bias, a third point of view has
also quietly emerged. Yes, the IAT has flaws, this perspective holds. But the meta-analyses criticizing it
also have flaws. Furthermore, the IAT is only one of many tools for measuring implicit associations, and
all these different tools tend to turn up the same results—the same preferences for certain social groups
over others. There is something truly consistent and real there, these results suggest.
Perhaps, this third view holds, what’s really going on is that implicit bias is more complex than these
tools can fully handle. Implicit associations may not be the stable entities scientists and others have
been imagining them to be. In fact, studies show that the specific associations that arise depend on a
person’s context and state of mind. People in one experiment, for instance, automatically associated
rich foods with positive things when they were prompted to focus on taste, and negative things when
they were prompted to focus on health. In this case, the test-retest reliability criticism would be
irrelevant; something that’s fluid and changeable shouldn’t be consistent. The fact that there’s any
correlation at all between IAT scores and behavior would be remarkable.
All of which is to say that while bias in the world is plainly evident, the exact sequence of mental events
that cause it is still a roiling question. Devine, for her part, told me that she is no longer comfortable
even calling this phenomenon “implicit bias.” Instead, she prefers “unintentional bias.” The term implicit
bias, she said, “has become so broad that it almost has no meaning.”
An hour into the workshop in Wisconsin, Devine rolled up one sleeve of her blue paisley blouse and
walked over to an African American student sitting in the front row. “People think, ‘If I don’t want to treat
people based on race, then I’m going to be colorblind, or gender-blind, or age-blind,’” she said. “It’s not a
very effective strategy. First, it’s impossible.” She held her pale forearm next to the student’s. “There’s a
difference,” she said. Students exchanged glances. “Who’s the man?” she continued, looking at Cox.
Then she raised her eyebrows and gestured to herself. “Who’s the old person?” It was a goofy-Dad joke,
but the students chuckled.
(How this borderline-aggressive approach affects students put on the spot is another question. Later, the
student Devine had approached with her bare arm said, “I was a little surprised, but I kind of appreciated
it.”)
If pointing out skin color in a workshop on overcoming bias seems strange, that may be part of the point.
Trying to ignore these differences, Devine says, makes discrimination worse. Humans see age and
gender and skin color: That’s vision. Humans have associations about these categories: That’s culture.
And humans use these associations to make judgments: That, Devine believes, is habit—something you
can engage in without knowing it, the way a person might nibble fingernails down to the bloody quick
before realizing they are even doing so.
The entire workshop is crafted as a way to break this habit. To do so, Devine said, you have to be aware
of it, motivated to change, and have a strategy for replacing it. Over the course of the two-hour
presentation, Cox and Devine hit all these notes: They walked through the science of how people can
act biased without realizing it. They barreled through mountains of evidence and detailed explanations
of how bias spreads. Halfway through the workshop, they gave students a chance to discuss how these
ideas relate to their own personal lives, and everyone had a story. A chemist recounted being steered to
a sales internship, despite seven years of chemistry training, because she had “such a nice personality.”
An African American student described the eerie sensation he experienced in Italy—before realizing it
was the feeling of not having shopkeepers follow and watch him.
At the end, Devine and Cox offered ideas for substitute habits. Observe your own stereotypes and
replace them, Cox said. Look for situational reasons for a person’s behavior, rather than stereotypes
about that person’s group. Seek out people who belong to groups unlike your own. Devine paced
among the desks, making eye contact with each student. “I submit to you,” she said, her voice steady
with conviction, “that prejudice is a habit that can be broken.”
It may sound a bit whiz-bang, but the data show that this seemingly simple intervention works. For
example, after Devine and a colleague presented a version of the workshop focused on gender bias to
STEM faculty at the University of Wisconsin, departmental hiring patterns began to change. In the two
years following the intervention, in departments that had received the training, the proportion of women
faculty hired rose from 32 percent to 47 percent—an increase of almost 50 percent—while in
departments that hadn’t received the training, the proportion remained flat. And in an independent
survey conducted months after the workshop, faculty members in participating departments—both
women and men—reported feeling more comfortable bringing up family issues, and even feeling that
their research was more valued.
In another study, the researchers gave hundreds of undergraduate students a version of the intervention
focused on racial bias. Weeks afterwards, students who had participated noticed bias more in others
than did students who hadn’t participated, and they were more likely to label the bias they perceived as
wrong. Notably, the impact seemed to last: Two years later, students who took part in a public forum on
race were more likely to speak out against bias if they had participated in the training.
In treating bias as a habit, the Madison approach is unique. Many psychology experiments that try to
change implicit bias treat it as something like blood pressure—a condition that can be adjusted, not a
behavior to be overcome. The Madison approach aims to make unconscious patterns conscious and
intentional. “The problem is big. It’s going to require a variety of different strategies,” Devine says. “But if
people can address it within themselves, then I think it’s a start. If those individuals become part of
institutions, they may carry messages forward.”
The STEM work suggests this approach at least can have an impact on discrimination against women.
What the team has not yet determined is whether the race-focused interventions have an impact on the
experiences of people of color. That question is a current priority. “If we’re just making white people feel
better,” Devine says, “who cares?”
Another potential strength of the Madison approach is that it’s both rigorously experimental and tested in
the real world. When the Princeton psychologist Betsy Levy Paluck reviewed hundreds of interventions
designed to reduce prejudice, she found that only 11 percent of all experimental efforts were actually
tested outside of a laboratory. By contrast, few of the trainings that have become popular in Silicon
Valley and elsewhere are scientifically evaluated. This is very concerning to researchers, because the
trainings could be having literally any effect on people. In the words of an uncommonly candid
acupuncturist I once visited, “After this treatment, you might get better, you might get worse, or you
might just stay the same.”
Making things worse is a serious risk: A 2006 review of more than 700 companies that had implemented
diversity initiatives showed that after diversity training, the likelihood of black men and women advancing
in their organizations actually decreased. Proving that efforts like these work as intended, Paluck wrote,
“should be considered an ethical imperative, on the level of rigorous testing of medical interventions.”
Two days after the workshop, I sat down in a soaring, glass-walled building on campus with Patrick
Forscher, a postdoc in Devine’s lab and a co-author on many of the studies evaluating this workshop. I
wanted to know why this approach was working.
Forscher is shy; his voice was so low it was almost sub-auditory. Their success, Forscher explained,
may have something to do with the creation of the self. In the 1970s, a social psychologist named Milton
Rokeach posited that the self is made of many layers and that some layers are more central to one’s
self-concept than others. Values are highly central; beliefs a little less so. Something like associations
would likely be less central still. “If you’re asked to describe who you are,” said Forscher, “you’re more
likely to say, ‘I’m someone who values equality’ than ‘I’m someone who implicitly associates white
people with good.’”
This hierarchy matters, because the more central a layer is to self-concept, the more resistant it is to
change. It’s hard, for instance, to alter whether or not a person values the environment. But if you do
manage to shift one of these central layers, Forscher explained, the effect is far-reaching. “If you think of
therapy, the goal often is to change processes central to how people view themselves,” he said. “When
it works, it can create very large changes.”
The Madison workshop, for its part, zeroes in on people’s beliefs about bias—the belief that they might
be discriminating, the belief that discrimination is a problem, the belief that they can overcome their own
habit of prejudice. As a result of the workshop, these beliefs grow stronger. And beliefs might be just the
sweet spot that needs to be targeted. Call beliefs “the Goldilocks layer.” They’re a central enough part of
each person to unleash a torrent of other changes, but removed enough from entrenched core values
that they can, with the right kind of pressure, be shifted.
Watching Devine, I was struck by how charismatic and funny she is when presenting. It’s intentional. If
people feel attacked, she told me, they shut down. Avoiding blame is key. The resulting message is
carefully balanced: Bias is normal, but it’s not acceptable. You must change, but you’re not a bad
person. Watching Cox and Devine is like watching people play the classic game Operation, tweaking
specific beliefs without nicking the wrong reaction.
Still, this approach has shortcomings. A problem as complex as prejudice can’t possibly be solved by a
single psychological fix. Joelle Emerson, the CEO of Paradigm, a Silicon Valley consultancy that is in
the process of evaluating the effectiveness of its own trainings and interventions, believes that long-term
change must come through overhauling workplace systems and processes, not relying on individuals.
“Even the most well-designed training is not going to solve things by itself,” she said. “You have to
reinforce ideas within broader organizations.” Social scientists such as the psychologist Glenn Adams
have begun to call for a shift “from the task of changing individual hearts and minds to changing the
sociocultural worlds in which those hearts and minds are immersed.”
There is an elephant in the workshop, too. On the day I attended, almost all the students in the audience
were women or people of color, some seeking a way to combat bias directed at them. One student with
shoulder-length blond hair confided in me that she cared a lot about these topics, but had hoped the
workshop would address what to do about experiencing bias. The absence of white men in the group
was conspicuous. Cox said the crowd is usually more mixed, but the audience makeup may point to a
fundamental limitation of this kind of work: Its success depends on people already caring enough about
these issues to show up in the first place. Not everyone will seek out, in the middle of a weekday, a fairly
technical presentation about changing their own habits of mind.
That said, the workshop may come to them anyway. Forscher recently conducted what’s known as a
“network analysis” of the workshop’s effect—a look at how its effects spread throughout a community.
What he found, in the STEM gender-bias intervention, was that after the workshop, the people who
reported doing the most for gender equity weren’t those who’d attended the training, but those who
worked alongside them. It’s an unusual finding, and it’s not clear exactly what this means. But it’s
possible that as people who attended the workshop changed, they began influencing the people around
them.
When Devine first developed the idea that hidden stereotypes can influence people’s behavior without
their awareness, a colleague observed that her work revealed “the dark side of the mind.” Devine
disagreed. “I actually think that it reveals the mind,” she said. “I don’t think it’s dark; it’s real. You can’t
pretend it doesn’t exist.” Neural connections aren’t moral. What people do with them is. And as Devine
sees it, that doing starts with awareness.
And if there’s one thing the Madison workshops do truly shift, it is people’s concern that discrimination is
a widespread and serious problem. As people become more concerned, the data show, their awareness
of bias in the world grows, too. In the days after attending, I noticed my own spontaneous reactions to
other people to an almost overwhelming degree. The day I left Madison, in the lobby of my hotel, I saw
two people standing near the front desk. They were wearing worn, rumpled clothes, with ragged holes in
the knees. As I glanced at them, a story about them flickered in my mind. They weren’t guests staying
here, I thought; they must be friends of the clerk, visiting him on his break.
It was a tiny story, a minor assumption, but that’s how bias starts: as a flicker—unseen, unchecked—that
taps at behaviors, reactions, and thoughts. And this tiny story flitted through my mind for seconds before
I caught it. My God, I thought, is this how I’ve been living?
Afterwards, I kept watching for that flutter, like a person with a net in hand waiting for a dragonfly. And I
caught it, many times. Maybe this is the beginning of how my own prejudice ends. Watching for it.
Catching it and holding it up to the light. Releasing it. Watching for it again.
Jessica Nordell is a writer based in Minnesota.
www.scientificamerican.com/article/the-problem-with-implicit-bias-training/
The Problem with Implicit Bias Training
Tiffany L. Green, Nao Hagiwara
It’s well motivated, but there’s little evidence that it leads to meaningful changes in behavior
While the nation roils with ongoing protests against police violence and persistent societal racism, many
organizations have released statements promising to do better. These promises often include
improvements to hiring practices; a priority on retaining and promoting people of color; and pledges to
better serve those people as customers and clients.
As these organizations work to make good on their declarations, implicit bias training is often at the top
of the list. As the thinking goes, these nonconscious prejudices and stereotypes are spontaneously and
automatically activated and may inadvertently affect how white Americans see and treat Black people
and other people of color. The hope is that, with proper training, people can learn to recognize and
correct this damaging form of bias.
In the health care industry, implicit bias is among the likely culprits in many persistent racial and ethnic
disparities, like infant and maternal mortality, chronic diseases such as diabetes, and more recently,
COVID-19. Black Americans are about 2.5 times more likely to die from COVID-19 relative to whites,
and emerging data indicate that Native Americans are also disproportionately suffering from the
pandemic. Implicit biases may impact the ways in which clinicians and other health care professionals
diagnose and treat people of color, leading to worse outcomes. In response to these disparities,
Michigan and California have mandated implicit bias training for some health professionals.
There’s just one problem. We just don’t have the evidence yet that implicit bias training actually works.
To be sure, finding ways to counter unfair treatment is critical. The evidence is clear that implicit
prejudice, an affective component of implicit bias (i.e., feeling or emotion), exists among health care
providers with respect to Black and/or Latinx patients, as well as to dark-skinned patients not in those
categories. In turn, these biases lower the quality of patient-provider communication and result in lower
satisfaction with the healthcare encounter.
But while implicit bias trainings are multiplying, few rigorous evaluations of these programs exist. There
are exceptions; some implicit bias interventions have been conducted empirically among health care
professionals and college students. These interventions have been proven to lower scores on the
Implicit Association Test (IAT), the most commonly used implicit measure of prejudice and stereotyping.
But to date, none of these interventions has been shown to result in permanent, long-term reductions of
implicit bias scores or, more importantly, sustained and meaningful changes in behavior (i.e., narrowing
of racial/ethnic clinical treatment disparities).
Even worse, there is consistent evidence that bias training done the “wrong way” (think lukewarm
diversity training) can actually have the opposite impact, inducing anger and frustration among white
employees. What this all means is that, despite the widespread calls for implicit bias training, it will likely
be ineffective at best; at worst, it’s a poor use of limited resources that could cause more damage and
exacerbate the very issues it is trying to solve.
So, what should we do? The first thing is to realize that racism is not just an individual problem requiring
an individual intervention, but a structural and organizational problem that will require a lot of work to
change. It’s much easier for organizations to offer an implicit bias training than to take a long, hard look
and overhaul the way they operate. The reality is, even if we could reliably reduce individual-level bias,
various forms of institutional racism embedded in health care (and other organizations) would likely
make these improvements hard to maintain.
Explicit, uncritical racial stereotyping in medicine is one good example. We have known for many years
that race is a social construct rather than a proxy for genetic or biological differences. Even so, recent
work has identified numerous cases of race-adjusted clinical algorithms in medicine. In nephrology, for
example, race adjustments that make it appear as if Black patients have better kidney function than they
actually do can potentially lead to worse outcomes such as delays in referral for needed specialist care
or kidney transplantation. Other more insidious stereotyping characterizes Native Americans and African
Americans as more likely to be “noncompliant” with diet and lifestyle advice. These characterizations of
noncompliance as a function of attitudes and practices completely ignore structural factors such as
poverty, segregation and marketing—factors that create health inequities in the first place.
Meaningful progress at the structural and institutional levels takes longer than a few days of implicit bias
training. But there are encouraging examples of individuals who have fought successfully for structural
change within their health care organizations. For example, medical students at the University of
Washington successfully lobbied for race to be removed as a criterion for determining kidney function—
a process that took many years. Their success may have important implications for closing gaps in
disparities among patients with renal disease. And innovative new programs like the Mid-Ohio Farmacy
have linked health care providers with community-based organizations, and help providers address food
insecurity among their low-income patients—an issue that disproportionately impacts people of color.
(Doctors can write a “food prescription” that allows their patients to purchase fresh produce.)
None of this, of course, means that we should give up on trying to understand implicit bias or developing
evidence-based training that successfully reduces discriminatory behaviors at the individual level. What
it does mean is that we need to lean into the hard work of auditing long-standing practices that unfairly
stigmatize people of color and fail to take into account how health inequities evolve. Creating
organizations that value equity and ultimately produce better outcomes for people of color will be long,
hard work, but it’s necessary and it’s been a long time coming.
ABOUT THE AUTHOR(S)
Tiffany L. Green
Tiffany L. Green, Ph.D. is an assistant professor in the Department of Population Health Sciences and
the Department of Obstetrics and Gynecology at the University of Wisconsin-Madison.
Nao Hagiwara
Nao Hagiwara, Ph.D., is an associate professor in the Department of Psychology at Virginia
Commonwealth University.
Two Powerful Ways Managers Can Curb Implicit Biases
by Lori Mackenzie and Shelley Correll
OCTOBER 01, 2018
Many managers want to be more inclusive. They recognize the value of inclusion and diversity
and believe it’s the right thing to aspire to. But they don’t know how to get there.
For the most part, managers are not given the right tools to overcome the challenges posed by
implicit biases. The workshops companies invest in typically teach them to constantly check
their thoughts for bias. But this demands a lot of cognitive energy, so over time, managers go
back to their old habits.
Based on our work at the Stanford Women’s Leadership Lab, helping organizations across many
industries become more diverse and inclusive, our research shows there are two small — but
more powerful — ways managers can block bias: First, by closely examining and broadening their
definitions of success, and second, by asking what each person adds to their teams, what we call
their “additive contribution.”
The problem is that, when hiring, evaluating, or promoting employees, we often measure people
against our implicit assumptions of what talent looks like — our hidden “template of success.”
These templates potentially favor one group over others, even if members of each group were
equally likely to be successful.
Take, for example, the hiring process. While interviewing a candidate, we might ask her where
she went to school or to share her experiences. We genuinely believe we are gathering relevant
information that will help us decide objectively whether the person is a good fit for the job. But,
in fact, we are likely measuring that person against our hidden “template.” Did the person go to
the “right” school? Are her experiences similar to ours? Is her personality a close match with that
of the other employees on the team?
Not surprisingly, most managers end up hiring people who match their implicit template of
success. Now, this approach may seem like a recipe for sound decision-making. Wouldn’t those
people work best with the hiring manager and fit in with the rest of the team? Perhaps.
But this approach can pose a serious problem: Even if we want to be inclusive, the template itself
may inadvertently invite bias by giving preference to more traditional candidates or “the safe
bet.” In finance, for example, that might mean believing — based on no evidence — that only MBA
graduates from an elite university are likely to succeed at their jobs. Even if we apply that criterion
fairly to every candidate, it can lead to an implicit preference for hiring white males. After all, 60
to 70% of graduates of elite MBA programs are male — and very few are people of color.
Take another example: In positions that demand skills for working in an open-source context, our
hidden template of success might lead us to believe, again with no evidence, that only someone
who is already part of the open-source community can do the job well. This narrow definition,
however, will result in the same kind of candidate being picked over and over. Those who
volunteer in the open-source community often do so outside and beyond their paid “day” job
hours, which pretty much excludes people with care-giving roles and other responsibilities
outside of work. As a result, open-source communities are typically 3 to 5% women and mostly
younger men. You can see how replicating the template of success can quickly translate into
sameness. And sameness blocks performance and innovation.
Diversity, on the other hand, spurs innovation. In research spanning decades, Columbia professor
Katherine Phillips has repeatedly found that, when tasked to innovate, teams that include diverse
members and that value the contributions of all their members outperform homogenous teams.
When working across difference, Phillips finds that team members work harder. They have to in
order to communicate and to reach consensus with others who may not share the same
experiences or perspectives. This makes all members of the team think more deeply and arrive at
better decisions. Diversity, as Phillips writes, “makes us smarter.”
The Power of Additive Contribution
To block our implicit biases, we need to challenge the assumptions behind our templates for
success. We need to ask if the criteria used to evaluate candidates will lead us to choose
employees who will add to our team success or simply replicate the status quo. For example, is an
MBA from a top business school really necessary to be successful in this position? It may be, or
maybe we’re privileging some criteria without evidence that they are necessary for success. We
need to ask questions that help us determine how a person adds to the portfolio of experiences
and skills across our entire team.
Focusing on additive contribution, a term we developed in a collaboration with Alix Hughes,
diversity program leader at Amazon, is a powerful way to avoid sameness in a team and to foster
inclusion and innovation. When we consider others’ additive contributions, we open the door to
people who might not traditionally match our implicit template of success, who are not like “us.”
We make our teams more diverse and more successful.
So how can you ask questions that help you determine someone’s additive contribution? Here
are four ways:
Clarify ambiguous criteria for success. First ask, “What are my hidden ‘preferences?’” Then
challenge your hidden preferences by asking what are the mindsets, skills, and diverse
experiences that actually lead your team to success. This may make you more effective at hiring
people who will thrive in your organization. Instead of asking about prior open-source
experience, for example, you might seek someone who can discuss critical points effectively and
respectfully in an environment of open debate.
Focus on a person’s value to your team. Ask, “How does this person’s approach help us get to
better discussions and decisions?” or “Does this person help me see outside my ‘box’?” Professor
Mary Murphy, an expert on growth mindsets in organizations, offered this question: “How can
[or does] this person add to the total value (composition) of our team?” By asking questions like
these, you are more likely to move beyond your hidden template of success and avoid any
implicit bias that might come along with it.
Run a gap analysis. Ask, “What skills and experiences am I missing on my team that this person
has?” Be careful not to focus on one-dimensional characteristics. For example, don’t determine
you need “a woman to round out the team.” Diversity for diversity’s sake often leads others to
make negative assumptions about your people decisions — and about those you hire or promote.
Criteria still matter. Instead, look at how people can add to the total portfolio of mindsets, skills,
and experiences on the team.
Consider their journey. Ask, “What has this person learned from her experiences? Can she take
risks and persevere through difficulties?” We often perceive being quickly promoted as an
indicator of someone’s talent. But using this criterion might lead you to overlook the value of grit
and perseverance. If a person took a risk and it did not pay off, for example, they may have
learned more than a person who took a safer path. The lessons people learn throughout their
careers are often the key to uncovering their additive contribution.
Small Wins, Big Payoff
In 2016 Anton Hanebrink, Intuit’s Chief Corporate Strategy and Development Officer, took over a
high-performing team known for its contributions to the direction of the company. The team’s
historical approach to finding top talent had been simple — target graduates of top universities
and MBA programs with experience at leading management consulting firms or investment
banks. While these filters simplified the screening process, they also led to a relatively
homogenous way of viewing the world.
Seeking to find a better way, he pushed his team to broaden how they thought about top
talent. The breakthrough for Anton and his team came during an offsite we facilitated for the
company on implicit biases in criteria, such as only hiring people from elite universities. The
company’s CFO asked a crowd of the company’s most accomplished finance leaders to raise their
hand if they had attended an Ivy League school. Hardly anyone raised their hand.
Seizing on the moment, Anton pushed his team to examine this historical criterion more
closely. His team discovered it was not an effective marker of how well the person would perform
in the organization. It was, indeed, just a hidden preference. In reality, many of the top
performers at Intuit, including the CEO and CFO, did not hold degrees from an Ivy League school.
Energized by Anton’s charge, the team worked together to define the skills, experiences, and
mindsets that actually were necessary to succeed in the team. They identified the abilities to
structure ambiguous problems, influence change at senior levels, and effectively develop team
members as the key contributions an incoming executive should add to the team. None of these
abilities would be guaranteed by a credential earned, sometimes decades ago.
After reconsidering their template of success, the team’s approach to hiring changed significantly.
They especially improved how they interviewed candidates, engaging them more deeply and
thoughtfully on the core skills of the job than they had in the past. They even went so far as to
redact the names of schools and prior employers during the interview process.
As a result, the team hired top talent whose diverse backgrounds have added to their total
portfolio of skills. Anton’s team achieved more gender and racial diversity as well. By redefining
success, a greater diversity of people were able to be seen for their leadership. The breadth of
talent has led to a more rigorous debate of ideas and enabled the team to navigate new business
opportunities and identify critical strategic insights they would have missed with their old
approach to recruiting talent.
That’s the power of reexamining our assumptions and considering people’s additive
contributions. They constitute small changes on our part, but the payoff is significant.
Lori Mackenzie is Executive Director of the Clayman Institute at Stanford University and co-founder of Stanford
VMware Women’s Leadership Innovation Lab.
Shelley Correll is professor of sociology and organizational behavior at Stanford University, Director of the
Stanford VMware Women’s Leadership Innovation Lab, and the Barbara D. Finberg Director of the Clayman Institute for
Gender Research.
Outsmart Your Own Biases
How to broaden your thinking and make better decisions
by Jack B. Soll, Katherine L. Milkman, and John W. Payne
From the Magazine (May 2015)

Summary: When making decisions, we all rely too heavily on intuition and use flawed reasoning
sometimes. But it’s possible to fight these pernicious sources of bias by learning to spot them
and using the techniques presented in this article, gleaned from the latest research….
Suppose you’re evaluating a job candidate to lead a new office in a
different country. On paper this is by far the most qualified person
you’ve seen. Her responses to your interview questions are flawless. She
has impeccable social skills. Still, something doesn’t feel right. You can’t
put your finger on what—you just have a sense. How do you decide
whether to hire her?
You might trust your intuition, which has guided you well in the past,
and send her on her way. That’s what most executives say they’d do
when we pose this scenario in our classes on managerial decision
making. The problem is, unless you occasionally go against your gut,
you haven’t put your intuition to the test. You can’t really know it’s
helping you make good choices if you’ve never seen what happens when
you ignore it.
It can be dangerous to rely too
heavily on what experts call System
1 thinking—automatic judgments
that stem from associations stored
in memory—instead of logically
working through the information
that’s available. No doubt, System 1
is critical to survival. It’s what
makes you swerve to avoid a car accident. But as the psychologist Daniel
Kahneman has shown, it’s also a common source of bias that can result
in poor decision making, because our intuitions frequently lead us
astray. Other sources of bias involve flawed System 2 thinking—
essentially, deliberate reasoning gone awry. Cognitive limitations or
laziness, for example, might cause people to focus intently on the wrong
things or fail to seek out relevant information.
We are all susceptible to such biases, especially when we’re fatigued,
stressed, or multitasking. Just think of a CEO who’s negotiating a merger
while also under pressure from lawyers to decide on a plant closing and
from colleagues to manage layoffs. In situations like this, we’re far from
decision-ready—we’re mentally, emotionally, and physically spent. We
cope by relying even more heavily on intuitive, System 1 judgments and
less on careful reasoning. Decision making becomes faster and simpler,
but quality often suffers.
One solution is to delegate and to fight bias at the organizational level,
using choice architecture to modify the environment in which decisions
are made. (See “Leaders as Decision Architects,” in this issue.) Much of
the time, though, delegation isn’t appropriate, and it’s all on you, the
manager, to decide. When that’s the case, you can outsmart your own
biases. You start by understanding where they’re coming from:
excessive reliance on intuition, defective reasoning, or both. In this
article, we describe some of the most stubborn biases out there: tunnel
vision about future scenarios, about objectives, and about options. But
awareness alone isn’t enough, as Kahneman, reflecting on his own
experiences, has pointed out. So we also provide strategies for
overcoming biases, gleaned from the latest research on the psychology
of judgment and decision making.
First, though, let’s return to that candidate you’re considering. Perhaps
your misgivings aren’t really about her but about bigger issues you
haven’t yet articulated. What if the business environment in the new
region isn’t as promising as forecast? What if employees have problems
collaborating across borders or coordinating with the main office?
Answers to such questions will shape decisions to scale back or manage
continued growth, depending on how the future unfolds. So you should
think through contingencies now, when deciding whom to hire.
But asking those bigger, tougher questions does not come naturally.
We’re cognitive misers—we don’t like to spend our mental energy
entertaining uncertainties. It’s easier to seek closure, so we do. This
hems in our thinking, leading us to focus on one possible future (in this
case, an office that performs as projected), one objective (hiring someone
who can manage it under those circumstances), and one option in
isolation (the candidate in front of us). When this narrow thinking
weaves a compelling story, System 1 kicks in: Intuition tells us,
prematurely, that we’re ready to decide, and we venture forth with great,
unfounded confidence. To “debias” our decisions, it’s essential to
broaden our perspective on all three fronts.
Thinking About the Future
Nearly everyone thinks too
narrowly about possible outcomes.
Some people make one best guess
and stop there (“If we build this
factory, we will sell 100,000 more
cars a year”). Others at least try to
hedge their bets (“There is an 80%
chance we will sell between 90,000 and 110,000 more cars”).
Unfortunately, most hedging is woefully inadequate. When researchers
asked hundreds of chief financial officers from a variety of industries to
forecast yearly returns for the S&P 500 over a nine-year horizon, their
80% ranges were right only one-third of the time. That’s a terribly low
rate of accuracy for a group of executives with presumably vast
knowledge of the U.S. economy. Projections are even further off the
mark when people assess their own plans, partly because their desire to
succeed skews their interpretation of the data. (As former Goldman
Sachs CFO David Viniar once put it, “The lesson you always learn is that
your definition of extreme is not extreme enough.”)
Because most of us tend to be highly overconfident in our estimates, it’s
important to “nudge” ourselves to allow for risk and uncertainty. The
following methods are especially useful.
Make three estimates.
What will be the price of crude oil in January 2017? How many new
homes will be built in the United States next year? How many memory
chips will your customers order next month? Such forecasts shape
decisions about whether to enter a new market, how many people to
hire, and how many units to produce. To improve your accuracy, work
up at least three estimates—low, medium, and high—instead of just
stating a range. People give wider ranges when they think about their
low and high estimates separately, and coming up with three numbers
prompts you to do that.
Your low and high guesses should be unlikely but still within the realm
of possibility. For example, on the low end, you might say, “There’s a
10% chance that we’ll sell fewer than 10,000 memory chips next
month.” And on the high end, you might foresee a 10% chance that sales
will exceed 50,000. With this approach, you’re less likely to get
blindsided by events at either extreme—and you can plan for them.
(How will you ramp up production if demand is much higher than
anticipated? If it’s lower, how will you deal with excess inventory and
keep the cash flowing?) Chances are, your middle estimate will bring
you closer to reality than a two-number range would.
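As a rough illustration of the bookkeeping this habit implies, here is a short Python sketch. The forecasts, quantities, and the ThreePointForecast class are hypothetical, not drawn from the article or the underlying research; the point is only that a set of 10th-percentile, median, and 90th-percentile estimates defines an 80% range whose hit rate you can check later.

```python
# Hypothetical sketch of keeping three estimates (low / middle / high) per forecast
# and later checking how often the actual outcome fell inside the implied 80% range.
from dataclasses import dataclass

@dataclass
class ThreePointForecast:
    question: str
    low: float      # "only a 10% chance the outcome comes in below this"
    middle: float   # best single guess
    high: float     # "only a 10% chance the outcome comes in above this"

    def contains(self, actual: float) -> bool:
        """True if the actual outcome landed inside the implied 80% interval."""
        return self.low <= actual <= self.high

# Invented forecasts paired with invented actual outcomes.
records = [
    (ThreePointForecast("memory chips ordered next month", 10_000, 28_000, 50_000), 31_500),
    (ThreePointForecast("year-one revenue of the new office ($M)", 1.2, 2.5, 4.0), 4.6),
]

hits = sum(forecast.contains(actual) for forecast, actual in records)
print(f"{hits} of {len(records)} actual outcomes fell inside the stated 80% ranges")
# A well-calibrated forecaster should land inside about 80% of the time;
# the CFO study cited above found a rate closer to one-third.
```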
Think twice.
A related exercise is to make two forecasts and take the average. For
instance, participants in one study made their best guesses about dates
in history, such as the year the cotton gin was invented. Then, asked to
assume that their first answer was wrong, they guessed again. Although
one guess was generally no closer than the other, people could harness
the “wisdom of the inner crowd” by averaging their guesses; this
strategy was more accurate than relying on either estimate alone.
Research also shows that when people think more than once about a
problem, they often come at it with a different perspective, adding
valuable information. So tap your own inner crowd and allow time for
reconsideration: Project an outcome, take a break (sleep on it if you can),
and then come back and project another. Don’t refer to your previous
estimate—you’ll only anchor yourself and limit your ability to achieve
new insights. If you can’t avoid thinking about your previous estimate,
then assume it was wrong and consider reasons that support a different
guess.
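And here is a toy version of the averaging step, again with invented guesses; the research claim is only that, on average, the mean of two independent guesses tends to beat either guess alone.

```python
# Toy example of the "inner crowd": average two of your own estimates.
first_guess = 1820    # initial guess at the year the cotton gin was invented
second_guess = 1780   # later guess, made after assuming the first was wrong
actual = 1793         # the cotton gin dates to 1793

inner_crowd = (first_guess + second_guess) / 2

for label, guess in [("first", first_guess), ("second", second_guess), ("average", inner_crowd)]:
    print(f"{label:>7} guess: off by {abs(guess - actual):.0f} years")
```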
Use premortems.
In a postmortem, the task is
typically to understand the cause of
a past failure. In a premortem, you
imagine a future failure and then
explain the cause. This technique,
also called prospective hindsight,
helps you identify potential
problems that ordinary foresight won’t bring to mind. If you’re a
manager at an international retailer, you might say: “Let’s assume it’s
2025, and our Chinese outlets have lost money every year since 2015.
Why has that happened?”
Thinking in this way has several benefits. First, it tempers optimism,
encouraging a more realistic assessment of risk. Second, it helps you
prepare backup plans and exit strategies. Third, it can highlight factors
that will influence success or failure, which may increase your ability to
control the results.
Perhaps Home Depot would have benefited from a premortem before
deciding to enter China. By some accounts, the company was forced to
close up shop there because it learned too late that China isn’t a do-it-
yourself market. Apparently, given how cheap labor is, middle-class
Chinese consumers prefer to contract out their repairs. Imagining low
demand in advance might have led to additional market research
(asking Chinese consumers how they solve their home-repair problems)
and a shift from do-it-yourself products to services.
Take an outside view.
Now let’s say you’re in charge of a new-product development team.
You’ve carefully devised a six-month plan—about which you are very
confident—for initial design, consumer testing, and prototyping. And
you’ve carefully worked out what you’ll need to manage the team
optimally and why you expect to succeed. This is what Dan Lovallo and
Daniel Kahneman call taking an “inside view” of the project, which
typically results in excessive optimism. You need to complement this
perspective with an outside view—one that considers what’s happened
with similar ventures and what advice you’d give someone else if you
weren’t involved in the endeavor. Analysis might show, for instance,
that only 30% of new products in your industry have turned a profit
within five years. Would you advise a colleague or a friend to accept a
70% chance of failure? If not, don’t proceed unless you’ve got evidence
that your chances of success are substantially better than everyone
else’s.
An outside view also prevents the “planning fallacy”—spinning a
narrative of total success and managing for that, even though your odds
of failure are actually pretty high. If you take a cold, hard look at the
costs and the time required to develop new products in your market, you
might see that they far outstrip your optimistic forecast, which in turn
might lead you to change or scrap your plan.
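To make the outside-view check in this section concrete, here is a small hypothetical sketch that compares an inside-view estimate of success with the industry base rate. The 30% base rate echoes the example above; the 80% internal estimate and the 20-point gap threshold are assumptions chosen purely for illustration.

```python
# Illustrative sketch: flag an inside-view success estimate that sits far above
# the outside-view base rate. The 0.20 gap threshold is an arbitrary assumption.
def outside_view_check(inside_view_success: float, base_rate: float) -> str:
    gap = inside_view_success - base_rate
    if gap > 0.20:
        return (f"Inside view is {gap:.0%} above the {base_rate:.0%} base rate. "
                "What evidence shows your odds are substantially better than "
                "everyone else's?")
    return "Inside view is roughly in line with the base rate."

# 30% of new products turn a profit within five years (the example above);
# the 80% internal estimate is hypothetical.
print(outside_view_check(inside_view_success=0.80, base_rate=0.30))
```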
Thinking About Objectives
It’s important to have an expansive mindset about your objectives, too.
This will help you focus when it’s time to pick your most suitable
options. Most people unwittingly limit themselves by allowing only a
subset of worthy goals to guide them, simply because they’re unaware of
the full range of possibilities.
That’s a trap the senior management team at Seagate Technology sought
to avoid in the early 1990s, when the company was the world’s largest
manufacturer of disk drives. After acquiring a number of firms, Seagate
approached the decision analyst Ralph Keeney for help in figuring out
how to integrate them into a single organization. Keeney conducted
individual interviews with 12 of Seagate’s top executives, including the
CEO, to elicit the firm’s goals. By synthesizing their responses, he
identified eight general objectives (such as creating the best software
organization and providing value to customers) and 39 specific ones
(such as developing better product standards and reducing customer
costs). Tellingly, each executive named, on average, only about a third of
the specific objectives, and only one person cited more than half. But
with all the objectives mapped out, senior managers had a more
comprehensive view and a shared framework for deciding which
opportunities to pursue. If they hadn’t systematically reflected on their
goals, some of those prospects might have gone undetected.
Early in the decision-making process, you want to generate many
objectives. Later you can sort out which ones matter most. Seagate, for
example, placed a high priority on improving products because that
would lead to more satisfied customers, more sales, and ultimately
greater profits. Of course, there are other paths to greater profits, such as
developing a leaner, more efficient workforce. Articulating,
documenting, and organizing your goals helps you see those paths
clearly so that you can choose the one that makes the most sense in light
of probable outcomes.
Take these steps to ensure that you’re reaching high—and far—enough
with your objectives.
Seek advice.
Round out your perspective by looking to others for ideas. In one study,
researchers asked MBA students to list all their objectives for an
internship. Most mentioned seven or eight things, such as “improve my
attractiveness for full-time job offers” and “develop my leadership
skills.” Then they were shown a master list of everyone’s objectives and
asked which ones they considered personally relevant. Their own lists
doubled in size as a result—and when participants ranked their goals
afterward, those generated by others scored as high as those they had
come up with themselves.
Outline objectives on your own before seeking advice so that you don’t
get “anchored” by what others say. And don’t anchor your advisers by
leading with what you already believe (“I think our new CFO needs to
have experience with acquisitions—what do you think?”). If you are
making a decision jointly with others, have people list their goals
independently and then combine the lists, as Keeney did at Seagate.
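A tiny hypothetical sketch of that last step, merging goal lists that were drafted independently, might look like the following; the goal wording borrows from the Seagate examples above, but the lists themselves are invented.

```python
# Illustrative sketch: combine independently written goal lists so that no
# one's objectives anchor anyone else's. The lists below are hypothetical.
exec_a = {"develop better product standards", "reduce customer costs"}
exec_b = {"reduce customer costs", "create the best software organization"}

combined_goals = sorted(exec_a | exec_b)  # set union drops duplicates
for goal in combined_goals:
    print(goal)
```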
Cycle through your objectives.
Drawing on his consulting work and lab experiments, Keeney has
found that looking at objectives one by one rather than all at once helps
people come up with more alternatives. Seeking a solution that checks
off every single box is too difficult—it paralyzes the decision maker.
So, when considering your goals for, say, an off-site retreat, tackle
one at a time. If you want people to exchange lessons from the past
year, develop certain leadership skills, and deepen their understanding
of strategic priorities, thinking about these aims separately can help
you achieve them more effectively. You might envision multiple sessions
or even different events, from having expert facilitators lead
brainstorming sessions to attending a leadership seminar at a top
business school.
Next, move on to combinations of objectives. To develop leadership
skills and entertain accompanying family members, you might consider
an Outward Bound–type experience. Even if you don’t initially like an
idea, write it down—it may spark additional ideas that satisfy even more
objectives.
Thinking About Options
Although you need a critical mass of options to make sound decisions,
you also need to find strong contenders—at least two but ideally three to
five. Of course, it’s easy to give in to the tug of System 1 thinking and
generate a false choice to rationalize your intuitively favorite option
(like a parent who asks an energetic toddler, “Would you like one nap or
two today?”). But then you’re just duping yourself. A decision can be no
better than the best option under consideration. Even System 2 thinking
is often too narrow. Analyzing the pros and cons of several options won’t
do you any good if you’ve failed to identify the best ones.
Unfortunately, people rarely consider more than one at a time.
Managers tend to frame decisions as yes-or-no questions instead of
generating alternatives. They might ask, for instance, “Should we
expand our retail furniture business into Brazil?” without questioning
whether expansion is even a good idea and whether Brazil is the best
place to go.
Yes-no framing is just one way we narrow our options. Others include
focusing on one type of solution to a problem (what psychologists call
functional fixedness) and being constrained by our assumptions about
what works and what doesn’t. All these are signs of cognitive rigidity,
which gets amplified when we feel threatened by time pressure,
negative emotions, exhaustion, and other stressors. We devote mental
energy to figuring out how to avoid a loss rather than developing new
possibilities to explore.
Use joint evaluation.
The problem with evaluating options in isolation is that you can’t ensure
the best outcomes. Take this scenario from a well-known study: A
company is looking for a software engineer to write programs in a new
computer language. There are two applicants, recent graduates of the
same esteemed university. One has written 70 programs in the new
language and has a 3.0 (out of 5.0) grade point average. The other has
written 10 programs and has a 4.9 GPA. Who gets the higher offer?
The answer will probably depend on whether you look at both
candidates side by side or just one. In the study, most people who
considered the two programmers at the same time—in joint evaluation
mode—wanted to pay more money to the more prolific recruit, despite
his lower GPA. However, when other groups of people were asked about
only one programmer each, proposed salaries were higher for the one
with the better GPA. It is hard to know whether 70 programs is a lot or a
little when you have no point of comparison. In separate evaluation
mode, people pay attention to what they can easily evaluate—in this
case, academic success—and ignore what they can’t. They make a
decision without considering all the relevant facts.
A proven way to snap into joint evaluation mode is to consider what
you’ll be missing if you make a certain choice. That forces you to search
for other possibilities. In a study at Yale, 75% of respondents said yes
when asked, “Would you buy a copy of an entertaining movie for
$14.99?” But only 55% said yes when explicitly told they could either buy
the movie or keep the money for other purchases. That simple shift to
joint evaluation highlights what economists call the opportunity cost—
what you give up when you pursue something else.
Try the “vanishing options” test.
Once people have a solid option, they usually want to move on, so they
fail to explore alternatives that may be superior. To address this
problem, the decision experts Chip Heath and Dan Heath recommend a
mental trick: Assume you can’t choose any of the options you’re
weighing and ask, “What else could I do?” This question will trigger an
exploration of alternatives. You could use it to open up your thinking
about expanding your furniture business to Brazil: “What if we couldn’t
invest in South America? What else could we do with our resources?”
That might prompt you to consider investing in another region instead,
making improvements in your current location, or giving the online
store a major upgrade. If more than one idea looked promising, you
might split the difference: for instance, test the waters in Brazil by
leasing stores instead of building them, and use the surplus for
improvements at home.
Fighting Motivated Bias
All these cognitive biases—narrow thinking about the future, about
objectives, and about options—are said to be “motivated” when driven
by an intense psychological need, such as a strong emotional
attachment or investment. Motivated biases are especially difficult to
overcome. You know this if you’ve ever poured countless hours and
resources into developing an idea, only to discover months later that
someone has beaten you to it. You should move on, but your desire to
avoid a loss is so great that it distorts your perception of benefits and
risks. And so you feel an overwhelming urge to forge ahead—to prove
that your idea is somehow bigger or better.
Our misguided faith in our own judgment makes matters worse. We’re
overconfident for two reasons: We give the information we do have too
much weight (see the sidebar “How to Prevent Misweighting”). And
because we don’t know what we can’t see, we have trouble imagining
other ways of framing the problem or working toward a solution.
But we can preempt some motivated biases, such as the tendency to
doggedly pursue a course of action we desperately want to take, by using
a “trip wire” to redirect ourselves to a more logical path. That’s what
many expedition guides do when leading clients up Mount Everest:
They announce a deadline in advance. If the group fails to reach the
summit by then, it must head back to camp—and depending on weather
conditions, it may have to give up on the expedition entirely. From a
rational perspective, the months of training and preparation amount to
sunk costs and should be disregarded. When removed from the
situation, nearly everyone would agree that ignoring the turnaround
time would put lives at stake and be too risky. However, loss aversion is
a powerful psychological force. Without a trip wire, many climbers do
push ahead, unwilling to give up their dream of conquering the
mountain. Their tendency to act on emotion is even stronger because
System 2 thinking is incapacitated by low oxygen levels at high
altitudes. As they climb higher, they become less decision-ready—and
in greater need of a trip wire.
In business, trip wires can make people less vulnerable to “present bias”
—the tendency to focus on immediate preferences and ignore long-term
aims and consequences. For instance, if you publicly say when you’ll
seek the coaching that your boss wants you to get (and that you’ve been
putting off even though you know it’s good for you), you’ll be more apt
to follow through. Make your trip wire precise (name a date) so that
you’ll find it harder to disregard later, and share it with people who will
hold you accountable.
Another important use of trip wires is in competitive bidding situations,
where the time and effort already invested in a negotiation may feel like
a loss if no deal is reached. Executives often try to avoid that loss by
escalating their commitment, overpaying by millions or even billions of
dollars. The thing is, preferences often change over the course of a
negotiation (for example, new information that comes to light may
justify paying a higher price). So in this sort of situation, consider setting
a decision point—a kind of trip wire that’s less binding because it triggers
thinking instead of a certain action. If the deal price escalates beyond
your trigger value, take a break and reassess your objectives and options.
Decision points provide greater flexibility than “hard” trip wires, but
because they allow for multiple courses of action, they also increase
your risk of making short-term, emotion-based decisions.
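As an entirely hypothetical sketch of the distinction: a hard trip wire dictates the action in advance, whereas a decision point only prompts a pause to reassess. The price figures and function names below are invented for illustration.

```python
# Illustrative sketch: a hard trip wire forces a predetermined action, while a
# decision point only triggers a pause for reflection. Numbers are hypothetical.
def hard_trip_wire(current_bid: float, walk_away_price: float) -> str:
    """Binding rule set in advance: exceed the limit and you must stop."""
    return "Stop bidding" if current_bid > walk_away_price else "Continue"

def decision_point(current_bid: float, trigger_price: float) -> str:
    """Softer rule: exceed the trigger and you pause to reassess, not stop."""
    if current_bid > trigger_price:
        return "Pause: take a break and reassess objectives and options"
    return "Continue"

print(hard_trip_wire(current_bid=110_000_000, walk_away_price=100_000_000))
print(decision_point(current_bid=110_000_000, trigger_price=100_000_000))
```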
Although narrow thinking can plague us at any time, we’re especially
susceptible to it when faced with one-off decisions, because we can’t
learn from experience. So tactics that broaden our perspective on
possible futures, objectives, and options are particularly valuable in
these situations. Some tools, such as checklists and algorithms, can
improve decision readiness by reducing the burden on our memory or
attention; others, such as trip wires, ensure our focus on a critical event
when it happens.
As a rule of thumb, it’s good to anticipate three possible futures,
establish three key objectives, and generate three viable options for each
decision scenario. We can always do more, of course, but this general
approach will keep us from feeling overwhelmed by endless possibilities
—which can be every bit as debilitating as seeing too few.
Even the smartest people exhibit biases in their judgments and choices.
It’s foolhardy to think we can overcome them through sheer will. But we
can anticipate and outsmart them by nudging ourselves in the right
direction when it’s time to make a call.
A version of this article appeared in the May 2015 issue (pp. 64–71) of Harvard Business Review.
Jack B. Soll is an associate professor of management at Duke University's
Fuqua School of Business. He is a coauthor of "A User's Guide to Debiasing,"
a chapter in The Wiley Blackwell Handbook of Judgment and Decision Making,
forthcoming in 2015.

Katherine L. Milkman is the James G. Campbell Jr. Assistant Professor of
Operations and Information Management at the University of Pennsylvania's
Wharton School. She is a coauthor of "A User's Guide to Debiasing," a chapter
in The Wiley Blackwell Handbook of Judgment and Decision Making, forthcoming
in 2015.

John W. Payne is the Joseph J. Ruvane Jr. Professor of Business Administration
at Fuqua. He is a coauthor of "A User's Guide to Debiasing," a chapter in The
Wiley Blackwell Handbook of Judgment and Decision Making, forthcoming in 2015.