17 Do Artifacts Have Politics?

Langdon Winner

No idea is more provocative in controversies about technology and society than the notion that
technical things have political qualities. At issue is the claim that the machines, structures, and
systems of modern material culture can be accurately judged not only for their contributions to
efficiency and productivity and their positive and negative environmental side effects, but also for
the ways in which they can embody specific forms of power and authority. Since ideas of this kind
are a persistent and troubling presence in discussions about the meaning of technology, they
deserve explicit attention.

It is no surprise to learn that technical systems of various kinds are deeply interwoven in the
conditions of modern politics. The physical arrangements of industrial production, warfare, com-
munications, and the like have fundamentally changed the exercise of power and the experience of
citizenship. But to go beyond this obvious fact and to argue that certain technologies in themselves
have political properties seems, at first glance, completely mistaken. We all know that people have
politics; things do not. To discover either virtues or evils in aggregates of steel, plastic, transistors,
integrated circuits, chemicals, and the like seems just plain wrong, a way of mystifying human
artifice and of avoiding the true sources, the human sources of freedom and oppression, justice and
injustice. Blaming the hardware appears even more foolish than blaming the victims when it comes
to judging conditions of public life.

Hence, the stern advice commonly given those who flirt with the notion that technical arti-
facts have political qualities: What matters is not technology itself, but the social or economic sys-
tem in which it is embedded. This maxim, which in a number of variations is the central premise
of a theory that can be called the social determination of technology, has an obvious wisdom. It
serves as a needed corrective to those who focus uncritically upon such things as ‘‘the computer
and its social impacts’’ but who fail to look behind technical devices to see the social circumstances
of their development, deployment, and use. This view provides an antidote to naïve technological
determinism—the idea that technology develops as the sole result of an internal dynamic and then,
unmediated by any other influence, molds society to fit its patterns. Those who have not recognized
the ways in which technologies are shaped by social and economic forces have not gotten very far.

But the corrective has its own shortcomings; taken literally, it suggests that technical things
do not matter at all. Once one has done the detective work necessary to reveal the social origins—

From Langdon Winner, The Whale and the Reactor, 19–39. Copyright © 1986 by University of Chicago
Press. Reprinted by permission.

power holders behind a particular instance of technological change—one will have explained
everything of importance. This conclusion offers comfort to social scientists. It validates what they
had always suspected, namely, that there is nothing distinctive about the study of technology in the
first place. Hence, they can return to their standard models of social power—those of interest-group
politics, bureaucratic politics, Marxist models of class struggle, and the like—and have everything
they need. The social determination of technology is, in this view, essentially no different from the
social determination of, say, welfare policy or taxation.

There are, however, good reasons to believe that technology is politically significant in its
own right, good reasons why the standard models of social science only go so far in accounting for
what is most interesting and troublesome about the subject. Much of modern social and political
thought contains recurring statements of what can be called a theory of technological politics, an
odd mongrel of notions often crossbred with orthodox liberal, conservative, and socialist philoso-
phies.1 The theory of technological politics draws attention to the momentum of large-scale socio-
technical systems, to the response of modern societies to certain technological imperatives, and to
the ways human ends are powerfully transformed as they are adapted to technical means. This
perspective offers a novel framework of interpretation and explanation for some of the more puz-
zling patterns that have taken shape in and around the growth of modern material culture. Its start-
ing point is a decision to take technical artifacts seriously. Rather than insist that we immediately
reduce everything to the interplay of social forces, the theory of technological politics suggests that
we pay attention to the characteristics of technical objects and the meaning of those characteristics.
A necessary complement to, rather than a replacement for, theories of the social determination of
technology, this approach identifies certain technologies as political phenomena in their own right.
It points us back, to borrow Edmund Husserl’s philosophical injunction, to the things themselves.

In what follows I will outline and illustrate two ways in which artifacts can contain political
properties. First are instances in which the invention, design, or arrangement of a specific technical
device or system becomes a way of settling an issue in the affairs of a particular community. Seen
in the proper light, examples of this kind are fairly straightforward and easily understood. Second
are cases of what can be called ‘‘inherently political technologies,’’ man-made systems that appear
to require or to be strongly compatible with particular kinds of political relationships. Arguments
about cases of this kind are much more troublesome and closer to the heart of the matter. By the
term ‘‘politics’’ I mean arrangements of power and authority in human associations as well as the
activities that take place within those arrangements. For my purposes here, the term ‘‘technology’’
is understood to mean all of modern practical artifice, but to avoid confusion I prefer to speak of
‘‘technologies’’ plural, smaller or larger pieces or systems of hardware of a specific kind.2 My inten-
tion is not to settle any of the issues here once and for all, but to indicate their general dimensions
and significance.

TECHNICAL ARRANGEMENTS AND SOCIAL ORDER

Anyone who has traveled the highways of America and has gotten used to the normal height of
overpasses may well find something a little odd about some of the bridges over the parkways on
Long Island, New York. Many of the overpasses are extraordinarily low, having as little as nine
feet of clearance at the curb. Even those who happened to notice this structural peculiarity would
not be inclined to attach any special meaning to it. In our accustomed way of looking at things such
as roads and bridges, we see the details of form as innocuous and seldom give them a second
thought.

It turns out, however, that some two hundred or so low-hanging overpasses on Long Island
are there for a reason. They were deliberately designed and built that way by someone who wanted
to achieve a particular social effect. Robert Moses, the master builder of roads, parks, bridges, and
other public works of the 1920s to the 1970s in New York, built his overpasses according to speci-
fications that would discourage the presence of buses on his parkways. According to evidence pro-
vided by Moses’ biographer, Robert A. Caro, the reasons reflect Moses’ social class bias and racial
prejudice. Automobile-owning whites of ‘‘upper’’ and ‘‘comfortable middle’’ classes, as he called
them, would be free to use the parkways for recreation and commuting. Poor people and blacks,
who normally used public transit, were kept off the roads because the twelve-foot-tall buses could
not handle the overpasses. One consequence was to limit access of racial minorities and low-
income groups to Jones Beach, Moses’ widely acclaimed public park. Moses made doubly sure of
this result by vetoing a proposed extension of the Long Island Railroad to Jones Beach.

Robert Moses’ life is a fascinating story in recent U.S. political history. His dealings with
mayors, governors, and presidents; his careful manipulation of legislatures, banks, labor unions,
the press, and public opinion could be studied by political scientists for years. But the most impor-
tant and enduring results of his work are his technologies, the vast engineering projects that give
New York much of its present form. For generations after Moses’ death, and long after the alliances
he forged have fallen apart, his public works, especially the highways and bridges he built to favor the use of
the automobile over the development of mass transit, will continue to shape that city. Many of
his monumental structures of concrete and steel embody a systematic social inequality, a way of
engineering relationships among people that, after a time, became just another part of the land-
scape. As New York planner Lee Koppleman told Caro about the low bridges on Wantagh Parkway,
‘‘The old son of a gun had made sure that buses would never be able to use his goddamned park-
ways.’’3

Histories of architecture, city planning, and public works contain many examples of physical
arrangements with explicit or implicit political purposes. One can point to Baron Haussmann’s
broad Parisian thoroughfares, engineered at Louis Napoleon’s direction to prevent any recurrence
of street fighting of the kind that took place during the revolution of 1848. Or one can visit any
number of grotesque concrete buildings and huge plazas constructed on university campuses in the
United States during the late 1960s and early 1970s to defuse student demonstrations. Studies of
industrial machines and instruments also turn up interesting political stories, including some that
violate our normal expectations about why technological innovations are made in the first place. If
we suppose that new technologies are introduced to achieve increased efficiency, the history of
technology shows that we will sometimes be disappointed. Technological change expresses a pano-
ply of human motives, not the least of which is the desire of some to have dominion over others
even though it may require an occasional sacrifice of cost savings and some violation of the normal
standard of trying to get more from less.

One poignant illustration can be found in the history of nineteenth-century industrial mecha-
nization. At Cyrus McCormick’s reaper manufacturing plant in Chicago in the middle 1880s, pneu-
matic molding machines, a new and largely untested innovation, were added to the foundry at an
estimated cost of $500,000. The standard economic interpretation would lead us to expect that this
step was taken to modernize the plant and achieve the kind of efficiencies that mechanization
brings. But historian Robert Ozanne has put the development in a broader context. At the time,
Cyrus McCormick II was engaged in a battle with the National Union of Iron Molders. He saw the
addition of the new machines as a way to ‘‘weed out the bad element among the men,’’ namely, the
skilled workers who had organized the union local in Chicago.4 The new machines, manned by
unskilled laborers, actually produced inferior castings at a higher cost than the earlier process. After
three years of use the machines were, in fact, abandoned, but by that time they had served their
purpose—the destruction of the union. Thus, the story of these technical developments at the
McCormick factory cannot be adequately understood outside the record of workers’ attempts to
organize, police repression of the labor movement in Chicago during that period, and the events
surrounding the bombing at Haymarket Square. Technological history and U.S. political history
were at that moment deeply intertwined.

In the examples of Moses’ low bridges and McCormick’s molding machines, one sees the
importance of technical arrangements that precede the use of the things in question. It is obvious
that technologies can be used in ways that enhance the power, authority, and privilege of some over
others, for example, the use of television to sell a candidate. In our accustomed way of thinking
technologies are seen as neutral tools that can be used well or poorly, for good, evil, or something
in between. But we usually do not stop to inquire whether a given device might have been designed
and built in such a way that it produces a set of consequences logically and temporally prior to any
of its professed uses. Robert Moses’ bridges, after all, were used to carry automobiles from one
point to another; McCormick’s machines were used to make metal castings; both technologies,
however, encompassed purposes far beyond their immediate use. If our moral and political lan-
guage for evaluating technology includes only categories having to do with tools and uses, if it
does not include attention to the meaning of the designs and arrangements of our artifacts, then we
will be blinded to much that is intellectually and practically crucial.

Because the point is most easily understood in the light of particular intentions embodied in
physical form, I have so far offered illustrations that seem almost conspiratorial. But to recognize
the political dimensions in the shapes of technology does not require that we look for conscious
conspiracies or malicious intentions. The organized movement of handicapped people in the United
States during the 1970s pointed out the countless ways in which machines, instruments, and struc-
tures of common use—buses, buildings, sidewalks, plumbing fixtures, and so forth—made it
impossible for many handicapped persons to move freely about, a condition that systematically
excluded them from public life. It is safe to say that designs unsuited for the handicapped arose
more from long-standing neglect than from anyone’s active intention. But once the issue was
brought to public attention, it became evident that justice required a remedy. A whole range of
artifacts have been redesigned and rebuilt to accommodate this minority.

Indeed, many of the most important examples of technologies that have political conse-
quences are those that transcend the simple categories ‘‘intended’’ and ‘‘unintended’’ altogether.
These are instances in which the very process of technical development is so thoroughly biased in
a particular direction that it regularly produces results heralded as wonderful breakthroughs by
some social interests and crushing setbacks by others. In such cases it is neither correct nor insight-
ful to say, ‘‘Someone intended to do somebody else harm.’’ Rather one must say that the technolog-
ical deck has been stacked in advance to favor certain social interests and that some people were
bound to receive a better hand than others.

The mechanical tomato harvester, a remarkable device perfected by researchers at the Uni-
versity of California from the late 1940s to the present, offers an illustrative tale. The machine is
able to harvest tomatoes in a single pass through a row, cutting the plants from the ground, shaking
the fruit loose, and (in the newest models) sorting the tomatoes electronically into large plastic
gondolas that hold up to twenty-five tons of produce headed for canning factories. To accommodate
the rough motion of these harvesters in the field, agricultural researchers have bred new varieties
of tomatoes that are hardier, sturdier, and less tasty than those previously grown. The harvesters
replace the system of handpicking in which crews of farm workers would pass through the fields
three or four times, putting ripe tomatoes in lug boxes and saving immature fruit for later harvest.5

Studies in California indicate that the use of the machine reduces costs by approximately five to
seven dollars per ton as compared to hand harvesting.6 But the benefits are by no means equally
divided in the agricultural economy. In fact, the machine in the garden has in this instance been
the occasion for a thorough reshaping of social relationships involved in tomato production in rural
California.

By virtue of their very size and cost of more than $50,000 each, the machines are compatible
only with a highly concentrated form of tomato growing. With the introduction of this new method
of harvesting, the number of tomato growers declined from approximately 4,000 in the early 1960s
to about 600 in 1973, and yet there was a substantial increase in tons of tomatoes produced. By the
late 1970s an estimated 32,000 jobs in the tomato industry had been eliminated as a direct conse-
quence of mechanization.7 Thus, a jump in productivity to the benefit of very large growers has
occurred at the sacrifice of other rural agricultural communities.

The University of California’s research on and development of agricultural machines such as
the tomato harvester eventually became the subject of a lawsuit filed by attorneys for California
Rural Legal Assistance, an organization representing a group of farm workers and other interested
parties. The suit charged that university officials were spending tax monies on projects that benefited a
handful of private interests to the detriment of farm workers, small farmers, consumers, and rural
California generally, and asked for a court injunction to stop the practice. The university denied these
charges, arguing that to accept them ‘‘would require elimination of all research with any potential
practical application.’’8

As far as I know, no one argued that the development of the tomato harvester was the result
of a plot. Two students of the controversy, William Friedland and Amy Barton, specifically exoner-
ate the original developers of the machine and the hard tomato from any desire to facilitate eco-
nomic concentration in that industry.9 What we see here instead is an ongoing social process in
which scientific knowledge, technological invention, and corporate profit reinforce each other in
deeply entrenched patterns, patterns that bear the unmistakable stamp of political and economic
power. Over many decades agricultural research and development in U.S. land-grant colleges and
universities has tended to favor the interests of large agribusiness concerns.10 It is in the face of
such subtly ingrained patterns that opponents of innovations such as the tomato harvester are made
to seem ‘‘antitechnology’’ or ‘‘antiprogress.’’ For the harvester is not merely the symbol of a social
order that rewards some while punishing others; it is in a true sense an embodiment of that order.

Within a given category of technological change there are, roughly speaking, two kinds of
choices that can affect the relative distribution of power, authority, and privilege in a community.
Often the crucial decision is a simple ‘‘yes or no’’ choice—are we going to develop and adopt the
thing or not? In recent years many local, national, and international disputes about technology have
centered on ‘‘yes or no’’ judgments about such things as food additives, pesticides, the building of
highways, nuclear reactors, dam projects, and proposed high-tech weapons. The fundamental
choice about an antiballistic missile or supersonic transport is whether or not the thing is going to
join society as a piece of its operating equipment. Reasons given for and against are frequently as
important as those concerning the adoption of an important new law.

A second range of choices, equally critical in many instances, has to do with specific features
in the design or arrangement of a technical system after the decision to go ahead with it has already
been made. Even after a utility company wins permission to build a large electric power line, impor-
tant controversies can remain with respect to the placement of its route and the design of its towers;
even after an organization has decided to institute a system of computers, controversies can still
arise with regard to the kinds of components, programs, modes of access, and other specific features
the system will include. Once the mechanical tomato harvester had been developed in its basic
form, a design alteration of critical social significance—the addition of electronic sorters, for exam-
ple—changed the character of the machine’s effects upon the balance of wealth and power in Cali-
fornia agriculture. Some of the most interesting research on technology and politics at present
focuses upon the attempt to demonstrate in a detailed, concrete fashion how seemingly innocuous
design features in mass transit systems, water projects, industrial machinery, and other technologies
actually mask social choices of profound significance. Historian David Noble has studied two kinds
of automated machine tool systems that have different implications for the relative power of man-
agement and labor in the industries that might employ them. He has shown that although the basic
electronic and mechanical components of the record/playback and numerical control systems are
similar, the choice of one design over another has crucial consequences for social struggles on the
shop floor. To see the matter solely in terms of cost cutting, efficiency, or the modernization of
equipment is to miss a decisive element in the story.11

From such examples I would offer some general conclusions. These correspond to the inter-
pretation of technologies as ‘‘forms of life’’ presented earlier, filling in the explicitly political
dimensions of that point of view.

The things we call ‘‘technologies’’ are ways of building order in our world. Many technical
devices and systems important in everyday life contain possibilities for many different ways of
ordering human activity. Consciously or unconsciously, deliberately or inadvertently, societies
choose structures for technologies that influence how people are going to work, communicate,
travel, consume, and so forth over a very long time. In the processes by which structuring decisions
are made, different people are situated differently and possess unequal degrees of power as well as
unequal levels of awareness. By far the greatest latitude of choice exists the very first time a particu-
lar instrument, system, or technique is introduced. Because choices tend to become strongly fixed
in material equipment, economic investment, and social habit, the original flexibility vanishes for
all practical purposes once the initial commitments are made. In that sense technological innova-
tions are similar to legislative acts or political foundings that establish a framework for public order
that will endure over many generations. For that reason the same careful attention one would give
to the rules, roles, and relationships of politics must also be given to such things as the building of
highways, the creation of television networks, and the tailoring of seemingly insignificant features
on new machines. The issues that divide or unite people in society are settled not only in the institu-
tions and practices of politics proper, but also, and less obviously, in tangible arrangements of steel
and concrete, wires and semiconductors, nuts and bolts.

INHERENTLY POLITICAL TECHNOLOGIES

None of the arguments and examples considered thus far addresses a stronger, more troubling claim
often made in writings about technology and society—the belief that some technologies are by
their very nature political in a specific way. According to this view, the adoption of a given techni-
cal system unavoidably brings with it conditions for human relationships that have a distinctive
political cast—for example, centralized or decentralized, egalitarian or inegalitarian, repressive or
liberating. This is ultimately what is at stake in assertions such as those of Lewis Mumford that
two traditions of technology, one authoritarian, the other democratic, exist side by side in Western
history. In all the cases cited above the technologies are relatively flexible in design and arrange-
ment and variable in their effects. Although one can recognize a particular result produced in a
particular setting, one can also easily imagine how a roughly similar device or system might have
been built or situated with very much different political consequences. The idea we must now
examine and evaluate is that certain kinds of technology do not allow such flexibility, and that to
choose them is to choose unalterably a particular form of political life.

A remarkably forceful statement of one version of this argument appears in Friedrich Eng-
els’s little essay ‘‘On Authority,’’ written in 1872. Answering anarchists who believed that authority
is an evil that ought to be abolished altogether, Engels launches into a panegyric for authoritarianism,
maintaining, among other things, that strong authority is a necessary condition in modern
industry. To advance his case in the strongest possible way, he asks his readers to imagine that the
revolution has already occurred. ‘‘Supposing a social revolution dethroned the capitalists, who now
exercise their authority over the production and circulation of wealth. Supposing, to adopt entirely
the point of view of the anti-authoritarians, that the land and the instruments of labour had become
the collective property of the workers who use them. Will authority have disappeared or will it have
only changed its form?’’12

His answer draws upon lessons from three sociotechnical systems of his day, cotton-spinning
mills, railways, and ships at sea. He observes that on its way to becoming finished thread, cotton
moves through a number of different operations at different locations in the factory. The workers
perform a wide variety of tasks, from running the steam engine to carrying the products from one
room to another. Because these tasks must be coordinated and because the timing of the work is
‘‘fixed by the authority of the steam,’’ laborers must learn to accept a rigid discipline. They must,
according to Engels, work at regular hours and agree to subordinate their individual wills to the
persons in charge of factory operations. If they fail to do so, they risk the horrifying possibility that
production will come to a grinding halt. Engels pulls no punches. ‘‘The automatic machinery of a
big factory,’’ he writes, ‘‘is much more despotic than the small capitalists who employ workers
ever have been.’’13

Similar lessons are adduced in Engels’s analysis of the necessary operating conditions for
railways and ships at sea. Both require the subordination of workers to an ‘‘imperious authority’’
that sees to it that things run according to plan. Engels finds that far from being an idiosyncrasy of
capitalist social organization, relationships of authority and subordination arise ‘‘independently of
all social organization, [and] are imposed upon us together with the material conditions under
which we produce and make products circulate.’’ Again, he intends this to be stern advice to the
anarchists who, according to Engels, thought it possible simply to eradicate subordination and
superordination at a single stroke. All such schemes are nonsense. The roots of unavoidable authori-
tarianism are, he argues, deeply implanted in the human involvement with science and technology.
‘‘If man, by dint of his knowledge and inventive genius, has subdued the forces of nature, the latter
avenge themselves upon him by subjecting him, insofar as he employs them, to a veritable despo-
tism independent of all social organization.’’14

Attempts to justify strong authority on the basis of supposedly necessary conditions of techni-
cal practice have an ancient history. A pivotal theme in the Republic is Plato’s quest to borrow the
authority of technē and employ it by analogy to buttress his argument in favor of authority in the
state. Among the illustrations he chooses, like Engels, is that of a ship on the high seas. Because
large sailing vessels by their very nature need to be steered with a firm hand, sailors must yield to
their captain’s commands; no reasonable person believes that ships can be run democratically. Plato
goes on to suggest that governing a state is rather like being captain of a ship or like practicing
medicine as a physician. Much the same conditions that require central rule and decisive action in
organized technical activity also create this need in government.

In Engels’s argument, and arguments like it, the justification for authority is no longer made
by Plato’s classic analogy, but rather directly with reference to technology itself. If the basic case
is as compelling as Engels believed it to be, one would expect that as a society adopted increasingly
complicated technical systems as its material basis, the prospects for authoritarian ways of life
would be greatly enhanced. Central control by knowledgeable people acting at the top of a rigid
social hierarchy would seem increasingly prudent. In this respect his stand in ‘‘On Authority’’
appears to be at variance with Karl Marx’s position in Volume I of Capital. Marx tries to show that
increasing mechanization will render obsolete the hierarchical division of labor and the relation-
ships of subordination that, in his view, were necessary during the early stages of modern
manufacturing. ‘‘Modern Industry,’’ he writes, ‘‘sweeps away by technical means the manufacturing
division of labor, under which each man is bound hand and foot for life to a single detail operation.
At the same time, the capitalistic form of that industry reproduces this same division of labour in
a still more monstrous shape; in the factory proper, by converting the workman into a living
appendage of the machine.’’15 In Marx’s view the conditions that will eventually dissolve the capi-
talist division of labor and facilitate proletarian revolution are conditions latent in industrial tech-
nology itself. The differences between Marx’s position in Capital and Engels’s in his essay raise
an important question for socialism: What, after all, does modern technology make possible or
necessary in political life? The theoretical tension we see here mirrors many troubles in the practice
of freedom and authority that had muddied the tracks of socialist revolution.

Arguments to the effect that technologies are in some sense inherently political have been
advanced in a wide variety of contexts, far too many to summarize here. My reading of such
notions, however, reveals there are two basic ways of stating the case. One version claims that the
adoption of a given technical system actually requires the creation and maintenance of a particular
set of social conditions as the operating environment of that system. Engels’s position is of this
kind. A similar view is offered by a contemporary writer who holds that ‘‘if you accept nuclear
power plants, you also accept a techno-scientific-industrial-military elite. Without these people in
charge, you could not have nuclear power.’’16 In this conception some kinds of technology require
their social environments to be structured in a particular way in much the same sense that an auto-
mobile requires wheels in order to move. The thing could not exist as an effective operating entity
unless certain social as well as material conditions were met. The meaning of ‘‘required’’ here is
that of practical (rather than logical) necessity. Thus, Plato thought it a practical necessity that a
ship at sea have one captain and an unquestionably obedient crew.

A second, somewhat weaker, version of the argument holds that a given kind of technology
is strongly compatible with, but does not strictly require, social and political relationships of a
particular stripe. Many advocates of solar energy have argued that technologies of that variety are
more compatible with a democratic, egalitarian society than energy systems based on coal, oil, and
nuclear power; at the same time they do not maintain that anything about solar energy requires
democracy. Their case is, briefly, that solar energy is decentralizing in both a technical and political
sense: technically speaking, it is vastly more reasonable to build solar systems in a disaggregated,
widely distributed manner than in large-scale centralized plants; politically speaking, solar energy
accommodates the attempts of individuals and local communities to manage their affairs effectively
because they are dealing with systems that are more accessible, comprehensible, and controllable
than huge centralized sources. In this view solar energy is desirable not only for its economic and
environmental benefits, but also for the salutary institutions it is likely to permit in other areas of
public life.17

Within both versions of the argument there is a further distinction to be made between condi-
tions that are internal to the workings of a given technical system and those that are external to it.
Engels’s thesis concerns internal social relations said to be required within cotton factories and
railways, for example; what such relationships mean for the condition of society at large is, for
him, a separate question. In contrast, the solar advocate’s belief that solar technologies are compati-
ble with democracy pertains to the way they complement aspects of society removed from the
organization of those technologies as such.

There are, then, several different directions that arguments of this kind can follow. Are the
social conditions predicated said to be required by, or strongly compatible with, the workings of a
given technical system? Are those conditions internal to that system or external to it (or both)?
Although writings that address such questions are often unclear about what is being asserted, argu-
ments in this general category are an important part of modern political discourse. They enter into
many attempts to explain how changes in social life take place in the wake of technological innova-
tion. More important, they are often used to buttress attempts to justify or criticize proposed courses
of action involving new technology. By offering distinctly political reasons for or against the adop-
tion of a particular technology, arguments of this kind stand apart from more commonly employed,
more easily quantifiable claims about economic costs and benefits, environmental impacts, and
possible risks to public health and safety that technical systems may involve. The issue here does
not concern how many jobs will be created, how much income generated, how many pollutants
added, or how many cancers produced. Rather, the issue has to do with ways in which choices
about technology have important consequences for the form and quality of human associations.

If we examine social patterns that characterize the environments of technical systems, we
find certain devices and systems almost invariably linked to specific ways of organizing power and
authority. The important question is: Does this state of affairs derive from an unavoidable social
response to intractable properties in the things themselves, or is it instead a pattern imposed inde-
pendently by a governing body, ruling class, or some other social or cultural institution to further
its own purposes?

Taking the most obvious example, the atom bomb is an inherently political artifact. As long
as it exists at all, its lethal properties demand that it be controlled by a centralized, rigidly hierarchi-
cal chain of command closed to all influences that might make its workings unpredictable. The
internal social system of the bomb must be authoritarian; there is no other way. The state of affairs
stands as a practical necessity independent of any larger political system in which the bomb is
embedded, independent of the type of regime or character of its rulers. Indeed, democratic states
must try to find ways to ensure that the social structures and mentality that characterize the manage-
ment of nuclear weapons do not ‘‘spin off’’ or ‘‘spill over’’ into the polity as a whole.

The bomb is, of course, a special case. The reasons very rigid relationships of authority are
necessary in its immediate presence should be clear to anyone. If, however, we look for other
instances in which particular varieties of technology are widely perceived to need the maintenance
of a special pattern of power and authority, modern technical history contains a wealth of examples.

Alfred D. Chandler in The Visible Hand, a monumental study of modern business enterprise,
presents impressive documentation to defend the hypothesis that the construction and day-to-day
operation of many systems of production, transportation, and communication in the nineteenth and
twentieth centuries require the development of a particular social form—a large-scale centralized,
hierarchical organization administered by highly skilled managers. Typical of Chandler’s reasoning
is his analysis of the growth of the railroads.18

Technology made possible fast, all-weather transportation; but safe, regular, reliable movement
of goods and passengers, as well as the continuing maintenance and repair of locomotives, rolling
stock, and track, roadbed, stations, roundhouses, and other equipment, required the creation of a
sizable administrative organization. It meant the employment of a set of managers to supervise
these functional activities over an extensive geographical area; and the appointment of an admin-
istrative command of middle and top executives to monitor, evaluate, and coordinate the work of
managers responsible for the day-to-day operations.

Throughout his book Chandler points to ways in which technologies used in the production and
distribution of electricity, chemicals, and a wide range of industrial goods ‘‘demanded’’ or
‘‘required’’ this form of human association. ‘‘Hence, the operational requirements of railroads
demanded the creation of the first administrative hierarchies in American business.’’19

Were there other conceivable ways of organizing these aggregates of people and apparatus?
Chandler shows that a previously dominant social form, the small traditional family firm, simply
could not handle the task in most cases. Although he does not speculate further, it is clear that he
believes there is, to be realistic, very little latitude in the forms of power and authority appropriate
within modern sociotechnical systems. The properties of many modern technologies—oil pipelines
and refineries, for example—are such that overwhelmingly impressive economies of scale and
speed are possible. If such systems are to work effectively, efficiently, quickly, and safely, certain
requirements of internal social organization have to be fulfilled; the material possibilities that mod-
ern technologies make available could not be exploited otherwise. Chandler acknowledges that as
one compares sociotechnical institutions of different nations, one sees ‘‘ways in which cultural atti-
tudes, values, ideologies, political systems, and social structure affect these imperatives.’’20 But the
weight of argument and empirical evidence in The Visible Hand suggests that any significant depar-
ture from the basic pattern would be, at best, highly unlikely.

It may be that other conceivable arrangements of power and authority, for example, those of
decentralized, democratic worker self-management, could prove capable of administering factories,
refineries, communications systems, and railroads as well as or better than the organizations Chan-
dler describes. Evidence from automobile assembly teams in Sweden and worker-managed plants
in Yugoslavia and other countries is often presented to salvage these possibilities. Unable to settle
controversies over this matter here, I merely point to what I consider to be their bone of contention.
The available evidence tends to show that many large, sophisticated technological systems are in
fact highly compatible with centralized, hierarchical managerial control. The interesting question,
however, has to do with whether or not this pattern is in any sense a requirement of such systems,
a question that is not solely empirical. The matter ultimately rests on our judgments about what
steps, if any, are practically necessary in the workings of particular kinds of technology and what,
if anything, such measures require of the structure of human associations. Was Plato right in saying
that a ship at sea needs steering by a decisive hand and that this could only be accomplished by a
single captain and an obedient crew? Is Chandler correct in saying that the properties of large-scale
systems require centralized, hierarchical managerial control?

To answer such questions, we would have to examine in some detail the moral claims of
practical necessity (including those advocated in the doctrines of economics) and weigh them
against moral claims of other sorts, for example, the notion that it is good for sailors to participate
in the command of a ship or that workers have a right to be involved in making and administering
decisions in a factory. It is characteristic of societies based on large, complex technological sys-
tems, however, that moral reasons other than those of practical necessity appear increasingly obso-
lete, ‘‘idealistic,’’ and irrelevant. Whatever claims one may wish to make on behalf of liberty,
justice, or equality can be immediately neutralized when confronted with arguments to the effect,
‘‘Fine, but that’s no way to run a railroad’’ (or steel mill, or airline, or communication system, and
so on). Here we encounter an important quality in modern political discourse and in the way people
commonly think about what measures are justified in response to the possibilities technologies
make available. In many instances, to say that some technologies are inherently political is to say
that certain widely accepted reasons of practical necessity—especially the need to maintain crucial
technological systems as smoothly working entities—have tended to eclipse other sorts of moral
and political reasoning.

One attempt to salvage the autonomy of politics from the bind of practical necessity involves
the notion that conditions of human association found in the internal workings of technological
systems can easily be kept separate from the polity as a whole. Americans have long rested content
in the belief that arrangements of power and authority inside industrial corporations, public utilities,
and the like have little bearing on public institutions, practices, and ideas at large. That ‘‘democracy
stops at the factory gates’’ was taken as a fact of life that had nothing to do with the practice of
political freedom. But can the internal politics of technology and the politics of the whole commu-
nity be so easily separated? A recent study of business leaders in the United States, contemporary
exemplars of Chandler’s ‘‘visible hand of management,’’ found them remarkably impatient with
such democratic scruples as ‘‘one man, one vote.’’ If democracy doesn’t work for the firm, the most
critical institution in all of society, American executives ask, how well can it be expected to work
for the government of a nation—particularly when that government attempts to interfere with the
achievements of the firm? The authors of the report observe that patterns of authority that work
effectively in the corporation become for businessmen ‘‘the desirable model against which to com-
pare political and economic relationships in the rest of society.’’21 While such findings are far from
conclusive, they do reflect a sentiment increasingly common in the land: what dilemmas such as
the energy crisis require is not a redistribution of wealth or broader public participation but, rather,
stronger, centralized public and private management.

An especially vivid case in which the operational requirements of a technical system might
influence the quality of public life is the debate about the risks of nuclear power. As the supply of
uranium for nuclear reactors runs out, a proposed alternative fuel is the plutonium generated as a
by-product in reactor cores. Well-known objections to plutonium recycling focus on its unaccept-
able economic costs, its risks of environmental contamination, and its dangers in regard to the
international proliferation of nuclear weapons. Beyond these concerns, however, stands another less
widely appreciated set of hazards—those that involve the sacrifice of civil liberties. The widespread
use of plutonium as a fuel increases the chance that this toxic substance might be stolen by terror-
ists, organized crime, or other persons. This raises the prospect, and not a trivial one, that extraordi-
nary measures would have to be taken to safeguard plutonium from theft and to recover it should
the substance be stolen. Workers in the nuclear industry as well as ordinary citizens outside could
well become subject to background security checks, covert surveillance, wiretapping, informers,
and even emergency measures under martial law—all justified by the need to safeguard plutonium.

Russell W. Ayres’s study of the legal ramifications of plutonium recycling concludes: ‘‘With
the passage of time and the increase in the quantity of plutonium in existence will come pressure
to eliminate the traditional checks the courts and legislatures place on the activities of the executive
and to develop a powerful central authority better able to enforce strict safeguards.’’ He avers that
‘‘once a quantity of plutonium had been stolen, the case for literally turning the country upside
down to get it back would be overwhelming.’’ Ayres anticipates and worries about the kinds of
thinking that, I have argued, characterize inherently political technologies. It is still true that in a
world in which human beings make and maintain artificial systems nothing is ‘‘required’’ in an
absolute sense. Nevertheless, once a course of action is under way, once artifacts such as nuclear
power plants have been built and put in operation, the kinds of reasoning that justify the adaptation
of social life to technical requirements pop up as spontaneously as flowers in the spring. In Ayres’s
words, ‘‘Once recycling begins and the risks of plutonium theft become real rather than hypotheti-
cal, the case for governmental infringement of protected rights will seem compelling.’’22 After a
certain point, those who cannot accept the hard requirements and imperatives will be dismissed as
dreamers and fools.

* * *

The two varieties of interpretation I have outlined indicate how artifacts can have political
qualities. In the first instance we noticed ways in which specific features in the design or arrange-
ment of a device or system could provide a convenient means of establishing patterns of power and
authority in a given setting. Technologies of this kind have a range of flexibility in the dimensions
of their material form. It is precisely because they are flexible that their consequences for society
must be understood with reference to the social actors able to influence which designs and arrange-
ments are chosen. In the second instance we examined ways in which the intractable properties of
certain kinds of technology are strongly, perhaps unavoidably, linked to particular institutionalized
patterns of power and authority. Here the initial choice about whether or not to adopt something is
decisive in regard to its consequences. There are no alternative physical designs or arrangements
that would make a significant difference; there are, furthermore, no genuine possibilities for cre-
ative intervention by different social systems—capitalist or socialist—that could change the intrac-
tability of the entity or significantly alter the quality of its political effects.

To know which variety of interpretation is applicable in a given case is often what is at stake
in disputes, some of them passionate ones, about the meaning of technology for how we live. I have
argued a ‘‘both/and’’ position here, for it seems to me that both kinds of understanding are applica-
ble in different circumstances. Indeed, it can happen that within a particular complex of technol-
ogy—a system of communication or transportation, for example—some aspects may be flexible in
their possibilities for society, while other aspects may be (for better or worse) completely intracta-
ble. The two varieties of interpretation I have examined here can overlap and intersect at many
points.

These are, of course, issues on which people can disagree. Thus, some proponents of energy
from renewable resources now believe they have at last discovered a set of intrinsically democratic,
egalitarian, communitarian technologies. In my best estimation, however, the social consequences
of building renewable energy systems will surely depend on the specific configurations of both
hardware and the social institutions created to bring that energy to us. It may be that we will find
ways to turn this silk purse into a sow’s ear. By comparison, advocates of the further development
of nuclear power seem to believe that they are working on a rather flexible technology whose
adverse social effects can be fixed by changing the design parameters of reactors and nuclear waste
disposal systems. For reasons indicated above, I believe them to be dead wrong in that faith. Yes,
we may be able to manage some of the ‘‘risks’’ to public health and safety that nuclear power
brings. But as society adapts to the more dangerous and apparently indelible features of nuclear
power, what will be the long-range toll in human freedom?

My belief that we ought to attend more closely to technical objects themselves is not to say
that we can ignore the contexts in which those objects are situated. A ship at sea may well require,
as Plato and Engels insisted, a single captain and obedient crew. But a ship out of service, parked
at the dock, needs only a caretaker. To understand which technologies and which contexts are
important to us, and why, is an enterprise that must involve both the study of specific technical
systems and their history as well as a thorough grasp of the concepts and controversies of political
theory. In our times people are often willing to make drastic changes in the way they live to accom-
modate technological innovation while at the same time resisting similar kinds of changes justified
on political grounds. If for no other reason than that, it is important for us to achieve a clearer view
of these matters than has been our habit so far.

NOTES

1. Langdon Winner, Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought
(Cambridge: MIT Press, 1977).

2. The meaning of ‘‘technology’’ I employ in this essay does not encompass some of the broader defini-
tions of that concept found in contemporary literature, for example, the notion of ‘‘technique’’ in the writings
of Jacques Ellul. My purposes here are more limited. For a discussion of the difficulties that arise in attempts
to define ‘‘technology,’’ see Autonomous Technology, 8–12.

3. Robert A. Caro, The Power Broker: Robert Moses and the Fall of New York (New York: Random House,
1974), 318, 481, 514, 546, 951–958, 952.

4. Robert Ozanne, A Century of Labor-Management Relations at McCormick and International Harvester
(Madison: University of Wisconsin Press, 1967), 20.

5. The early history of the tomato harvester is told in Wayne D. Rasmussen, ‘‘Advances in American
Agriculture: The Mechanical Tomato Harvester as a Case Study,’’ Technology and Culture 9:531–543, 1968.

6. Andrew Schmitz and David Seckler, ‘‘Mechanized Agriculture and Social Welfare: The Case of the
Tomato Harvester,’’ American Journal of Agricultural Economics 52:569–577, 1970.

7. William H. Friedland and Amy Barton, ‘‘Tomato Technology,’’ Society 13:6, September/October 1976.
See also William H. Friedland, Social Sleepwalkers: Scientific and Technological Research in California Agri-
culture, University of California, Davis, Department of Applied Behavioral Sciences, Research Monograph
No. 13, 1974.

8. University of California Clip Sheet 54:36, May 1, 1979.
9. ‘‘Tomato Technology.’’

10. A history and critical analysis of agricultural research in the land-grant colleges is given in James Hightower, Hard Tomatoes, Hard Times (Cambridge: Schenkman, 1978).

11. David F. Noble, Forces of Production: A Social History of Machine Tool Automation (New York: Alfred
A. Knopf, 1984).

12. Friedrich Engels, ‘‘On Authority,’’ in The Marx-Engels Reader, 2nd ed., ed. Robert Tucker (New York: W. W. Norton, 1978), 731.

13. Ibid.
14. Ibid., 732, 731.
15. Karl Marx, Capital, vol. 1, 3rd ed., translated by Samuel Moore and Edward Aveling (New York: Modern Library, 1906), 530.
16. Jerry Mander, Four Arguments for the Elimination of Television (New York: William Morrow, 1978), 44.
17. See, for example, Robert Argue, Barbara Emanuel, and Stephen Graham, The Sun Builders: A People’s
Guide to Solar, Wind and Wood Energy in Canada (Toronto: Renewable Energy in Canada, 1978). ‘‘We think
decentralization is an implicit component of renewable energy; this implies the decentralization of energy
systems, communities and of power. Renewable energy doesn’t require mammoth generation sources or dis-
ruptive transmission corridors. Our cities and towns, which have been dependent on centralized energy sup-
plies, may be able to achieve some degree of autonomy, thereby controlling and administering their own
energy needs’’ (16).

18. Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business (Cam-
bridge: Belknap, 1977), 244.

19. Ibid.
20. Ibid., 500.
21. Leonard Silk and David Vogel, Ethics and Profits: The Crisis of Confidence in American Business (New York: Simon and Schuster, 1976), 191.
22. Russell W. Ayres, ‘‘Policing Plutonium: The Civil Liberties Fallout,’’ Harvard Civil Rights—Civil Liberties Law Review 10 (1975):443, 413–414, 374.


3 Values in technology and disclosive computer ethics

Philip Brey

3.1 Introduction

Is it possible to do an ethical study of computer systems themselves inde-
pendently of their use by human beings? The theories and approaches in this
chapter answer this question affirmatively and hold that such studies should
have an important role in computer and information ethics. In doing so, they
undermine conventional wisdom that computer ethics, and ethics generally,
is concerned solely with human conduct, and they open up new directions for
computer ethics, as well as for the design of computer systems.

As our starting point for this chapter, let us consider some typical examples
of ethical questions that are raised in relation to computers and information
technology, such as can be found throughout this book:

• Is it wrong for a system operator to disclose the content of employee email
messages to employers or other third parties?

• Should individuals have the freedom to post discriminatory, degrading and
defamatory messages on the Internet?

• Is it wrong for companies to use data-mining techniques to generate con-
sumer profiles based on purchasing behaviour, and should they be allowed
to do so?

• Should governments design policies to overcome the digital divide between
skilled and unskilled computer users?

As these examples show, ethical questions regarding information and com-
munication technology typically focus on the morality of particular ways of
using the technology or the morally right way to regulate such uses.

Taken for granted in such questions, however, are the computer systems
and software that are used. Could there, however, not also be valid ethical
questions that concern the technology itself? Could there be an ethics of
computer systems separate from the ethics of using computer systems? The
embedded values approach in computer ethics, formulated initially by Helen
Nissenbaum (1998; Flanagan, Howe and Nissenbaum 2008) and since adopted
by many authors in the field, answers these questions affirmatively, and aims
to develop a theory and methodology for moral reflection on computer systems
themselves, independently of particular ways of using them.

The embedded values approach holds that computer systems and software
are not morally neutral and that it is possible to identify tendencies in them to
promote or demote particular moral values and norms. It holds, for example,
that computer programs can be supportive of privacy, freedom of informa-
tion, or property rights or, instead, to go against the realization of these val-
ues. Such tendencies in computer systems are called ‘embedded’, ‘embodied’
or ‘built-in’ moral values or norms. They are built-in in the sense that they
can be identified and studied largely or wholly independently of actual uses
of the system, although they manifest themselves in a variety of uses of
the system. The embedded values approach aims to identify such tenden-
cies and to morally evaluate them. By claiming that computer systems may
incorporate and manifest values, the embedded values approach is not claim-
ing that computer systems engage in moral actions, that they are morally
praiseworthy or blameworthy, or that they bear moral responsibility (Johnson
2006). It is claiming, however, that the design and operation of computer
systems has moral consequences and therefore should be subjected to ethical
analysis.

If the embedded values approach is right, then the scope of computer ethics
is broadened considerably. Computer ethics should not just study ethical issues
in the use of computer technology, but also in the technology itself. And if
computer systems and software are indeed value-laden, then many new ethi-
cal issues emerge for their design. Moreover, it suggests that design practices
and methodologies, particularly those in information systems design and soft-
ware engineering, can be changed to include the consideration of embedded
values.

In the following section, Section 3.2, the case will be made for the embed-
ded values approach, and some common objections against it will be dis-
cussed. Section 3.3 will then turn to an exposition of a particular approach
in computer ethics that incorporates the embedded values approach, disclo-
sive computer ethics, proposed by the author (Brey 2000). Disclosive com-
puter ethics is an attempt to incorporate the notion of embedded values
into a comprehensive approach to computer ethics. Section 3.4 considers
value-sensitive design (VSD), an approach to design developed by computer
scientist Batya Friedman and her associates, which incorporates notions of
the embedded values approach (Friedman, Kahn and Borning 2006). The
VSD approach is not an approach within ethics but within computer sci-
ence, specifically within information systems design and software engineer-
ing. It aims to account for values in a comprehensive manner in the design
process, and makes use of insights of the embedded values approach for
this purpose. In a concluding section, the state of the art in these dif-
ferent approaches is evaluated and some suggestions are made for future
research.

3.2 How technology embodies values

The existing literature on embedded values in computer technology is still
young, and has perhaps focused more on case studies and applications for
design than on theoretical underpinnings. The idea that technology embod-
ies values has been inspired by work in the interdisciplinary field of science
and technology studies, which investigates the development of science and
technology and their interaction with society. Authors in this field agree that
technology is not neutral but shaped by society. Some have argued, specifi-
cally, that technological artefacts (products or systems) issue constraints on
the world surrounding them (Latour 1992) and that they can harbour political
consequences (Winner 1980). Authors in the embedded values approach have
taken these ideas and applied them to ethics, arguing that technological arte-
facts are not morally neutral but value-laden. However, what it means for an
artefact to have an embedded value remains somewhat vague.

In this section a more precise description of what it means for a technologi-
cal artefact to have embedded values is articulated and defended. The position
taken here is in line with existing accounts of embedded values, although their
authors need not agree with all of the claims made in this section. The idea
of embedded values is best understood as a claim that technological artefacts
(and in particular computer systems and software) have built-in tendencies to
promote or demote the realization of particular values. Defined in this way, a
built-in value is a special sort of built-in consequence. In this section a defence
of the thesis that technological artefacts are capable of having built-in con-
sequences is first discussed. Then tendencies for the promotion of values are
identified as special kinds of built-in consequences of technological artefacts.
The section is concluded by a brief review of the literature on values in infor-
mation technology, and a discussion of how values come to be embedded in
technology.

3.2.1 Consequences built into technology

The embedded values approach promotes the idea that technology can have
built-in tendencies to promote or demote particular values. This idea, how-
ever, runs counter to a frequently held belief about technology, the idea that
technology itself is neutral with respect to consequences. Let us call this the
neutrality thesis. The neutrality thesis holds that there are no consequences
that are inherent to technological artefacts, but rather that artefacts can always
be used in a variety of different ways, and that each of these uses comes with
its own consequences. For example, a hammer can be used to hammer nails,
but also to break objects, to kill someone, to flatten dough, to keep a pile of
paper in place or to conduct electricity. These uses have radically different
effects on the world, and it is difficult to point to any single effect that is
constant in all of them.

The hammer example, and other examples like it (a similar example could
be given for a laptop), suggest strongly that the neutrality thesis is true. If so,
this would have important consequences for an ethics of technology. It would
follow that ethics should not pay much attention to technological artefacts
themselves, because they in themselves do not ‘do’ anything. Rather, ethics
should focus on their usage alone.

This conclusion holds only if one assumes that the notion of embedded
values requires that there are consequences that manifest themselves in each
and every use of an artefact. But this strong claim need not be made. A
weaker claim is that artefacts may have built-in consequences in that there
are recurring consequences that manifest themselves in a wide range of uses
of the artefact, though not in all uses. If such recurring consequences can be
associated with technological artefacts, this may be sufficient to falsify the
strong claim of the neutrality thesis that each use of a technological artefact
comes with its own consequences. And a good case can be made that at least
some artefacts can be associated with such recurring consequences.

An ordinary gas-engine automobile, for example, can evidently be used
in many different ways: for commuter traffic, for leisure driving, to taxi
passengers or cargo, for hit jobs, for auto racing, but also as a museum piece,
as a temporary shelter for the rain or as a barricade. Whereas there is no single
consequence that results from all of these uses, there are several consequences
that result from a large number of these uses: in all but the last three uses,
gasoline is used up, greenhouse gases and other pollutants are being released,
noise is being generated, and at least one person (the driver) is being moved
around at high speeds. These uses, moreover, have something in common:
they are all central uses of automobiles, in that they are accepted uses that
are frequent in society and that account for the continued production and
usage of automobiles. The other three uses are peripheral in that they are
less dominant uses that depend for their continued existence on these central
uses, since it is the central uses that account for the continued production and
consumption of automobiles. Central uses of the automobile make use of its
capacity for driving, and when it is used in this capacity, certain consequences
are very likely to occur. Generalizing from this example, a case can be made
that technological artefacts are capable of having built-in consequences in
the sense that particular consequences may manifest themselves in all of the
central uses of the artefact.

It may be objected that, even with this restriction, the idea of built-in
consequences employs a too deterministic conception of technology. It sug-
gests that, when technological artefacts are used, particular consequences are
necessary or unavoidable. In reality, there are usually ways to avoid par-
ticular consequences. For example, a gas-fuelled automobile need not emit
greenhouse gases into the atmosphere if a greenbox device is attached to it,
which captures carbon dioxide and nitrous oxide and converts it into bio-oil.
To avoid this objection, it may be claimed that the notion of built-in con-
sequences does not refer to necessary, unavoidable consequences but rather
to strong tendencies towards certain consequences. The claim is that these
consequences are normally realized whenever the technology is used, unless
it is used in a context that is highly unusual or if extraordinary steps are
taken to avoid particular consequences. Built-in consequences are therefore
never absolute but always relative to a set of typical uses and contexts of use,
outside of which the consequences may not occur.

Do many artefacts have built-in consequences in the way defined above?
The extent to which technological artefacts have built-in consequences can be
correlated with two factors: the extent to which they are capable of exerting
force or behaviour autonomously, and the extent to which they are embedded
in a fixed context of use. As for the first parameter, some artefacts seem
to depend strongly on users for their consequences, whereas others seem to
be able to generate effects on their own. Mechanical and electrical devices,
in particular, are capable of displaying all kinds of behaviours on their own,
ranging from simple processes, like the consumption of fuel or the emission of
steam, to complex actions, like those of robots and artificial agents. Elements
of infrastructure, like buildings, bridges, canals and railway tracks, may not
behave autonomously but, by their mere presence, they do impose significant
constraints on their environment, including the actions and movements of
people, and in this way engender their own consequences. Artefacts that are
not mechanical, electrical or infrastructural, like simple hand-held tools and
utensils, tend to have fewer consequences of their own and their consequences
tend to be more dependent on the uses to which they are put.

As for the second parameter, it is easier to attribute built-in consequences
to technological artefacts that are placed in a fixed context of use than to
those that are used in many different contexts. Adapting an example by
Winner (1980), an overpass that is 180 cm (6 ft) high has as a generic built-in
consequence that it prevents traffic from going through that is more than
180 cm high. But when such an overpass is built over the main access road
to an island from a city in which automobiles are generally less than 180 cm
high and buses are taller, then it acquires a more specific built-in consequence,
which is that buses are being prevented from going to the island whereas
automobiles do have access. When, in addition, it is the case that buses are
the primary means of transportation for black citizens, whereas most white
citizens own automobiles, then the more specific consequence of the overpass
is that it allows easy access to the island for one racial group, while denying
it to another. When the context of use of an artefact is relatively fixed, the
immediate, physical consequences associated with a technology can often
be translated into social consequences because there are reliable correlations
between the physical and the social (for example between prevention of access
to buses and prevention of access to blacks) that are present (Latour 1992).

3.2.2 From consequences to values

Let us now turn from built-in consequences to embedded values. An embedded
value is a special kind of built-in consequence. It has already been explained
how technological artefacts can have built-in consequences. What needs to
be explained now is how some of these built-in consequences can be asso-
ciated with values. To be able to make this case, let us first consider what a
value is.

Although the notion of a value remains somewhat ambiguous in philosophy,
some agreements seem to have emerged (Frankena 1973). First, philosophers
tend to agree that values depend on valuation. Valuation is the act of valuing
something, or finding it valuable, and to find something valuable is to find it
good in some way. People find all kinds of things valuable, both abstract and
concrete, real and unreal, general and specific. Those things that people find
valuable that are both ideal and general, like justice and generosity, are called
values, with disvalues being those general qualities that are considered to be
bad or evil, like injustice and avarice. Values, then, correspond to idealized
qualities or conditions in the world that people find good. For example, the
value of justice corresponds to some idealized, general condition of the world
in which all persons are treated fairly and rewarded rightly.

To have a value is to want it to be realized. A value is realized if the
ideal conditions defined by it are matched by conditions in the actual world.
For example, the value of freedom is fully realized if everyone in the world
is completely free. Often, though, a full realization of the ideal conditions
expressed in a value is not possible. It may not be possible for everyone to be
completely free, as there are always at least some constraints and limitations
that keep people from a state of complete freedom. Therefore, values can
generally be realized only to a degree.

The use of a technological artefact may result in the partial realization of a
value. For instance, the use of software that has been designed not to make
one’s personal information accessible to others helps to realize the value of
privacy. The use of an artefact may also hinder the realization of a value or
promote the realization of a disvalue. For instance, the use of software that
contains spyware or otherwise leaks personal data to third parties harms the
realization of the value of privacy. Technological artefacts are hence capable
of either promoting or harming the realization of values when they are used.
When this occurs systematically, in all of its central uses, we may say that
the artefact embodies a special kind of built-in consequence, which is a built-
in tendency to promote or harm the realization of a value. Such a built-in
tendency may be called, in short, an embedded value or disvalue. For example,
spyware-laden software has a tendency to harm privacy in all of its typical
uses, and may therefore be claimed to have harm to privacy as an embedded
disvalue.
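
To make this causal reading of an embedded value more concrete, consider the
following minimal Python sketch (all names and data are invented for
illustration and are not drawn from the literature). It contrasts a telemetry
routine that tends to promote privacy across its typical uses with one that
tends to harm it, because identifying data leaves the device in every central
use of the second routine.

    from hashlib import sha256

    def report_usage_private(event: str, user_id: str) -> dict:
        # Privacy as an embedded value: only a truncated one-way digest of the
        # user id is included, so reports are hard to link back to a person.
        return {"event": event, "user": sha256(user_id.encode()).hexdigest()[:8]}

    def report_usage_leaky(event: str, user_id: str, email: str) -> dict:
        # Harm to privacy as an embedded disvalue: identifying data is shipped
        # with every report, whatever the purpose of the use.
        return {"event": event, "user": user_id, "email": email}

    print(report_usage_private("video_played", "alice-42"))
    print(report_usage_leaky("video_played", "alice-42", "alice@example.com"))

Whatever the second program is used for, the tendency to disclose personal data
travels with it, which is exactly the sense of ‘built-in’ intended above.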

Embedded values approaches often focus on moral values. Moral values
are ideals about how people ought to behave in relation to others and them-
selves and how society should be organized so as to promote the right course
of action. Examples of moral values are justice, freedom, privacy and hon-
esty. Next to moral values, there are different kinds of non-moral values, for
example, aesthetic, economic, (non-moral) social and personal values, such as
beauty, efficiency, social harmony and friendliness.

Values should be distinguished from norms, which can also be embedded
in technology. Norms are rules that prescribe which kinds of actions or state
of affairs are forbidden, obligatory or allowed. They are often based on values
that provide a rationale for them. Moral norms prescribe which actions are
forbidden, obligatory or allowed from the point of view of morality. Exam-
ples of moral norms are ‘do not steal’ and ‘personal information should not
be provided to third parties unless the bearer has consented to such distri-
bution’. Examples of non-moral norms are ‘pedestrians should walk on the
right side of the street’ and ‘fish products should not contain more than
10 mg histamines per 100 grams’. Just as technological artefacts can promote
the realization of values, they can also promote the enforcement of norms.
Embedded norms are a special kind of built-in consequence. They are tenden-
cies to effectuate norms by bringing it about that the environment behaves
or is organized according to the norm. For example, web browsers can be set
not to accept cookies from websites, thereby enforcing the norm that websites
should not collect information about their user. By enforcing a norm, arte-
facts thereby also promote the corresponding value, if any (e.g., privacy in the
example).
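
As a toy illustration of a norm enforced by the artefact rather than by its
user, the sketch below (hypothetical; the class and method names are invented)
models a browser-style cookie policy. Once the policy is set to refuse
third-party cookies, the norm that websites should not track users across sites
is effectuated in every browsing session, regardless of what individual sites
attempt.

    class CookiePolicy:
        # A browser-style setting that enforces a norm about cross-site tracking.
        def __init__(self, accept_third_party: bool = False):
            self.accept_third_party = accept_third_party

        def allow(self, cookie_domain: str, page_domain: str) -> bool:
            if cookie_domain == page_domain:
                return True               # first-party cookies are always allowed here
            return self.accept_third_party  # third-party cookies need explicit permission

    policy = CookiePolicy(accept_third_party=False)
    print(policy.allow("shop.example", "shop.example"))         # True: first-party
    print(policy.allow("ads.tracker.example", "shop.example"))  # False: norm enforced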

So far we have seen that technological artefacts may have embedded values
understood as special kinds of built-in consequences. Because this conception
relates values to causal capacities of artefacts to affect their environment, it
may be called the causalist conception of embedded values. In the literature
on embedded values, other conceptions have been presented as well. Notably,
Flanagan, Howe and Nissenbaum (2008) and Johnson (1997) discuss what they
call an expressive conception of embedded values. Artefacts may be said to be
expressive of values in that they incorporate or contain symbolic meanings
that refer to values. For example, a particular brand of computer may sym-
bolize or represent status and success, or the representation of characters and
events in a computer game may reveal racial prejudices or patriarchal values.
Expressive embedded values in artefacts represent the values of designers or
users of the artefact. This does not imply, however, that they also function
to realize these values. It is conceivable that the values expressed in arte-
facts cause people to adopt these values and thereby contribute to their own
realization. Whether this happens frequently remains an open question. In
any case, whereas the expressive conception of embedded values merits fur-
ther philosophical reflection, the remainder of this chapter will be focused on
the causalist conception.

3.2.3 Values in information technology

The embedded values approach within computer ethics studies embedded val-
ues in computer systems and software and their emergence, and provides
moral evaluations of them. The study of embedded values in Information and
Communication Technology (ICT) has begun with a seminal paper by Batya
Friedman and Helen Nissenbaum in which they consider bias in computer
systems (Friedman and Nissenbaum 1996). A biased computer system or pro-
gram is defined by them as one that systematically and unfairly discriminates
against certain individuals or groups, who may be users or other stakeholders
of the system. Examples include educational programs that have much more
appeal to boys than to girls, loan approval software that gives negative rec-
ommendations for loans to individuals with ethnic surnames, and databases
for matching organ donors with potential transplant recipients that system-
atically favour individuals retrieved and displayed immediately on the first
screen over individuals displayed on later screens. Building on their work, I
have distinguished user biases that discriminate against (groups of) users of an
information system, and information biases that discriminate against stake-
holders represented by the system (Brey 1998). I have discussed various kinds
of user bias, such as user exclusion and the selective penalization of users,
as well as different kinds of information bias, including bias in information
content, data selection, categorization, search and matching algorithms and
the display of information.
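
To see how easily such biases can arise without any explicit mention of a
group, consider the following simplified, hypothetical Python sketch (not taken
from any real system). The matching routine returns compatible candidates in
whatever order they happen to be stored, and the display routine shows only the
first screenful, so candidates stored later are systematically disadvantaged.

    def match_candidates(candidates, blood_type):
        # Return all compatible candidates in storage order, with no ranking.
        return [c for c in candidates if c["blood_type"] == blood_type]

    def first_screen(results, screen_size=3):
        # Only the first screenful is shown; later entries are rarely consulted.
        return results[:screen_size]

    candidates = [
        {"name": "P1", "blood_type": "O", "registered": 2001},
        {"name": "P2", "blood_type": "O", "registered": 2005},
        {"name": "P3", "blood_type": "O", "registered": 2003},
        {"name": "P4", "blood_type": "O", "registered": 2002},
    ]

    # P4 is just as compatible as the others but never makes the first screen.
    print(first_screen(match_candidates(candidates, "O")))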

After their study of bias in computer systems, Friedman and Nissenbaum
went on to consider consequences of software agents for the autonomy of
users. Software agents are small programs that act on behalf of the user to
perform tasks. Friedman and Nissenbaum (1997) argue that software agents
can undermine user autonomy in various ways – for example by having only
limited capabilities to perform wanted tasks or by not making relevant infor-
mation available to the user – and argue that it is important that software
agents are designed so as to enhance user autonomy. The issue of user auton-
omy is also taken up in Brey (1998, 1999c), in which I argue that computer
systems can undermine autonomy by supporting monitoring by third parties,
by imposing their own operational logic on the user, thus limiting creativity
and choice, or by making users dependent on systems operators or others for
maintenance or access to systems functions.

Deborah Johnson (1997) considers the claim that the Internet is an inher-
ently democratic technology. Some have claimed that the Internet, because of
its distributed and nonhierarchical nature, promotes democratic processes by
empowering individuals and stimulating democratic dialogue and decision-
making (see Chapter 10). Johnson subscribes to this democratic potential.
She cautions, however, that these democratic tendencies may be limited if
the Internet is subjected to filtering systems that only give a small group of
individuals control over the flow of information on the Internet. She hence
identifies both democratic and undemocratic tendencies in the technology that
may become dominant depending on future use and development.

Other studies, within the embedded values approach, have focused on spe-
cific values, such as privacy, trust, community, moral accountability and
informed consent, or on specific technologies. Introna and Nissenbaum (2000)
consider biases in the algorithms of search engines, which, they argue, favour
websites with a popular and broad subject matter over specialized sites, and the
powerful over the less powerful. Introna (2007) argues that existing plagiarism
detection software creates an artificial distinction between alleged plagiarists
and non-plagiarists, which is unfair. Introna (2005) considers values embed-
ded in facial recognition systems. Camp (1999) analyses the implications of
Internet protocols for democracy. Flanagan, Howe and Nissenbaum (2005)
study values in computer games, and Brey (1999b, 2008) studies them in
computer games, computer simulations and virtual reality applications. Agre
and Mailloux (1997) reveal the implications for privacy of Intelligent Vehicle-
Highway Systems, Tavani (1999) analyses the implications of data-mining
techniques for privacy and Fleischmann (2007) considers values embedded in
digital libraries.

3.2.4 The emergence of values in information technology

What has not been discussed so far is how technological artefacts and systems
acquire embedded values. This issue has been ably taken up by Friedman
and Nissenbaum (1996). They analyse the different ways in which biases
(injustices) can emerge in computer systems. Although their focus is on biases,
their analysis can easily be generalized to values in general. Biases, they argue,
can have three different types of origins. Preexisting biases arise from values
and attitudes that exist prior to the design of a system. They can either be
individual, resulting from the values of those who have a significant input into
the design of the systems, or societal, resulting from organizations, institutions
or the general culture that constitute the context in which the system is
developed. Examples are racial biases of designers that become embedded in
loan approval software, and overall gender biases in society that lead to the
development of computer games that are more appealing to boys than to girls.
Friedman and Nissenbaum note that preexisting biases can be embedded in
systems intentionally, through conscious efforts of individuals or institutions,
or unintentionally and unconsciously.

A second type is technical bias, which arises from technical constraints or
considerations. The design of computer systems includes all kinds of technical
limitations and assumptions that are perhaps not value-laden in themselves
but that could result in value-laden designs, for example because limited
screen sizes cannot display all results of a search process, thereby privileging
those results that are displayed first, or because computer algorithms or models
contain formalized, simplified representations of reality that introduce biases
or limit the autonomy of users, or because software engineering techniques
do not allow for adequate security, leading to systematic breaches of privacy.
A third and final type is emergent bias, which arises when the social context
in which the system is used is not the one intended by its designers. In the
new context, the system may not adequately support the capabilities, values
or interests of some user groups or the interests of other stakeholders. For
example, an ATM that relies heavily on written instructions may be installed
in a neighborhood with a predominantly illiterate population.

Friedman and Nissenbaum’s classification can easily be extended to embed-
ded values in general. Embedded values may hence be identified as preexist-
ing, technical or emergent. What this classification shows is that embedded
values are not necessarily a reflection of the values of designers. When they
are, moreover, their embedding often has not been intentional. However, their
embedding can be an intentional act. If designers are aware of the way in
which values are embedded into artefacts, and if they can sufficiently antic-
ipate future uses of an artefact and its future context(s) of use, then they
are in a position to intentionally design artefacts to support particular val-
ues. Several approaches have been proposed in recent years that aim to make
considerations of value part of the design process. In Section 3.4, the most
influential of these approaches, called value-sensitive design, is discussed. But
first, let us consider a more philosophical approach that also adopts the notion
of embedded values.

3.3 Disclosive computer ethics

The approach of disclosive computer ethics (Brey 2000, 1999a) intends to
make the embedded values approach part of a comprehensive approach
to computer ethics. It is widely accepted that the aim of computer ethics is
to morally evaluate practices that involve computer technology and to devise
ethical policies for these practices. The practices in question are activities of
designing, using and managing computer technology by individuals, groups
or organizations. Some of these practices are already widely recognized in
society as morally controversial. For example, it is widely recognized that
copying patented software and filtering Internet information are morally con-
troversial practices. Such practices may be called morally transparent because
the practice is known and it is roughly understood what moral values are at
stake in relation to it.

In other computer-related practices, the moral issues that are involved may
not be sufficiently recognized. This may be the case because the practices
themselves are not well known beyond a circle of specialists, or because they
are well known but not recognized as morally charged because they have
a false appearance of moral neutrality. Practices of this type may be called
morally opaque, meaning that it is not generally understood that the practice
raises ethical questions or what these questions may be. For example, the
practice of browser tracking is morally opaque because it is not well known
or well understood by many people, and the practice of search engine use is
morally opaque because, although the practice is well known, it is not well
known that the search algorithms involved in the practice contain biases and
raise ethical questions.

Computer ethics has mostly focused on morally transparent practices, and
specifically on practices of using computer systems. Such approaches may be
called mainstream computer ethics. In mainstream computer ethics, a typical
study begins by identifying a morally controversial practice, like software
theft, hacking, electronic monitoring or Internet pornography. Next, the prac-
tice is described and analysed in descriptive terms, and finally, moral principles
and judgements are applied to it and moral deliberation takes place, resulting
in a moral evaluation of the practice and, possibly, a set of policy recommen-
dations. As Jim Moor has summed up this approach, ‘A typical problem in
computer ethics arises because there is a policy vacuum about how computer
technology should be used’ (1985, p. 266).

The approach of disclosive computer ethics focuses instead on morally
opaque practices. Many practices involving computer technology are morally
opaque because they include operations of technological systems that are very
complex and difficult to understand for laypersons and that are often hidden
from view for the average user. Additionally, practices are often morally
opaque because they involve distant actions over computer networks by sys-
tem operators, providers, website owners and hackers and remain hidden from
view from users and from the public at large. The aim of disclosive ethics is
to identify such morally opaque practices, describe and analyse them, so as
to bring them into view, and to identify and reflect on any problematic moral
features in them. Although mainstream and disclosive computer ethics are
different approaches, they are not rival approaches but are rather comple-
mentary. They are also not completely separable, because the moral opacity
of practices is always a matter of degree, and because a complex practice may
include both morally transparent and opaque dimensions, and thus require
both approaches.

Many computer-related practices that are morally opaque are so because
they depend on operations of computer systems that are value-laden without
it being known. Many morally opaque practices, though not all, are the result
of undisclosed embedded values and norms in computer technology. A large
part of the work in disclosive computer ethics, therefore, focuses on the iden-
tification and moral evaluation of such embedded values.

3.3.1 Methodology: multi-disciplinary and multi-level

Research typically focuses on an (alleged) morally opaque practice (e.g., pla-
giarism detection) and optionally on a morally opaque computer system or
software program involved in this practice (e.g., plagiarism detection software).
The aim of the investigation usually is to reveal hidden morally problematic
features in the practice and to provide ethical reflections on these features,
optionally resulting in specific moral judgements or policy recommendations.
To achieve this aim, research should include three different kinds of research
activities, which take place at different levels of analysis. First, there is the
disclosure level. At this level, morally opaque practices and computer systems
are analysed from the point of view of one or more relevant moral values, like
privacy or justice. It is investigated whether and how the practice or system
tends to promote or demote the relevant value. At this point, very little moral
theory is introduced into the analysis, and only a coarse definition of the
value in question is used that can be refined later on in the research.

Second, there is the theoretical level at which moral theory is developed
and refined. As Jim Moor (1985) has pointed out, the changing settings and
practices that emerge with new computer technology may yield new values, as
well as require the reconsideration of old values. There may also be new moral
dilemmas because of conflicting values that suddenly clash when brought
together in new settings and practices. It may then be found that existing moral
theory has not adequately theorized these values and value conflicts. Privacy,
for example, is now recognized by many computer ethicists as requiring more
attention than it has previously received in moral theory. In part, this is
due to reconceptualizations of the private and public sphere, brought about
by the use of computer technology, which has resulted in inadequacies in
existing moral theory about privacy. It is part of the task of computer ethics
to further develop and modify existing moral theory when, as in the case of
privacy, existing theory is insufficient or inadequate in light of new demands
generated by new practices involving computer technology.

Third, there is the application level, in which, in varying degrees of speci-
ficity and concreteness, moral theory is applied to analyses that are the out-
come of research at the disclosure level. For example, the question of what
amount of protection should be granted to software developers against the
copying of their programs may be answered by applying consequentialist or
natural law theories of property; and the question of what actions governments
should take in helping citizens have access to computers may be answered
by applying Rawls’ principles of justice. The application level is where moral
deliberation takes place. Usually, this involves the joint consideration of moral
theory, moral judgements or intuitions and background facts or theories, rather
than a slavish application of preexisting moral rules.

Disclosive ethics should not just be multi-level, ideally it should also be
a multi-disciplinary endeavour, involving ethicists, computer scientists and
social scientists. The disclosure level, particularly, is best approached in a
multi-disciplinary fashion because research at this level often requires con-
siderable knowledge of the technological aspects of the system or practice that
is studied and may also require expertise in social science for the analysis of
the way in which the functioning of systems is dependent on human actions,
rules and institutions. Ideally, research at the disclosure level, and perhaps
also at the application level, is best approached as a cooperative venture
between computer scientists, social scientists and philosophers. If this cannot
be attained, it should at least be carried out by researchers with an adequate
interdisciplinary background.

3.3.2 Focus on public values

The importance of disclosive computer ethics is that it makes transparent
moral features of practices and technologies that would otherwise remain
hidden, thus making them available for ethical analysis and moral decision-
making. In this way, it supplements mainstream computer ethics, which runs
the risk of limiting itself to the more obvious ethical dilemmas in computing.
An additional benefit is that it can point to novel solutions to moral dilemmas
in mainstream computer ethics. Mainstream approaches tend to seek solu-
tions for moral dilemmas through norms and policies that regulate usage.
But some of these moral dilemmas can also be solved by redesigning, replac-
ing or removing the technology that is used, or by modifying problematic
background practices that condition usage. Disclosive ethics can bring these
options into view. It thus reveals a broader arena for moral action, in which
different parties responsible for the design, adoption, use and regulation of
computer technology share responsibility for the moral consequences of using
it, and in which the technology itself is made part of the equation.

In Brey (2000) I have proposed a set of values that disclosive computer
ethics should focus on. This list included justice (fairness, non-discrimination),
freedom (of speech, of assembly), autonomy, privacy and democracy. Many
other values could be added, like trust, community, human dignity and moral
accountability. These are all public values, which are moral and social values
that are widely accepted in society. An emphasis on public values makes it
more likely that analyses in disclosive ethics can find acceptance in society
and that they stimulate better policies, design practices or practices of using
technology. Of course, analysts will still have disagreements on the proper
definition or operationalization of public values and the proper way of bal-
ancing them against each other and against other constraints like cost and
usability, but such disagreements are inherent to ethics.

The choice for a particular set of values prior to analysis has been criticized
by Introna (2005), who argues that disclosive computer ethics should rather
focus on the revealing of hidden politics, interests and values in technological
systems and practices, without prioritizing which values ought to be real-
ized. This suggests a more descriptive approach to disclosive computer ethics
as opposed to the more normative approach proposed in Brey (2000).

3.4 Value-sensitive design

The idea that computer systems harbour values has stimulated research into
the question how considerations of value can be made part of the design
process (Flanagan, Howe and Nissenbaum 2008). Various authors have made
proposals for incorporating considerations of value into design methodology.
Value-sensitive design (VSD) is the most elaborate and influential of these
approaches. VSD has been developed by computer scientist Batya Friedman
and her associates (Friedman, Kahn and Borning 2006, Friedman and Kahn
2003) and is an approach to the design of computer systems and software that
aims to account for and incorporate human values in a comprehensive manner
throughout the design process. The theoretical foundation of value-sensitive
design is provided in part by the embedded values approach, although it is
emphasized that values can result from both design and the social context in
which the technology is used, and usually emerge in the interaction between
the two.

The VSD approach proposes investigations into values, designs, contexts of
use and stakeholders with the aim of designing systems that incorporate and
balance the values of different stakeholders. It aims to offer a set of methods,
tools and procedures for designers by which they can systematically account
for values in the design process. VSD builds on previous work in various
fields, including computer ethics, social informatics (the study of information
and communication tools in cultural and institutional contexts), computer-
supported cooperative work (the study of how interdependent group work
can be supported by means of computer systems) and participatory design (an
approach to design that attempts to actively involve users in the design process
to help ensure that products meet their needs and are usable). The focus of
VSD is on ‘human values with ethical import’, such as privacy, freedom from
bias, autonomy, trust, accountability, identity, universal usability, ownership
and human welfare (Friedman and Kahn 2003, p. 1187).

VSD places much emphasis on the values and needs of stakeholders. Stake-
holders are persons, groups or organizations whose interests can be affected
by the use of an artefact. A distinction is made between direct and indirect
stakeholders. Direct stakeholders are parties who interact directly with the
computer system or its output. That is, they function in some way as users
of the system. Indirect stakeholders include all other parties who are affected
by the system. The VSD approach proposes that the values and interests of
stakeholders are carefully balanced against each other in the design process.
At the same time, it wants to maintain that the human and moral values it
considers have standing independently of whether a particular person or group
upholds them (Friedman and Kahn 2003, p. 1186). This stance poses a possible
dilemma for the VSD approach: how to proceed if the values of stakeholders
are at odds with supposedly universal moral values that the analyst indepen-
dently brings to the table? This problem has perhaps not been sufficiently
addressed in current work in VSD. In practice, fortunately, there will often be
at least one stakeholder who has an interest in upholding a particular moral
value that appears to be at stake. Still, this fact does not provide a principled
solution for this problem.

3.4.1 VSD methodology

VSD often focuses on a technological system that is to be designed and
investigates how human values can be accounted for in its design. However,
designers may also focus on a particular value and explore its implications for
the design of various systems, or on a particular context of use, and explore
values and technologies that may play a role in it. With one of these three aims
in mind, VSD then utilizes a tripartite methodology that involves three kinds
of investigations: conceptual, empirical and technical. These investigations are
undertaken congruently and are ultimately integrated with each other within
the context of a particular case study.

Conceptual investigations aim to conceptualize and describe the values
implicated in a design, as well as the stakeholders affected by it, and consider
the appropriate trade-off between implicated values, including both moral
and non-moral values. Empirical investigations focus on the human context
in which the technological artefact is to be situated, so as to better anticipate
this context and to evaluate the success of particular designs. They include
empirical studies of human behaviour, physiology, attitudes, values and needs
of users and other stakeholders, and may also consider the organizational con-
text in which the technology is used. Empirical investigations are important
in order to assess what the values and needs of stakeholders are, how techno-
logical artefacts can be expected to be used, and how they can be expected
to affect users and other stakeholders. Technical investigations, finally, study
how properties of technological artefacts support or hinder human values and
how computer systems and software may be designed proactively in order
to support specific values that have been found important in the conceptual
investigation.

Friedman, Kahn and Borning (2003) propose a series of steps that may be
taken in VSD case studies. They are, respectively, the identification of the
topic of investigation (a technological system, value or context of use), the
identification of direct and indirect stakeholders, the identification of benefits
and harms for each group, the mapping of these benefits and harms onto
corresponding values, the conduction of a conceptual investigation of key
values, the identification of potential value conflicts and the proposal of
solutions for them, and the integration of resulting value considerations with
the larger objectives of the organization(s) that have a stake in the design.
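
Read as a checklist, these steps lend themselves to a simple working document.
The following hypothetical sketch (the field names are invented and not part of
the VSD literature) records the early steps for the augmented-window case
discussed in the next subsection.

    # A hypothetical worksheet for the first VSD steps of a single case study.
    vsd_case = {
        "topic": "plasma display used as an 'augmented window' on nature",
        "direct_stakeholders": ["office workers viewing the display"],
        "indirect_stakeholders": ["passers-by filmed by the outdoor camera"],
        "benefits_and_harms": {
            "office workers": ["improved well-being and creativity"],
            "passers-by": ["being recorded without their consent"],
        },
        "implicated_values": ["physical and psychological welfare", "privacy"],
        "value_conflicts": [("welfare of workers", "privacy of passers-by")],
    }

    for worker_value, bystander_value in vsd_case["value_conflicts"]:
        print("Conflict to resolve:", worker_value, "vs.", bystander_value)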

3.4.2 VSD in practice

A substantial number of case studies within the VSD framework have been
completed, covering a broad range of technologies and values (see Friedman
and Freier 2005 for references). To see how VSD is brought into practice, two
case studies will now be described in brief.

In one study, Friedman, Howe and Felten (2002) analyse how the value of
informed consent (in relation to online interactions of end-users) might be bet-
ter implemented in the Mozilla browser, which is an open-source browser. They
first undertook an initial conceptual investigation of the notion of informed
consent, outlining real-world conditions that would have to be met for it, like
disclosure of benefits and risks, voluntariness of choice and clear communi-
cation in a language understood by the user. They then considered the extent
to which features of existing browsers already supported these conditions.
Next, they identified conditions that were supported insufficiently by these
features, and defined new design goals to attain this support. For example,
they found that users should have a better global understanding of cookie uses
and benefits and harms, and should have a better ability to manage cookies
with minimal distraction. Finally, they attempted to come up with designs of
new features that satisfied these goals, and proceeded to implement them into
the Mozilla browser.
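
Purely by way of illustration (this is not the design that Friedman, Howe and
Felten implemented), a minimal sketch of a consent routine that tries to meet
two of the conditions they identify, disclosure and voluntary choice, might
look as follows; the function and parameter names are invented.

    def request_cookie_consent(site: str, purpose: str, ask) -> bool:
        # Disclosure: state who wants to store the cookie and for what purpose.
        # Voluntariness: nothing is stored unless the user explicitly agrees,
        # and the prompt notes that the choice can be revised later.
        question = (
            f"{site} wants to store a cookie to {purpose}. Allow it? "
            "You can change this choice at any time in your settings."
        )
        return ask(question)

    # A canned answer stands in for the real user interface in this example.
    allowed = request_cookie_consent(
        "video.example", "remember your playback preferences", ask=lambda q: True
    )
    print("cookie allowed:", allowed)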

In a second study, reported in Friedman, Kahn and Borning (2006), Kahn,
Friedman and their colleagues consider the design of a system consisting
of a plasma display and a high-definition TV camera. The display is to be
hung in interior offices and the camera is to be located outside, aimed at a
natural landscape. The display was to function as an ‘augmented window’
on nature that was to increase emotional well-being, physical health and
creativity in workers. In their VSD investigation, they operationalized some of
these values and sought to investigate in a laboratory context whether they
were realized in office workers, and found that they were. They then also
identified indirect stakeholders of the system. These included those individuals
that were unwittingly filmed by the camera. Further research indicated that
many of them felt that the system violated their privacy. The authors concluded
that if the system is to be further developed and used, this privacy issue must
first be solved. It may be noted, in passing, that, whilst in these two examples
only a few values appear to be at stake, other case studies consider a much
larger number of values, and identify many more stakeholders.

3.5 Conclusion

This chapter focused on the embedded values approach, which holds that
computer systems and software are capable of harbouring embedded or ‘built-
in’ values, and on two derivative approaches, disclosive computer ethics and
value-sensitive design. It has been argued that, in spite of powerful arguments
for the neutrality of technology, a good case can be made that technological
artefacts, including computer systems, can be value-laden. The notion of an
embedded value was defined as a built-in tendency in an artefact to promote
or harm the realization of a value that manifests itself across the central
uses of an artefact in ordinary contexts of use. Examples of such values in
information technology were provided, and it was argued that such values
can emerge because they are held by designers or society at large, because
of technical constraints or considerations, or because of a changing context
of use.

Next, the discussion shifted to disclosive computer ethics, which was
described as an attempt to incorporate the notion of embedded values into
a comprehensive approach to computer ethics. Disclosive computer ethics
focuses on morally opaque practices in computing and aims to identify,
analyse and morally evaluate such practices. Many practices in computing
are morally opaque because they depend on computer systems that contain
embedded values that are not recognized as such. Therefore, disclosive ethics
frequently focuses on such embedded values. Finally, value-sensitive design
was discussed. This is a framework for accounting for values in a comprehen-
sive manner in the design of systems and software. The approach was related to
the embedded values approach and its main assumptions and methodological
principles were discussed.

Much work still remains to be done within the three approaches. The embed-
ded values approach could still benefit from more theoretical and conceptual
work, particularly regarding the very notion of an embedded value and its
relation to both the material features of artefacts and their context of use. Dis-
closive computer ethics could benefit from further elaboration of its central
concepts and assumptions, a better integration with mainstream computer
ethics and more case studies. And VSD could still benefit from further devel-
opment of its methodology, its integration with accepted methodologies in
information systems design and software engineering, and more case studies.
In addition, more attention needs to be invested into the problematic tension
between the values of stakeholders and supposedly universal moral values
brought in by analysts. Yet, they constitute exciting new approaches in the
fields of computer ethics and computer science. In ethics, they represent an
interesting shift in focus from human agency to technological artefacts and
systems. In computer science, they represent an interesting shift from utili-
tarian and economic concerns to a concern for human values in design. As
a result, they promise both a better and more complete computer ethics as
well as improved design practices in both computer science and engineering
that may result in technology that lives up better to our moral and public
values.

Stage 3: Project Assignment (YouTube)

The technology is YouTube.

In this paper, you will write a 1000-1250 word narrative that communicates your assessment of your technology’s impacts, biases, and political properties.  

Your paper should do the following: 

1. Introduce your technology by describing its intended use, its location, and its creators and their demographics. 

2. Answer the question, “What invisible biases, values or norms may have been introduced into the design, planning or intended use of this artifact by the creators or planners, and how can you determine this?”

3. Based on your observations of interactions between people and this technology, and the context of the technology (physical or digital), discuss who benefits from the technology the most and who benefits from it the least and offer some reasons why this is the case. 

4. Review pages 252-256 of Winner’s article, “Do Artifacts Have Politics?,” from Week 3 (please see attached for the article) and suggest how the technology you have chosen can create certain types of social order in our world by including and benefiting some people while excluding others. Alternatively, you could discuss how the artifact emphasizes some values and norms over others in its actual design, basing your observations on Brey’s ideas from “Disclosive Computer Ethics” in Week 4 (please see attached for the article), even if the technology is not digital. You can include any sociological or anthropological studies that have been done on the technology, lawsuits that have been filed, etc., as part of your research as well.

· For example, driver-side airbags and car seats are often designed for the average height and weight of men, protecting men better in crashes and even injuring women, many of whom suffer pelvic fractures from the impact of airbags. A technology that is not designed to protect people of all sizes equally may raise an ethical issue.

5. Have a concluding paragraph that sums up the conclusions you have drawn about your technology in the process of doing this project. 

6. Support and explain your points with observed evidence and any research you have done. Also, be sure to use the Required Learning Materials from weeks 3-4 to help you. 

7. Make sure to provide in-text citations for any paraphrases, summaries and quotes in MLA format and provide references for all sources in MLA format. In general, avoid long quotes from your resources (more than 3 lines) and make sure to use all of your resources in your paper. 

Format 

Submit this as a Word document. Your completed paper should be 4-5 pages (1000-1250 words) in length and be based on the work you did for stages 1-2. You can use the attached template as a guide for formatting and/or organization if you want to. 

Please see the attached Stage 2 paper for assistance. 


7 February 2023

Social Media Platform (YouTube)

Who created and designed this technology?

The technology that I will be using for this assignment is YouTube, an online platform where people can upload, watch, and share videos with others (Niebler 223). YouTube was founded by Chad Hurley, Steve Chen, and Jawed Karim, three former PayPal employees. Hurley, a white American, was born in Pennsylvania in 1977; Chen, a Taiwanese American, was born in Taiwan in 1978; and Karim, who is of Bangladeshi and German descent, was born in Germany in 1979. The purpose of YouTube was to provide a space for people to upload and share videos with a global audience. According to Hurley, the concept for YouTube arose from the co-founders’ frustration at being unable to conveniently share videos they had filmed at a dinner gathering.

A brief history of YouTube:

YouTube was launched in 2005; it has been around for nearly 18 years and has undergone several significant changes since its launch. For example, in its early years it consisted mostly of user-generated content, but it has since expanded to include a wider range of video content, such as music videos, movie trailers, and TV shows. YouTube has also added advanced features like live streaming and virtual reality viewing options.

Users of YouTube

YouTube has a diverse user base that encompasses different ages and backgrounds. Young people, particularly teenagers and young adults, frequently use the platform for entertainment and educational purposes (Abuljadail et al. 18). Businesses and content creators, by contrast, use YouTube for advertising and monetization. Meanwhile, older individuals and those living in rural areas with limited internet access are less likely to use the platform.

Beneficiaries of YouTube

YouTube benefits content creators, advertisers, and the company itself the most. Creators can earn money through ads and sponsorships, while advertisers can reach a large audience through the platform (Stokel-Walker 89). YouTube also benefits from the large amounts of user data it collects, which it can use for targeted advertising. Some groups, such as marginalized communities, have difficulty benefiting from YouTube because of its algorithmic biases and the lack of representation in its content.

Potential changes to the technology

One issue that needs to be addressed on YouTube is algorithmic bias. By addressing algorithmic bias and increasing the representation of marginalized communities, YouTube can make the platform more inclusive and equitable. This would involve incorporating more diverse content recommendations, elevating underrepresented creators, and combating hate speech and harassment on the platform. These changes would result in a platform that is accessible and beneficial to a wider range of users and creators.

Works Cited

Abuljadail, Mohammad, et al. The Audience and Business of YouTube and Online Videos. Rowman & Littlefield, 2018.

Niebler, Valentin. “‘YouTubers Unite’: Collective Action by YouTube Content Creators.” 2020, pp. 223-227.

Stokel-Walker, Chris. YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars. Canbury Press, 2019.
