Posted: April 25th, 2025

Computer Science HCI and UI – Assignment

Review the case studies in Chapter 6 (attached). Identify and share 3 lessons that you learned from them.

Then, imagine you are the new Chief Design Officer (CDO) of a start-up. Using those lessons and the concepts from Chapters 2 to 5 (attached), draft a directive to your UI/UX designers telling them how you want them to address the issues and challenges resulting from physical, cognitive, perceptual, personality, environmental, and cultural differences and diversities.

Need 6-8 pages in APA format and cite at least 6 peer-reviewed articles. Need an introduction and conclusion. No AI work.

CHAPTER 6
Design Case Studies

"Prototyping helps you get ideas out of your head and into something more tangible, something you can feel, experience, work through, play with and test . . . you can't afford not to prototype on your next project."

Todd Zaki Warfel
Prototyping: A Practitioner's Guide, 2009

"A prototype is worth a thousand meetings."

Mike Davidson
Vice President of Design for Twitter*

CHAPTER OUTLINE
6.1 Introduction
6.2 Case Study 1: Iterative Design Evaluation of Automated Teller Machines (ATMs)
6.3 Case Study 2: Design Consistency at Apple Computer
6.4 Case Study 3: Data-Driven Design at Volvo
6.5 General Observations and Summary

*http://alvinalexander.com/photos/prototype-worth-thousand-meetings


6.1 Introduction

This chapter's case studies present design contexts and applications to let readers see how tradeoffs and choices are made. Readers may find the case studies valuable for encapsulating design learning and showing the challenges of a design context so that they can be shared within teams or across an organization.

The three case studies were chosen to cover this book's design methods. One example of the design methods is whiteboard or digital sketching (Buxton, 2007; Greenberg et al., 2011), where prototype screen designs are presented for discussion and collaboration using whiteboard drawing. A number of tools and apps exist to support this technique.

User-interface designs are often proposed on a napkin at a favorite coffee shop. Wireframes (Usability.gov, 2015) and supporting wireframing tools are popular ways to define an interface design. Other design methods include sticky notes placed strategically on a whiteboard sketch or computer-based mockup of a screen. Higher-fidelity screen prototypes can be generated to illustrate the state of the design by adding navigation options, icons, and animation for clarification of design decisions.

Case Study 1 is titled "Iterative Design Evaluation of Automated Teller Machines (ATMs)," a study of the user-interface process of developing ATMs with details on how to perform usability testing of ATMs. This case study is a good example of how an iterative HCI process can be performed, exposing potential roadblocks while illustrating in a specific example the processes described in previous chapters regarding user-interface design and development: observe, refine, design, implement, evaluate, iterate.

Case Study 2 is titled "Design Consistency at Apple Computer" (Apple, 2015a). This case study is part of the Apple Human Interface Guidelines (Apple, 2015b) and results in a perspective and suggested approach for practitioners. Many product manufacturers besides Apple have developed style guidelines to ensure a consistent user interface across multiple products. For example, a company developing multiple technology products would prefer that its user interface be consistent across the product lines, follow a corporate style that reflects branding, and ensure that it is easy for a new user to master a new product from the same manufacturer. Although Apple is arguably one of the best at this, companies in other industries, such as automobile manufacturers and medical equipment makers, work hard to pursue the same goal.

Case Study 3, "Data-Driven Design at Volvo" (Wozniak et al., 2015), shows successful collaboration methods in action to solve a diverse, distributed corporate data-analysis problem. By using user-interface development process methods, data are retrieved and presented in a tailorable format that empowers the users to achieve their business and organization goals.


BOX 6.1

See also:

• Chapter 4, Design

• Chapter 5, Evaluation and the User Experience

• Chapter 12, Advancing the User Experience

The chapter concludes with general observations and a summary that compares and contrasts all three case studies and their importance. There are many design process models and user experience evaluation approaches. While reading about the case studies in this chapter, reflect on the process steps required for a successful outcome.

6.2 Case Study 1: Iterative Design Evaluation of
Automated Teller Machines (ATMs)

Most of us have become familiar with ATMs of varying styles and sizes. Drive-thru, stand-up, kiosk-style, standalone structures, part of a bank wall or lobby: these machines are everywhere. Customers who need cash right away can go to another bank and pay a fee. An individual can travel the world and get cash in the local currency just by inserting an ATM/debit card, entering the PIN, and making a few choices, hopefully in a language that the user understands; many ATMs now have multilingual options. Many banking mobile apps allow for many of the same transactions that an ATM provides except getting cash. So, let's limit this case study to just physical ATMs, an example of which appears as Fig. 6.1.

As for any device, the user interface for ATMs has evolved: from a primitive electronic keypad to a magnificent, immersive experience of touchscreen displays with animated advertisements; tones signaling completion of task steps or key presses; color and font choices that improve the appearance while remaining consistent with the bank brand; and the latest security features, such as small mirrors to see behind the customer, security cameras recording the customer's presence for safety, copious lighting at night, and card entry points that hinder "skimming" or copying of ATM cards by thieves and fraudsters.

When learning about usability of the user interface for an ATM, after reviewing Chapters 1-5 of this text, go visit the nearest ATM. (Disclaimer: The authors certainly understand it takes more than a few minutes to understand usability methods and techniques, but please read on regarding this usability "experiment.")

FIGURE 6.1
Sample ATM.

Designers could run a stopwatch as the user withdraws a specified amount at multiple ATMs, record their movements, and watch them move from the keypad for PIN entry to the touchscreen for the withdrawal step(s) with prompting for a receipt (the experienced user is like a one-person band playing multiple instruments) to get that end result: cash in hand, with or without receipt, ATM card returned, account updated, quickly and safely. Visualize the statistical data that can be captured from this usability "experiment":

• Time to complete all tasks over a statistically significant set of ATMs

• Time expended for these ATM steps or "subtasks":

  1. Entrance into the ATM (approach the ATM, read instructions to get started, insert card, enter PIN, continue following prompted instructions)
  2. Enter commands to make a withdrawal
  3. Receive cash, optional receipt, and returned card (with the preferred goal of leaving a positive balance in the account)

• Objective and subjective user feedback and contextual observation regarding user performance of the above ATM steps

To complicate things, let's add an eye-tracker and a key-logging tool to record keyboard and/or touchscreen data entries, record any errors, document navigation steps taken, and have the user enunciate steps taken with commentary in a think-aloud protocol; for example, "I am now going to insert my ATM card into the machine." Consider having a team record this event on video to analyze later, as is done in user experience labs. An excellent set of guiding principles for user experience appears in Hartson and Pyla (2012).
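As a loose illustration only (not drawn from the case study), the following Python sketch shows how a facilitator might time the three ATM subtasks listed above and log the results; the subtask names, participant ID, and CSV file name are invented for the example.

# Minimal sketch of a subtask timer for the ATM usability "experiment."
# A facilitator presses Enter as the participant completes each subtask;
# durations are written to a CSV file for later analysis.

import csv
import time

SUBTASKS = [
    "Entrance (approach, read instructions, insert card, enter PIN)",
    "Enter withdrawal commands",
    "Receive cash, receipt, and card",
]

def run_session(participant_id):
    records = []
    input("Press Enter when the participant approaches the ATM...")
    for subtask in SUBTASKS:
        start = time.perf_counter()
        input(f"Press Enter when done: {subtask}")
        records.append({
            "participant": participant_id,
            "subtask": subtask,
            "seconds": round(time.perf_counter() - start, 2),
        })
    return records

if __name__ == "__main__":
    rows = run_session("P01")
    with open("atm_timings.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["participant", "subtask", "seconds"])
        writer.writeheader()
        writer.writerows(rows)

Summary statistics across participants and machines can then be computed from the logged file, feeding the data-driven design interventions discussed below.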

The amount of data to analyze is growing! The previous chapter (Chapter 5) discusses how to structure this usability evaluation to make the process practical and finite.

Designers who study neighborhood ATMs and review current literature from ATM developers or other vendors can develop a useful competitive feature analysis. One intriguing design for an ATM kiosk in developing countries is discussed in Birnie (2011). Numerous ATM screenshot examples and ATM designs can be found with a quick web search, illustrating style alternatives in current ATM design worldwide.

Look at the ATM design for accessibility, i.e., universal usability (see Chapter 2). Consider some of the guidelines, principles, and theories that drive the design (Chapter 3), often resulting in a style guide that merges these concepts with product branding to ensure an end result that fits the business objective for the ATM. Of course, manage the design process in an organized, well-defined, user-centered, iterative fashion (Chapter 4).

Once this usability experimentation and literature search is complete, designers can enter the next life-cycle (design) phase. Think of this as incremental continuous improvement. The data collected can be analyzed to arrive at concrete, data-driven design interventions that may improve the user experience. These alternative designs can then be sketched and prototyped as discussed in Buxton (2007) and Greenberg et al. (2011). Make sure to review Chapters 4-5 of this book for design and evaluation processes.

Designs can then be documented, tradeoffs between alternative designs evaluated, and specifications written for an improved ATM design. Iterative design is the best approach here, with design prototypes developed, evaluated, and improved. Again, striving to make this process complete yet finite is the challenge. Typically, a delivery deadline will drive the depth and fidelity of any prototyping effort; for example, the next-generation prototype ATM needs to appear and be operational at a trade show on a specific date and location. Some clients require "capability demonstrations," where the increasing fidelity of the prototype is shown in "proof of concept" demonstrations following a planned, incremental development strategy.

Sales commitments are made; final implementation continues; ATMs are built, delivered, fitted into a physical structure, and integrated into the banking network; bank personnel are trained; customers are notified; and so on, to bring these products online.

Observations
Does the analysis of the usability of the newly installed ATM stop here? Certainly not. Continue gathering feedback from customers and monitoring implementation success. Consider rolling out test sites (e.g., beta testing) to first ensure new designs are accepted on a smaller scale. These business decisions are tightly coupled to the design and usability results discussed here. Ultimately, as with many user-interface designs, the success of the product is often judged by the user interface.

At this point in the life cycle of the ATM case study, the following scenario could occur: In the ideal situation, everything works perfectly and the bank clients love the new design. System performance is terrific, cost per transaction drops, profits are up, and customers flock to the bank to use the new ATMs.

Realistically, some changes may need to be made. There could be numerous unanticipated user transition and acceptance issues. The bank could hire someone to independently develop an alternative user experience. The bank captures data from the deployed systems to methodically (like a software upgrade) roll out improvements to the ATM network. The feedback loop continues.

The remainder of this chapter focuses on two specific design case studies and
how the organizations approached their user-interface challenges.

6.3 Case Study 2: Design Consistency at
Apple Computer

Case Study 2 examines the process and decisions reflected in an Apple document titled "From Desktop to iOS" (Apple, 2015a, 2015b). In this analysis, Apple examined products and design decisions made for Keynote® (for presentations), Mail (e-mail for iPhone), and web content.

The case study reviews style guidelines from the Apple Development Guidelines and how these were applied in bringing apps to iOS-enabled devices. For more information on related issues, see the iOS Human Interface Guidelines in the iOS Developer Library that Apple provides (Apple, 2015b). Following are a few samples of the referenced iOS Human Interface Guidelines (explanation, use, and screenshot illustrations are given for the guidelines):

• Take advantage of the whole screen

• Reconsider visual indicators of physicality and realism

• Let translucent user interface elements hint at the content behind them

• Let color simplify the user interface

• Ensure legibility by using the system fonts

• Use depth to communicate

There are guidelines for icons and image design, iOS technologies, user interface elements, and more.

Keynote has presentation development tools, graphics, and toolbars for rapid generation of presentations. An example screenshot appears as Fig. 6.2.

FIGURE 6.2
Sample Keynote display with help text ("Add transitions and builds, edit presenter notes, and more").

Presentation (graphics) styles as well as human interaction for touchscreen devices are included in the case study, illustrating the product's ease of use. iOS device-enabled features apply direct manipulation and gesturing interaction (see Chapter 7) that are easy to use.

Next the Apple study looks at Mail for iPhones. Fig. 6.3 illustrates Apple's intuitive, predictable navigation for Mail, which has a user interface consistent with the Apple product line.

The case study finishes with a discussion of Safari on iOS devices, where again the mobile web-viewing user experience is easy to use and consistent with the product line; it gives web designers valuable insight into the portability and ease of development of web content for iOS devices. References to iOS design strategies consistent with the teachings in this text are included: specify the app, determine who the users are, identify desired features (requirements collection), preliminary and detailed design, build and development, evaluation and testing.

Observations
Some general observations are worth noting with respect to this case study. The years of experience behind the referenced Apple Human Interface Guidelines are brought to bear on the problem. There is a consistent style across all products and devices that makes device operation comfortable and intuitive. Rapid device technology improvements (lighter weight, faster, more colors, more pixels, improved throughput, etc.) result in a constant reevaluation of the user interface and improvement of the guidelines applied. Still, the principles discussed in this text also hold true in this case study: universal usability, guidelines based on principles and theory, iterative user-centered design processes, and a keen appreciation of the user experience and of style.

FIGURE 6.3
Sample iPhone Mail screens.

The following case study shows a successful collaboration in a large corporation utilizing user-interface development process methods to solve a data-analysis challenge.

6.4 Case Study 3: Data-Driven Design at Volvo

The development of Volvo's big data service provides a terrific example of a case study of big data analytics used in the corporate world that contains a strong user-interface design component (Wozniak et al., 2015). But first a definition: big data is defined (Google, 2015) as

Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.


In this case study, stakeholders were identified and empowered to help design the service (toolset) that would be used by the company. A diverse set of stakeholders (including many users) proved to be a success-oriented approach for this participatory design team. Stakeholders included the internal IT organization, an invited (external) expert on big data implementations, database engineers, and business intelligence analysts. Workshops were held for the stakeholders as well as users of the data, such as the organizations in charge of vehicle maintenance. The workshop attendees (representative stakeholders) strived to define how the results might appear and how they could be applied to the various stakeholder group missions. Attendees were encouraged to "think outside the box" in terms of potential uses for the data. They had to make sure the data were indeed collectable and would make sense to improve organization performance.

Taking huge datasets of Volvo truck service data and essentially prototyping the analysis output that could be performed worked successfully to identify the needed data in useful formats. In the workshop, a low-fidelity prototype was developed (see Fig. 6.4).

Through a series of refinements with representative users worldwide, this prototype evolved into something useful for all concerned. Fig. 6.5 shows a sample final version. Users could bring up advanced information about their products (vehicles), such as service history statistics, perform queries on market-specific issues, see maps of vehicle usage, highlight interesting values, and more.

FIGURE 6.4
Low-fidelity prototype resulting from big data analysis of truck service statistics.

FIGURE 6.5
Fully functional dashboard prototype: a single view of vehicle.

A big data service was developed with user-customizable reports. Customization played a crucial role in the design. Users involved in the process knew they would be able to choose features and rearrange views based on market-to-market differences. The study showed how following a user-interface development paradigm or development methodology led to a successful result that had buy-in from a distributed set of users. Indeed, if brokering organizational support is desired for a new innovation or product, a design-thinking workshop of the sort practiced in this use case can be an effective technique.
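Wozniak et al. (2015) do not publish implementation details, so the following Python sketch is a hypothetical illustration only of what market-to-market view customization might look like: each market selects and orders the indicator views shown on its dashboard. All field names, market codes, and values are invented.

# Hypothetical sketch of market-customizable report views, loosely
# inspired by the "single view of vehicle" dashboard described above.

vehicle_record = {
    "vin": "YV2-EXAMPLE-672",
    "avg_speed_top_gear_kph": 86.4,
    "pct_time_in_top_gear": 41.0,
    "fuel_per_100_km": 34.2,
    "last_service_visits": ["2013-02-04", "2012-10-09", "2012-09-25"],
}

# Each market chooses which views appear and in what order.
market_layouts = {
    "SE": ["fuel_per_100_km", "last_service_visits"],
    "DE": ["avg_speed_top_gear_kph", "pct_time_in_top_gear", "fuel_per_100_km"],
}

def render_report(record, market):
    lines = [f"Vehicle {record['vin']} ({market})"]
    for field in market_layouts.get(market, []):
        lines.append(f"  {field}: {record[field]}")
    return "\n".join(lines)

print(render_report(vehicle_record, "DE"))

The design choice worth noting is that the layout lives in data rather than code, so users can rearrange views without a software release, which is the kind of tailorability the study credits for stakeholder buy-in.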

The performers of the study learned to first identify sources of data while empowering the stakeholders of the data to choose what data they could use (and how) in order to get their jobs done. Also, there were discussions in the workshops to analyze the data outputs. The final output was corporate big data policies that led to stakeholder-customizable report formats to improve internal corporate communication and decision making. The tool is now used in all European Volvo truck dealerships.

Observations
There are some general observations worth making here. What was essentially a process for developing a big data analysis strategy, service design, and supporting tools for a company to use to increase internal communication and profitability turned out to apply methods and processes taken from the world of user experience design and designing user interfaces! The authors applied a well-known methodology and an interface design paradigm, and the process worked.

Often, user-interface processes are applied to business processes without the persons leading the change realizing their origin. For example, analyzing patient flow in a hospital (patient recordkeeping, scheduling of resources, service bottlenecks, prioritization of patient needs, etc.) may sound to some like a simple database or queueing problem or an application of business-process reengineering. However, when humans are mixed with sources of diverse data, often in a time-sensitive environment, quick and easy access to critical data can make an organization run more smoothly, better allocate resources, and lead to better decisions that improve customer satisfaction and patient outcomes. "Knowing thy user," optimizing access to meaningful data, iteratively gathering feedback data, getting stakeholder buy-in, and so on, can all be accomplished using user-interface design and development methods, as was done in this case study.

6.5 General Observations and Summary

This chapter's case studies are the "tip of the iceberg" of what can be accomplished by designers of user-interface systems. The case studies were chosen strategically to highlight design contexts, various applications, and incremental continuous improvement.

The ATM design example illustrated how what may have started out as a relatively straightforward task turned into a methodical study of how to improve the user interface to the machines, one that was not only accepted but embraced by general banking customers. Extending banking functions to customers via well-designed ATMs is clearly a competitive edge and a source of profitability.


The Apple guidelines case study shows one company's approach to a consistent, easy-to-use style for all the company's products and iOS-enabled devices. Lastly, the Volvo study shows how following a good user-interface design process can lead to a successful conclusion for a large, data-intensive problem.

Additional sources for interesting user-interface case studies can be found on
the web and in Snyder (2003), Righi and James (2007), Karat and Karat (2010),
and Warfel (2011).

Practitioner’s Summary

Interface designers are aware of the challenge of working with multi-disciplinary teams while striving for consensus in a timely manner to address the requirements for a new or updated system. The challenge is that many teams use varying, non-standardized applications of interface design methodologies. What works for one company, organization, or industry may not work for another.

Make sure to do some preliminary work up front to appreciate and understand the differences in development methodologies and to apply what makes sense for the application. This same rule applies to any software development task. Organizations can benefit from methods used elsewhere, but those methods must be carefully managed in order to achieve a successful result within schedule constraints. Defining the user and characterizing end-user needs make up the engine that drives a successful user experience analysis.

Review interface designs for value-sensitive design issues: designs that center on human well-being, human dignity, justice, welfare, and human rights. Ensure interface designs meet universal usability: the design of information and communications products and services that are usable by every citizen (Friedman et al., 2013).

Researcher’s Agenda

There is ample opportunity for research and experimentation with different interface design methodologies and how they interface with software development process models. Designers need not start from scratch: there are often examples on the web of similar development challenges that can be extrapolated for a need or application. It would be beneficial to develop a characterization of the nuances of different user-interface development methodologies and how they interact with current software development processes.


WORLD WIDE WEB RESOURCES

www.pearsonglobaleditions.com/shneiderman

Case study examples have a significant presence on the internet and are growing. Check out ACM SIGCHI "CHI" conferences, which hold practitioner's sessions. SIGCHI publishes "CHI Extended Abstracts" with example case studies available via the ACM Digital Library.

• ACM SIGCHI "CHI" conferences: http://www.sigchi.org/conferences

Discussion Questions

1. Consider additional requirements and technology to further complicate your analysis of an Automated Teller Machine (ATM) design:

• Use eye-tracker data to further analyze the product.

• Consider accessibility (universal usability) issues such as lighting, physical placement of the ATM, etc.

• Consider user profile issues, e.g., a user who is using an ATM for the first time.

• Consider a requirement to perform beta and/or market tests.

• Are there other stress factors, such as a looming time deadline or a personal safety issue?

2. Review the iOS Human Interface Guidelines in the iOS Developer Library at https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/. In groups of two, select one guideline that makes perfect sense and seems easy to incorporate into a design, and select another guideline that is much less clear, requiring further explanation or analysis to be incorporated into a design. Share these ideas with the class to see if there is any trend or pattern in the easy-to-do versus the harder-to-do guidelines.

3. At present, the drive to use big data to define or enhance corporate strategies seems to be a global business trend. Give an example of where such data can improve a business, focusing on user-interface aspects.

4. Cite a past experience where user-interface development methods might apply to another system development activity that might not have a strong user-interface component.


References

Apple, From Desktop to iOS (2015a). Available at https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/DesktopToiOS.html#//apple_ref/doc/uid/TP40006556-CH51-SW1.

Apple, iOS Human Interface Guidelines (2015b). Available at https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/.

Birnie, S., The pillar ATM: NCR and community centered innovation, DPPI '11: Proceedings of the 2011 Conference on Designing Pleasurable Products and Interfaces, ACM (2011).

Buxton, W., Sketching User Experiences: Getting the Design Right and the Right Design (Interactive Technologies), New York: Morgan Kaufmann (2007), 77-80, 135-138.

Friedman, B., Kahn Jr., P. H., Borning, A., and Huldtgren, A., Value sensitive design and information systems, in Doorn et al. (Editors), Early Engagement and New Technologies: Opening Up the Laboratory, Springer (2013), 55-95.

Google, Big Data (2015). Available at https://www.google.com/#q=what+is+big+data.

Greenberg, S., Carpendale, S., Marquardt, N., and Buxton, B., Sketching User Experiences: The Workbook, San Francisco: Morgan Kaufmann (2011), 29-66.

Hartson, R., and Pyla, P., The UX Book: Process and Guidelines for Ensuring a Quality User Experience, Morgan Kaufmann (2012).

Karat, C., and Karat, J., Designing and evaluating usable technology in industrial research: Three case studies, Synthesis Lectures on Human-Centered Informatics 3, 1 (2010), 1-118.

Righi, C., and James, J., User-Centered Design Stories: Real-World UCD Case Studies (Interactive Technologies), San Francisco: Morgan Kaufmann (2007).

Snyder, C., Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces, San Francisco: Morgan Kaufmann (2003).

Usability.gov, Wireframing (2015). Available at http://www.usability.gov/how-to-and-tools/methods/wireframing.html.

Warfel, T. Z., Prototyping: A Practitioner's Guide, Rosenfeld Media (2011).

Wozniak, P., Valton, R., and Fjeld, M., Volvo single view of vehicle: Building a big data service from scratch in the automotive industry, CHI EA '15: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, ACM (2015), 671-678.

CHAPTER 2
Universal Usability

"Social scientists have shown that teams and organizations whose members are heterogeneous in meaningful ways, for example, in skill set, education, work experiences, perspectives on a problem, cultural orientation, and so forth, have a higher potential for innovation than teams whose members are homogeneous."

Beryl Nelson
Communications of the ACM, November 2014

"I feel . . . an ardent desire to see knowledge so disseminated through the mass of mankind that it may, at length, reach even the extremes of society: beggars and kings."

Thomas Jefferson
Reply to American Philosophical Society, 1808

CHAPTER OUTLINE
2.1 Introduction
2.2 Variations in Physical Abilities and Physical Workplaces
2.3 Diverse Cognitive and Perceptual Abilities
2.4 Personality Differences
2.5 Cultural and International Diversity
2.6 Users with Disabilities
2.7 Older Adult Users
2.8 Children
2.9 Accommodating Hardware and Software Diversity


2.1 Introduction

The remarkable diversity of human abilities, backgrounds, motivations, personalities, cultures, and work styles challenges interface designers. A young female designer in India with computer training and a desire for rapid interaction using densely packed displays may have a hard time designing a successful interface for older male artists in France with a more leisurely and free-form work style. Understanding the physical, intellectual, and personality differences among users is vital for expanding market share, supporting required government services, and enabling creative participation by the broadest possible set of users. As a profession, we will be remembered for how well we meet our users' needs. That's the ultimate goal: addressing the needs of all users (Fig. 2.1).

FIGURE 2.1
The website of Raising the Floor includes universal accessibility features such as options for emphasizing the links or making buttons larger, offering several font sizes, contrast, text descriptions of photos, translation services, and so on (http://www.raisingthefloor.net).


The huge international consumer market in mobile devices has raised the pressure for designs that are universally usable. While skeptics suggest that accommodating diversity requires dumbing-down or lowest-common-denominator strategies, our experience is that rethinking interface designs for differing situations often results in a better product for all users. Measures to accommodate the special needs of one group, such as curb cuts in sidewalks for wheelchair users, often have payoffs for many groups, such as parents with baby strollers, skateboard riders, travelers with wheeled luggage, and delivery people with handcarts. With this in mind, this chapter introduces the challenges posed by physical, cognitive, perceptual, personality, and cultural differences. It covers considerations for users with disabilities, older adults, and young users, ending with a discussion of hardware and software diversity. The important issues of different usage profiles (novice, intermittent, and expert), wide-ranging task profiles, and multiple interaction styles are covered in Chapter 3.

2.2 Variations in Physical Abilities and Physical
Workplaces

Accommodating diverse human perceptual, cognitive, and motor abilities is a challenge to every designer. Fortunately, ergonomics researchers and practitioners have gained substantial experience from design projects with automobiles, aircraft, cellphones, and so on. This experience can be applied to the design of user interfaces and mobile devices.

Basic data about human dimensions comes from research in anthropometry (Preedy, 2012). Thousands of measures of hundreds of features of people (male and female, young and adult, European and Asian, underweight and overweight, tall and short) provide data to construct 5- to 95-percentile design ranges. Head, mouth, nose, neck, shoulder, chest, arm, hand, finger, leg, and foot sizes have been carefully cataloged for a variety of populations. The great diversity in these static measures reminds us that there can be no image of an "average" user and that compromises must be made or multiple versions of a system must be constructed.
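As a small worked example of turning such measurements into a design range, the Python sketch below computes the 5th and 95th percentiles with the standard library. The hand-width values are fabricated for illustration; real projects would draw on published anthropometry tables such as those surveyed in Preedy (2012).

# Sketch: deriving a 5- to 95-percentile design range from sample
# anthropometric data (values in millimeters, invented for illustration).

import statistics

hand_widths_mm = [72, 75, 78, 80, 81, 83, 84, 85, 86, 88,
                  89, 90, 91, 92, 94, 95, 97, 99, 102, 108]

# quantiles(n=20) returns 19 cut points: index 0 is the 5th percentile
# and index 18 is the 95th percentile.
cuts = statistics.quantiles(hand_widths_mm, n=20)
p5, p95 = cuts[0], cuts[18]

print(f"Design for hand widths from {p5:.1f} mm to {p95:.1f} mm")

A grip or keypad sized for this range serves roughly 90 percent of the sampled population; the remaining users are the reason adjustable or multiple versions are often needed.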

Cellphone keypad design parameters (placement, size, distance between keys, and so forth; see Section 10.2) evolved to accommodate differences in users' physical abilities. People with especially large or small hands may have difficulty using standard cellphones or keyboards, but a substantial fraction of the population is well served by one design. On the other hand, since screen-brightness preferences vary substantially, designers often enable users to control this parameter. Similarly, controls for chair seat and back heights and for display angles allow individual adjustment. When a single design cannot accommodate a large fraction of the population, multiple versions or adjustment controls are helpful.

Physical measures of static human dimensions are not enough. Measures of dynamic actions, such as reach distance while seated, speed of finger presses, or strength of lifting, are also necessary.

Since so much of work is related to perception, designers need to be aware of the ranges of human perceptual abilities, especially with regard to vision (Ware, 2012). For example, researchers consider human response time to varying visual stimuli or time to adapt to low or bright light. They examine human capacity to identify an object in context or to determine the velocity or direction of a moving point. The visual system responds differently to various colors, and some people have color deficiencies, either permanently or temporarily (due to illness or medication). People's spectral range and sensitivity vary, and peripheral vision is quite different from the perception of images in the fovea (the central part of the retina). Designers need to study flicker, contrast, motion sensitivity, and depth perception as well as the impact of glare and visual fatigue. Finally, designers must consider the needs of people who wear corrective lenses, have visual impairments, or are blind.

Other senses are also important: for example, touch for keyboard or touchscreen entry and hearing for audible cues, tones, and speech input or output (Chapter 10). Pain, temperature sensitivity, taste, and smell are rarely used for input or output in interactive systems, but there is room for imaginative applications.

These physical abilities influence elements of the interactive-system design. They also play a prominent role in the design of the workplace or workstation (or playstation). The Human Factors Engineering of Computer Workstations standard (HFES, 2007) lists these concerns:

• Worktable and display-support height

• Clearance under work surface for legs

• Work-surface width and depth

• Adjustability of heights and angles for chairs and work surfaces

• Posture: seating depth and angle, backrest height, and lumbar support

• Availability of armrests, footrests, and palmrests

• Use of chair casters

Workplace design is important in ensuring high job satisfaction, good performance, and low error rates. Incorrect table heights, uncomfortable chairs, or inadequate space to place documents can substantially impede work. The standards document also addresses such issues as illumination levels (200 to 500 lux); glare reduction (antiglare coatings, baffles, mesh, positioning); luminance balance and flicker; equipment reflectivity; acoustic noise and vibration; air temperature, movement, and humidity; and equipment temperature.

The most elegant screen design can be compromised by a noisy environment, poor lighting, or a stuffy room, and that compromise will eventually lower performance, raise error rates, and discourage even motivated users. Thoughtful designs, such as workstations that provide wheelchair access and good lighting, will be even more appreciated by users with disabilities and older adults.

Another physical-environment consideration involves room layout and the sociology of human interaction. With multiple workstations in a classroom or office, different layouts can encourage or limit social interaction, cooperative work, and assistance with problems. Because users can often quickly help one another with minor problems, there may be an advantage to layouts that group several terminals close together or that enable supervisors or teachers to view all screens at once from behind. On the other hand, programmers, reservations clerks, or artists may appreciate the quiet and privacy of their own workspaces.

Mobile devices are increasingly being used while walking or driving and
in public spaces, such as restaurants or trains where lighting, noise,
movement, and vibration are part of the user experience. Designing for these
more fluid environments presents opportunities for design researchers and
entrepreneurs.

2.3 Diverse Cognitive and Perceptual Abilities

A vital foundation for interactive-system designers is an understanding of the
cognitive and perceptual abilities of the users (Radvansky and Ashcraft, 2013).
The journal Ergonomics Abstracts offers this classification of human cognitive
processes:

• Short-term and working memory

• Long-term and semantic memory

• Problem solving and reasoning

• Decision making and risk assessment

• Language communication and comprehension

• Search, imagery, and sensory memory

• Learning, skill development, knowledge acquisition, and concept attainment

It also suggests this set of factors affecting perceptual and motor performance:

• Arousal and vigilance

• Fatigue and sleep deprivation

• Perceptual (mental) load

• Knowledge of results and feedback

• Monotony and boredom

• Sensory deprivation

• Nutrition and diet

• Fear, anxiety, mood, and emotion

• Drugs, smoking, and alcohol

• Physiological rhythms

These vital issues are not discussed in depth in this book, but they have a profound influence on the design of user interfaces. The term intelligence is not included in this list because its nature is controversial and measuring different forms of intelligence is difficult.

In any application, background experience and knowledge in the task and interface domains play key roles in learning and performance. Task- or computer-skill inventories can be helpful in predicting performance.

2.4 Personality Differences

Some people are eager to use computers and mobile devices, while others find them frustrating. Even people who enjoy using these technologies may have very different preferences for interaction styles, pace of interaction, graphics versus tabular presentations, dense versus sparse data presentation, and so on. A clear understanding of personality and cognitive styles can be helpful in designing interfaces for diverse communities of users.

One evident difference is between men and women, but no clear pattern of gender-related preferences in interfaces has been documented. While the majority of video-game players and designers are young males, some games (such as The Sims™, Candy Crush Saga, and FarmVille) draw ample numbers of female players. Designers can get into lively debates about why many women prefer certain games, often speculating that women prefer less violent action and quieter soundtracks. Other conjectures are that women prefer social games, characters with appealing personalities, softer color patterns, and a sense of closure and completeness. Can these informal conjectures be converted to measurable criteria and then validated?


Turning from games to productivity tools, there is also a range of reactions to
violent terms such as KILL a process or ABORT a program. These and other
potentially unfortunate mismatches between the user interface and the users
might be avoided by more thoughtful attention to individual differences among
users.

Unfortunately, there is no simple taxonomy of user personality types. A popular, but controversial, technique is the Big Five Test, based on the OCEAN model (Wiggins, 1996): Openness to Experience/Intellect (closed/open), Conscientiousness (disorganized/organized), Extraversion (introverted/extraverted), Agreeableness (disagreeable/agreeable), and Neuroticism (calm/nervous). There are hundreds of other psychological scales, including risk taking versus risk avoidance; internal versus external locus of control; reflective versus impulsive behavior; convergent versus divergent thinking; high versus low anxiety; tolerance for stress; tolerance for ambiguity, motivation, or compulsiveness; field dependence versus independence; assertive versus passive personality; and left- versus right-brain orientation. As designers explore computer applications for the home, education, art, music, and entertainment, they may benefit from paying greater attention to personality types. Consumer-oriented researchers are especially aware of the personality distinctions across market segments, so as to tune their advertising for niche products designed for tech-savvy youngsters versus family-oriented parents.

Another approach to personality assessment is studying user behavior. For example, some users file thousands of e-mails in a well-organized hierarchy of folders, while others keep them all in the inbox, using search strategies to find what they want later. These distinct approaches may well relate to personality variables, giving designers the clear message that multiple requirements must be satisfied by their designs.

2.5 Cultural and International Diversity

Another perspective on individual differences has to do with cultural, ethnic, racial, or linguistic background (Quesenbery and Szuc, 2011; Marcus and Gould, 2012; Salgado, 2012). Users who were raised learning to read Japanese or Chinese will scan a screen differently from users who were raised learning to read English or French. Users from reflective or traditional cultures may prefer interfaces with stable displays from which they select a single item, while users from action-oriented or novelty-based cultures may prefer animated screens and multiple clicks. Preferred content of webpages also varies; for example, university home pages in some cultures emphasize their impressive buildings and respected professors lecturing to students, while others highlight student team projects and a lively social life. Mobile device preferences also vary across cultures, leading to rapidly changing styles in successful apps, which may include playful designs, music, and game-like features.

More and more is being learned about computer users from different cultures, but user experience designers are still struggling to establish guidelines that are appropriate across multiple languages and cultures (Sun, 2012; Pereira and Baranauskas, 2015). The growth of a worldwide computer and mobile device market means that designers must prepare for internationalization. Software architectures that facilitate customization of local versions of user interfaces offer a competitive advantage (Reinecke and Bernstein, 2013). For example, if all text (instructions, help, error messages, labels, and so on) is stored in files, versions in other languages can be generated with little or no additional programming (a minimal sketch of this approach follows the list below). Hardware issues include character sets, keyboards, and special input devices. User-interface design concerns for internationalization include the following:

• Characters, numerals, special characters, and diacriticals

• Left-to-right versus right-to-left versus vertical input and reading

• Date and time formats

• Numeric and currency formats

• Weights and measures

• Telephone numbers and addresses

• Names and titles (Mr., Ms., Mme., M., Dr.)

• Social Security, national identification, and passport numbers

• Capitalization and punctuation

• Sorting sequences

• Icons, buttons, and colors

• Pluralization, grammar, and spelling

• Etiquette, policies, tone, formality, and metaphors
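To make the earlier point about storing text in files concrete, here is a minimal sketch of run-time string lookup with per-language files, so that adding a language means adding data, not code. The locales/ directory layout, file names, and message key are assumptions for the example.

# Minimal sketch of externalized UI strings: all user-visible text lives
# in per-locale JSON files (e.g., locales/en.json, locales/fr.json).

import json
from pathlib import Path

FALLBACK_LOCALE = "en"

def load_strings(locale):
    path = Path("locales") / f"{locale}.json"
    if not path.exists():
        path = Path("locales") / f"{FALLBACK_LOCALE}.json"
    return json.loads(path.read_text(encoding="utf-8"))

def t(strings, key):
    # Fall back to the key itself so missing entries are visible in testing.
    return strings.get(key, key)

# Usage (assuming locales/fr.json exists):
#   strings = load_strings("fr")
#   print(t(strings, "error.card_declined"))

Date, number, and currency formats from the list above need locale-aware formatting routines as well; string files alone cover only the wording.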

The list is long and yet incomplete. Recent studies of consumer use show performance and preference differences for information density, animation, cute characters, eagerness for timely updates, incentives for social participation, and game-like features. Whereas early designers were often excused for cultural and linguistic slips, the current highly competitive atmosphere means that more effective localization may produce a strong advantage. To develop effective designs, companies run usability studies with users from different countries, cultures, and language communities.

The role of information technology in international development is steadily growing, but much needs to be done to accommodate the diverse needs of users with vastly different language skills and technology access. To promote international efforts to foster successful implementation of information technologies, representatives from around the world meet regularly for the United Nations World Summit on the Information Society. They declared their

desire and commitment to build a people-centered, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.

The plan calls for applications to be "accessible to all, affordable, adapted to local needs in languages and culture, and [to] support sustainable development." The UN Sustainable Development Goals include eradicating extreme poverty and hunger; reducing child mortality; combating HIV/AIDS, malaria, and other diseases; and ensuring environmental sustainability. Information and communications technologies can play important roles in developing the infrastructure that is needed to achieve these goals (Fig. 2.2).

FIGURE 2.2
Designing for cellphones can open the door to a wider audience (Medhi et al., 2011), for example, in developing countries where feature phones often are the only way to access the internet, literacy may be an issue, and users have very low monthly limits on the data volume they can use.


2.6 Users with Disabilities

When digital content and services can be flexibly presented in different formats, all users benefit (Horton and Quesenbery, 2014). However, flexibility is most appreciated by users with disabilities, who now can access content and services using diverse input and output devices. Blind users may utilize screen readers (speech output such as JAWS or Apple's VoiceOver) or refreshable braille displays, while low-vision users may use magnification. Users with hearing impairments may need captioning on videos and transcripts of audio, and people with limited dexterity or other motor impairments may utilize speech recognition, eye-tracking, or alternative keyboards or pointing devices (Fig. 2.3). Increasingly, especially on Apple products, these alternate forms of input or output are integrated into technology out of the box (other laptops, tablets, and smartphones have add-on screen reader and magnification capability, and a small number of laptops have built-in eye tracking).

There is a long history of research on how users with perceptual or motor impairments (such as those described above) interact with technology, and research on intellectual or cognitive impairments is now also increasing (Blanck, 2014; Chourasia et al., 2014). In some cases, people with intellectual impairments need transformation of content, but in other cases, no modifications or assistive technologies are needed. Designing for accessibility helps everyone. The same captioning on video that is utilized by users with hearing impairments is also used by users watching video in noisy locations, such as gyms, bars, and airports. Many accessibility features help with graceful presentation of content in multiple formats, allowing for flexibility in presentation on small screens of mobile devices or with audio output instead of visual output. As users are increasingly on the go and experience "situational impairments," these accessibility features help all users, who may be in situations where they cannot see their screen (e.g., they are driving a car) or cannot play audio out loud (e.g., on a plane).

FIGURE 2.3
A young man uses a wheelchair-mounted augmentative communication and control device to control a standard television. New universal remote console standards can allow people to use communication aids and other personal electronics as alternate interfaces for digital electronics in their environments (http://trace.wisc.edu).

For interfaces to be accessible to people with disabilities, they generally need to follow a set of design guidelines for accessibility. The international standards for accessibility come from the Web Accessibility Initiative, a project of the World Wide Web Consortium. The best-known standards are the Web Content Accessibility Guidelines (WCAG); the current version is WCAG 2.0 (since 2008, http://www.w3.org/TR/WCAG20/). There are also other guidelines, such as the Authoring Tool Accessibility Guidelines (ATAG) for developer tools and the User Agent Accessibility Guidelines (UAAG) for browsers. Other guidelines, such as EPUB3, exist for ebooks. Because WCAG 2.0 is the best-known, best-understood, and most-documented set of accessibility guidelines in the world, there is a companion guide, known as Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT), for utilizing WCAG concepts in non-web technologies (Cunningham, 2012).

These concepts of digital accessibility are not new. The first version of WCAG came out in 1999, and captioning of video has existed for more than 30 years. The accessibility features are not technically hard to accomplish. WCAG requires, for instance, that all graphics have ALT text describing the image, that a webpage not have flashing that could trigger seizures, and that tables and forms be marked up with appropriate labels (such as first name, last name, and street address instead of FIELD1, FIELD2, and FIELD3) to allow for identification. Another WCAG requirement is that all content on a page be reachable through keyboard access, even if the user cannot use a pointing device. Creating accessible digital content is simply good coding, and it doesn't change, in any way, how information is visually presented.

Similar concepts apply for creating accessible word-processing documents, presentations, and PDF files: appropriate labeling and descriptions ensure that a document or presentation will be accessible. Multiple approaches for accomplishing a task allow for successful task completion by a diverse population of users. Even when properly utilizing guidelines such as WCAG 2.0, it is a good idea to evaluate for success with usability testing with people with disabilities, expert reviews, and automated accessibility testing.
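As one concrete illustration of automated accessibility testing, the sketch below uses Python's built-in html.parser to flag img elements that lack an alt attribute, one of the WCAG 2.0 requirements mentioned above. It is a toy check run on invented sample HTML, not a substitute for full WCAG audit tooling.

# Toy automated accessibility check: report <img> tags with no alt
# attribute. (WCAG 2.0 requires text alternatives for images; purely
# decorative images should still carry an explicit empty alt="".)

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing += 1
            print(f"Missing alt attribute: <img {dict(attrs)}>")

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Bank logo"><img src="ad.gif"></p>')
print(f"{checker.missing} image(s) without alt text")

Expert review and usability testing with people with disabilities remain necessary because many WCAG success criteria, such as whether ALT text is meaningful, cannot be checked mechanically.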

The Web Content Accessibility Guidelines form the basis for many of the laws and regulations around the world. Section 508 of the Rehabilitation Act in the United States requires that when the federal government develops, procures, maintains, or uses electronic and information technology, that technology must be accessible to employees and members of the general public who have disabilities. This applies to procurement of both hardware and software technology as well as ensuring that websites are accessible (Lazar and Hochheiser, 2013; Lazar et al., 2015).

The Americans with Disabilities Act, as interpreted by federal courts and the U.S. Department of Justice, also requires accessibility of state and local government websites as well as those of private companies and organizations that are considered "public accommodations" (stores, museums, hotels, video rental, etc.). The U.S. Department of Justice is also enforcing accessibility of websites and instructional materials at universities. Lawsuits such as those against Target, Netflix, Harvard University, and MIT highlight the increasing importance and expectations of digital accessibility.

The European Union Mandate 376 (http://www.mandate376.eu/) will
require procurement and development of accessible technologies by EU governments
and will coordinate with U.S. Section 508, utilizing WCAG 2.0 and
enabling developers to easily satisfy both U.S. and EU legal requirements.
Prior to EU Mandate 376, many European countries, such as the UK, Italy, and
Germany, and other countries around the world, including Australia and
Canada, also had information technology accessibility requirements. The coverage
(only government technology or also public accommodations), the reporting
requirements, and the penalties for noncompliance differ from country
to country.

The United Nations Convention on the Rights of Persons with Disabilities
(CRPD, http://www.un.org/disabilities/convention/conventionfull.shtml), an
international human rights agreement, also addresses accessible technology.
Article 9 of the CRPD calls upon countries to "Promote access for persons with
disabilities to new information and communications technologies and systems,
including the Internet," and Article 21 encourages countries to "[provide] information
intended for the general public to persons with disabilities in accessible
formats and technologies appropriate to different kinds of disabilities."

Accessibility is a core feature of contemporary information systems, baked
into development from the start. Programmers who follow coding standards
and guidance from WCAG 2.0 add minimal cost in development yet provide
valuable services to all users. By contrast, implementers who seek to retrofit for
accessibility find that their effort is much greater (Wentz et al., 2011).

Increasingly, a person's economic success depends on equal access to digital
content and services. University classes take place online, job postings are
made online, and job applications must be submitted online. Prices are often
lower when using a company website instead of calling the company on the
phone. When people with disabilities have equal access to digital content and
services, they have access to the full range of economic opportunities. The
good news is that computer scientists, software engineers, developers, designers,
and user experience professionals have the opportunity, through good
design, appropriate coding standards, and proper testing and evaluation, to
ensure equal access.

2.7 Older Adult Users

Seniority offers many pleasures and all the benefits of experience, but aging can
also have negative physical, cognitive, and social consequences. Understanding
the human factors of aging can help designers to create user interfaces that facilitate
access by older adult users (Fig. 2.4). The benefits include improved chances
for productive employment and opportunities to use writing, e-mail, and other
computer tools, plus the satisfactions of education, entertainment, social interaction,
and challenge (Newell, 2011; Czaja and Lee, 2012). Older adults are particularly
active participants in health support groups. The benefits to society
include increased access to older adults, whose experience and emotional
support are valuable to others.

FIGURE 2.4
HomeAssist is an assisted living platform for older adults installed in homes in
Bordeaux, France. The tablet is used to show alerts (e.g., when the front door was
left open) and reminders but also to run a slide show of photographs when not in
use (http://phoenix.inria.fr/research-projects/homeassist).

The National Research Council's report Human Factors Research Needs for an
Aging Population describes aging as

a nonuniform set of progressive changes in physiological and psychological
functioning. . . . Average visual and auditory acuity decline considerably with
age, as do average strength and speed of response. . . . [People experience]
loss of at least some kinds of memory function, declines in perceptual flexibility,
slowing of "stimulus encoding," and increased difficulty in the
acquisition of complex mental skills, . . . visual functions such as static visual
acuity, dark adaptation, accommodation, contrast sensitivity, and peripheral
vision decline, on average, with age. (Czaja, 1990)

This list has its discouraging side, especially since older adults may have
multiple impairments, but many older adults increasingly experience only
moderate effects, allowing them to be active participants, even throughout their
nineties.

The further good news is that interface designers can do much to accommodate
older adult users (Chisnell et al., 2006). Improved user experiences give older
adults access to the beneficial aspects of computing and network communication,
thus bringing many societal advantages. How many young people's lives might
be enriched by e-mail access to grandparents or great-grandparents? How many
businesses might benefit from electronic consultations with experienced older
adults? How many government agencies, universities, medical centers, or law
firms could advance their goals through easily available contact with knowledgeable
older adult citizens? As a society, how might we all benefit from the continued
creative work of older adults in literature, art, music, science, or philosophy?

As the world's population ages, designers in many fields are adapting their work
to serve older adults, which can benefit all users. Baby boomers have already begun
to push for larger street signs, brighter traffic lights, and better nighttime lighting to
make driving safer for drivers and pedestrians. Similarly, desktop, web, and mobile
devices can be improved for all users by providing users with control over font
sizes, display contrast, and audio levels. Interfaces can also be designed with easier-to-use
pointing devices, clearer navigation paths, and consistent layouts to improve
access for older adults and every user (Hart et al., 2008; Czaja and Lee, 2012).
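A minimal sketch of such user-controlled presentation settings follows; the setting names and CSS custom properties are assumptions for illustration, not a prescribed implementation.

```typescript
// Sketch of user control over font size and contrast, as described above.
// Property and setting names are illustrative.

interface DisplayPreferences {
  fontSizePx: number;    // e.g., 16 for default, 22+ for large print
  highContrast: boolean; // stronger foreground/background separation
}

function applyPreferences(prefs: DisplayPreferences): void {
  const root = document.documentElement;
  // Page styles derive their sizes and colors from these custom
  // properties, so one change rescales the whole interface.
  root.style.setProperty("--base-font-size", `${prefs.fontSizePx}px`);
  root.style.setProperty("--fg-color", prefs.highContrast ? "#000000" : "#333333");
  root.style.setProperty("--bg-color", prefs.highContrast ? "#ffffff" : "#fafafa");
}

// An older adult user might choose larger text and higher contrast:
applyPreferences({ fontSizePx: 22, highContrast: true });
```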

Considering older and disabled users during the design process often produces
novel designs (Newell, 2011), such as ballpoint pens (for people with
impaired dexterity), cassette tape recorders (for blind users to listen to audiobooks),
and auto-completion software (to reduce keystrokes). Texting interfaces
that suggest words or web-address completion were originally designed to ease
data input for older and disabled users but have become expected conveniences
for all users of mobile devices and web browsers. These conveniences, which
reduce cognitive load, perceptual difficulty, and motor control demands,
become vital in difficult environments, such as while traveling, injured, stressed,
or under pressure for rapid correct completion. Similarly, subtitles (closed
captioning) and user-controlled font sizes were designed for users with hearing
and visual difficulties, but they benefit many users.

Researchers and designers are actively working on improving interfaces for
older adults (Czaja and Lee, 2012). In the United States, the AARP’s Older Wiser
Wired initiatives provide education for older adults and guidance for designers.
The European Union also has multiple initiatives and research support for com­
puting for older adults.

Networking projects, such as the San Francisco-based SeniorNet, are providing
adults over the age of 50 with access to and education about computing and
the Internet "to enhance their lives and enable them to share their knowledge
and wisdom" (http://www.seniornet.org/). Computer games are attractive for
older adults, as shown by the surprising success of Nintendo's Wii, because they
stimulate social interaction, provide practice in sensorimotor skills such as eye-to-hand
coordination, enhance dexterity, and improve reaction time. In addition,
meeting a challenge and gaining a sense of accomplishment and mastery
are helpful in improving self-image for anyone.

In our experiences in bringing computing to two residences for older adults, we
also encountered residents' fear of computers and their belief that they were incapable
of using computers. These fears gave way quickly after a few positive experiences.
The older adults, who explored e-mail, photo sharing, and educational games, felt
quite satisfied with themselves and were eager to learn more. Their newfound
enthusiasm encouraged them to try automated bank machines and supermarket
touchscreen kiosks. Suggestions for redesigns to meet the needs of older adults
(and possibly other users) also emerged; for example, the appeal of high-precision
touchscreens compared with the mouse was highlighted (Chapter 10).

In summary, making computing more attractive and accessible to older
adults enables them to take advantage of technology, enables others to benefit
from their participation, and can make technology easier for everyone. For more
information on this topic, check out the Human Factors & Ergonomics Society
(http://www.hfes.org), which has an Aging Technical Group that publishes a
newsletter and organizes sessions at conferences.

2.8 Children

Another lively community of users is children, whose uses emphasize entertain­
ment and education (Hourcade, 2015). Even pre-readers can use computer­
controlled toys, music generators, and art tools. As they mature, begin reading,
and gain limited keyboard skills, they can use a wider array of desktop
applications, web services, and mobile devices (Foss and Druin, 2014). When
they become teenagers, they may become highly proficient users who often help
their parents or other adults. This idealized growth path is followed by many
children who have easy access to technology and supportive parents and peers.
However, many children without financial resources or supportive learning
environments struggle to gain access to technology. They are often frustrated
with its use and are endangered by threats surrounding privacy, alienation,
pornography, unhelpful peers, and malevolent strangers.

The noble aspirations of designers of children’s software include educational
acceleration, facilitating socialization with peers, and fostering the self-confidence
that comes from skill mastery (Fig. 2.5). Advocates of educational games promote
intrinsic motivation and constructive activities as goals, but opponents often
complain about the harmful effects of antisocial and violent games.

For teenagers, the opportunities for empowerment are substantial. They often
take the lead in employing new modes of communication, such as text messaging
on cellphones, and in creating cultural or fashion trends that surprise even
the designers (for example, playing with simulations and fantasy games and
participating in web-based virtual worlds).

Appropriate design principles for children's software recognize young people's
intense desire for the kind of interactive engagement that gives them control
with appropriate feedback and supports their social engagement with peers
(Bruckman et al., 2012; Fails et al., 2014). Designers also have to find the balance
between children's desire for challenge and parents' requirements for safety.

Children can deal with some frustrations and with threatening stories, but
they also want to know that they can clear the screen, start over, and try again
without severe penalties. They don't easily tolerate patronizing comments or

FIGURE 2.5
Using Digital Mysteries on a tablet, two elementary school children work together
to read information slips, group them, and create a sequence to answer the
question "Who killed King Ted?" The blue pop-up pie menu allows the selection
of tools. A larger tabletop version allows larger groups to collaborate
(http://www.reflectivethinking.com).

inappropriate humor, but they like familiar characters, exploratory environments,
and the capacity for repetition. Younger children will sometimes replay a
game, reread a story, or replay a music sequence dozens of times, even after
adults have tired of it. While too much "screen time" can interfere with childhood
development, well-designed applications can help children with physical,
relationship, and emotional problems (Borjesson et al., 2015).

Some designers work by observing children and testing software with children,
while the innovative approach of "children as our technology-design partners"
engages them in a long-term process of cooperative inquiry during which
children and adults jointly design novel products and services. A notable successful
product of working with children as design partners is the International
Children's Digital Library, which offers 4500-plus of the world's best children's
books in 50-plus languages using an interface in 19 languages while supporting
low- and high-speed networks.

Designing for younger children requires attention to their limitations. Their
evolving dexterity means that mouse dragging, double-clicking, and small targets
cannot always be used; their emerging literacy means that written instructions
and error messages are not effective; and their low capacity for abstraction
means that complex sequences must be avoided unless an adult is involved.
Other concerns are short attention spans and limited capacity to work with multiple
concepts simultaneously. Designers of children's software also have a
responsibility to attend to dangers, especially in web-based environments,
where parental control over access to violent, racist, or pornographic materials
is unfortunately necessary. Appropriate information for the education of children
about privacy issues and threats from strangers is also a requirement.

The capacity for playful creativity in art, music, and writing and the value of
educational activities in science and math remain potent reasons to pursue children's
software. Enabling them to make high-quality images, photos, songs, or
poems and then share them with friends and family can accelerate children's
personal and social development. Offering access to educational materials from
libraries, museums, government agencies, schools, and commercial sources
enriches their learning experiences and serves as a basis for children to construct
their own web resources, participate in collaborative efforts, and contribute to
community-service projects.

Providing programming tools, such as the Scratch project (https://scratch.mit.edu/),
and simulation-building tools enables older children to take on complex
cognitive challenges and construct ambitious artifacts for others to use.
These and other opportunities have motivated efforts (such as One Laptop Per
Child, http://one.laptop.org/) to bring low-cost computers to children around
the world. Advocates point to enthusiastic adoption and tell stories of individual
enablement. However, critics encourage a shift from the technology-centered
goals to greater attention to rich content, social engagement, parental guidance
materials, and effective teacher training.

2.9 Accommodating Hardware and Software Diversity

In addition to accommodating different classes of users and skill levels, designers
need to support a wide range of hardware and software platforms. The rapid
progress of technology means that newer systems may have a hundred or a
thousand times greater storage capacity, faster processors, and higher-bandwidth
networks. However, designers need to accommodate older devices
and deal with newer mobile devices that may have low-bandwidth connections
and small screens (Fig. 2.2).

The challenge of accommodating diverse hardware is coupled with the need
to ensure access through many generations of software. New operating systems,
web browsers, e-mail clients, and application programs should provide backward
compatibility in terms of their user-interface design and file structures.
Skeptics will say that this requirement can slow innovation, but designers who
plan ahead carefully to support flexible interfaces and self-defining files will be
rewarded with larger market shares.

For at least the next decade, three of the main technical challenges will be:

• Producing satisfying and effective Internet interaction on high-speed (broadband)
and slower (dial-up and some wireless) connections. Some technological
breakthroughs have already been made in compression algorithms to
reduce file sizes for images, music, animations, and even video, but more
are needed. New technologies are needed to enable pre-fetching or scheduled
downloads. User control of the amount of material downloaded for
each request could also prove beneficial (for example, allowing users to
specify that a large image should be reduced to a smaller size, sent with
fewer colors, converted to a simplified line drawing, replaced with just a
text description, or downloaded at night when Internet charges are perhaps
lower).

• Responsive design enabling access to web services from large displays (3200 x 2400
pixels or larger) and smaller mobile devices (1024 x 768 pixels and smaller). Rewriting
each webpage for different display sizes may produce the best quality,
but this approach is probably too costly and time-consuming for most web
providers. Software tools such as Cascading Style Sheets (CSS) allow designers
to specify their content in a way that enables automatic conversions for an
increasing range of display sizes.

• Supporting easy maintenance of or automatic conversion to multiple languages.
Commercial operators recognize that they can expand their markets if
they can provide access in multiple languages and across multiple countries.
This means isolating text to allow easy substitution, choosing appropriate
metaphors and colors, and addressing the needs of diverse cultures
(Section 2.5); a sketch of such text isolation follows this list.
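The sketch below illustrates the "isolating text" idea: user-visible strings live in per-language catalogs rather than in the interface code, and locale-aware formatting handles number and currency conventions. The catalog contents and key names are invented for illustration.

```typescript
// Text isolation for easy language substitution. Catalogs are illustrative.

type MessageKey = "greeting" | "checkout" | "priceLabel";

const catalogs: Record<string, Record<MessageKey, string>> = {
  en: { greeting: "Welcome", checkout: "Check out", priceLabel: "Price" },
  fr: { greeting: "Bienvenue", checkout: "Payer", priceLabel: "Prix" },
};

function t(locale: string, key: MessageKey): string {
  // Fall back to English when a translation is missing.
  return catalogs[locale]?.[key] ?? catalogs["en"][key];
}

// Locale-aware formatting handles currency and decimal conventions:
const price = new Intl.NumberFormat("fr-FR", {
  style: "currency",
  currency: "EUR",
}).format(29.9);

console.log(`${t("fr", "priceLabel")}: ${price}`); // "Prix: 29,90 €"
```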

Practitioner’s Summary

The good news is that when designers think carefully about the needs of diverse
users, they are likely to come up with desktop, laptop, web, and mobile device
designs that are better for all users. A frequent path to success is through participatory
methods that bring designers in close and continuing contact with
their intended users. In some cases, improved tools and designs mean that one
design can be made so flexible that it can be presented automatically in text
(with a wide range of font sizes, colors, and contrast ratios), in speech (with
male or female styles and at varying volumes and speeds), and in a wide range
of display sizes. Adjustments for different cultures, personalities, disabilities,
ages, input devices, and preferences may take more design effort, but the payoffs
are in larger markets and more satisfied users. As for costs, with appropriate
software tools, e-commerce providers are finding that a small additional
effort can expand markets by 20% or more. Although it can require additional
effort, designing for diverse users is cost effective and sometimes leads to major
breakthroughs.

Researcher’s Agenda

While market forces provide incentives for changes, additional legal and policy
interventions could speed progress in ensuring that desktop, laptop, web, and
mobile device user interfaces continue to be accessible to all. The expanding
worldwide research community, especially the ACM Special Interest Group on
Accessible Computing (SIGACCESS), hosts international conferences, publishes
journals, and encourages further research.

Research on diversity often brings innovations for all users; for example,
input devices for users with poor motor control can often help all passengers in
rough-riding cars, buses, trains, or planes. Improved automated assistance for
conversions to diverse languages and cultures would improve designer productivity
and facilitate changes to prices, dimensions, colors, and so on. Research on
cultural diversity is still needed regarding the acceptability, by differing user
groups, of novel features like emoticons, animation, personalization, gamification,
and musical accompaniments.

WORLD WIDE WEB RESOURCES

www.pearsonglobaleditions.com/shneiderman

Major suppliers offer diverse accessibility tools:

• Apple: https://www.apple.com/accessibility/
• Microsoft: http://www.microsoft.com/enable/
• Google: https://www.google.com/accessibility/

And many consumer-oriented and government groups provide assistance,
such as:

• AARP: http://www.aarp.org/home-family/personal-technology/
• Older Adults Technology Services: http://oats.org/
• U.S. Section 508: http://www.section508.gov/
• Resource list from Trace Center: http://trace.wisc.edu/resources/

Discussion Questions

1. Describe three populations of users with special needs. For each of these pop ­
ulations, suggest three ways current interfaces could be improved to better
serve them.

2. Suppose you need to design a system for users in two countries that are very
different from each other culturally. What are some of the design concerns
that you should be aware of to create a successful design?

3. In certain interfaces, it is necessary to inform users of an abnormal condition
or time-dependent information. It is important that the display of this infor­
mation catches the user’s attention. Suggest five ways a designer can success­
fully attract attention.

4. Name a piece of software you often use where it is easy to produce an error.
Explain ways you could improve the interface to better prevent errors.

5. What factors should designers consider to address the needs of individuals
with different physical abilities?

References

Blanck, P., eQuality: The Struggle for Web Accessibility by Persons with Cognitive Disabilities,
Cambridge University Press (2014).

Borjesson, P., Barendregt, W., Eriksson, E., and Torgersson, O., Designing technology
for and with developmentally diverse children: A systematic literature review,
Proceedings of ACM SIGCHI Interaction Design and Children Conference, ACM Press,
New York (2015), 79-88.

Bruckman, Amy, Bandlow, Alisa, Dimond, Jill, and Forte, Andrea, Human-computer
interaction for kids, in Jacko, Julie (Editor), The Human-Computer Interaction Handbook,
3rd Edition, CRC Press (2012), 841-862.

Center for Information Technology Accommodation, Section 508: The road to
accessibility, General Services Administration, Washington, DC (2015). Available
at http://www.section508.gov/.

Chisnell, Dana E., Redish, Janice C., and Lee, Amy, New heuristics for understanding
older adults as web users, Technical Communication 53, 1 (February 2006), 39-59.

Chourasia, A., Nordstrom, D., and Vanderheiden, G., State of the science on the cloud,
accessibility, and the future, Universal Access in the Information Society 13, 4 (2014), 483-495.

Cunningham, Katie, Accessibility Handbook, O'Reilly Publishing (2012).

Czaja, S. J. (Editor), Human Factors Research Needs for an Aging Population, National
Academy Press, Washington, DC (1990).

Czaja, S. J., and Lee, C. C., Older adults and information technology: Opportunities and
challenges, in Jacko, Julie (Editor), The Human-Computer Interaction Handbook, 3rd
Edition, CRC Press (2012), 825-840.

Fails, J. A., Guha, M. L., and Druin, A., Methods and techniques for involving children
in the design of new technology, Foundations and Trends in Human-Computer
Interaction 6, 2, Now Publishers Inc., Hanover (2014), 85-166.

Foss, E., and Druin, A., Children's Internet Search: Using Roles to Understand Youth Search
Behavior, Morgan & Claypool Publishers (2014).

Hart, T. A., Chaparro, B. S., and Halcomb, C. G., Evaluating websites for older adults:
adherence to "senior-friendly" guidelines and end-user performance, Behavior &
Information Technology 27, 3 (May 2008), 191-199.

Horton, Sarah, and Quesenbery, Whitney, A Web for Everyone: Designing Accessible User
Experiences, Rosenfeld Media (2014).

Hourcade, J. P., Child-Computer Interaction, CreateSpace Independent Publishing (2015).
Available at http://homepage.divms.uiowa.edu/~hourcade/book/index.php.

Human Factors & Ergonomics Society, ANSI/HFES 100-2007 Human Factors Engineering
of Computer Workstations, Santa Monica, CA (2007).

Lazar, Jonathan, Goldstein, Daniel F., and Taylor, Anne, Ensuring Digital Accessibility
through Process and Policy, Morgan Kaufmann (2015).

Lazar, J., and Hochheiser, H., Legal aspects of interface accessibility in the U.S.,
Communications of the ACM 56, 12 (2013), 74-80.

Marcus, Aaron, and Gould, Emile W., Globalization, localization and cross-cultural
user-interface design, in Jacko, Julie (Editor), The Human-Computer Interaction Handbook,
3rd Edition, CRC Press (2012), 341-366.

Medhi, I., Patnaik, S., Brunskill, E., Gautama, N., Thies, W., and Toyama, K., Designing
mobile interfaces for novice and low-literacy users, ACM Transactions on Computer-Human
Interaction 18, 1 (2011), Article 2, 28 pages.

Newell, Alan, Design and the Digital Divide: Insights from 40 Years in Computer Support for
Older and Disabled People, Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving
Technologies (Ron Baecker, Editor), Morgan & Claypool Publishers (2011).

Pereira, Roberto, and Baranauskas, Maria C. C., A value-oriented and culturally informed
approach to the design of interactive systems, International Journal of Human-Computer
Studies 80 (2015), 66-82.

Preedy, V. R. (Editor), Handbook of Anthropometry: Handbook of Human Physical Form in
Health and Disease, Springer Publishers (2012).

Quesenbery, Whitney, and Szuc, Daniel, Global UX: Design and Research in a Connected
World, Morgan Kaufmann (2011).

Radvansky, Gabriel A., and Ashcraft, Mark H., Cognition, 6th Edition, Pearson (2013).

Reinecke, Katharina, and Bernstein, Abraham, Knowing what a user likes: A design
science approach to interfaces that automatically adapt to culture, MIS Quarterly 37,
2 (2013), 427-453.

Salgado, L. C. C., Leitao, C. F., and de Souza, C. S., A Journey through Cultures: Metaphors
for Guiding the Design of Cross-Cultural Interactive Systems, Springer (2012).

Sun, Huatong, Cross-Cultural Technology Design, Oxford University Press (2012).

Ware, Colin, Information Visualization: Perception for Design, 3rd Edition, Morgan
Kaufmann Publ., San Francisco, CA (2012).

Wentz, B., Jaeger, P., and Lazar, J., Retrofitting accessibility: The inequality of after-the-fact
access for persons with disabilities in the United States, First Monday 16, 11 (2011).

Wiggins, J. S., The Five-Factor Model of Personality: Theoretical Perspectives, Guilford Press
(1996).


CHAPTER

Guidelines, Principles, and Theories

"We want principles, not only developed (the work of the closet) but applied, which is the work of life."

Horace Mann
Thoughts, 1867

"There never comes a point where a theory can be said to be true. The most that anyone can claim for any theory is that it has shared the successes of all its rivals and that it has passed at least one test which they have failed."

A. J. Ayer
Philosophy in the Twentieth Century, 1982

CHAPTER OUTLINE
3.1 Introduction
3.2 Guidelines
3.3 Principles
3.4 Theories

3.1 Introduction

User-interface designers have accumulated a wealth of experience, and researchers
have produced a growing body of empirical evidence and theories, all of
which can be organized into:

1. Guidelines. Low-level focused advice about good practices and cautions
against dangers.

2. Principles. Middle-level strategies or rules to analyze and compare design
alternatives.

3. Theories. High-level widely applicable frameworks to draw on during
design and evaluation as well as to support communication and teaching.
Theories can also be predictive, such as those for pointing times by individuals
or posting rates for community discussions.

In many contemporary systems, designers have a grand opportunity to
improve the user interface by applying established guidelines to clean up cluttered
displays, inconsistent layouts, and unnecessary text. These sources of debilitating
stress and frustration can lead to poorer performance, minor slips, and
serious errors, all contributing to job dissatisfaction and consumer resistance.

Guidelines, principles, and theories, which offer preventive medicine and
remedies for these problems, have matured in recent years (Grudin, 2012). Reliable
methods for predicting pointing and input times (Chapter 10), better social
persuasion principles (Chapter 11), and helpful cognitive or perceptual theories
(Chapter 13) now shape research and guide design. International or national
standards, which could be described as commonly accepted and precisely
defined so as to be enforceable, are increasingly influential (Carroll, 2014).

This chapter begins with a sampling of guidelines for navigating, organizing
displays, getting user attention, and facilitating data entry (Section 3.2). Then
Section 3.3 covers some fundamental principles of interface design, such as coping
with user skill levels, task profiles, and interaction styles. It presents the
Eight Golden Rules of Interface Design, explores ways of preventing user errors,
and closes with a section on ensuring human control while increasing automation.
Section 3.4 reviews micro-HCI and macro-HCI theories of interface design.

3.2 Guidelines

From the earliest days of computing, interface designers have written down
guidelines to record their insights and to try to guide the efforts of future designers.
The early Apple and Microsoft guidelines, which were influential for
desktop-interface designers, have been followed by dozens of guidelines documents
for the web and mobile devices (Fig. 3.1) (see the list at the end of Chapter 1). A
guidelines document helps by developing a shared language and then promoting
consistency among multiple designers in terminology usage, appearance, and
action sequences. It records best practices derived from practical experience or
empirical studies, with appropriate examples and counterexamples. The creation
of a guidelines document engages the design community in lively discussions
about input and output formats, action sequences, terminology, and hardware
devices (Lynch and Horton, 2008; Hartson and Pyla, 2012; Johnson, 2014).

FIGURE 3.1
Example of Apple guidelines for designing menus for the Apple Watch: pickers
display lists of items that are navigable using the Digital Crown, presented in
list, stack, or sequence style.

Critics complain that guidelines can be too specific, incomplete, hard to
apply, and sometimes wrong. Proponents argue that building on experience
from design leaders contributes to steady improvements. Both groups recognize
the value of lively discussions in promoting awareness.

The following four sections provide examples of guidelines, and Section 4.3
discusses how they can be integrated into the design process. The examples
address some key topics, but they merely sample the thousands of guidelines
that have been written.

3.2.1 Navigating the interface

Since navigation can be difficult for many users, providing clear rules is helpful.
The sample guidelines presented here come from the U.S. government's efforts to
promote the design of informative webpages (National Cancer Institute, 2006), but
these guidelines have widespread application. Most are stated positively ("reduce
the user's workload"), but some are negative ("do not display unsolicited windows
or graphics"). The 388 guidelines, which offer cogent examples and impressive
research support, cover the design process, general principles, and specific
rules. This sample of the guidelines gives useful advice and a taste of their style:

Standardize task sequences. Allow users to perform tasks in the same sequence
and manner across similar conditions.

Ensure that links are descriptive. When using links, the link text should accurately
describe the link's destination.

Use unique and descriptive headings. Use headings that are distinct from one
another and conceptually related to the content they describe.

Use radio buttons for mutually exclusive choices. Provide a radio button control
when users need to choose one response from a list of mutually exclusive
options (see the sketch below).

Develop pages that will print properly. If users are likely to print one or more
pages, develop pages with widths that print properly.

Use thumbnail images to preview larger images. When viewing full-size images
is not critical, first provide a thumbnail of the image.
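The radio-button guideline above depends on one mechanical detail: all buttons in the group must share the same name, which is what makes the browser enforce single selection. The following sketch shows this with browser DOM APIs; the group name, option values, and legend text are illustrative.

```typescript
// Radio group for a mutually exclusive choice. Names are illustrative.

function makeRadioGroup(name: string, legendText: string, options: string[]): HTMLFieldSetElement {
  const fieldset = document.createElement("fieldset");
  const legend = document.createElement("legend");
  legend.textContent = legendText; // describes the choice being made
  fieldset.append(legend);

  for (const option of options) {
    const input = document.createElement("input");
    input.type = "radio";
    input.name = name; // shared name => the browser enforces one selection
    input.value = option;
    input.id = `${name}-${option}`;
    const label = document.createElement("label");
    label.htmlFor = input.id;
    label.textContent = option;
    fieldset.append(input, label);
  }
  return fieldset;
}

document.body.append(
  makeRadioGroup("shipping", "Shipping speed", ["Standard", "Express", "Overnight"])
);
```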

Guidelines to promote accessibility for users with disabilities were included in
the U.S. Rehabilitation Act. Its Section 508, with guidelines for web design, is published
by the Access Board (http://www.access-board.gov/508.htm), an independent
U.S. government agency devoted to accessibility for people with disabilities.
The World Wide Web Consortium (W3C) adapted these guidelines
(http://www.w3.org/TR/WCAG20/) and organized them into three priority levels,
for which it has provided automated checking tools. A few of the accessibility guidelines are:

Text alternatives. Provide text alternatives for any non-text content so that
it can be changed into other forms people need, such as large print, braille,
speech, symbols, or simpler language.

Time-based media. Provide alternatives for time-based media (e.g., movies or
animations). Synchronize equivalent alternatives (such as captions or auditory
descriptions of the visual track) with the presentation.

Distinguishable. Make it easier for users to see and hear content, including
separating foreground from background. Color is not used as the only visual
means of conveying information, indicating an action, prompting a response,
or distinguishing a visual element.

Predictable. Make Web pages appear and operate in predictable ways.

The goal of these guidelines is to have webpage designers use features that permit
users with disabilities to employ screen readers or other special technologies
to give them access to webpage content.

3.2.2 Organizing the display
Display design is a large topic with many special cases. An early influential
guidelines document (Smith and Mosier, 1986) offers five high-level goals for
data display:

1. Consistency of data display. During the design process, the terminology,
abbreviations, formats, colors, capitalization, and so on should all be
standardized and controlled by use of a dictionary of these items.

2. Efficient information assimilation by the user. The format should be familiar
to the operator and should be related to the tasks required to be performed
with the data. This objective is served by rules for neat columns of data,
left justification for alphanumeric data, right justification of integers,
lining up of decimal points, proper spacing, use of comprehensible labels,
and appropriate measurement units and numbers of decimal digits (see
the sketch at the end of this section).

3. Minimal memory load on the user. Users should not be required to remember
information from one screen for use on another screen. Tasks should
be arranged such that completion occurs with few actions, minimizing
the chance of forgetting to perform a step. Labels and common formats
should be provided for novice or intermittent users.

4. Compatibility of data display with data entry. The format of displayed
information should be linked clearly to the format of the data entry.
Where possible and appropriate, the output fields should also act as
editable input fields.

5. Flexibility for user control of data display. Users should be able to get the
information from the display in the form most convenient for the task on
which they are working. For example, the order of columns and sorting
of rows should be easily changeable by the users.

This compact set of high-level objectives is a useful starting point, but each project
needs to expand these into application-specific and hardware-dependent
standards and practices.
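As a small worked example of goal 2 above (left-justify alphanumeric data, right-justify numbers, line up decimal points, use consistent decimal digits), the sketch below formats a column of monetary values; the account data and column widths are invented for illustration.

```typescript
// Formatting rules from "efficient information assimilation":
// left-justify text, right-justify numbers, align decimal points.
// The account data are invented.

const rows = [
  { account: "Checking", balance: 1523.5 },
  { account: "Savings", balance: 98210.75 },
  { account: "Travel fund", balance: 42.0 },
];

const money = new Intl.NumberFormat("en-US", {
  minimumFractionDigits: 2,
  maximumFractionDigits: 2, // a fixed number of decimal digits lines up the points
});

console.log(`${"Account".padEnd(14)}${"Balance (USD)".padStart(14)}`);
for (const row of rows) {
  // Left-justify the alphanumeric column; right-justify the numeric column.
  console.log(`${row.account.padEnd(14)}${money.format(row.balance).padStart(14)}`);
}
```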

3.2.3 Getting the user’s attention
Since substantial information may be presented to users, exceptional conditions
or time-dependent information must be presented so as to attract attention
(Wickens et al., 2012). These guidelines detail several techniques for getting the
user's attention:

• Intensity. Use two levels only, with limited use of high intensity to draw
attention.

• Marking. Underline the item, enclose it in a box, point to it with an arrow, or
use an indicator such as an asterisk, bullet, dash, plus sign, or X.

• Size. Use up to four sizes, with larger sizes attracting more attention.

• Choice of fonts. Use up to three fonts.

• Blinking. Use blinking displays (2-4 Hz) or blinking color changes with great
care and in limited areas, as blinking is distracting and can trigger seizures.

• Color. Use up to four standard colors, with additional colors reserved for
occasional use.

• Audio. Use soft tones for regular positive feedback and harsh sounds for rare
emergency conditions.

A few words of caution are necessary. There is a danger of creating cluttered
displays by overusing these techniques. Some web designers use blinking
advertisements or animated icons to attract attention, but users almost
universally disapprove. Animation is appreciated primarily when it provides
meaningful information, such as for a progress indicator or to show movement
of files.

Novices need simple, logically organized, and well-labeled displays that
guide their actions. Expert users prefer limited labels on fields so data values are
easier to extract; subtle highlighting of changed values or positional presentation
is sufficient. Display formats must be tested with users for comprehensibility.

Similarly highlighted items will be perceived as being related. Color-coding is
especially powerful in linking related items, but this use makes it more difficult to
cluster items across color codes (Section 12.5). User control over highlighting is
much appreciated, for example, allowing cellphone users to select the color for
contacts who are close family members or for meetings that are of high importance.

Audio tones, like the clicks in keyboards or cellphone ring tones, can provide
informative feedback about progress. Alarms for emergency conditions do alert
users rapidly, but a mechanism to suppress alarms must be provided. If several
types of alarms are used, testing is necessary to ensure that users can distinguish
between the alarm levels. Prerecorded or synthesized voice messages are a use­
ful alternative, but since they may interfere with communications between oper­
ators, they should be used cautiously (Section 9.3).

3.2.4 Facilitating data entry
Data-entry tasks can occupy a substantial fraction of users' time and can be the
source of frustrating and potentially dangerous errors. Smith and Mosier (1986)
offer five high-level objectives as part of their guidelines for data entry
(courtesy of MITRE Corporate Archives, Bedford, MA):

1. Consistency of data-entry transactions. Similar sequences of actions speed
learning.

2. Minimal input actions by user. Fewer input actions mean greater operator
productivity and, usually, fewer chances for error. Making a choice
by a single mouse selection or finger press is preferred over typing in
a lengthy string of characters. Selecting from a list of choices eliminates
the need for memorization, structures the decision-making task, and
eliminates the possibility of typographic errors.
A second aspect of this guideline is that redundant data entry should be
avoided. It is annoying for users to enter the same information in two
locations, such as entering the billing and shipping addresses when they
are the same. Duplicate entry is perceived as a waste of effort and an
opportunity for error (see the sketch after this list).

3. Minimal memory load on users. When doing data entry, users should not be
required to remember lengthy lists of codes.

4. Compatibility of data entry with data display. The format of data-entry
information should be linked closely to the format of displayed
information, such as dashes in telephone numbers.

5. Flexibility for user control of data entry. Experienced users prefer to enter information
in a sequence that they can control, such as selecting the color
first or the size first when clothes shopping.

Guidelines documents are a wonderful starting point to give designers the benefit
of experience (Fig. 3.2), but they will always need processes to facilitate education,
enforcement, exemption, and enhancement (Section 4.3).
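The sketch below illustrates the redundant-entry point from objective 2: a single checkbox copies the billing address into the shipping fields rather than requiring duplicate typing. The element ids and field names are hypothetical.

```typescript
// Avoiding redundant data entry: "same as billing" copies one address
// into the other. Ids and field names are illustrative.

interface Address {
  street: string;
  city: string;
  zip: string;
}

function readAddress(prefix: string): Address {
  const value = (id: string) =>
    (document.getElementById(`${prefix}-${id}`) as HTMLInputElement).value;
  return { street: value("street"), city: value("city"), zip: value("zip") };
}

function writeAddress(prefix: string, address: Address): void {
  for (const [field, text] of Object.entries(address)) {
    (document.getElementById(`${prefix}-${field}`) as HTMLInputElement).value = text;
  }
}

const sameAsBilling = document.getElementById("same-as-billing") as HTMLInputElement;
sameAsBilling.addEventListener("change", () => {
  if (sameAsBilling.checked) {
    writeAddress("shipping", readAddress("billing"));
  }
});
```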

FIGURE 3.2
An excerpt from a sketch style guide (PowerChart), specifying sizes, colors,
and text treatment for small and medium glyphs.

[Figure: five approaches to date entry: (a) command line (MONTH/08: DAY/21);
(b) form fill-in to reduce typing; (c) improved form fill-in to clarify and reduce
errors; (d) pull-down menus that offer meaningful names and eliminate invalid
values; (e) 2-D calendar menus that provide context, show valid dates, and
enable rapid single selection.]

Form fill-in When data entry is required, menu selection alone usually becomes
cumbersome, and form fill-in (also called fill in the blanks) is appropriate.
Users see a display of related fields, move a cursor among the fields, and enter
data where desired. With the form fill-in interaction style, users must understand
the field labels, know the permissible values and the data-entry method,
and be capable of responding to error messages. Since knowledge of the keyboard,
labels, and permissible fields is required, some training may be necessary.
This interaction style is most appropriate for knowledgeable intermittent
users or frequent users. Chapter 8 provides a thorough treatment of form fill-in.

Command language For frequent users, command languages (discussed in
Section 9.4) provide a strong feeling of being in control. Users learn the syntax
and can often express complex possibilities rapidly without having to read
distracting prompts. However, error rates are typically high, training is necessary,
and retention may be poor. Error messages and online assistance are hard
to provide because of the diversity of possibilities. Command languages and
query or programming languages are the domain of expert frequent users, who
often derive great satisfaction from mastering a complex language. Powerful
advantages include easy scripting and history keeping.

Natural language Increasingly, user interfaces respond properly to arbitrary
spoken (for example, Siri on the Apple iPhone) or typed natural-language statements
(for example, web search phrases). Speech recognition can be helpful
with familiar phrases such as "Tell Catherine that I'll be there in ten minutes,"
but with novel situations users may be frustrated with the results (discussed in
Chapter 9).

Blending several interaction styles may be appropriate when the required
tasks and users are diverse. For example, a form fill-in interface for shopping
checkout can include menus for items such as accepted credit cards, and a
direct-manipulation environment can allow a right-click that produces a pop-up
menu with color choices. Also, keyboard commands can provide shortcuts for
experts who seek more rapid performance than mouse selection.

Increasingly, these five interaction styles are complemented by using context,
sensors, gestures, spoken commands, and going beyond the screen to include
enriched environments that enable users to activate doors, change sound volume,
or turn on faucets. These enriched environments, such as those found in automobiles,
game arcades, projected displays, wearable interfaces, musical instruments,
and sound spaces, go beyond the desktop and mobile devices to produce playful
and useful effects. The expansion of user interfaces into clothing, furniture, buildings,
implanted medical devices, mobile platforms such as drones, and the Internet
of Things enriches traditional strategies and expands the design possibilities.

Chapters 7-9 expand on the constructive guidance for using the different
interaction styles outlined here, and Chapter 10 describes how input and output
devices influence these interaction styles. Chapter 11 deals with interaction
when using collaborative interfaces and participating in social media.

3.3.4 The Eight Golden Rules of Interface Design
This section focuses attention on eight principles, called “golden rules,” that are
applicable in most interactive systems and enriched environments. These prin­
ciples, derived from experience and refined over three decades, require valida­
tion and tuning for specific design domains. No list such as this can be complete,
but it has been well received as a useful guide to students and designers. The
Eight Golden Rules are:

1. Strive for consistency. Consistent sequences of actions should be required
in similar situations; identical terminology should be used in prompts,
menus, and help screens; and consistent color, layout, capitalization, fonts,
and so on, should be employed throughout. Exceptions, such as required
confirmation of the delete command or no echoing of passwords, should be
comprehensible and limited in number.

2. Seek universal usability. Recognize the needs of diverse users and design
for plasticity, facilitating transformation of content. Novice to expert differences,
age ranges, disabilities, international variations, and technological diversity
each enrich the spectrum of requirements that guides design. Adding
features for novices, such as explanations, and features for experts, such
as shortcuts and faster pacing, enriches the interface design and improves
perceived quality.

3. Offer informative feedback. For every user action, there should be interface
feedback. For frequent and minor actions, the response can be modest,
whereas for infrequent and major actions, the response should be more
substantial. Visual presentation of the objects of interest provides a
convenient environment for showing changes explicitly (see the discussion
of direct manipulation in Chapter 7).

4. Design dialogs to yield closure. Sequences of actions should be organized
into groups with a beginning, middle, and end. Informative feedback
at the completion of a group of actions gives users the satisfaction of
accomplishment, a sense of relief, a signal to drop contingency plans from
their minds, and an indicator to prepare for the next group of actions. For
example, e-commerce websites move users from selecting products to
the checkout, ending with a clear confirmation page that completes the
transaction.

5. Prevent errors. As much as possible, design the interface so that users
cannot make serious errors; for example, gray out menu items that are
not appropriate and do not allow alphabetic characters in numeric entry
fields (Section 3.3.5). If users make an error, the interface should offer
simple, constructive, and specific instructions for recovery. For example,
users should not have to retype an entire name-address form if they enter
an invalid zip code but rather should be guided to repair only the faulty
part. Erroneous actions should leave the interface state unchanged, or the
interface should give instructions about restoring the state.

6. Permit easy reversal of actions. As much as possible, actions should be
reversible. This feature relieves anxiety, since users know that errors can
be undone, and encourages exploration of unfamiliar options. The units of
reversibility may be a single action, a data-entry task, or a complete group of
actions, such as entry of a name-address block.

7. Keep users in control. Experienced users strongly desire the sense that they
are in charge of the interface and that the interface responds to their actions.
They don't want surprises or changes in familiar behavior, and they are
annoyed by tedious data-entry sequences, difficulty in obtaining necessary
information, and inability to produce their desired result.

8. Reduce short-term memory load. Humans' limited capacity for information
processing in short-term memory (the rule of thumb is that people can
remember "seven plus or minus two chunks" of information) requires that
designers avoid interfaces in which users must remember information from
one display and then use that information on another display. It means that
cellphones should not require reentry of phone numbers, website locations
should remain visible, and lengthy forms should be compacted to fit a
single display.

These underlying principles must be interpreted, refined, and extended for
each environment. They have their limitations, but they provide a good starting
point for mobile, desktop, and web designers. The principles presented in
the ensuing sections focus on increasing users' productivity by providing simplified
data-entry procedures, comprehensible displays, and rapid informative
feedback to increase feelings of competence, mastery, and control over the
system.

3.3.5 Prevent errors

"There is no medicine against death, and against error no rule has been found."

Sigmund Freud
(inscription he wrote on his portrait)

The importance of error prevention (the fifth golden rule) is so strong that it
deserves its own section. Users of cellphones, e-mail, digital cameras, e-commerce
websites, and other interactive systems make mistakes far more frequently than
might be expected.

One way to reduce the loss in productivity due to errors is to improve the
error messages provided by the interface. Better error messages can raise success
rates in repairing the errors, lower future error rates, and increase subjective
satisfaction. Superior error messages are more specific, positive in tone,
and constructive (telling users what to do rather than merely reporting the
problem). Rather than using vague (? or WHAT?) or hostile (ILLEGAL OPERATION or
SYNTAX ERROR) messages, designers are encouraged to use informative messages,
such as PRINTER IS OFF, PLEASE TURN IT ON or MONTHS RANGE FROM 1 TO 12.
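The small sketch below shows what such a specific, constructive check might look like in code, in the spirit of the MONTHS RANGE FROM 1 TO 12 example: the message names the problem and says how to repair it. The function name and wording are invented for illustration.

```typescript
// Constructive, specific error messages for a month field. Illustrative only.

function checkMonth(input: string): { ok: boolean; message: string } {
  if (!/^\d{1,2}$/.test(input.trim())) {
    // Positive tone: say what to do, not just that the input was "illegal".
    return { ok: false, message: "Please enter the month as a number, such as 3 for March." };
  }
  const month = Number(input);
  if (month < 1 || month > 12) {
    return { ok: false, message: "Months range from 1 to 12. Please enter a value in that range." };
  }
  return { ok: true, message: "" };
}

console.log(checkMonth("13").message); // "Months range from 1 to 12. ..."
console.log(checkMonth("7").ok);       // true
```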

Improved error messages, however, are only helpful medicine. A more effective
approach is to prevent the errors from occurring. This goal is more attainable
than it may seem in many interfaces.

The first step is to understand the nature of errors. One perspective is that
people make mistakes or slips (Norman, 1983) that designers can help them to
avoid by organizing screens and menus functionally, designing commands and
menu choices to be distinctive, and making it difficult for users to take irreversible
actions. Norman also offers other guidelines, such as providing feedback
about the state of the interface (e.g., changing the cursor to show whether a map
interface is in zoom-in or select mode) and designing for consistency of actions
(e.g., ensuring that yes/no buttons are always displayed in the same order).

Norman's analysis provides practical examples and a useful theory. Additional
design techniques to reduce errors include the following:

Correct actions. Industrial designers recognize that successful products must be
safe and must prevent users from dangerously incorrect usage of the products.
Airplane engines cannot be put into reverse until the landing gear has touched
down, and cars cannot be put into reverse while traveling forward at faster than
five miles per hour. Similar principles can be applied to interactive systems: for
example, inappropriate menu items can be grayed out so they can't be inadvertently
selected, and web users can be allowed to simply click on the date on
a calendar instead of having to type in a month and day for a desired airline
flight departure. Likewise, instead of having to enter a 10-digit phone number,
cellphone users can scroll through a list of frequently or recently dialed numbers
and select one with a single button press. A variant idea is to provide users
with auto-completion for typing words, selecting from menus, or entering web
addresses.
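Two of these "correct actions" techniques are easy to sketch in code: constraining a numeric field so alphabetic characters never get in, and graying out menu items that do not apply. The element ids and option values below are hypothetical.

```typescript
// Constraining input so the error cannot occur. Ids are illustrative.

// A quantity field silently drops anything that is not a digit as the user types.
const quantity = document.getElementById("quantity") as HTMLInputElement;
quantity.addEventListener("input", () => {
  quantity.value = quantity.value.replace(/\D/g, "");
});

// Unavailable menu items are disabled (grayed out) rather than removed,
// so they remain visible but cannot be selected inadvertently.
function setMenuItemAvailability(menu: HTMLSelectElement, enabled: Set<string>): void {
  for (const option of Array.from(menu.options)) {
    option.disabled = !enabled.has(option.value);
  }
}

const exportMenu = document.getElementById("export-format") as HTMLSelectElement;
setMenuItemAvailability(exportMenu, new Set(["pdf", "csv"]));
```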

Complete sequences. Sometimes an action requires several steps to reach completion.
Since users may forget to complete every step of an action, designers may
attempt to offer a sequence of steps as a single action. In automobiles, drivers
do not have to set two switches to signal a left turn; a single switch causes both
(front and rear) turn-signal lights on the left side of the car to flash. Likewise,
when a pilot throws a switch to lower the landing gear, hundreds of mechanical
steps and checks are invoked automatically.

As another example, users of a text editor can indicate that all section titles are
to be centered, set in uppercase letters, and underlined without having to make a
series of selections each time they enter a section title. Then if users want to change
the title style (for example, to eliminate underlining), a single change will guarantee
that all section titles are revised consistently. As a final example, an air-traffic
controller may formulate plans to change the altitude of a plane from 14,000
feet to 18,000 feet in two increments; after raising the plane to 16,000 feet,
however, the controller may get distracted and fail to complete the action. The
controller should be able to record the plan and then have the computer prompt
for completion. The notion of complete sequences of actions may be difficult to
implement because users may need to issue atomic actions as well as complete
sequences. In this case, users should be allowed to define sequences of their
own. Designers can gather information about potential complete sequences by
studying sequences of actions that people actually take and the patterns of
errors that people actually make.

Thinking about universal usability also contributes to reducing errors: for
example, a design with too many small buttons may cause unacceptably high
error rates among older users or others with limited motor control, but enlarging
the buttons will benefit all users. Section 4.6 addresses the idea of logging
user errors so designers can continuously improve designs.

3.3.6 Ensuring human control while increasing automation

The guidelines and principles described in the previous sections are often
devoted to simplifying the users' tasks. Users can then avoid routine, tedious,
and error-prone actions and can concentrate on making critical decisions, selecting
alternatives if the original approach fails, and acting in unanticipated situations.
Users can also make subjective value-based judgments, request help from
other humans, and develop new solutions (Sanders and McCormick, 1993).
(Box 3.3 provides a detailed comparison of human and machine capabilities.)

Computer system designers have generally been increasing the degree of automation
over time as procedures become more standardized and the pressure for
productivity grows. With routine tasks, automation is desirable, since it reduces
the potential for errors and the users' workload (Cummings, 2014). However,
even with increased automation, informed designers can still offer the predictable
and controllable interfaces that users usually prefer. The human supervisory role
needs to be maintained because the real world is an open system (that is, a nondenumerable
number of unpredictable events and system failures are possible). By
contrast, computers constitute a closed system (only a denumerable number of
normal and failure situations can be accommodated in hardware and software).

BOX 3.3
Relative capabilities of humans and machines.

Humans Generally Better

• Sense-making from hearing, sight, touch, etc.
• Detect familiar signals in noisy background
• Draw on experience and adapt to situations
• Select alternatives if original approach fails
• Act in unanticipated situations
• Apply principles to solve varied problems
• Make subjective value-based judgments
• Develop new solutions
• Use information from external environment
• Request help from other humans

Machines Generally Better

• Sense stimuli outside human's range
• Rapid consistent response for expected events
• Retrieve detailed information accurately
• Process data with anticipated patterns
• Perform repetitive actions reliably
• Perform several activities simultaneously
• Maintain performance over time


For example, in air-traffic control, common actions include changes to a plane's altitude, heading, or speed. These actions are well understood and potentially can be automated by scheduling and route-allocation algorithms, but the human controllers must be present to deal with the highly variable and unpredictable emergency situations. An automated system might deal successfully with high volumes of traffic, but what would happen if the airport manager closed a runway because of turbulent weather? The controllers would have to reroute planes quickly. Now suppose that one pilot requests clearance for an emergency landing because of a failed engine, while another pilot reports a passenger with chest pains who needs prompt medical attention. Value-based judgment, possibly with participation from other controllers, is necessary to decide which plane should land first and how much costly and risky diversion of normal traffic is appropriate. Air-traffic controllers cannot just jump into an emergency; they must be intensely involved in the situation as it develops if they are to make informed and rapid decisions. In short, many real-world situations are so complex that it is impossible to anticipate and program for every contingency; human judgment and values are necessary in the decision-making process.

Another example of the complexity of life-critical situations in air-traffic control was illustrated by an incident on a plane that had a fire on board. The controller cleared other traffic from the flight path and began to guide the plane in for a landing, but the smoke was so thick that the pilot had trouble reading his instruments. Then the onboard transponder burned out, so the air-traffic controller could no longer read the plane's altitude from the situation display. In spite of these multiple failures, the controller and the pilot managed to bring down the plane quickly enough to save the lives of many-but not all-of the passengers. A computer could not have been programmed to deal with this particular unexpected series of events.

A tragic outcome of excess automation occurred during a flight to Cali, Colombia. The pilots relied on the automatic pilot and failed to realize that the plane was making a wide turn to return to a location that it had already passed. When the ground-collision alarm sounded, the pilots were too disoriented to pull up in time; they crashed 200 feet below a mountain peak, killing all but four people on board.

The goal of design in many applications is to give users sufficient information about current status and activities to ensure that, when intervention is necessary, they have the knowledge and the capacity to perform correctly, even under partial failures (Endsley and Jones, 2004). The U.S. Federal Aviation Administration stresses that designs should place the users in control and automate only to "improve system performance, without reducing human involvement" (U.S. FAA, 2012). These standards also encourage managers to "train users when to question automation."


The entire user interface must be designed and tested not only for normal situations but also for as wide a range of anomalous situations as can be anticipated. An extensive set of test conditions might be included as part of the requirements document. Users need to have enough information that they can take responsibility for their actions. Beyond decision making and handling of failures, the users' role is to improve the interface design.

Advocates of increased autonomy, such as in driverless cars or unmanned aircraft, believe that rapid autonomous responses improve performance and produce fewer errors. However, autonomy has risks for unanticipated situations, such as changing weather or unusual trading activity. In 2015, Toyota shifted its driverless car research from autonomous designs to ones that leave drivers in control. The dangers of unanticipated situations for Unmanned Aerial Vehicles (UAVs) resulted in shifting to Remotely Piloted Vehicles (RPVs) with human control to improve reliability. While autonomy has its benefits, designs that allow human supervisory control, activity logging, and the capacity to review logs after failures appear to improve performance.

In costly business situations, such as high-speed stock market trading, clarifying responsibility for failures could lead to improved designs. Ensuring accountability and liability in advance can encourage designers to think more carefully about potential failures. Advocates of "algorithmic accountability" want developers who implement systems such as Google's search rankings or employee hiring systems to enable open access so as to limit bias and expose errors.

Questions about integrating automation with human control also emerge in consumer product user interfaces. Many designers are eager to create an autonomous agent that knows people's likes and dislikes, makes proper inferences, responds to novel situations, and performs competently with little guidance. They believe that human-human interaction is a good model for human-computer interaction, and they seek to create computer-based partners, assistants, or agents.

By contrast, many designers believe that tool-like interfaces are often more attractive than autonomous, adaptive, or anthropomorphic agents that carry out the users' intentions and anticipate needs. The agent scenarios may show a bow-tied butler-like human, like the helpful young man in Apple Computer's famous 1987 video on the Knowledge Navigator. Microsoft's ill-fated 1995 BOB program used cartoon characters, while its much-criticized Clippit character, nicknamed Clippy, was also withdrawn. Human-like bank machines or postal-service stations have largely disappeared, but avatars representing users, not computers, in game-playing and 3-D social environments have remained popular; users appear to enjoy the theatrical experience of creating a new identity, sometimes with colorful hair and clothes (Section 7.6).

The success of Apple's Siri speech recognition and personality-rich voice response system shows that with careful development, useful tools can be developed, but there is little evidence of the benefit of a talking face (Moreno and Mayer, 2007). Robot designers have perennially used human and animal forms as an inspiration, encouraging some researchers to pursue human-like robots for care of older adults or as team members in work situations. These designs attract journalists and have entertainment value but have yet to gain widespread acceptance.

A variant of the agent scenario, which does not include an anthropomorphic realization, is that the computer program has a built-in user model to guide an adaptive interface. The program keeps track of user performance and adapts the interface to suit the users' needs. For example, when users begin to make menu selections rapidly, indicating proficiency, advanced menu items may appear. Automatic adaptations have been proposed for interface features such as the content of menus, order of menu items, and type of feedback (graphic or tabular). Advocates point to video games that increase the speed or number of dangers as users progress through game levels. However, games are notably different from most work situations, where users bring their goals and motivations to accomplish tasks.
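
A toy sketch of such a user model in Python may clarify the idea; the proficiency threshold, rolling window, and menu items are invented for illustration:

    from collections import deque

    class AdaptiveMenu:
        """Reveal advanced items once recent selections are fast enough."""

        def __init__(self, basic, advanced, threshold_s=1.5, window=10):
            self.basic = basic
            self.advanced = advanced
            self.threshold_s = threshold_s
            self.times = deque(maxlen=window)  # rolling selection times

        def record_selection(self, seconds_taken):
            self.times.append(seconds_taken)

        def visible_items(self):
            full_window = len(self.times) == self.times.maxlen
            fast = full_window and (sum(self.times) / len(self.times)
                                    < self.threshold_s)
            return self.basic + (self.advanced if fast else [])

    menu = AdaptiveMenu(["Open", "Save"], ["Batch Export", "Record Macro"])
    for t in [0.9, 1.1, 0.8, 1.0, 0.7, 0.9, 1.2, 0.8, 0.6, 1.0]:
        menu.record_selection(t)
    print(menu.visible_items())  # advanced items now appear

Even in this simple form, the model changes the interface without the user asking for it, which is exactly the risk discussed next.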

There are opportunities for adaptive user models to tailor designs (such as for e-mail spam filters or search results ranking), but unexpected interface behavior can have negative effects that discourage use. If adaptive systems make surprising changes, such as altering the search results ranking, users may be puzzled about what has happened. Users may become anxious because they cannot predict the next change, interpret what has happened, or return to the previous state. Users may also be annoyed if a one-time purchase of a children's book as a gift leads to repeated promotions of more children's books.

An application of user modeling is recommender systems in web applications. In this case, there is no agent or adaptation in the interface, but the program aggregates information from multiple sources in some (often proprietary) way. Such approaches have great practical value, such as suggesting movies, books, or music; users are often intrigued to see what suggestions emerge from their purchasing patterns. Amazon.com and other e-commerce companies successfully suggest that "customers who bought X also bought Y."
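
A minimal sketch of the co-purchase counting behind "customers who bought X also bought Y" fits in a few lines of Python; the baskets are made-up data, and real systems use far more elaborate, often proprietary, scoring:

    from collections import Counter
    from itertools import combinations

    purchases = [
        {"novel", "cookbook"},
        {"novel", "atlas"},
        {"novel", "cookbook", "atlas"},
    ]

    # Count how often each pair of items appears in the same basket.
    co_counts = Counter()
    for basket in purchases:
        for a, b in combinations(sorted(basket), 2):
            co_counts[(a, b)] += 1

    def also_bought(item, top=2):
        scores = Counter()
        for (a, b), n in co_counts.items():
            if a == item:
                scores[b] += n
            elif b == item:
                scores[a] += n
        return [other for other, _ in scores.most_common(top)]

    print(also_bought("novel"))  # e.g., ['cookbook', 'atlas']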

The philosophical alternative to agents and user modeling is to design comprehensible systems that provide consistent interfaces, user control, and predictable behavior. Designers who emphasize a direct-manipulation style believe that users have a strong desire to be in control and to gain mastery over the system, which allows them to accept responsibility for their actions and derive feelings of accomplishment (Shneiderman, 2007). Historical evidence suggests that users seek comprehensible and predictable systems and shy away from those that are complex or unpredictable; for example, pilots may disengage automatic piloting devices if they perceive that these systems are not performing as they expect.

Agent advocates promote autonomy, but this means they must take on the issue of responsibility for failures. Who is responsible when an agent violates

FIGURE 3.4
[Screenshot: the Windows Settings "Personalization > Start" panel, with options such as "Show most used apps" (On), "Show recently added apps" (On), and "Use Start full screen".]

CHAPTER

Design

"Just as we can assert that no product has ever been created in a single moment of inspiration ... nobody has ever produced a set of requirements for any product in a similarly miraculous manner. These requirements may well begin with an inspirational moment but, almost certainly, the emergent bright idea will be developed by iterative processes of evaluation until it is thought to be worth starting to put pencil to paper. Especially when the product is entirely new, the development of a set of requirements may well depend upon testing initial ideas in some depth."

W. H. Mayall
Principles in Design, 1979

"The Plan is the generator. Without a plan, you have lack of order and willfulness. The Plan holds in itself the essence of sensation."

Le Corbusier
Towards a New Architecture, 1931

CHAPTER OUTLINE
4.1 Introduction
4.2 Organizational Support for Design
4.3 The Design Process
4.4 Design Frameworks
4.5 Design Methods
4.6 Design Tools, Practices, and Patterns
4.7 Social Impact Analysis
4.8 Legal Issues

4.1 Introduction

Design can be loosely defined as the outcome or the process of creating specifications for synthetic artifacts, such as products, services, and processes. All manufactured objects in the world-objects that were made by people and are not found in nature-are the result of some form of design process, whether a deliberate one or otherwise. User interfaces, which are very much synthetic and decidedly do not occur in nature, are no exception. However, while early computer manufacturers were quick to enlist industrial designers to shape the physical form factors of the first computers, they were much less agile in recognizing the need for interaction design (Moggridge, 2007): the design of the digital interface itself. Now an established design discipline in its own right, interaction design is defined as making plans and specifications for digital objects, which include devices, interfaces, services, and information.

Every time designers create a new digital artifact, they make decisions-unconscious or not-on how the artifact will look, feel, and function. If they carefully consider how digital products and services are created, they can make appealing products and services that respond to human needs with user interfaces that are easy to learn, comprehensible, and efficient to use. Early computer applications were designed by programmers to be highly functional for the programmers themselves and their peers, but this approach quickly failed when the audience for computers grew to non-technical fields. Bill Moggridge (2007) calls this phenomenon being "kind to chips but cruel to people," and it was an early failing of interaction design.

The current generation of users for smartphones, social media, and e-commerce have vastly different backgrounds from programmers and engineers. They have no interest in obscure interfaces but are more oriented toward their professional or recreational needs and are less dedicated to the technology itself. Therefore, effective interaction design takes the intended user as its starting point and focuses on facilitating the function of the artifact. As a result, professional interaction designers carefully observe their users, iteratively refine their prototypes based on thoughtful analysis, and systematically validate their interfaces through early usability and acceptance tests. However, as for any design discipline, function is not the only important aspect of a digital object. Form is another aspect, and it sometimes comes in conflict with function. While on the one hand it can be argued that good form will facilitate function (since an aesthetically appealing artifact can invite use), it is also true that a highly convoluted form may inhibit it. Consider a kitchen cabinet door with no handles: slick and appealing according to contemporary design sense but lacking visual indications of how and where to open the door. In fact, it may not even be immediately obvious that a door


with no handles is in fact a door, let alone that it can be opened! The same
is true for interfaces: It often makes sense to let form follow function
(Sullivan, 1896). Tradeoffs between form and function are discussed in
Chapter 12.

If there are several similarities between interaction design and other design disciplines, what is particularly unique about interaction design? One of the key characteristics of digital media is that they are freely reproducible without consuming the original copy or costing additional resources. They also have few of the physical requirements that real materials must obey, such as cost, ease of manufacturing, or physical robustness. In essence, information technology is thus a "material without qualities" (Lowgren and Stolterman, 2004). For software engineering, this fact has led to the global open source movement, where programmers, even professional ones, are willing to give away the results of their hard work for free. In the context of interfaces and interactions, digital media mean that designers work under few physical constraints compared to tangible artifacts. A digital button can be arbitrarily large or small, or it can be entirely gold- or diamond-plated, with no added or reduced cost to the overall project. In fact, the designer can freely experiment with any number of alternate designs during the process without incurring any other cost than time. However, this added freedom is a double-edged sword in that constraints can often be helpful in reducing the space of potential designs (also known as the design space) and even boosting the creativity of the designer; after all, necessity is said to be the mother of invention. With no such helpful constraints to reduce the design space for digital interfaces and objects, interaction designers are often left with a much more daunting problem than industrial designers working in the real world.

The key to good design starts in the organization itself. The primary reason
for this is that design is unpredictable, which requires an agile organizational
structure as well as a comprehensive business strategy oriented around diverse
design processes. In fact, some companies, such as Apple, Pepsi, Philips, and
Kia Motors, have hired chief design officers (CDOs) in recognition of this
unpredictability. Section 4.2 offers examples of such structures and strategies
that managers can adapt to suit their organizations, projects, schedules, and
budgets.

This unpredictable and dynamic nature requires a robust and flexible design process. Section 4.3 describes a four-phase iterative design process consisting of requirements analysis (Phase 1), preliminary and detailed design (Phase 2), build and implementation (Phase 3), and finally evaluation (Phase 4, described in Chapter 5). This cycle is repeated until the outcome from the evaluation phase is acceptable given the requirements. The design cycle itself is part of a larger cycle that incorporates the entire life cycle of a product, including deployment, maintenance, and new updates to the system.


Design frameworks are discussed in Section 4.4 and permeate the entire
design philosophy and design methods used in the process. Three specific
frameworks are of particular interest to interaction designers: agile and rapid
prototyping, user-centered design, and participatory design. The exact choice of
which design framework to use depends on the organization, the project team,
and the product being designed.

If frameworks provide the high-level structure, then the design methods are the building blocks that are used to populate the structure. Section 4.5 reviews popular interaction design methods for each phase of the design process, including ethnographic observation and sketching (Phase 1), storyboarding and scenario development (Phase 2), and paper mockups and prototyping (Phase 3). Evaluation methods for Phase 4 are described separately in Chapter 5.

Design is a challenging activity and is difficult to learn in a purely theoretical setting, particularly for newcomers but also for seasoned designers entering a new domain. Section 4.6 offers several practical, hands-on best practices to facilitate the design process, including UX prototyping tools, UX guidelines documents, and the notion of design patterns for interaction design and UX. Originally derived for areas as disparate as urban planning (Alexander, 1977) and software engineering (Freeman et al., 2004), design patterns are concrete and reusable solutions to commonly occurring problems. The section shows how such design patterns can be applied to interaction design.

This chapter concludes with Section 4.7, covering social impact analysis, and Section 4.8, which describes legal concerns that should be addressed in the design process, including topics such as privacy, safety, intellectual property, standardization, and certification.

See also:

Chapter 5, Evaluation and the User Experience

Chapter 12, Advancing the User Experience

4.2 Organizational Support for Design

Most companies may not yet have chief usability officers (CUOs) or vice presidents for usability, but some companies are beginning to employ chief design officers (CDOs), which may help to promote usability and design thinking at every level. A case in point is Apple Inc., which was one of the first companies with a CDO and which accordingly has been praised for its innovative, well-designed, and usable products. Even if a company has no CDO, organizational awareness can be stimulated by presentations, internal seminars, newsletters, and awards. However, resistance to new techniques and changing roles for software engineers can become a problem in traditional organizations.

Organizational change is difficult, but creative leaders blend inspiration and provocation. The high road is to appeal to the desire for quality that most professionals share. When they are shown data on shortened learning times, faster performance, or lower error rates on well-designed interfaces, managers are likely to be more sympathetic to applying usability-engineering methods. Even more compelling for e-commerce managers is evidence of higher rates of conversion, enlarged market share, and increased customer retention. For managers of consumer products, the goals include fewer returns/complaints, increased brand loyalty, and more referrals. The low road is to point out the frustration, confusion, and high error rates caused by current complex designs while citing the successes of competitors who apply usability-engineering methods.

Collecting momentum for organizational change can come from different sources. Major corporations almost always question the return on investment (ROI) for usability engineering and interaction design. However, the business case for focusing on usability has been made powerfully and repeatedly (Karat, 1994; Marcus, 2002; Bias and Mayhew, 2005; Nielsen, 2008). Claire-Marie Karat's (1994) business-like reports within IBM became influential documents when they were published externally. She reported up to $100 payoffs for each dollar spent on usability, with identifiable benefits in reduced program-development costs, reduced maintenance costs, increased revenue due to higher customer satisfaction, and improved user efficiency and productivity. Other economic analyses showed fundamental changes in organizational productivity (with improvements of as much as 720%) when designers kept usability in mind from the beginning of development projects (Landauer, 1995).

The necessary pressure for change may also come from the customers themselves. Corporate marketing and customer-assistance departments are becoming more aware of the importance of usability and are a source of constructive encouragement. When competing products provide similar functionality, usability engineering is vital for product acceptance. Today's customers are discerning and expect high quality, and their overall brand loyalty is steadily diminishing. Retaining as well as increasing its customer base can provide a powerful incentive for an organization to improve its focus on interaction design and usability engineering.

Finally, usability engineering is required for certification and standardization in some industries. For example, the aerospace industry has Human Systems Integration (HSI) requirements that deal with a combination of human factors, usability, display design, navigation, and so forth (National Research Council, 2007).

As a result, most large and many small organizations now maintain a centralized human factors group or usability/UX laboratory as a source of expertise in design and testing techniques. In fact, many organizations have created dedicated usability laboratories to provide expert reviews and to conduct usability tests of products during development in carefully supervised conditions (Rubin and Chisnell, 2008). Beyond internal usability teams, outside experts can sometimes provide fresh and unbiased insights on difficult design and usability decisions. These and other evaluation strategies are covered in Chapter 5.

Organizational support for usability testing is not sufficient, however; support should also extend to the creative parts of the design process. Each project should have its own user-interface architect who develops the necessary skills, manages the work of other people, prepares budgets and schedules, and coordinates with internal and external human-factors professionals when further expertise, references to the literature, or usability tests are required. Organizations with a strong design ethos understand this, and their example can be used in enacting change in more traditional corporations.

There are interaction design activities where the ROI for usability analysis during the development cycle is not immediately apparent but true usability of the delivered system is crucial for success. One familiar example is voting machines. An end result of confused, misinterpreted voting results would be catastrophic and counter to the best interests of the voting population, but the usability analysis and associated development costs should be manageable by the government contractor building the electronic voting system.

As the field of interaction design has matured, projects have grown in complexity, size, and importance. Role specialization is emerging, as it has in fields such as architecture, aerospace, and book design. Interaction design takes on new perspectives when writing web, mobile, or desktop applications, with an emerging discipline in translating the same information across each of these media. Eventually, individuals will become highly skilled in specific problem areas, such as user-interface-building tools, graphic display strategies, voice and audio design, shortcuts, navigation, and online tutorial writing. Consultation with graphic artists, book designers, advertising copywriters, textbook authors, game designers, or animators is expected. Perceptive system developers recognize the need to employ psychologists for conducting experimental tests, sociologists for evaluating organizational impact, educational psychologists for refining training procedures, and social workers for guiding customer-service personnel.

Usability engineers and user-interface architects, sometimes called the user experience (UX) team, are gaining experience in managing organizational change. As attention shifts away from software engineering or management-information systems, battles for control and power manifest themselves in budget and personnel allocations. Well-prepared managers who have a concrete organizational plan, defensible cost/benefit analyses, and practical development methodologies are most likely to be winners.


4.3 The Design Process

Design is inherently creative and unpredictable, regardless of discipline. In the context of interactive systems, successful designers blend a thorough knowledge of technical feasibility with an uncanny aesthetic sense of what attracts and satisfies users. One way to define design is by its operational characteristics (Rosson and Carroll, 2002):

• Design is a process; it is not a state, and it cannot be adequately represented statically.

• The design process is nonhierarchical; it is neither strictly bottom-up nor strictly top-down.

• The process is radically transformational; it involves the development of partial and interim solutions that may ultimately play no role in the final design.

• Design intrinsically involves the discovery of new goals.

These characterizations of design convey the dynamic nature of the process. An iterative design process based on this operational definition would consist of four distinct phases (Fig. 4.1): requirements analysis (Phase 1), preliminary and detailed design (Phase 2), build and implementation (Phase 3), and evaluation (Phase 4). This is a bare-bones process that describes its overall structure; individual applications of this process in specific design teams and for specific design artifacts will differ in terms of the frameworks, methods, and tools used.
The primary feature of this process is that it is iterative and cyclical; unlike linear waterfall models, where one phase of a pipeline feeds the next, our design process repeats each phase over and over until the final product is of acceptable quality. Second, there are several cross-cutting factors that contribute to each phase of the cycle, including academic and user research, guidelines and standards, and tools and patterns. Each of these is described below.

FIGURE 4.1
An iterative design process for interaction design. [Diagram: from project start (inception), the cycle runs through Phase 1: Requirements analysis; Phase 2: Preliminary & detailed design; Phase 3: Implementation; and Phase 4: Evaluation, repeating until project end.]

Our focus here is purely on the human and social aspects of an interactive system or product, but the overall design process also encompasses technical aspects. Many technical design processes, such as in software engineering, follow a similar four-phase cycle, allowing interaction design and engineering to be integrated with them easily.

4.3.1 Phase 1: Requirements analysis
This phase collects all of the necessary requirements for an interactive system or device and yields a requirements specification or document as its outcome. In general, soliciting, capturing, and specifying user requirements are major keys to success in any development activity (Selby, 2007). Methods to elicit and reach agreement upon interaction requirements differ across organizations and industries, but the end result is the same: a clear specification of the user community and the tasks the users perform.

Collecting interaction design requirements is part of the overall requirements analysis and management phase and often has a direct impact on the engineering aspects of the design; for example, a finger painting app requires a multi-touch display with low touch latency. Thus, even requirements documents written specifically for user experience and interaction design aspects are often specified in terms of three components (see Box 4.1 for specific examples):

• Functional requirements define specific behavior that the system should support (often captured in so-called use cases, see below);

• Non-functional requirements specify overall criteria governing the operation of the interactive system without being tied to a specific action or behavior (hardware, software, system performance, reliability, etc.); and

• User experience requirements explicitly specify non-functional requirements for the user interaction and user interface of the interactive system (navigation, input, colors, etc.).

Requirements documents provide a shared understanding between the members of the product team. The success or failure of software projects often depends on the precision and completeness of this understanding between all the designers, developers, and users. What happens without adequate requirements definition? You are not sure what problem you are solving, and you do not know when you are done.

Box 4.1 gives an example of interaction design requirements for an e-commerce website, an ATM, and a mobile messaging app. Be careful not to


BOX 4.1
Examples of requirements regarding system behavior for three distinct types of interactive systems: an e-commerce website, an ATM, and a mobile messaging app.

Functional requirements:

• Website: The website shall allow users to purchase items and shall provide other, related merchandise based on past visits and purchases.

• ATM: The system shall let users enter a PIN code as identification and shall ensure that the code matches the one on file.

• Mobile app: The app shall be able to send messages at all times, even when out of the service area (in which case they are saved for later sending).

Non-functional requirements:

• Website: The website shall give users the ability to access their user account at all times, allowing them to view and modify name, mail address, e-mail address, phone, etc.

• ATM: The system shall permit the ATM customer 15 seconds to make a selection. The customer shall be warned that the session will be ended if no selection is made.

• Mobile app: Messages should send within 2 seconds, returning the user to the new message window (continuing in the background if necessary).

User experience requirements:

• Website: The website shall always have a visible navigation menu in the same position on the screen.

• ATM: On-screen prompts and instructions shall be clear and accessible. The ATM should return the user's commands within half a second.

• Mobile app: The mobile app shall support customization such as color schemes, skins, and sounds.

impose human operator actions (requirements) onto the interaction design requirements. For example, it is best not to specify a requirement like this: "The user shall decide how much to withdraw from the ATM within 15 seconds." Rather, allocate that same requirement to the computer system: "The ATM shall permit a user 15 seconds to select a withdrawal amount ... before prompting for a response."

While it is possible to write functional requirements as simply an informal list of actions (as in Box 4.1), the concept of a use case from software engineering can come in handy here because of its direct connection to users and interaction. Put simply, a use case is a formalized scenario that captures an operation between an actor and the system (in general software engineering, the actor could be another system, but the focus is on human users here) in a step-by-step manner. The rule is that a system should simply be a sum of its use cases: No functionality should be implemented that does not explicitly support at least one use case. This also gives a straightforward recipe for evaluating the system (in Phase 4); if all use cases can be completed successfully, the system is correct and valid.
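
Because use cases are structured, step-by-step scenarios, they can even be captured directly as data, with validation reduced to running every use case against the system. The following Python sketch assumes a hypothetical ATM class and PIN invented purely for illustration:

    from dataclasses import dataclass
    from typing import Callable, List

    class ATM:
        """Hypothetical system under design."""
        def __init__(self):
            self.authenticated = False
        def enter_pin(self, pin):
            self.authenticated = (pin == "1234")
            return self.authenticated
        def withdraw(self, amount):
            return self.authenticated and amount > 0

    @dataclass
    class UseCase:
        name: str
        steps: List[Callable[[ATM], bool]]  # every step must succeed

    withdraw_cash = UseCase("Withdraw cash", [
        lambda atm: atm.enter_pin("1234"),
        lambda atm: atm.withdraw(50),
    ])

    def validate(system_factory, use_cases):
        # Phase 4 in miniature: each use case must complete successfully.
        results = {}
        for uc in use_cases:
            system = system_factory()  # fresh system per scenario
            results[uc.name] = all(step(system) for step in uc.steps)
        return results

    print(validate(ATM, [withdraw_cash]))  # {'Withdraw cash': True}

This is of course only a sketch: real validation (Section 4.3.4 and Chapter 5) also covers usability and user experience, not just functional completion.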

Several methods exist for actually collecting and analyzing interaction design requirements, including ethnographic observation, focus groups, and user interviews. Common among all of these is that they are intended to monitor the context and environment of real users, either in action or in their own words. Section 4.5 describes these methods in detail. Tradeoffs between what functions are done best by computers versus humans in human-computer interaction (Section 3.3.6) should also be discussed at this point in the development process.

4.3.2 Phase 2: Preliminary and detailed design
The core of the design process is realizing the requirements from the previous phase. The design phase in turn consists of two stages: a preliminary stage, where the high-level design or architecture of the interactive system is derived, and a detailed stage, where the specifics of each interaction are planned out. The outcome from the design phase is a detailed design document.

The preliminary design is also known as architectural design, and in engineering settings this stage often entails deriving the architecture of the system. For user experience and interaction design, preliminary design consists of mapping out the high-level concepts such as the user, controls, interface displays, navigation mechanisms, and overall workflow. Preliminary design can also be called conceptual design, particularly in software engineering, because it is sometimes useful to organize the high-level concepts into a conceptual map with their relations. Overall, this activity is about developing the mental model that users should have about the interactive system when using it. Is your system focused on a central view, such as a map or a table, or is it a sequence of forms or a set of linked displays? Is it an app that integrates with other apps to pop up on demand, or is it intended for focused, sustained use? These are questions to answer and refine during this stage.

The high-level concepts and their relations provide a starting point for the detailed design. This stage entails planning out all of the operations that take place between user and interactive system to a level where only implementation and technical details remain. Regardless of whether you are using the use case concept discussed in the previous section, this can be done by creating and refining a step-by-step list for the exchanges between the user and the system.

One difficulty in designing interactive systems is that customers and users may not have a clear idea of what the system will look like when it is done. Since interactive systems are novel in many situations, users may not realize the implications of design decisions. Unfortunately, it is difficult, costly, and time-consuming to make major changes to systems once those systems have been implemented. Although this problem has no complete solution, some of the more serious difficulties can be avoided if, at an early stage, the customers and users can be given a realistic impression of what the final system will look like. Suitable methods for the design phase should thus go beyond eliciting the needs of the users and instead find ways to fulfill these needs.

Examples of suitable design methods include sketching, paper mockups, and high-fidelity prototypes. Furthermore, all methods can be informed through the use of tools, patterns, and best practices. For example, guidelines documents give direction on specific design choices, such as menu design, display layout, and navigation techniques. Patterns suggest effective ways to design an interface, such as single-page applications for websites or multi-document interfaces for desktop tools. Dedicated wireframing tools allow for rapidly creating mockups of a design. Section 4.6 discusses these tools and patterns in depth.

4.3.3 Phase 3: Build and implementation
The implementation phase is where all of the careful (or not very careful at all, depending on your design approach; see the agile development framework in Section 4.4.1) planning gets turned into actual, running code. The outcome from this phase is a working system, albeit not necessarily the final one. The actual software and hardware engineering needed to achieve this are outside the scope of this book. It is worth, however, briefly mentioning some suitable software development platforms for interactive applications based on your computing platform:

• Mobile: Building mobile apps typically requires using the SDK (software development kit) and development environment provided by the manufacturer of the operating system: the Android SDK in Java, the Apple iOS SDK in Objective-C, and the Windows Phone/Mobile SDKs. Most of these SDKs require registering as a developer to have access to the app exchange for making your app available to general users. Since mobile app development typically is cross-platform-the development is actually conducted on a personal computer-all of these SDKs include emulators for testing the app on a virtual phone residing on the personal computer itself.

• Web: The browser has become a ubiquitous information access platform, and modern web technologies are both pervasive and full-featured to the point that they can emulate or replace traditional computer software. Web applications and services typically consist of both client and server software: Client-side software runs in the user's browser and is accordingly built in JavaScript-the programming language of the browser-whereas server-side software runs on the web server or connected hosts and is often implemented in languages such as PHP, Ruby, Java, or even JavaScript (using Node.js).


A recent change in web development has been to build mobile apps using web technologies; the resulting app runs in a dedicated browser instance and is almost indistinguishable from a normal app built using the native SDK yet has the benefit of being cross-platform across different mobile operating systems.

• Personal Computers: Developing dedicated applications for a personal computer typically requires using the native SDKs for the specific operating system. Development environments such as Microsoft's Visual Basic/C++ are easy to get started with yet have an excellent set of features. C# and the .NET Framework are other good candidates for your project. For cross-platform software development that works regardless of operating system, Oracle's Java is a popular choice. People who want to write their own Java programs can use the Java Development Kit (JDK).

Regardless of platform, make sure to evaluate tool capabilities, ease of use, ease to learn, cost, and performance. Tailor your tool choices for the size of the job. Building a software architecture that supports your user-interface project is just as important as it is for any other (particularly large-scale) software development activity.

4.3.4 Phase 4: Evaluation
In the final phase of the design cycle, developers test and validate the system implementation to ensure that it conforms to the requirements and design set out earlier in the process. The outcome of the validation process is a validation report specifying test performance. As discussed above, a straightforward approach to validate a system specified using use cases is simply to check that each use case can be completed successfully. Since an interactive system is the sum of all of its conceivable user operations, such a test covers all of the system functionality. Depending on this outcome, the design team can decide to proceed with production and deployment of the system or to continue another cycle through the design process.

Validation is a vital part of the design process. Theatrical producers know that extensive rehearsals and previews for critics are necessary to ensure a successful opening night. Early rehearsals may involve only the key performers wearing street clothes, but as opening night approaches, dress rehearsals with the full cast, props, and lighting are required. Aircraft designers carry out wind-tunnel tests, build plywood mockups of the cabin layout, construct complete simulations of the cockpit, and thoroughly flight-test the first prototype. Similarly, website designers know that they must carry out many small and some large pilot tests of components before release to customers (Rubin and Chisnell, 2008). In addition to a variety of expert review methods, tests with the intended users, surveys, and automated analysis tools are proving to be valuable. Procedures vary greatly depending on the goals of the usability study, the number of expected users, the danger of errors, and the level of investment. Chapter 5 covers a range of suitable evaluation methods for this phase in depth.

4.4 Design Frameworks

While the design process discussed above generally should remain the same for all your projects, the approach to performing it may vary. The concept of design frameworks captures this idea: the specific flavor and approach a design team takes in conducting the design process. More specifically, interaction design practice over the past few decades has unearthed several unique approaches to conducting the design process. This section reviews the concepts of user-centered design (UCD), participatory design (PD), and the nascent idea of agile interaction design.

4.4.1 User-centered design

Many software development projects fail to achieve their goals; some estimates of the failure rate put it as high as 50% (Jones, 2005). Much of this problem can be traced to poor communication between developers and their business clients or between developers and their users. The result is often systems and interfaces that force the users to adapt and change their behavior to fit the interface rather than an interface that is customized to the needs of the users.

User-centered design (UCD) is a counterpoint to this fallacy and prescribes a design process that primarily takes the needs, wants, and limitations of the actual end users into account during each phase of the design process (Lowdermilk, 2013). Directly involving the intended users in the process constantly challenges the assumptions of the design team about user behavior in the real world and gives designers a much-needed understanding of what their users actually need. In particular, careful attention to user-centered design issues during the early stages of software development dramatically reduces both development time and cost. UCD leads to systems that generate fewer problems during development and have lower maintenance costs over their lifetimes. They are easier to learn, result in faster performance, reduce user errors substantially, and encourage users to explore features that go beyond the minimum required to get by. Most importantly, UCD reduces the risk of designers building the "wrong system": a system that the end users neither need nor asked for. In addition, user-centered design practices help organizations align system functionality with their business needs and priorities.

While the main premise of UCD-user involvement-is straightforward, it is also its most significant challenge. For example, finding users may be difficult because of the need to select a manageable number of representative users, because the users may be unable or unwilling to participate, and because the users often lack the technical expertise needed to communicate effectively with the designers. Even when these challenges have been overcome, many users may not have a clear understanding of what they need in the new system or product. Successful developers work carefully to understand the business's needs and refine their skills in eliciting accurate requirements from non-technical business managers. In addition, since business managers may lack the technical knowledge to understand proposals made by the developers, dialogue is necessary to reduce confusion about the organizational implications of design decisions.

4.4.2 Participatory design
Going beyond user-centered design, participatory design (PD) (also known as cooperative design in Scandinavia) is the direct involvement of people in the collaborative design of the things and technologies they use. The arguments in favor suggest that more user involvement brings more accurate information about tasks and an opportunity for users to influence design decisions. The sense of participation that builds users' ego investment in successful implementation may be the biggest influence on increased user acceptance of the final system (Kujala, 2003; Muller and Druin, 2012). On the other hand, extensive user involvement may be costly and may lengthen the implementation period. It may also generate antagonism from people who are not involved or whose suggestions are rejected, and it may force designers to compromise their designs to satisfy incompetent participants.

Participatory design experiences are usually positive, however, and advocates can point to many important contributions that would have been missed without user participation. Many variations of participatory design have been proposed that engage participants to create dramatic performances, photography exhibits, games, or merely sketches and written scenarios. For example, users can be asked to sketch interfaces and use slips of paper, pieces of plastic, and tape to create low-fidelity early prototypes. A scenario walkthrough can be recorded on video for presentation to managers, users, or other designers. High-fidelity prototypes and simulations can also be key in eliciting user requirements.

Careful selection of users helps to build a successful participatory design experience. A competitive selection increases participants' sense of importance and emphasizes the seriousness of the project. Participants may be asked to commit to repeated meetings and should be told what to expect about their roles and their influence. They may have to learn about the technology and business plans of the organization and be asked to act as a communication channel to the larger group of users that they represent.


The social and political environment surrounding the implementation of complex interfaces is not amenable to study by rigidly defined methods or controlled experimentation. Social and industrial psychologists are interested in these issues, but dependable research and implementation strategies may never emerge. The sensitive project leader must judge each case on its merits and must decide on the correct level of user involvement. The personalities of the participatory design team members are such critical determinants that experts in group dynamics and social psychology may be useful as consultants. Many questions remain to be studied, such as whether homogeneous or diverse groups are more successful, how to tailor processes for small and large groups, and how to balance decision-making control between typical users and professional designers.

The experienced interaction designer knows that organizational politics and the preferences of individuals may be more important than technical issues in governing the success of an interactive system. For example, warehouse managers who see their positions threatened by an interactive system that provides senior managers with up-to-date information through digital displays may try to ensure that the system fails by delaying data entry or by being less than diligent in guaranteeing data accuracy. The interaction designer should take into account the system's effect on users and should solicit their participation to ensure that all concerns are made explicit early enough to avoid counterproductive efforts and resistance to change. Novelty is threatening to many people, so clear statements about what to expect can be helpful in reducing anxiety.

Ideas about participatory design are being refined with diverse users, ranging from children to older adults. Arranging for participation is difficult for some users, such as those with cognitive disabilities or those whose time is limited (for example, surgeons). The levels of participation are becoming clearer; one taxonomy describes the roles of children in developing interfaces for children, older adults in developing interfaces whose typical users will be other older adults, and so on, with roles varying from testers to informants to partners (Druin, 2002; Fig. 4.2). Testers are merely observed as they try out novel designs, while informants comment to designers through interviews and focus groups. The key characteristic of participatory design is that the design partners are active, first-class members of the product design team.

4.4.3 Agile interaction design
Traditional design processes can be described as heavyweight in that they require significant investments in time, manpower, and resources to be successful. In particular, such processes are often not sufficiently reactive to today's fast-moving markets and dynamic user audiences. Originally hailing from software engineering, agile development is a family of development methods for self-organizing, dynamic teams that facilitate flexible, adaptive, and rapid development that is robust to changing requirements and needs. These methods are based on evolutionary development, where software is built incrementally and in rapid release cycles. Similarly, rapid prototyping comes from manufacturing disciplines and describes a family of techniques for quickly fabricating physical parts or assemblies using computer-aided design (CAD). Both methods counter traditional heavyweight processes that have plagued design and facilitate a more flexible and, indeed, agile approach to design. Taken together, the methods can also be applied to interaction design to enable the rapid creation of interactive systems to meet user needs. In fact, taking users and usability into account during agile development may help to address a common weakness of agile development methods: Constant interface changes due to continuous iterative design may lead to an inconsistent and confusing user experience poorly matched to the user.

FIGURE 4.2
Intergenerational and interdisciplinary design team from the University of Maryland's KidsTeam working on new human-computer interaction technologies using paper prototypes (http://hcil.umd.edu/children-as-design-partners/).

Thus, agile interaction design uses lightweight design processes that facilitate the incremental and iterative nature of agile software development. Instead of costly and time-consuming documentation, high-fidelity prototypes, and usability evaluations and workshops that are common to heavyweight design processes, agile interaction design will use sketches, low-fidelity mockups, and fast usability inspections (Gundelsweiler et al., 2004). This enables practical and pragmatic design, short development cycles, and dynamic designs that are responsive to changing needs. Good resources for more information on agile interaction design and extreme usability (XU) can be found in Ambler (2002, 2008).

The contemporary "maker culture" movement of technology-based tinkering and manufacturing is a prime example of agile methods in action, where the focus is heavily on rapid and informal experimentation and prototyping by like-minded individuals gathering in so-called makerspaces, hackerspaces, or fablabs. Fig. 4.3 shows the Hackerspace at University of Maryland, College Park. For more information on maker culture, see Anderson (2014).

FIGURE 4.3
Professor Jon Froehlich and his students working in the HCIL Hackerspace at University of Maryland, College Park.

4.5 Design Methods

Design methods are the practical building blocks that form the actual day-to-day activities in the design process. There are dozens of design methods in the literature, but designers may want to focus on the most common ones (discussed below). See Holtzblatt and Beyer (2014) and Jacko (2012) for more details on specific methods or additional methods beyond these.

What is the relation between design frameworks and design methods? It is certainly true that specific design frameworks have an affinity to specific design methods; for example, participatory and user-centered design tends to incorporate a lot of ethnographic observation, whereas rapid and agile development employs sketching to a high degree. However, the design frameworks also provide a flavor for the overall process and each of the design methods: An agile approach to sketching will focus on collecting quick ideas from the design team, whereas a user-centered or participatory approach will let the intended users themselves be part of the sketching process. The description below discusses such variations and affinities.

4.5.1 Ideation and creativity

One way to think about design is as an incremental fixation of the solution space, where the range of possible solutions is gradually whittled down until only a single solution exists. This is the final product or service that then goes on to ship and be deployed. Gradually reducing the solution space in this manner is called convergence or convergent thinking, particularly for teams of designers
[Figure: diagram of the design process alternating between divergence and
convergence phases, beginning from an initial specification and gradually
narrowing toward a final design.]


CHAPTER

Evaluation and the User Experience

"The test of what is real is that it is hard and rough. . . . What is pleasant belongs in dreams."

Simone Weil
Gravity and Grace, 1947

CHAPTER OUTLINE
5.1 Introduction

5.2 Expert Reviews and Heuristics

5.3 Usability Testing and Laboratories

5.4 Survey Instruments

5.5 Acceptance Tests

5.6 Evaluation during Active Use and Beyond

5.7 Controlled Psychologically Oriented Experiments



5.1 Introduction

Designers can become so entranced with their creations that they may fail to
evaluate them adequately. Experienced designers have attained the wisdom
and humility to know that extensive testing and evaluation are necessities. If
feedback is the "breakfast of champions," then testing and evaluation is the
"dinner of the gods." However, careful choices must be made from the large
menu of evaluation possibilities to create a balanced meal.

There are many factors that influence when, where, and how evaluation is performed
within the development cycle. Some sample factors include the following:

• Stage of design (early, middle, late)

• Novelty of the project (well-defined versus exploratory)

• Number of expected users

• Criticality of the interface (for example, life-critical medical system versus
museum-exhibit system)

• Costs of the product and finances allocated for testing

• Time available

• Experience of the design and evaluation team

• Environment where interface is used

The range of evaluation plans might be anywhere from an ambitious two-year
test with multiple phases for a new national air-traffic-control system to a one-day
test with six users for a small internal website. Costs similarly range from
modest to substantial. Testing should occur at different times in the evaluation
cycle, ranging from early design to just before release.

A few years ago, evaluating usability was seen as a good idea
that might help you get ahead of the competition. However, the rapid growth of
interest in the user experience means that failing to test is now risky indeed. Not
only has the competition strengthened, but customary design practice now
requires adequate testing and follow-through with recommended changes as
appropriate and as time and budgeting permit. Failure to perform and document
testing, or to heed the changes recommended by the evaluation process, could
lead to failed contract proposals or malpractice lawsuits from users when errors
arise that could have been avoided had the problems been detected and changes made.

One troubling aspect of testing is the uncertainty that remains even after
exhaustive testing by multiple methods. Perfection is not possible in complex
human endeavors, so planning must include continuing methods to assess and
repair problems during the life cycle of an interface. Second, even though
problems may continue to be found, at some point a decision has to be made about
completing prototype testing, moving forward with the final design, and delivering
the product. Third, most testing methods will account appropriately for
normal usage, but performance is extremely difficult to test in unpredictable
situations or times with high levels of input, such as nuclear reactor control,
air-traffic-control emergencies, or heavily subscribed voting times (e.g., presidential
elections). Development of testing methods to deal with stressful situations
and even with partial equipment failures will have to be undertaken as user
interfaces are developed for an increasing number of life-critical applications.
Traditional lab testing (Section 5.3) may not represent, accurately and with
sufficient fidelity, the high-stress and often hostile environments in which systems
developed for healthcare providers, first responders, or the military are
employed. Likewise, testing a global-positioning driving system will not work
in a laboratory or other stationary location; it can only be tested out in the field.
Some special medical devices may also need to be tested in their natural
environments, such as a hospital, an assisted living facility, or even a private home.
Mobile devices are better evaluated in their natural contexts as well. Evaluations
might need to be done "in the wild" as field studies, creating situations where
the evaluator may not be nearby recording and observing (Rogers et al., 2013).

Discussions about the best ways to do usability testing and how to report the
results generate lively debate among researchers. The choice of evaluation
methodology (Vermeeren et al., 2010) must be appropriate for the problem or research
question under consideration. Usability evaluators must broaden their methods
and be open to non-empirical methods such as user sketches (Greenberg et al.,
2012) and ethnographic studies. Producing sketches of possible user-interface
designs, similar to the design sketches used by architects, is one interesting
approach. This allows more alternatives to be explored in the early stages, before
the design becomes permanent.

Usability is about more than just ease of use; the entire user experience needs
to be considered. A portion of that is defining whether the system is useful
(MacDonald and Atwood, 2014). Today, complex systems exist that are hard to
test with simple controlled experiments. Active discussions continue concerning
the number of users (Schmettow, 2012; Kohavi et al., 2013) that should participate
in a usability study. Although a larger number of participants adds power and
strength to the recommendations that come forth from usability studies, it is equally
important to focus on common tasks and potentially troublesome tasks. Usability
and user experience must be viewed as a multi-dimensional concept from
varying perspectives. Testing novel devices, such as direct-touch tabletops, may
require special considerations. Usability inspection techniques may have to be
modified to take into account the concept of shared and personal spaces when
using large displays. Devices today range from the very small up to wall-size
and even mall-size, and today's users are sophisticated, with high levels of
expectations based upon a multitude of previous experiences. Being aware that
some systems are used by thousands, even millions, of users can affect the
usability testing process and the user experience.

Usability testing has become an established and accepted part of the design
process (see Chapter 4), but it needs to be broadened and understood in the context
of today's highly sophisticated systems, a diversity of users with high
expectations, mobile and other innovative devices (such as gaming systems and
controllers), and competition and speed in the marketplace. A series of usability
evaluations and related analyses have been conducted over the years by Rolf
Molich, referred to as the Comparative Usability Evaluation (CUE) studies
(http://www.dialogdesign.dk/CUE.html). These findings have shown that
the number of usability problems in a website is so large that only a fraction of the
problems will be found, and that even professional usability evaluators can make
mistakes in the evaluation process. Spool (2007) suggests three radical changes
to the usability evaluation process: (1) stop making recommendations and
instead present observational findings, (2) stop conducting evaluations and
push the research onto the design team, and (3) seek out new techniques because
new tools are needed. Other studies look at the relevance of empirical studies
(Bargas-Avila and Hornbæk, 2011) and the move from quantitative data to qualitative
data (Dimond et al., 2012). As HCI is maturing, perspectives are changing
(Rota and Lund, 2013). Experiential computing requires an expanded perspective
that includes situated, cultural, emotional, and phenomenological aspects.
More studies are being done in real-life environments. Researchers are still measuring,
but now adding social dimensions and affective states including fun,
emotion, enjoyment, and creating a fulfilling user experience. An interesting
30-year history of usability and lessons learned for future usability testing is
presented by Lewis (2014). This is an exciting and provocative time in usability
and user experience evaluation; practitioners should heed this advice, look
closely at current procedures, and continue to grow in the area of user experience.

This chapter is organized as follows. Section 5.2 discusses expert reviews and
heuristics, including heuristics for specialized devices like mobile and gaming.
Section 5.3 covers conventional usability labs and the spectrum of usability
testing. Section 5.4 provides some advice on survey instruments. Section 5.5
covers acceptance testing, with Section 5.6 continuing with evaluation during

See also:

Chapter 1, Usability of Interactive Systems

Chapter 2, Universal Usability

Chapter 4, Design

Chapter 13, The Timely User Experience


active use and beyond. Finally, the chapter concludes with Section 5.7, which
covers controlled psychologically oriented experiments.

5.2 Expert Reviews and Heuristics

A natural starting point for evaluating new or revised interfaces is to present
them to colleagues or customers and ask for their opinions. Such informal demos
with test subjects can provide some useful feedback, but more formal so-called
expert reviews have proven to be far more effective. These methods depend on
having experts (whose expertise may be in the application or user-interface
domain) available on staff or as consultants. The reviews can then be conducted
rapidly and on short notice by having the expert walk through the key functionality
of the interface using a disciplined approach.

Expert reviews can occur early or late in the design phase. The outcome may
be a formal report with problems identified or recommendations for changes.
Alternatively, the expert review may culminate in a discussion with or presentation
to designers or managers. Expert reviewers should be sensitive to the design
team's ego, involvement, and professional skill; suggestions should be made
cautiously in recognition of the fact that it is difficult for someone freshly
inspecting an interface to fully understand the design rationale and development
history. When reviewing complex interfaces, such as gaming applications,
domain expertise can be a critical component (Barcelos et al., 2012). The reviewers
can note possible problems to discuss with the designers, but development
of solutions generally should be left to the designers.

Expert reviews usually take from half a day to one week, although a lengthy
training period may be required to explain the task domain or operational procedures.
It may be useful to employ the same expert reviewers as well as fresh
ones as the project progresses. There are a variety of expert-review methods
from which to choose.

Heuristic evaluation. The expert reviewers critique an interface to determine
conformance with a short list of design heuristics, such as the Eight Golden Rules
(Section 3.3.4). It makes an enormous difference if the experts are familiar with
the rules and are able to interpret and apply them. Although interfaces have
changed vastly over the years, the creation of most sets of heuristics is based
on those proposed by Nielsen (1994). Today, there are many different types of
devices that may be subject to a heuristic evaluation, and it is important that the
heuristics match the application. Box 5.1 lists some heuristics developed specifically
for video games. A similar set of 29 playability heuristics also exists. This
set splits the heuristics into three categories: game usability, mobility heuristics,
and gameplay heuristics (Korhonen and Koivisto, 2006). Gameplay heuristics are


BOX 5.1
Heuristics for the gaming environment (Pinelle et al., 2008).

• Provide consistent responses to the user's actions.

• Allow users to customize video and audio settings, difficulty, and game speed.

• Provide predictable and reasonable behavior for computer-controlled units.

• Provide unobstructed views that are appropriate for the user's current actions.

• Allow users to skip non-playable and frequently repeated content.

• Provide intuitive and customizable input mappings.

• Provide controls that are easy to manage and that have an appropriate level of
sensitivity and responsiveness.

• Provide users with information on game status.

• Provide instructions, training, and help.

• Provide visual representations that are easy to interpret and that minimize
the need for micromanagement.

the most difficult to evaluate because familiarity with all aspects of the game is
required. Using the heuristics to follow good interaction design principles while
maintaining the challenge and suspense of the game is a difficult balance. Other
specialized heuristics exist, such as for mobile app design (Joyce et al., 2014) and
interactive systems (Masip et al., 2011).
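To make the mechanics concrete, here is a minimal sketch, not from the chapter, of how an evaluator might record heuristic violations during a review session so that the most severe findings lead the report. The class names, severity codes, and example data are all illustrative assumptions.

```python
# Illustrative record-keeping for a heuristic evaluation session.
from dataclasses import dataclass, field

@dataclass
class Violation:
    heuristic: str   # e.g., one of the Box 5.1 heuristics
    location: str    # screen or feature where the problem appears
    severity: int    # 1 = cosmetic ... 4 = usability catastrophe (assumed scale)
    note: str = ""

@dataclass
class ReviewSession:
    evaluator: str
    violations: list = field(default_factory=list)

    def record(self, heuristic, location, severity, note=""):
        self.violations.append(Violation(heuristic, location, severity, note))

    def worst_first(self):
        # Sort findings so the most severe problems lead the report.
        return sorted(self.violations, key=lambda v: -v.severity)

session = ReviewSession("Expert A")
session.record("Provide users with information on game status",
               "pause menu", severity=3, note="No health indicator visible")
for v in session.worst_first():
    print(f"[severity {v.severity}] {v.heuristic} @ {v.location}: {v.note}")
```

Because different experts find different problems, keeping one such log per evaluator also makes it easy to merge and compare findings across reviewers.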

Guidelines review. The interface is checked for conformance with the organizational
or other guidelines document (see Chapter 1 for a list of organizational
guidelines documents and Section 3.2 and Chapter 4 for more on guidelines).
Because guidelines documents may contain a thousand items or more, it may
take the expert reviewers some time to absorb them and days or weeks to review
a large interface.

Consistency inspection. The experts verify consistency across a family of
interfaces, checking the terminology, fonts, color schemes, layout, input and
output formats, and so on, within the interfaces as well as any supporting
materials. Software tools (Section 5.6.5) can help automate the process as well as
produce concordances of words and abbreviations. Large-scale interfaces are often
developed by several groups of designers; a consistency inspection can help smooth
over the seams and provide a common and consistent look and feel.

Cognitive walkthrough. The experts simulate users walking through the
interface to carry out typical tasks. High-frequency tasks are a starting point, but
rare critical tasks, such as error recovery, also should be walked through. Some
form of simulating a day in the life of a user should be part of the expert review
process. Cognitive walkthroughs were initially developed for interfaces that
can be learned by exploratory browsing (Wharton et al., 1994), but they are useful
even for interfaces that require substantial training. An expert might try the
walkthrough privately and explore the system, but there also should be a group
meeting with designers, users, or managers to conduct a walkthrough and
provoke discussion. Extensions can cover website navigation and incorporate
richer descriptions of users and their goals. Newer walkthrough models include
the collaborative critique method, which assesses the user's cognitive and physical
effort with the interaction (Babaian et al., 2012).

Formal usability inspection. The experts hold a courtroom-style meeting,
with a moderator or judge, to present the interface and to discuss its merits and
weaknesses. Design-team members may rebut the evidence about problems in
an adversarial format. Formal usability inspections can be educational experiences
for novice designers and managers, but they may take longer to prepare
and need more personnel to carry out than do other types of review.

Expert reviews can be scheduled at several points in the development process,
when experts are available and when the design team is ready for feedback.
The number of expert reviews will depend on the magnitude of the project
and on the amount of resources allocated. Often a domain expert might review
the tool, but be aware that the expert may not be skilled in using the tool itself.

An expert review report should aspire to comprehensiveness rather than making
opportunistic comments about specific features or presenting a random collection
of suggested improvements. The evaluators might use a guidelines
document to structure the report, then comment on novice, intermittent, and
expert features and review consistency across all displays, paying attention to
ensure that the usability recommendations are both useful and usable. Some
suggestions for writing effective usability recommendations can be found in Box 5.2.

If the report ranks recommendations by importance and expected effort level,
managers are more likely to implement them (or at least the high-payoff, low-cost
ones). For example, in one expert review, the highest priority was to shorten a
three- to five-minute login procedure that required eight dialog boxes and passwords
on two networks. The benefit to already over-busy users was obvious,
and they were delighted with the improvement. Common middle-level recommendations
include reordering the sequence of pages, providing improved instructions
or feedback, and removing non-essential actions. Expert reviews should also include
required small fixes such as spelling mistakes, poorly aligned data-entry fields, or
inconsistent button placement. A final category includes less vital fixes and novel
features that can be addressed in the next version of the interface.
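The ranking just described is simple enough to automate. Below is a small, hypothetical sketch (the items, scales, and scoring rule are invented for illustration) that orders recommendations so high-importance, low-effort items surface first.

```python
# Illustrative sketch: rank expert-review recommendations so that
# "high-payoff, low-cost" items appear at the top of the report.
recommendations = [
    {"fix": "Shorten eight-dialog login procedure", "importance": 5, "effort": 2},
    {"fix": "Reorder sequence of pages",            "importance": 3, "effort": 2},
    {"fix": "Correct spelling mistakes",            "importance": 2, "effort": 1},
    {"fix": "Add novel dashboard view",             "importance": 2, "effort": 5},
]

# Higher importance first; break ties by lower effort.
for r in sorted(recommendations, key=lambda r: (-r["importance"], r["effort"])):
    print(f'importance={r["importance"]} effort={r["effort"]}: {r["fix"]}')
```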

Expert reviewers should be placed in a situation as similar as possible to the
one that intended users will experience. They should take training courses, read
the documentation (if it exists), take tutorials, and try the interface in as close to
a realistic work environment as possible, complete with noise and distractions.


BOX 5.2
Making usability recommendations useful and usable (Molich et al., 2007).

• Communicate each recommendation clearly at the conceptual level.

• Ensure that the recommendation improves the overall usability of the application.

• Be aware of the business or technical constraints.

• Show respect for the product team's constraints.

• Solve the whole problem, not just a special case.

• Make recommendations specific and clear.

• Avoid vagueness by including specific examples in your recommendations.

However, expert reviewers may also retreat to a quieter environment for a
detailed and extensive review of the entire interface.

Another approach, getting a bird's-eye view of an interface by studying a full
set of printed pages laid out on the floor or pinned to walls, has proved to be
enormously fruitful in detecting inconsistencies and spotting unusual patterns.
The bird's-eye view enables reviewers to quickly see whether the fonts, colors, and
terminology are consistent and whether the multiple developers have adhered
to a common style.

Expert reviewers may also use software tools to speed their analyses, especially
with large and complex interfaces. Sometimes string searches on design documents,
help text, or program code can be valuable, but more specific interface-design
analyses, such as web-accessibility validation, privacy-policy checks,
and download-time reduction, are growing more effective. A further discussion
of automated tools can be found in Section 5.6.5.

The danger with expert reviews is that the experts may not have an adequate
understanding of the task domain or user communities. Different experts tend
to find different problems in an interface, so involving three to five experts in
the review can be highly productive. Usability testing can offer additional
advice and should be used as a necessary complement. Experts come in many
flavors, and conflicting advice can further confuse the situation (cynics say, "For
every Ph.D., there is an equal and opposite Ph.D."). To strengthen the possibility
of a successful expert review and an enhanced user experience, it helps to choose
knowledgeable experts who are familiar with the project and task domain and
who have a long-term relationship with the organization. These people can be
called back to see the results of their intervention, and they can be held accountable.
However, even experienced expert reviewers have difficulty knowing how
typical users, especially first-time users, will behave.


5.3 Usability Testing and Laboratories

The emergence of usability testing and laboratories since the early 1980s is an
indicator of the profound shift in attention toward user experience and user
needs. Traditional managers and developers resisted at first, saying that usability
testing seemed like a nice idea but that time pressures or limited resources
prevented them from trying it. As experience grew and successful projects gave
credit to the testing process, demand swelled and design teams began to compete
for the scarce resource of the usability laboratory staff. Managers came to
realize that having a usability test on the schedule was a powerful incentive to
complete a design phase. The usability test report provided supportive confirmation
of progress and specific recommendations for changes. Designers sought
the bright light of evaluative feedback to guide their work, and managers saw
fewer disasters as projects approached delivery dates. The remarkable surprise
was that usability testing not only sped up many projects but also produced
dramatic cost savings (Rubin and Chisnell, 2008; Lund, 2011; Hartson and Pyla,
2012). As a matter of fact, the words usability, usability testing, and user experience
(UX) have now made their way into our common vocabulary.

Usability laboratory advocates split from their academic roots as these practitioners
developed innovative approaches that were influenced by advertising
and market research. While academics were developing controlled experiments
to test hypotheses and support theories, practitioners developed usability-testing
methods to refine user interfaces rapidly. Controlled experiments (Section
5.7) have at least two treatments and seek to show statistically significant
differences; usability tests are designed to find flaws in user interfaces. Both
strategies use a carefully prepared set of tasks, but usability tests have fewer
participants (maybe as few as three), and their outcome is a report with recommended
changes as opposed to validation or rejection of a hypothesis. Gathering
qualitative data is taking a larger role in the user evaluation process.
Sometimes, because of the novelty or size of the device, conventional testing
tasks may not be appropriate. Of course, there is a useful spectrum of possibilities
between rigid controls and informal testing, and sometimes a combination
of approaches is appropriate, always keeping the user experience in mind.

5.3.1 Usability labs

The movement toward usability testing stimulated the construction of usability
laboratories (Nielsen, 1993; Rubin and Chisnell, 2008). Having a physical laboratory
makes an organization's commitment to usability clear to employees, customers,
and users. A typical modest usability laboratory would have two
10-by-10-foot areas, divided by a half-silvered mirror: one for the participants


to do their work and the other for the testers and observers (designers, managers,
and customers). IBM was an early leader in developing usability laboratories.
Microsoft started later but has wholly embraced the idea with many
usability test labs. Many other software development companies have followed
suit, and a consulting community that will do usability testing for hire also has
emerged. See Fig. 5.1 for a layout of a typical usability lab.

Usability laboratories are typically staffed by one or more people with expertise
in testing and user-interface design who may serve 10 to 15 projects per year
throughout an organization. The laboratory staff meet with the user experience
architect or manager at the start of the project to make a test plan with scheduled
dates and budget allocations. Usability laboratory staff members participate in
early task analysis or design reviews, provide information on software tools or

FIGURE 5.1
Noldus Usability Lab
The usability lab consists of two areas, the testing room and the observation
room. The testing room is typically smaller and accommodates a small number of
people. Those in the observation room can see into the testing room, typically via a
one-way mirror. The observation room is larger and can hold the usability testing
facilitators, with ample room to bring in others, such as the developers of the product
being tested. There may be recording equipment as well.


literature references, and help to develop the set of tasks for the usability test.
Two to six weeks before the usability test, the detailed test plan is developed; it
contains the list of tasks plus subjective satisfaction and debriefing questions.
The number, types, and sources of participants are also identified; sources
might be customer sites, temporary personnel agencies, or advertisements
placed in newspapers. A pilot test of the procedures, tasks, and questionnaires
with one to three participants is conducted approximately one week before the
test, while there is still time for changes. This typical preparation process can be
modified in many ways to suit each project's unique needs. Fig. 5.2 provides a
detailed breakdown of steps to follow when conducting usability assessments.

After changes are approved, participants are chosen to represent the intended
user communities, with attention to their backgrounds in computing, experience
with the task, motivation, education, ability with the natural language
used in the interface, and familiarity with the environment. Usability laboratory
staff also must control for physical concerns (such as eyesight, left- versus
right-handedness, age, gender, education, and computer experience) and for
other experimental conditions (such as time of day, day of the week, physical

FIGURE 5.2
Step-by-Step Usability Guide
This guide from Usability.gov shows all the steps, from planning a usability test to
performing the actual test and reporting the results. (The flowchart organizes the
steps into four phases: Plan, Analyze, Design, and Test and Refine.)


surroundings, noise, room temperature, and level of distractions). The main
goal is to find participants who are representative of the intended user audience.

Recording participants performing tasks is often valuable for later review and
for showing designers or managers the problems that users encounter. Reviewing
the recordings is a tedious job, so careful logging and annotation (preferably
automatic) during the test are vital to reduce the time spent finding critical
incidents. Most usability laboratories have acquired or developed software to
facilitate logging of user activities (e.g., typing, mousing, reading displays, reading
manuals) by observers, with automatic timestamping. Some of the more
popular data-logging tools include Adobe Prelude Live Logger, Morae from
TechSmith, LogSquare from Mangold, Bit Debris, Observer XT from Noldus,
and Ovo Logger. Participants may be anxious about the recording process at the
start of the test, but within minutes they usually focus on the tasks and ignore the
recording process. The reactions of designers seeing the actual recordings of
users failing with their interfaces are sometimes powerful and may be highly
motivating. When designers see participants repeatedly picking the wrong menu
item, they often realize that the label or placement needs to be changed.
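In the spirit of the tools just listed, here is a minimal, hypothetical activity logger: each observed event is written with an automatic elapsed-time stamp so that critical incidents can be located quickly during review. The event names and file path are invented for illustration.

```python
# Illustrative event logger with automatic timestamping for usability sessions.
import csv, time

class ActivityLogger:
    def __init__(self, path):
        self.start = time.monotonic()
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["seconds_elapsed", "event", "detail"])

    def log(self, event, detail=""):
        # Record elapsed seconds since the session started.
        self.writer.writerow([f"{time.monotonic() - self.start:.2f}", event, detail])

    def close(self):
        self.file.close()

log = ActivityLogger("session01.csv")
log.log("task_start", "Task 1: find product page")
log.log("click", "wrong menu item: 'Settings'")  # a critical incident to annotate
log.log("task_end", "Task 1 completed")
log.close()
```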

Another useful technique available to today's usability evaluation professional
is eye-tracking hardware and software (Fig. 5.3). The eye-tracking data can

FIGURE 5.3
This shows a picture of the glasses worn for eye-tracking. This particular device is
being used to track the participant's eye movements when using a mobile device.
Tobii is one of several manufacturers. (Tobii AB)


FIGURE 5.4
The eye-tracking software is attached to the airline check-in kiosk. It allows the
designer to collect data observing how the user "looks" at the screen. This helps
determine if various interface elements (e.g., buttons) are difficult (or easy) to find.
(Tobii AB)

show where participants gazed at the display and for how long. The results are
displayed in color-coded heat maps (Fig. 14.3) that clearly demonstrate which
areas of the display are viewed and which areas are ignored. This software has
dropped in price substantially and become less cumbersome; it can now be supplied
as a simple and affordable add-on to a computing device (Fig. 5.4). When
testing with small mobile devices, special equipment may be needed to capture
the user's device and associated activities (Fig. 5.5). Sometimes with mobile and
other technology platforms, appropriate testing may need to be done "in the
wild" with different constraints and procedures (Rogers et al., 2013).

At each design stage, the interface can be refined iteratively and the improved
version can be tested. It is important to fix even small flaws (such as spelling
errors or inconsistent layout) quickly because they influence user expectations.

5.3.2 Ethics in research practices with human participants
Participants should always be treated with respect and should be informed that
it is not they who are being tested; rather, it is the software and user interface
that are under study. They should be told what they will be doing


(for example, finding products on a website, creating a diagram using a mouse,
or studying a restaurant guide on a touchscreen) and how long they will be
expected to stay. Participation should always be voluntary, and informed consent
in research is important (Box 5.3). Sometimes deception may need to be included
in an experiment to fully test the hypothesis. Ethical practices would allow this
as long as the benefits outweighed any potential or real harm.

In the United States, the Institutional Review Board (IRB) governs
any research conducted on university campuses with human participants.
There are different levels of review and precise procedures that
must be followed. Special populations may also have unique considerations
that need to be attended to. Most universities have a representative
who can explain these procedures in detail. Other institutions
and organizations have guidelines on ethical research practices with
human participants.

FIGURE 5.5
Special mobile camera to track and record activities on a mobile device. Note the
camera is up and out of the way, allowing the user to use his or her normal finger
gestures to operate the device.
(© by Noldus Information Technology)

5.3.3 Think-aloud and related techniques
An effective technique during usability testing is to invite users to think aloud
(sometimes referred to as concurrent think-aloud) about what they are doing as
they are performing the task. The designer or tester should be supportive of the
participants, not taking over or giving instructions but prompting and listening
for clues about how they are dealing with the interface. Think-aloud protocols
yield interesting clues for observant usability testers; for example, they may
hear comments such as "This webpage text is too small ... so I'm looking for
something on the menus to make the text bigger ... maybe it's on the top in the
icons ... I can't find it ... so I'll just carry on."

After a suitable time period for accomplishing the task list (provided as part
of the evaluation protocol), usually one to three hours, the participants can be


BOX 5.3
Informed consent guidelines (Dumas and Loring, 2008).

Each informed consent statement should contain:

• The purpose of the study (an explanation of why the study is being done).

• The procedure being used for the study. This section should also include
a time expectation for the participant and the protocol for requesting a break.

• If there will be any type of recording, who will see the recordings, and
what happens to the recording material when the testing is completed
(not all studies involve recordings).

• A statement of confidentiality and how the anonymity of the participant is
preserved.

• Any risks to the participant (in most usability studies there is minimal risk).

• The fact that participation is voluntary and that the participant can withdraw
at any time with no penalty.

• Whom to contact with questions and for any further information after the
study, and a statement that initial questions about the testing have been
answered satisfactorily.

The informed consent statement should be signed prior to the start
of any testing.

invited to make general comments or suggestions or to respond to specific
questions. The informal atmosphere of a think-aloud session is pleasant and
often leads to many spontaneous suggestions for improvements. In their efforts
to encourage thinking aloud, some usability laboratories have found that having
two participants working together produces more talking, as one participant
explains procedures and decisions to the other (see Fig. 5.6). Researchers need to
be aware that people may not always say exactly what they are thinking. Also,
describing their thoughts can alter the process.

Another related technique is called retrospective think-aloud. With this technique,
after completing a task, users are asked what they were thinking as they
performed the task. The drawback is that users may not be able to wholly
and accurately recall their thoughts after completing the task; however, this
approach allows users to focus all their attention on the tasks they are performing
and generates more accurate timings. Two other variants include concurrent
probing and retrospective probing. These techniques both interfere with traditional
user-interface measurements and take the participant away from the task
at hand, but they provide insight into the user's thinking process.



FIGURE 5.6
Having people work in pairs gives the additional advantage of having some insight
into the thought process as they discuss it and an unobstructed view into their
information transfer channels (speech and body language, etc.). Coupling this with a
pattern (Elmqvist and Yi, 2012) such as pair analytics, a system can be evaluated in
the early formative stages.

It is important to consider timing when using think-aloud techniques. The
standard think-aloud procedure may alter the true task time, as verbalizing the
thought process creates additional cognitive load, and the users may pause
the task activity as they vocalize their thoughts. Think-aloud can also be used
when doing expert reviews. Retrospective think-aloud procedures will not alter
the task timings themselves, but because the users need to perform the tasks and
then reflect on and review them again, their overall time commitment may be
doubled. Also, be aware that using the think-aloud technique along with eye-tracking
may generate invalid results: Users' eyes may wander while they are
speaking, causing spurious data to be generated.

5.3.4 The spectrum of usability testing
Usability testing comes in many different flavors and formats. Most of the
current research demonstrates the importance of testing often and at varied
times during the design cycle. The purpose of the test and the type of data that is
needed are important considerations. Testing may be done at the exploratory
stage, when the designers are trying to conceive the correct design, or as a


validation effort to ensure that certain requirements were met. The following is
a list of the various types of usability testing. Testing can be performed using
combinations of these methods as well.

Paper mockups and prototyping. Early usability studies can be conducted
using paper mockups of pages to assess user reactions to wording, layout, and
sequencing. A test administrator plays the role of the computer by flipping the
pages while asking a participant user to carry out typical tasks. This informal
testing is inexpensive, rapid, and usually productive. Typically designers create
low-fidelity paper prototypes of the design, but today there are computer
programs (e.g., Microsoft Visio, SmartDraw, Gliffy, Balsamiq, MockingBird)
that allow designers to create more detailed high-fidelity prototypes with
minimal effort. Interestingly enough, users have been shown to respond more
openly to the lower-fidelity designs, potentially because the sketchy and less
polished appearance of early prototypes clearly communicates to the user that
the design can still be changed without major cost or time investment. Although
prototype tests are typically performed with the user and the administrator in the
same physical place, with today's technologies these activities can also be done
remotely. Additional information on prototyping can be found in Chapter 4.

Discount usability testing. This quick-and-dirty approach to task analysis,
prototype development, and testing has been widely influential because it lowers
the barriers to newcomers (Nielsen, 1993). A controversial aspect is the recommendation
to use only three to six test participants. Advocates point out that
most serious problems are found with only a few participants, enabling prompt
revision and repeated testing, while critics hold that a broader subject pool is
required to thoroughly test more complex systems. One resolution to the controversy
is to use discount usability testing as a formative evaluation (while designs
are changing substantially) and more extensive usability testing as a summative
evaluation (near the end of the design process). The formative evaluation identifies
problems that guide redesign, while the summative evaluation provides
evidence for product announcements ("94% of our 120 testers completed their
shopping tasks without assistance") and clarifies training needs ("with four
minutes of instruction, every participant successfully programmed the device").
Small numbers may be valid for some projects, but when dealing with web companies
with a large public web-facing presence, experiments may need to be run
at large scale with thousands of users (see A/B testing below).
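The arithmetic behind the "three to six participants" recommendation can be made explicit with the classic problem-discovery model of Nielsen and Landauer (1993): the proportion of usability problems found by n participants is 1 - (1 - L)^n, where L is the probability that a single participant reveals a given problem (L ≈ 0.31 was their reported average). The short computation below illustrates the curve; it is a worked example, not part of the chapter.

```python
# Problem-discovery curve: proportion of problems found by n participants.
L = 0.31  # average single-participant detection rate (Nielsen and Landauer, 1993)
for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} participants -> {found:5.1%} of problems found")
# With L = 0.31, five participants already uncover roughly 85% of problems,
# which is the reasoning advocates cite for small formative tests.
```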

Competitive usability testing. Competitive testing compares a new interface
to previous versions or to similar products from competitors. This approach is
close to a controlled experimental study (Section 5.7), and staff must be careful
to construct parallel sets of tasks and to counterbalance the order of presentation
of the interfaces. Within-subjects designs seem the most powerful because
participants can make comparisons between the competing interfaces; fewer
participants are needed, although each is needed for a longer time period.
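Counterbalancing the order of presentation is commonly done with a balanced Latin square, in which each interface appears in every position equally often across participants. The sketch below is illustrative only (the construction shown gives first-order carryover balance when the number of conditions is even).

```python
# Sketch: generate a balanced Latin square of presentation orders.
def balanced_latin_square(conditions):
    n = len(conditions)
    # Standard first row: 0, 1, n-1, 2, n-2, ...
    first = [0]
    k = 1
    while len(first) < n:
        first.append(k)
        if len(first) < n:
            first.append(n - k)
        k += 1
    # Each subsequent participant's order is a cyclic shift of the first row.
    return [[conditions[(x + p) % n] for x in first] for p in range(n)]

for i, order in enumerate(balanced_latin_square(["A", "B", "C", "D"]), 1):
    print(f"Participant {i}: {' -> '.join(order)}")
```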


A/B testing. This method tests different designs of an interface. Typically, it is
done with just two groups of users to observe and record differences between
the designs. Sometimes referred to as bucket testing, it is similar to a between-subjects
design (Section 5.7). This method of testing is often used with large-scale
online controlled experiments (Kohavi and Longbotham, 2015). A/B testing
involves randomly assigning users to either the control group
(no change) or the treatment group (with the change) and then having some
dependent measure that can be tested to see if there is a difference between the
groups (Fig. 5.7). Before running an A/B test, it is often suggested (Crook et al.,
2009) to run an A/A test, or null test. In A/A testing, there are still two groups,
but they both receive the same treatment (the control); this allows the variability
needed for power calculations to be estimated and the experimentation system
itself to be checked. In a correctly functioning system tested at a 95% confidence
level, the null hypothesis should be rejected only about 5% of the time. This testing
method has been used at Bing, where more than 200 experiments are run
concurrently with 100 million customers, spanning billions of changes. Some of
the items tested may be new ideas and others are modifications of existing items
(Kohavi et al., 2013).
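The statistical test behind a simple A/B comparison of a success rate (e.g., task completion) is a two-proportion z-test. The sketch below is a worked example with invented counts, not a report of any real experiment.

```python
# Two-proportion z-test comparing success rates in groups A (control) and B (treatment).
from math import sqrt, erfc

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)       # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                     # two-sided p-value
    return z, p_value

# Invented data: 700/1000 completions in A versus 760/1000 in B.
z, p = two_proportion_z(700, 1000, 760, 1000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # p < 0.05: reject the null hypothesis
```

Running the same computation on two groups that received identical treatments is exactly the A/A check described above: with a 95% confidence level, about one such comparison in twenty should come out "significant" by chance.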

Universal usability testing. This approach tests interfaces with highly diverse
users, hardware, software platforms, and networks. When a wide range of international
users is anticipated, such as for consumer electronics products, web-based
information services, or e-government services, ambitious testing is necessary
to clean up problems and thereby help ensure success. Trials with small
and large displays, slow and fast networks, and a range of operating systems
or Internet browsers will do much to improve the user experience. Being aware

FIGURE 5.7
Example of A/B Testing
Participants are assigned randomly to one of two testing groups (A or B). The
interface is similar, but a particular criterion is being evaluated and a different
version of the interface is given to each group. (In the diagram, participants are
split 50%/50%; Interface A scores 70% and Interface B scores 60% on the evaluation
criterion, e.g., efficiency.) The results are evaluated to see if there is a difference.
This typically is done with relatively large numbers of participants in each group.
It can be repeated many times with small variations between the interfaces each time.


of any perceptual or physical limitations of the users (e.g., vision impairments,
hearing difficulties, motor or mobility impairments) and modifying the testing
to accommodate these limitations will result in the creation of products that can
be used by a wider variety of users (see Chapter 2).

Field tests and portable labs. This testing method puts new interfaces to
work in realistic or more naturalistic environments in the field for a fixed trial
period. These same tests can be repeated over longer time periods to do
longitudinal testing. Field tests can be made more fruitful if logging software
is used to capture error, command, and help frequencies as well
as productivity measures. Portable usability laboratories with recording and
logging facilities have been developed to support more thorough field testing.
Today's computing devices are portable and easy to transport. If a large monitor
is needed, it can often be rented at the testing location. A different kind of field
testing involves supplying users with test versions of new software or consumer
products; tens or even thousands of users might receive beta versions and be
asked to comment. Some companies that provide this service include Noldus,
UserWorks, Ovo Studios, and Experience Dynamics. Sometimes the interface
requires true immersion into the environment, and an "in-the-wild" testing
procedure is required (Rogers et al., 2013).

Remote usability testing. Since web-based applications are available across
the world, it is tempting to conduct usability tests online, avoiding the complexity
and cost of bringing participants to a lab. This makes it possible to have larger
numbers of participants with more diverse backgrounds, and it may add to the
realism, since participants do their tests in their own environments and use their
own equipment. Participants can be recruited by e-mail from customer lists or
through online communities, including Amazon Mechanical Turk. This opens the
pool of participants to sophisticated users who, perhaps because of their remote
locations or other physical challenges, could not otherwise get to a lab location.
The downside is that there is less control over user behaviors and diminished
ability to observe users' reactions, although usage logs and phone interviews are
useful supplements. These tests can be performed both synchronously (users do
tasks at the same time while the evaluator observes) and asynchronously (users
perform tasks independently and the evaluator looks at the results later). Some
studies have shown remote usability testing to find more problems than traditional
usability testing. Synchronous remote usability testing has been shown to
be a valid evaluation technique. There are many platforms that support this type
of testing. They include Citrix GoToMeeting, Cisco WebEx, IBM Sametime, join.me,
and Google Hangouts. Interesting approaches include synchronous remote
usability testing using virtual worlds (Madathil and Greenstein, 2011).

Can-you-break-this tests. Game designers pioneered the can-you-break-this
approach to usability testing by providing energetic teenagers with the challenge
of trying to beat new games. This destructive testing approach, in which the users
try to find fatal flaws in the system or otherwise destroy it, has been used in
other types of projects as well and should be considered seriously. Users today
have little patience with flawed and poorly designed products and are often
fickle with company loyalty if reasonable competitors exist.

For all its success, usability testing does have at least two serious limitations:
It emphasizes first-time usage and provides limited coverage of the interface
features. Since usability tests are usually only one to three hours long, it is difficult
to ascertain how performance will be after a week or a month of regular
usage. Within the short time of a usability test, the participants may get to use
only a small fraction of the system's features, menus, dialog boxes, or help
pages. These and other concerns have led design teams to supplement usability
testing with varied forms of expert reviews.

Further criticisms of usability lab testing come from proponents of activity
theory and those who believe that more realistic test environments are
necessary to evaluate information appliances, ambient technologies, and other
consumer-oriented mobile devices. Furthermore, tests of interfaces used in
high-stress situations and mission-critical domains such as military combat,
law enforcement, first response, and similar situations often cannot be conducted
in traditional usability lab settings. Creating a realistic environment is
critical to adequately testing such interfaces, but it is not always possible. Designers
must be aware of the total cognitive or mental load placed on the users and its
implications.

Usability testing with mobile devices also needs special attention. Some
issues to be aware of include availability of extra batteries and chargers, signal
strength issues, network failures, ensuring that the user is focusing on the interface,
and being sure users and their fingers are not blocking the observer from
seeing what was tapped.

The continued interest in usability testing is apparent from the assortment of
books devoted to the topic. These sources (Dumas and Loring, 2008; Rubin and
Chisnell, 2008; Barnum, 2011; Nielsen and Budiu, 2012; Reiss, 2012; MacKenzie,
2013; Wilson, 2013; Preece et al., 2015) discuss setting up usability labs, the role
of the usability monitor, the collection and reporting of test data, and other
information needed to run professional usability tests.

5.3.5 Usability test reports
The U.S. National Institute of Standards and Technology (NIST) took a major
step toward standardizing usability test reports in 1997 when it convened a
group of software manufacturers and large purchasers to work for several years
to produce the Common Industry Format (CIF) for summative usability testing
results. The format describes the testing environment, tasks, participants, and
results in a standard way so as to enable consumers to make comparisons. The
group's work (http://www.nist.gov/itl/iad/vug/) is ongoing; the participants


are developing guidelines for formative usability test reports, and some best
practice guidelines are emerging. Key points are that it is important to understand
the audience (who will be reading the report) and to keep the report concrete
and specific.

5.4 Survey Instruments

User surveys (written or online) are familiar, inexpensive, and generally
acceptable companions for usability tests and expert reviews. Managers and
users can easily grasp the notion of surveys, and the typically large numbers of
respondents (hundreds to thousands of users) confer a sense of authority
compared to the potentially biased and highly variable results from small numbers
of usability-test participants or expert reviewers. The keys to successful
surveys are clear goals in advance and development of focused items that help
to attain those goals. Two critical aspects of survey design are validity and
reliability. Experienced surveyors know that care is needed during design,
administration, and data analysis (Lazar et al., 2009; Cairns and Cox, 2011;
Kohavi et al., 2013; Tullis and Albert, 2013). Additional information on surveys
can be found in Chapter 4.

5.4.1 Preparing and designing survey questions
A survey form should be prepared, reviewed by colleagues, and tested with a
small sample of users before a large-scale survey is conducted. Methods of
statistical analysis (beyond means and standard deviations) and presentation
(histograms, scatterplots, and so on) should also be developed before the final
survey is distributed. In short, directed activities are more successful than
unplanned statistics-gathering expeditions. Our experience is that directed
activities also provide the most fertile frameworks for unanticipated
discoveries. Since biased samples of respondents can produce erroneous
results, survey planners need to build in methods to verify that respondents
represent the population in terms of age, gender, experience, and other
relevant characteristics.
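One simple way to verify representativeness is a chi-square goodness-of-fit test comparing the sample's demographic breakdown to known population proportions. The sketch below is illustrative only (the age bands, counts, and shares are invented), and it assumes SciPy is available.

```python
# Sketch: chi-square goodness-of-fit check of sample representativeness.
from scipy.stats import chisquare

observed = [120, 260, 95, 25]                # respondents per age band (invented)
population_share = [0.20, 0.45, 0.25, 0.10]  # assumed population proportions
total = sum(observed)
expected = [share * total for share in population_share]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests the sample does not match the population on this
# characteristic, so results may need weighting or further recruitment.
```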

It is important to pre-test or pilot-test any survey instrument prior to actual
use. Users can be asked for their subjective impressions about specific aspects of
the interface, such as the representation of:

• Task domain objects and actions

• Interface domain metaphors and action handles

• Syntax of inputs and design of screen displays


It may also be useful to ascertain certain characteristics about the users,
including:

• Background demographics (age, gender, origins, native language, education,
income)

• Experience with computers (specific applications or software packages,
length of time, depth of knowledge, whether knowledge was acquired
through formal training or self-teaching)

• Job responsibilities (decision-making influence, managerial roles, motivation)

• Personality style (introvert versus extrovert, risk taking versus risk averse,
early versus late adopter, systematic versus opportunistic)

• Reasons for not using an interface (inadequate services, too complex, too
slow, afraid)

• Familiarity with features (printing, macros, shortcuts, tutorials)

• Feelings after using an interface (confused versus clear, frustrated versus in
control, bored versus excited)

Online and web-based surveys avoid the cost and effort of printing, distributing,
and collecting paper forms. Many people prefer to answer a brief survey on
a computer or other electronic device instead of filling in and returning a printed
form, although there is a potential bias in the self-selected sample. Some surveys
can have very large numbers of respondents. Some companies that provide
computerized surveys include SurveyMonkey, SurveyGizmo, Qualtrics, and
QuestionPro. Academic or educational discounts may be available.

Typically, participants are asked to respond to a series of statements
according to the following commonly used Likert scale:

Strongly agree     Agree     Neutral     Disagree     Strongly disagree

The items in the survey could be similar to the following:

• I can effectively perform the task using this interface.

• Items are placed where I expected to find them in the interface.

Such a list of statements can help designers to identify problems users are
having and to demonstrate improvements to the interface as changes are made;
progress is demonstrated by improved scores on subsequent surveys.
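A short worked example of this kind of analysis follows; it is illustrative only, using a common (but here assumed) coding of Strongly disagree = 1 through Strongly agree = 5 and invented response data.

```python
# Sketch: per-statement summary of Likert responses (coded 1-5).
from statistics import mean, stdev

responses = {
    "I can effectively perform the task using this interface": [4, 5, 3, 4, 2, 5, 4],
    "Items are placed where I expected to find them":          [2, 3, 2, 1, 3, 2, 4],
}

for statement, scores in responses.items():
    print(f"mean {mean(scores):.2f} (sd {stdev(scores):.2f})  {statement}")
# Low-scoring statements point to problem areas; rising means on later
# surveys demonstrate improvement, as described above.
```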

Another approach is to use a set of bipolar semantically anchored items
(pleasing versus irritating, simple versus complicated, concise versus redundant)
that ask users to describe their reactions to using the interface. Users have to rate
the items on 1-to-7 scales:

Hostile        1 2 3 4 5 6 7    Friendly
Easy to use    1 2 3 4 5 6 7    Difficult to use
Clear          1 2 3 4 5 6 7    Confusing


Yet another approach is to ask users to evaluate various aspects of the
interface design, such as the readability of characters, use of the terminology,
organization of the structure, or the meaning of the icons/controls. If users rate
one aspect of the system as poor, the designers have a clear indication of what
needs to be redone. If precise (as opposed to general) questions are used in
surveys, there is a greater chance that the results will provide useful guidance
for taking action.

Additional attention may be needed when dealing with special populations
(see Chapter 2). For example, questionnaires for children must be in age-appropriate
language, questionnaires for international users may need to be
translated, larger fonts may be needed for older adults, and special accommodations
may need to be made for users with disabilities.

5.4.2 Sample questionnaires
Questionnaires and surveys are commonly used in usability evaluation. Several
instruments and scales have been developed and refined over time. The early
questionnaires concentrated on elements such as clarity of fonts, appearance
on the display, and keyboard configurations. Later questionnaires dealt with
multimedia components, conferencing, and other current interface designs,
including consumer electronics and mobile devices. Here is some information
on a few (most use a Likert-like scale):

The Questionnaire for User Interaction Satisfaction (QUIS). The QUIS (http://lap.umd.edu/quis/)
has been applied in many projects with thousands of users, and
new versions have been created that include items relating to website design.
The University of Maryland's Office of Technology Commercialization licenses
the QUIS. Special licensing terms may be available for students. Table 5.1
contains a portion of the QUIS, including an example for collecting computer
experience data.

The System Usability Scale (SUS). Developed by John Brooke, it is sometimes referred to as the "quick and dirty" scale. The SUS consists of 10 statements with which users rate their agreement (on a 5-point scale). Half of the questions are positively worded, and the other half are negatively worded. A score is computed that can be viewed as a percentage. Table 5.2 contains a sample from the SUS.

TABLE 5.1
Questionnaire for User Interaction Satisfaction (QUIS) (© University of Maryland, 1997). Examples of the specific satisfaction scale questions:

5.4    Messages which appear on display:        confusing 1 2 3 4 5 6 7 8 9 clear   NA
5.4.1  Instructions for commands or choices:    confusing 1 2 3 4 5 6 7 8 9 clear   NA

TABLE 5.2
System Usability Scale (SUS) example (Brooke, 1996).

                                                 Strongly                  Strongly
                                                 disagree                  agree
1  I think that I would like to use this
   system frequently                                 1      2      3      4      5
2  I found the system unnecessarily complex          1      2      3      4      5
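The standard SUS scoring procedure (Brooke, 1996) subtracts 1 from each odd-numbered (positively worded) response, subtracts each even-numbered (negatively worded) response from 5, and multiplies the sum by 2.5 to yield a 0-100 score. A minimal sketch, with hypothetical responses:

```python
# Sketch of standard SUS scoring (Brooke, 1996): odd-numbered statements
# contribute (response - 1), even-numbered ones contribute (5 - response);
# the sum is scaled by 2.5 to give a 0-100 score. Responses are hypothetical.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses (1-5 each)")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0
```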

The Computer System Usability Questionnaire (CSUQ). A later development by IBM (based on the earlier PSSUQ), it contains 19 statements to which participants respond using a 7-point scale. Table 5.3 contains a sample from the CSUQ.

The Software Usability Measurement Inventory (SUMI). Developed by the Human Factors Research Group (HFRG), it contains 50 items designed to measure users' perceptions of their affect (emotional response), efficiency, and control and of the learnability and helpfulness of the interface (Kirakowski and Corbett, 1993). Table 5.4 contains a sample from the SUMI.

The Website Analysis and MeasureMent Inventory (WAMMI) questionnaire. It was designed for web-based evaluations and is available in more than a dozen languages (http://www.wammi.com/).

Although many of these questionnaires were developed a while ago, they still serve as reliable and valid instruments. Some have been transformed by changing the focus of the items asked about. Specialized questionnaires have been developed and tested based on these proven instruments. One example is the Mobile Phone Usability Questionnaire (MPUQ), which consists of 72 items

TABLE 5.3
Computer System Usability Questionnaire (CSUQ) example.

                                                    Strongly                Strongly
                                                    disagree                agree
1  Overall, I am satisfied with how easy it is
   to use this system.                                  1  2  3  4  5  6  7    NA
2  I can effectively complete my work using
   this system.                                         1  2  3  4  5  6  7    NA

TABLE 5.4
Software Usability Measurement Inventory (SUMI) example.

                                                        Agree   Undecided   Disagree
1  This software responds too slowly to inputs.           □         □          □
2  I would recommend this software to my colleagues.      □         □          □

broken down into six factors: ease of learning and use, helpfulness and problem-solving capabilities, affective aspect and multimedia properties, commands and minimal memory load, control and efficiency, and typical tasks for mobile phones (Ryu, 2009). Sample questions from the MPUQ can be found in Table 5.5. The SUS has also been used with cellphones as well as interactive voice systems, web-based interfaces, and other interfaces and continues to be a robust and versatile tool. UMUX-LITE is another option as a shortened SUS (Lewis et al., 2013). As with any metric, no score should be used in isolation. The best testing procedure, leading to the most confidence-inspiring results, triangulates the data from multiple methods, such as observations, interviews, logging of interface usage, and qualitative satisfaction data.

Writing and designing good questionnaires is an art as well as a science. Several books (Rubin and Chisnell, 2008; Sauro and Lewis, 2012; Tullis and Albert, 2013) and articles provide further reading on the use, validity, and development of good questionnaires. In addition to the standard measures of satisfaction, specialized devices (e.g., mobile devices) and gaming interfaces may require unique measures such as pleasure, joy, affect, challenge, or realism. Links to some older questionnaires include Gary Perlman's (http://garyperlman.com/quest/quest.cgi?form=USE) and Jurek Kirakowski's (http://www.ucc.ie/hfrg/resources/qfaq1.html).

TABLE 5.5
Mobile Phone Usability Questionnaire (MPUQ) example. Examples of sample questions relating to mobile phones:

Is it easy to change the ringer signal?

Can you personalize the ringer signal with this product? If so, is that feature useful and enjoyable for you?

Do you feel excited when using this product?

Is it easy to use the phone book feature of this product?


5.5 Acceptance Tests

For large implementation projects, the customer or manager usually sets objective and measurable goals for hardware and software performance. Many authors of requirements documents are even so bold as to specify the mean time between failures as well as the mean time to repair for hardware and, in some cases, software failures. More typically, a set of test cases is specified for the software, with possible response-time requirements for the hardware/software combination (see Chapter 12). If the completed product fails to meet these acceptance criteria, the system must be reworked until success is demonstrated.

These notions can be neatly extended to the human interface. Explicit
acceptance criteria should be established when the requirements document is
written or when a contract is offered. Rather than use the vague and misleading
criterion of "user friendliness," measurable criteria for the user interface can be
established for the following:

• Time for users to learn specific functions

• Speed of task performance

• Rate of errors by users

• User retention of commands over time

• Subjective user satisfaction

An acceptance test for a food-shopping website might specify the following:

The participants will be 35 adults (25-45 years old), native speakers with no disabilities, hired from an employment agency. They will have moderate web-use experience: one to five hours/week for at least a year. They will be given a five-minute demonstration of the basic features. At least 30 of the 35 adults should be able to complete the benchmark tasks within 30 minutes.

Another testable requirement for the same interface might be this:

Special participants in three categories will also be tested: (a) 10 older adults
aged 55-65; (b) 10 adult users with varying motor, visual, and auditory dis­
abilities; and (c) 10 adult users who are recent immigrants and use English
as a second language.

Since the choice of the benchmark tasks is critical, pilot testing must be done
to refine the materials and procedures used. A third item in the acceptance test
plan might focus on retention:

Ten participants will be recalled after one week and asked to carry out a new
set of benchmark tasks. In 20 minutes, at least eight of the participants should
be able to complete the tasks correctly.
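Criteria stated this precisely can be checked mechanically once the benchmark results are in. The following sketch verifies the 30-of-35 criterion above; the completion times are hypothetical.

```python
# Minimal sketch: checking the acceptance criterion "at least 30 of the 35
# participants complete the benchmark tasks within 30 minutes."
# Completion times (in minutes) are hypothetical; None marks a non-completion.
times = [22, 28, 31, 19, 25, 27, None, 24, 29, 26,
         21, 33, 18, 23, 28, 25, 27, 20, 24, 29,
         26, 22, 35, 19, 27, 28, 23, 25, 21, 26,
         24, None, 27, 22, 25]

passed = sum(1 for t in times if t is not None and t <= 30)
required = 30
print(f"{passed} of {len(times)} passed; "
      f"criterion {'met' if passed >= required else 'not met'}")
```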


In a large interface, there may be 8 or 10 such tests covering different components of the interface and different user communities. Other criteria, such as subjective satisfaction, output comprehensibility, system response time, installation procedures, documentation, or graphics appeal, may also be considered in acceptance tests of complete commercial products.

If precise acceptance criteria are established, both the customer and the
interaction designer can benefit. Arguments about user friendliness are avoided,
and contractual fulfillment can be demonstrated objectively. Acceptance tests
differ from usability tests in that the atmosphere may be adversarial, so outside
testing organizations are often appropriate to ensure neutrality. The central goal
of acceptance testing is not to detect flaws, but rather to verify adherence to
requirements.

After successful acceptance testing, there may be a period of field testing or an extensive beta test with real users before national or international distribution. In addition to further refining the user interface, field tests can improve training methods, tutorial materials, telephone-help procedures, marketing methods, and publicity strategies.

The goal of early expert reviews, usability testing, surveys, acceptance testing,
and field testing is to force as much as possible of the evolutionary development
into the pre-release phase, when change is relatively easy and less expensive
to accomplish.

5.6 Evaluation during Active Use and Beyond

A carefully designed and thoroughly tested interface is a wonderful asset, but successful active use requires constant attention from dedicated managers, user-service personnel, and maintenance staff. Everyone involved in supporting the user experience can contribute to interface refinements that provide ever-higher levels of service. You cannot please all of the users all of the time, but earnest effort will be rewarded by the appreciation of a grateful user community. Perfection is not attainable, but incremental improvements are possible and are worth pursuing.

Gradual interface dissemination is useful so that problems can be repaired with minimal disruption. As user numbers grow, major changes to the interface should be limited to an announced revision. If interface users can anticipate the changes, resistance will be reduced, especially if they have positive expectations of improvement. More frequent changes are expected in the rapidly developing web and interactive environments, but stable access to key resources even as novel services are added, and sincere interest in the user experience, may be the winning policy.


5.6.1 Interviews and focus-group discussions

Interviews with individual users can be productive because the interviewer can pursue specific issues of concern to better understand the user's perspective. Interviewing can be costly and time-consuming, so usually only a small fraction of the user community is involved. On the other hand, direct contact with users often leads to specific, constructive suggestions. Professionally led focus groups can elicit surprising patterns of usage or hidden problems, which can be quickly explored and confirmed by participants. However, outspoken individuals can sway the group or dismiss comments from weaker participants. Interviews and focus groups can be arranged to target specific sets of users, such as experienced or long-term users of a product, generating different sets of issues than would be raised with novice users.

5.6.2 Continuous user-performance data logging

The software architecture should make it easy for system managers to collect data about the patterns of interface usage, speed of user performance, rate of errors, and/or frequency of requests for online assistance. Logging data provide guidance in the acquisition of new hardware, changes in operating procedures, improvements to training, plans for system expansion, and so on.

For example, if the frequency of each error message is recorded, the highest-frequency error is a candidate for attention. The message could be rewritten, supporting materials could be revised, the software could be changed to provide more specific information, or the syntax could be simplified. Without specific logging data, however, the system-maintenance staff has no way of knowing which of the many hundreds of error-message situations presents the biggest problem for users. Similarly, staff should examine messages that never appear to see whether there is an error in the code or whether users are avoiding use of some facility.
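As a concrete illustration, tallying error-message frequencies from an event log takes only a few lines. The log format and message identifiers below are hypothetical, not those of any particular system.

```python
# Sketch: counting error-message frequencies from a logged event stream so
# the highest-frequency messages can be prioritized for rewriting.
# Log lines and message identifiers are hypothetical.
from collections import Counter

log_lines = [
    "2025-04-25T10:02:11 ERROR E042 invalid-date-format",
    "2025-04-25T10:03:02 ERROR E042 invalid-date-format",
    "2025-04-25T10:05:40 ERROR E007 session-timeout",
    "2025-04-25T10:07:15 ERROR E042 invalid-date-format",
]

counts = Counter(line.split()[2] for line in log_lines if " ERROR " in line)
for message_id, n in counts.most_common():
    print(message_id, n)   # E042 appears most often: first candidate to fix
```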

If logging data are readily available, changes to the human-computer interface can be made to simplify access to frequently used features. Managers also should examine unused or rarely used functionality to understand why users are avoiding those features. A major benefit of usage-frequency data is the guidance that they provide to system maintainers in optimizing performance and in reducing costs for all participants. This latter argument may yield the clearest advantage to cost-conscious managers, whereas the increased quality of the interface is an attraction to service-oriented managers. Zooming in on specific events (e.g., undo and erase) provides a cost-effective, automated approach to facilitate the detection of critical incidents that may not be discovered by self-reporting (Akers et al., 2009).

Logging may be well-intentioned, but users' rights to privacy deserve to be
protected. Links to specific user names should not be collected unless necessary.


When logging aggregate performance crosses over to monitoring individual activity, managers must inform users of what is being monitored and how the gathered information will be used. Although organizations may have a right to measure workers' performance levels, workers should be able to view the results and to discuss the implications. If monitoring is surreptitious and is later discovered, the resulting worker mistrust of management could be more damaging than the benefits of the collected data. Manager and worker cooperation to improve productivity, and worker participation in the process and its benefits, are advised.

With the huge impact of the internet on e-commerce, many companies are interested in tracking hits on their sites, page views, and so on. There has been an explosion of companies (Google, Microsoft, Yahoo!, and others) that offer such services, referred to as web analytics (sometimes called big data). This data gathering can provide companies with detailed tracking information on their websites, including graphic displays, dashboards, and calculations that demonstrate the impact of changes and other modifications on return on investment. This information can be presented in graphical dashboards to provide visualizations of the data (see Chapter 16).

With the interest in big data, many services are making a success of providing clients with log data and analyses of web visits from their panels of users. These users have provided their demographic information and are paid to answer surveys or allow their web-visitation patterns to be logged. The purchasers of the data are interested in knowing what kinds of people buy books, visit news sites, or seek healthcare information, to guide their marketing, product development, and website design efforts. Some of these services include Alexa, Quora, Pew Internet, Hitwise, Google Analytics, Forrester, comScore, and Nielsen Digital Ad Ratings.

5.6.3 Online or chat consultants, e-mail, and online suggestion boxes

Online or chat consultants can provide extremely effective and personal assistance to users who are experiencing difficulties. Many users feel reassured if they know that there is a human being to whom they can turn when problems arise. These consultants are an excellent source of information about problems users are having and can suggest improvements and potential extensions.

Some organizations offer toll-free numbers through which users can reach a knowledgeable consultant; others charge for consultation by the minute or offer support only to elite or premium customers. On some systems, the consultants can monitor or even control the user's computer and see the same display that the user sees while maintaining telephone or other chat contact (Fig. 5.8). This service can be extremely reassuring because users know that someone can walk them through the correct sequence to complete their tasks. When users want service, they typically want it immediately, and users often work worldwide


FIGURE 5.8

Online chat consultant. Typically, the consultants are on a headset and may or may not be able to view the participants. They will communicate by a vocal or chat dialogue. If it is a chat dialogue, there is usually some indication for the participant to wait while the consultant is typing.

in a 24-hour, 7-day-a-week environment. Many organizations are using software agents with recommender systems to provide real-time chat facilities, thereby integrating the human touch with common automated responses. Such services help users, build customer loyalty, and provide insights that can lead to design refinements as well as novel product extensions. Although these services are often well received, companies need to be aware of biases. Those participants who respond to services offered online may not be representative of the general user population, so drawing conclusions about the data collected in these types of interventions can be problematic (Crook et al., 2009).

5.6.4 Discussion groups, wikis, newsgroups, and search
Some users may have questions about the suitability of a software package for their application or may be seeking someone who has had experience using an interface feature. They are not likely to have a particular individual in mind, so e-mail does not serve their needs. Furthermore, with the international use of


software products and the 24-hour, 7-day-a-week, always-on computing environment, users may encounter issues outside of traditional working hours. Many interaction designers and website managers offer users discussion groups, newsgroups, or wikis to permit posting of open messages and questions. More independent discussion groups are also hosted by various services and can easily be found using today's powerful search engines.

Discussion groups usually offer lists of item headlines, allowing users to scan for relevant topics. User-generated content fuels these discussion groups. Almost anyone can add new items, but usually someone moderates the discussion to ensure that offensive, useless, outdated, or repetitious items are removed. When there is a substantial number of users who are geographically dispersed, moderators may have to work hard to create a sense of community.

With the prevalence of the internet, searching has become even more generic and ubiquitous, and it is typically under user control. Users often type into Google (or another search engine) a phrase or a set of words describing their issue, easily yielding a long list of matches. Some may match exactly what the user is looking for, without any registrations or other sign-up activities. These matches may point to wikis, discussion forums, company FAQs, and even YouTube videos.

5.6.5 Tools for automated evaluation

Software tools can be effective in evaluating user interfaces for applications, websites, and mobile devices. Even straightforward tools to check spelling or concordance of terms benefit interaction designers. Simple metrics that report numbers of pages, widgets, or links between pages capture the size of a user-interface project, but the inclusion of more sophisticated evaluation procedures can allow interaction designers to assess whether a menu tree is too deep or contains redundancies, whether labels have been used consistently, whether all buttons have proper transitions associated with them, and so on.

Research has produced some recommendations: keep average link text to two to three words, use sans serif fonts, and apply colors to highlight headings. One intriguing finding was that preferred websites do not always have the fastest user performance, suggesting that in e-commerce, mobile, entertainment, and gaming applications, attractiveness may be more important than rapid task execution. Further analysis of the results could lead to conjectures about the design goals that bring about high preference. For example, users may prefer designs that are comprehensible, predictable, and visually appealing and that incorporate relevant content. Today's sophisticated users have high expectations. Young people (such as digital natives) have grown up with computers and other mobile devices, and these devices are an integral part of their lives and related activities.

In the recent past, download speeds for webpages were an issue, and people used website optimization services that could count the number of items in a


page, the number of bytes in each image, and the size of the source code. These
services also provided suggestions for how to revise webpages for faster
performance. Today the issue is more about the number of hits and the visibility
of a webpage.

Another family of tools is run-time logging software, which captures the users' patterns of activity. Simple reports, such as the frequency of each error message, menu-item selection, dialog-box appearance, help invocation, form-field usage, or webpage access, are of great benefit to maintenance personnel and to revisers of the initial design. Experimental researchers can also capture performance data for alternative designs to guide their decision making. Software to analyze and summarize the performance data (e.g., TechSmith's Morae) is improving steadily (see Fig. 5.9).

When evaluating mobile devices in the field, unobtrusive methods to gather data may be needed. A log-file-recording tool that captures clicks with associated timestamps and positions on the display, keeps track of items selected and display changes, captures page shots, and records when a user is finished can provide valuable information for analysis. Another approach to gathering user feedback is a site intercept survey, which involves putting a small JavaScript snippet on a webpage, allowing it to gather information from the users.
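A minimal sketch of what such a click-event record might look like follows; the field set is an illustrative assumption, not the format of any particular logging tool.

```python
# Sketch of a minimal click-event record for field logging: a timestamp,
# the screen position, and the item selected. The fields are illustrative
# assumptions, not any particular tool's log format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ClickEvent:
    timestamp: float     # seconds since the epoch
    x: int               # display coordinates of the tap/click
    y: int
    item: str            # identifier of the selected widget or link

def record(event, path="session.log"):
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")  # one JSON object per line

record(ClickEvent(timestamp=time.time(), x=120, y=348, item="submit_button"))
```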

FIGURE 5.9

This is an example of the automated reports that can be created with software such as TechSmith's Morae. The item being measured is mouse clicks. The view shown is for task 2 (selected in the tabbed bar); the other three tasks could also be displayed. The values are for participant 4, and the dropdown list box would allow the evaluator to choose the mouse clicks for other participants. Time is shown across the horizontal axis.


Of course, gathering the data from usability evaluations is only the beginning. Making sense of the data, identifying patterns, and reaching a better understanding of what the data show are difficult and tedious tasks.

5.7 Controlled Psychologically Oriented Experiments

Scientific and engineering progress is often stimulated by improved techniques for precise measurement. Rapid progress in the design of interfaces will be stimulated as researchers and practitioners continue to evolve suitable human-performance measures and techniques. We have come to expect that automobiles will have gas mileage reports pasted to their windows, appliances will have energy-efficiency ratings, and textbooks will be given grade-level designations; soon, we will expect software packages to show learning-time estimates and user-satisfaction indices from appropriate evaluation sources.

5.7.1 Experimental approaches
Academic and industrial researchers understand that the power of the traditional scientific method can be fruitfully employed in the study of interfaces. They are conducting numerous experiments that aid in better understanding of basic design principles. The classic scientific method for interface research (as stated in Chapter 1), which is based on controlled experimentation, has this basic outline:

• Understanding of a practical problem and related theory

• Lucid statement of a testable hypothesis

• Manipulation of a small number of independent variables

• Measurement of specific dependent variables

• Careful selection and assignment of subjects

• Control for bias in subjects, procedures, and materials

• Application of statistical tests

• Interpretation of results, refinement of theory, and guidance for experimenters

The classic experimental methods of psychology are being enhanced to deal with the complex cognitive tasks of human performance with information and computer systems. The transformation from Aristotelian introspection to Galilean experimentation that took two millennia in physics is being accomplished in just over three decades in the study of human-computer interaction.

The scientific approach required for controlled experimentation yields narrow but reliable results. Through multiple replications with similar tasks, participants, and experimental conditions, reliability and validity can be


enhanced. Each small experimental result acts like a tile in the mosaic of human performance with computer-based information systems.

Managers of actively used systems are also coming to recognize the power of controlled experiments in fine-tuning the human-computer interface. As proposals are made for new interfaces, novel devices, and reorganized display formats, a carefully controlled experiment can provide data to support a management decision. Fractions of the user population can be given proposed improvements for a limited time, and then their performance can be compared with that of the control group. Dependent measures may include performance times, subjective user satisfaction, error rates, and user retention over time.

For example, the competition over mobile device-input methods has led to numerous experimental studies of keyboard arrangements with similar training methods, standard benchmark tasks, common dependent measures that account for error rates, and strategies for testing frequent users. Such careful controls are necessary because a 10-minute reduction in learning time, a 10% speed increase, or 10 fewer errors could be a vital advantage in a competitive consumer market.

Similar controlled studies are being run as online experiments at a large scale with large web-based groups (Kohavi and Longbotham, 2015). It is important to pay attention to the size and the representativeness of the group used, and to be aware of novelty and primacy effects that can affect the results. Other "rules of thumb" for doing these types of studies are discussed by Kohavi et al. (2014).

5.7.2 Experimental design
A full discussion of experimental design is outside the scope of this book, although many excellent resources exist (Lazar et al., 2009; Cairns and Cox, 2011; Sauro and Lewis, 2012; MacKenzie, 2013; Tullis and Albert, 2013). Experimental design and statistical analysis are complex topics. Some basic terminology and methodologies are described here, though novice experimenters would be well advised to collaborate with experienced research scientists and statisticians to develop the details properly.

In a tightly controlled experimental study, selecting the appropriate participants is important. Since conclusions and inferences are often made from the data, it is important that the sample be representative of the target users for the interface. Users are frequently grouped or categorized by some sort of demographic, such as age, gender, computer experience, or other attribute. When selecting participants from a population to create the sample, the sampling technique needs to be considered. Are people selected randomly? Is there a stratified subsample that should be used? Novice researchers may want to use their friends and family members, creating a convenience sample, but such a sample is not typically representative, may be biased, and therefore can compromise the confidence and validity of the results. The sample size is another consideration: it is important to define a confidence level that needs to be met for the study. A full discussion of sample sizes and confidence levels can be found in most statistics books; a rough illustration follows.
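One textbook formula for estimating a proportion to within a margin of error E at a given confidence level is n = z^2 * p * (1 - p) / E^2, using the conservative p = 0.5. A sketch, with illustrative parameters:

```python
# Sketch: classic sample-size formula for estimating a proportion,
# n = z^2 * p * (1 - p) / E^2, with the conservative choice p = 0.5.
# The margins of error and confidence levels below are illustrative.
from math import ceil

Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # common two-sided z values

def sample_size(margin_of_error, confidence=0.95, p=0.5):
    z = Z[confidence]
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.05))         # 385 for +/-5% at 95% confidence
print(sample_size(0.10, 0.90))   # 68 for +/-10% at 90% confidence
```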


Basic experimental design comes in two forms: between-subjects or within-subjects. In a between-subjects design, the groups are relatively similar in makeup, and each group is given a different treatment. To have a powerful effect, this design approach needs a relatively large number of users in each group. The large sample size usually ensures that the groups (if selected appropriately) are similar in nature, so the differences can be attributed primarily to the different treatments. If the groups are too small, the results may be related to the individual characteristics of each group. In a within-subjects design, each participant performs the same tasks, and the data being recorded are compared across participants. Although the sample size can be smaller, there may still be concerns about fatigue (causing performance to decrease) or practice and familiarity (causing performance to increase). It is important to counterbalance the tasks, since the order of the tasks can affect the results. If the variable being measured is ease of use, earlier tasks may artificially seem more difficult because the user is not yet familiar with the system; likewise, later tasks may seem easier, not because the tasks themselves are less complex but because of the familiarity with the system that the user has acquired.
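A standard counterbalancing device is the balanced Latin square, in which every task appears once in every position and immediately follows every other task equally often. A sketch of the usual construction (task names are hypothetical; for an odd number of tasks the reversed orderings are normally added as well):

```python
# Sketch: balanced Latin square for counterbalancing task order in a
# within-subjects study. Valid as written for an even number of tasks;
# for an odd number, the reversed rows are usually appended too.
def balanced_latin_square(tasks):
    n = len(tasks)
    # Canonical first-row pattern (0-indexed): 0, 1, n-1, 2, n-2, ...
    pattern = [0]
    lo, hi = 1, n - 1
    while len(pattern) < n:
        pattern.append(lo)
        lo += 1
        if len(pattern) < n:
            pattern.append(hi)
            hi -= 1
    # Each subsequent ordering shifts the pattern by one position.
    return [[tasks[(p + i) % n] for p in pattern] for i in range(n)]

for order in balanced_latin_square(["search", "compose", "edit", "share"]):
    print(order)
```

With four tasks this yields four orderings; assigning participants to the orderings in rotation spreads practice and fatigue effects evenly across task positions.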

In the design of an experimental study, different types of variables need to be considered and understood. The independent variable is something that is being manipulated. For example, you may have two different interface designs: one that provides access to a help system and one that does not. The dependent variable is something that happens as a result of the experiment and is usually measured. Examples of dependent variables include time to complete the task, number of errors, and user satisfaction. The experimental design needs to be carefully controlled so the main differences found in the dependent variables can be attributed to the independent variables, not to outside sources or confounding variables. To help control for potential systematic bias and experimental error in the study design, the researcher should apply randomization strategies, when plausible, such as random selection of participants and random assignment of participants to testing conditions. See Box 5.4 for a discussion of Simpson's Paradox. One also needs to be aware of false positives: positive results that are not really true. They could be due to experimental design issues, biased selection, data issues, or simply chance.

BOX 5.4
Simpson's Paradox (Crook et al., 2009).

A study is being done with two groups (an A/B test). Users can use different browsers, and the sampling is not uniform; users of some browsers may be sampled at a higher rate. When the study is complete, the treatment group does better. Upon further analysis of the data, separating the users by browser type, the treatment is actually worse for every browser type.
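The reversal is easy to reproduce numerically. The counts in this sketch are hypothetical but exhibit exactly the pattern the box describes:

```python
# Sketch: numeric illustration of Simpson's Paradox in an A/B test with
# non-uniform sampling across browsers. All counts are hypothetical.
data = {
    #             control (success, n)   treatment (success, n)
    "browser X": ((2000, 10000),         (190, 1000)),
    "browser Y": ((500, 1000),           (4900, 10000)),
}

totals = {"control": [0, 0], "treatment": [0, 0]}
for browser, ((cs, cn), (ts, tn)) in data.items():
    print(f"{browser}: control {cs/cn:.1%}, treatment {ts/tn:.1%}")
    totals["control"][0] += cs
    totals["control"][1] += cn
    totals["treatment"][0] += ts
    totals["treatment"][1] += tn

for group, (s, n) in totals.items():
    print(f"overall {group}: {s/n:.1%}")
# Treatment looks better overall (about 46% vs. 23%) yet is worse within
# every browser; the aggregate is dominated by where users were sampled.
```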


With the maturing of HCI and more emphasis on the user experience, measurements are changing. Traditional quantitative approaches are still important and valid, but attention needs to be given to qualitative measures (Bazeley, 2013) and associated methodologies as well. Measuring the emotional dimensions associated with the interface will provide a more holistic perspective on the user experience. Differences between users' perceptions and objective measures need to continue to be identified, and researchers need to do more work on validating scales and instruments for these types of measures.

Practitioner's Summary

Interface developers evaluate their designs by conducting expert reviews,
usability tests (in lab settings, in the field, and "in-the-wild"), surveys, and
rigorous acceptance tests. Once interfaces are released, developers carry out
continuous performance evaluations by interviews or surveys or by logging
users' performance in a way that respects their privacy. If you are not measuring user performance, you are not focusing on user experience and usability!

Successful interface project managers understand that they must work hard to establish a relationship of trust with the user community. As markets are opened (for example, in another country or vertical market segment), managers have to start afresh in gaining recognition and customer loyalty. Special attention may need to be devoted to novice users, users with disabilities, and other special populations (children, older adults). In addition to providing a properly functioning system, successful managers recognize the need to offer mechanisms for feedback, such as online surveys, interviews, discussion groups, consultants, suggestion boxes, newsletters, and conferences, as well as participation in the common social media outlets.

Ideally, a company has a group of personnel targeted and trained in doing usability evaluations. But sometimes one person may be the only usability evangelist in the company. That position requires wearing many hats to be sure the usability needs of the company are addressed. Research has shown that including usability early in the product design cycle provides a much better return on investment (ROI) compared to tacking on usability at the end.

Researcher's Agenda

Researchers can contribute their experience with experimentation to develop improved techniques for interface evaluation and the user experience. Guidance in conducting pilot studies, acceptance tests, surveys, interviews, and


discussions would benefit large-scale development groups, but additional attention needs to be given to smaller projects and incremental-type changes. Strategies are needed to cope with evaluation for the numerous specific populations of users and the diverse forms of disabilities that users may have. Experts in constructing psychological tests can help in preparing validated and reliable test instruments for subjective evaluation of the varying types of interfaces, from small mobile devices to very large displays, including specialized interfaces such as gaming. Such standardized tests would allow independent groups to compare the acceptability of interfaces. Would benchmark datasets and task libraries help standardize evaluation? How useful can researchers make automated testing against requirements documents? How many users are needed to generate valid recommendations? How can we better explain the differences between users' perceptions of a task and the objective measures? How do we select the best measure for a task? How can life-critical applications for experienced professionals be tested reliably? Is there a single usability metric that can be used and compared across types of interfaces? Can we combine performance data and subjective data to create a single meaningful result? Is there a scorecard that can be used to aid in the interpretation of usability results? Is there a theory to explain and understand the relationship between measures? Also, how do we best incorporate and evaluate qualitative data and dimensions such as fun, pleasure, joy, affect, challenge, or realism?

Input from experimental, cognitive, and clinical psychologists would help computer specialists to recognize the importance of the human aspects of computer use. Can psychological principles be applied to reduce novice users' anxiety or expert users' frustration? Could profiles of users' skill levels with interfaces be helpful in job-placement and training programs? How can good usability practices be applied to the gaming environment while preserving the challenge and excitement of the game itself? Continuously keeping the user experience in mind is an integral part of this.

Additional work is also needed on the appropriate choice of evaluation methodology. Some of the traditional methodologies need to be expanded, and non-empirical methods, such as sketches and other design alternatives, should be considered. As HCI matures as a discipline, two facets of HCI are emerging. One approach is micro-HCI: counting discrete items (e.g., mouse clicks) and other quantitative measures; these are places where measurable performance in terms of speed and errors can be reported using controlled experiments. The second approach is macro-HCI: dealing more with the full user experience, including social engagement (see Chapter 3). Changes are needed to make usability reports understandable, readable, and useful. Additional work on developing automated tools is needed, with attention paid to the specialized systems (mobile devices, games, personal devices) that are readily available today. The standardized usability instruments need modification and validation as they deal with different criteria and different environments. What happens


if testing cannot take place in a usability lab? Perhaps the testing must be done
in a field setting or "in-the-wild" to ensure validity. How can we effectively
simulate the high-stress situations that users encounter in hostile environments?
Satisfaction may be more broadly defined to include characteristics such as fun,
pleasure, and challenge.

WORLD WIDE WEB RESOURCES

www.pearsonglobaleditions.com/shneiderman

• Additional information on usability testing and questionnaires is available on the companion website.

• Resource on usability methods and guidelines from the U.S. government: http://www.usability.gov

• Usability Methods Toolbox from James Hom (older but good information): http://usability.jameshom.com

• SUMI Questionnaire from J. Kirakowski: http://www.ucc.ie/hfrg/questionnaires/sumi/index.html

• Testing Methods and Tools, Guide to Usability and Software Engineering (GUSE) from the University of Maryland: http://lte-projects.umd.edu/guse/testing.html

• Heuristic Evaluations from Jakob Nielsen: http://www.nngroup.com/topic/heuristic-evaluation/

• Usability First, Foraker Design: http://www.usabilityfirst.com

• Zazelenchuk's Usability Test Data Logger: http://www.userfocus.co.uk/resources/datalogger.html

• Usability information from the University of Texas: http://www.utexas.edu/learn/usability/

• A comprehensive list of UX evaluation methods from All About UX, created and maintained by volunteers: http://www.allaboutux.org/all-methods

• Sample size calculator: http://www.blinkux.com/usability-sample-size

• How to Conduct Eyetracking Studies, Kara Pernice and Jakob Nielsen: http://www.nngroup.com/reports/how-to-conduct-eyetracking-studies/

• New York Times article on A/B testing: http://www.nytimes.com/2015/09/27/upshot/a-better-government-one-tweak-at-a-time.html?_r=0


Discussion Questions

1. Describe at least three different types of expert review methods.

2. Create a bird's-eye view of an interface you wish to investigate. Focus on detecting inconsistencies and spotting unusual patterns.

3. Compare and contrast controlled psychological experiments and usability tests in the evaluation process of user interfaces. Be sure to include the benefits and limitations of each.

4. List the advantages and disadvantages of survey questionnaires.

For questions 5-7, refer to the following instructions:
One argument against the current interface design of a popular word processor is that it has all the functional menu items appearing together, which makes the interface too complex. This complexity results in a confusing and frustrating experience for novice users. An alternative design is to provide different levels of functional complexity, so users can choose the level that is suitable for them and then advance to higher levels as they become familiar with the tool, thus feeling more comfortable and learning more efficiently. You are asked to conduct usability testing to compare these two designs.

5. Which type of usability testing should be used for this situation? Explain why.

6. List and briefly describe the steps of the usability testing you would conduct.

7. Do you think there should be a control group in your test? Justify your answer.

References

Akers, David, Simpson, Matthew, Jeffries, Robin, and Winograd, Terry, Undo and erase events as indicators of usability problems, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, NY (2009), 659-668.

Babaian, T., Lucas, W., and Oja, M-K., Evaluating the collaborative critique method, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, NY (2012).

Barcelos, T., Munoz, R., and Chalegre, V., Gamers as usability evaluators: A study in the domain of virtual worlds, Proceedings of the 11th Brazilian Symposium on Human Factors in Computing Systems (2012), 301-304.

Bargas-Avila, J., and Hornbæk, K., Old wine in new bottles or novel challenges? A critical analysis of empirical studies of user experience, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, NY (2011), 2689-2698.

Barnum, Carol M., Usability Testing Essentials, Morgan Kaufmann (2011).

Bazeley, Patricia, Qualitative Data Analysis: Practical Strategies, Sage Publications (2013).

Brooke, John, SUS: A quick and dirty usability scale, in Jordan, P. W., Thomas, B., Weerdmeester, B. A., and McClelland, I. L. (Editors), Usability Evaluation in Industry, Taylor and Francis, London, UK (1996).

Cairns, P., and Cox, A. L. (Editors), Research Methods for Human-Computer Interaction, Cambridge University Press (2008, reprinted 2011).

Crook, Thomas, Frasca, Brian, Kohavi, Ron, and Longbotham, Roger, Seven pitfalls to avoid when running controlled experiments on the web, KDD '09, Paris, France (2009), 1105-1113.

Dimond, J., Fiesler, C., DiSalvo, B., Pelc, J., and Bruckman, A., Qualitative data collection technologies: A comparison of instant messaging, email, and phone, ACM GROUP '12 (2012), 277-280.

Dumas, Joseph, and Loring, Beth, Moderating Usability Tests: Principles and Practices for Interacting, Morgan Kaufmann, Burlington, MA (2008).

Elmqvist, Niklas, and Yi, Ji Soo, Patterns for visualization evaluation, Proceedings 2012 BELIV Workshop: Beyond Time and Errors: Novel Evaluation Methods for Visualization (2012).

Greenberg, S., Carpendale, S., Marquardt, N., and Buxton, B., Sketching User Experiences: The Workbook, Morgan Kaufmann (2012).

Hartson, R., and Pyla, P., The UX Book: Process and Guidelines for Ensuring a Quality User Experience, Morgan Kaufmann (2012).

Joyce, G., Lilley, M., Barker, T., and Jefferies, A., Adapting heuristics for the mobile panorama, Interaction '14 (2014).

Kirakowski, J., and Corbett, M., SUMI: The Software Usability Measurement Inventory, British Journal of Educational Technology 24, 3 (1993), 210-212.

Kohavi, Ron, and Longbotham, Roger, Online controlled experiments and A/B tests, to appear in Sammut, Claude, and Webb, Geoff (Editors), The Encyclopedia of Machine Learning and Data Mining (2015).

Kohavi, Ron, Deng, Alex, Frasca, Brian, Walker, Toby, Xu, Ya, and Pohlmann, Nils, Online controlled experiments at large scale, KDD '13, Chicago, IL (2013), 1168-1176.

Kohavi, Ron, Deng, Alex, Longbotham, Roger, and Xu, Ya, Seven rules of thumb for web site experimenters, KDD '14, New York, NY (2014), 1857-1866.

Korhonen, Hannu, and Koivisto, Elina M. I., Playability heuristics for mobile games, Proceedings MobileHCI '06 Conference, ACM Press, New York (2006), 9-15.

Lazar, J., Feng, J. H., and Hochheiser, H., Research Methods in Human-Computer Interaction, Wiley (2009).

Lewis, James R., Usability: Lessons learned … and yet to be learned, International Journal of Human-Computer Interaction 30 (2014), 663-684.

Lewis, J., Utesch, B., and Maher, D., UMUX-LITE: When there's no time for the SUS, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, NY (2013), 2099-2102.

Lund, A., User Experience Management: Essential Skills for Leading Effective UX Teams, Morgan Kaufmann (2011).

MacDonald, Craig M., and Atwood, Michael, What does it mean for a system to be useful? An exploratory study of usefulness, DIS 2014, Vancouver, BC, Canada (2014), 885-894.

MacKenzie, I. Scott, Human-Computer Interaction: An Empirical Research Perspective, Morgan Kaufmann, San Francisco, CA (2013).

Madathil, K., and Greenstein, J., Synchronous remote usability testing: A new approach facilitated by virtual worlds, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, New York (2011), 2225-2234.

Masip, L., Granollers, T., and Oliva, M., A heuristic evaluation experiment to validate the new set of usability heuristics, 2011 Eighth International Conference on Information Technology: New Generations (2011).

Molich, Rolf, Jeffries, Robin, and Dumas, Joseph S., Making usability recommendations useful and usable, Journal of Usability Studies 2, 4 (2007), 162-179.

Nielsen, Jakob, Usability Engineering, Academic Press, New York (1993).

Nielsen, J., Heuristic evaluation, in Nielsen, J., and Mack, R. L. (Editors), Usability Inspection Methods, John Wiley & Sons, New York, NY (1994).

Nielsen, J., and Budiu, R., Mobile Usability, New Riders (2012).

Pinelle, David, Wong, Nelson, and Stach, Tadeusz, Heuristic evaluation for games: Usability principles for video game design, Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM Press, NY (2008), 1453-1462.

Preece, Jenny, Rogers, Yvonne, and Sharp, Helen, Interaction Design: Beyond Human-Computer Interaction, 4th Edition, John Wiley & Sons, West Sussex, UK (2015).

Reiss, E., Usable Usability, Wiley (2012).

Rogers, Y., Yuill, N., and Marshall, P., Contrasting lab-based and in-the-wild studies for evaluating multi-user technologies, in Price, B., The SAGE Handbook of Digital Technology and Research, SAGE Publications (2013).

Roto, Virpi, and Lund, Arnie, On top of the user experience wave: How is our work changing, Proceedings of the ACM Conference on Human Factors in Computing Systems, Extended Abstracts, ACM Press, NY (2013), 2521-2524.

Rubin, Jeffrey, and Chisnell, Dana, Handbook of Usability Testing, 2nd Edition, John Wiley & Sons, Indianapolis, IN (2008).

Ryu, Young Sam, Mobile Phone Usability Questionnaire (MPUQ) and automated usability evaluation, in Jacko, Julie A. (Editor), Proceedings of the 13th International Conference on Human-Computer Interaction. Part I: New Trends, Springer-Verlag, Berlin, Heidelberg (2009), 349-351.

Sauro, J., and Lewis, J., Quantifying the User Experience: Practical Statistics for User Research, Morgan Kaufmann (2012).

Schmettow, M., Sample size in usability studies, Communications of the ACM 55, 4 (April 2012), 64-70.

Spool, Jared, Surviving our success: Three radical recommendations, Journal of Usability Studies 2, 4 (August 2007), 155-161.

Tullis, Thomas, and Albert, William, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, 2nd Edition, Morgan Kaufmann Publishers/Elsevier (2013).

Vermeeren, A., Lai-Chong Law, E., Roto, V., Obrist, M., Hoonhout, J., and Väänänen-Vainio-Mattila, K., User experience evaluation methods: Current state and development needs, NordiCHI 2010, Reykjavik, Iceland (2010), 521-530.

Wharton, Cathleen, Rieman, John, Lewis, Clayton, and Polson, Peter, The cognitive walkthrough method: A practitioner's guide, in Nielsen, Jakob, and Mack, Robert (Editors), Usability Inspection Methods, John Wiley & Sons, New York (1994).

Wilson, Chauncey, User Interface Inspection Methods: A User-centered Design Method, Morgan Kaufmann (2013).
