Posted: February 26th, 2023

Website Review

Refer to Textbook: Interaction Design by Helen Sharp, Jennifer Preece, and Yvonne Rogers 

Publisher: Wiley

Evaluation of the Best Buy E-Commerce Website. Including images is highly encouraged to demonstrate the usability issues within the site. Use the following points as guidelines when writing the Website Review:

  • Metrics: As much as possible, use measurable criteria, as you’ve read in the Preece text. Identify the methods you’ve applied in your review of the website.
  • Application: Using Nielsen’s 10 heuristics for usability design, explain how each task can be improved (an illustrative scoring sketch follows this list).
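
As a purely illustrative aid for the Metrics point, the following minimal sketch (hypothetical findings and values, not part of the official assignment) shows one way to turn heuristic-evaluation findings into measurable criteria by tallying violations per heuristic and computing a mean severity rating:

# Illustrative only: tallying heuristic-evaluation findings with Nielsen
# severity ratings (0 = not a problem ... 4 = usability catastrophe).
from collections import Counter

# Each finding pairs the heuristic violated with a severity rating.
# These example findings are hypothetical, not actual Best Buy results.
findings = [
    ("Visibility of system status", 3),
    ("User control and freedom", 4),
    ("Consistency and standards", 2),
    ("User control and freedom", 3),
]

violations = Counter(heuristic for heuristic, _ in findings)
mean_severity = sum(severity for _, severity in findings) / len(findings)

for heuristic, count in violations.most_common():
    print(f"{heuristic}: {count} violation(s)")
print(f"Mean severity: {mean_severity:.1f}")

Counts and mean severity like these give the review the measurable footing the assignment asks for.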

The assignment should be 3–4 pages. Follow APA 7 guidelines for references and in-text citations.

Interaction Design continues to be the standard textbook in the field. Seasoned practitioners will find it use-
ful when they need a reference to best practices or to explain a concept to a colleague. Students can turn to
Interaction Design for an easy-to-understand description of the basics or in-depth how-tos. From personas and
disabilities to the design of UX organizations and working in Agile, if you’re going to pick one book to bring into
the office, it should be this one.

Jofish Kaye, Principal Research Scientist, Mozilla, USA

This is the perfect textbook for a wide range of user interface/user experience design courses. For an undergraduate, it provides a variety of compelling examples that illustrate best practice in interaction design. For a graduate student, it provides a foundational overview of advanced topics. This book is also essential for the professional who wants to know the state of the art in interaction design. I use this textbook and recommend it widely.

Rosa I. Arriaga, Ph.D., Senior Research Scientist, School of Interactive Computing
Georgia Institute of Technology, USA

The Interaction Design book has contributed immensely to a growing, skilled Namibian HCI community over the last decade. Exposing students, academics, and practitioners to the basic principles and theories, as well as the most recent trends and technologies, with global and local case studies in the latest edition, allows for reflective applications within very specific contexts. This book remains our number one reference in the education of future generations of interaction designers in Namibia, promoting the creation of thoughtful user experiences for responsible citizens.

Heike Winschiers-Theophilus, Professor, Faculty of Computing and Informatics,
Namibia University of Science and Technology, Africa

Throughout my teaching of user experience and interaction design, the book by Rogers, Preece and Sharp has
been an absolute cornerstone textbook for students. The authors bring together their own wealth of knowledge of
academic HCI with a deep understanding of industry practice to provide what must be the most comprehensive
introduction to the key areas of interaction design and user experience work, now an established field of practice. I
put this book in the “essential reading” section of many of the reading lists I give to students.

Simon Attfield, Associate Professor in Human Centred Technology, Middlesex University, UK

Interaction design has gone through tremendous changes in the last few years—for example the rising importance
of “big” data streams to design, and the growing prevalence of everyday ubiquitous computing issues of sensing
and blending gracefully and ethically into people’s daily lives. This is an important and timely update to a text that has long been considered the gold standard in our field. I’m looking forward to using it with my students to help
prepare them for the design challenges they will face in today’s industrial practice.

Katherine Isbister, Professor, Computational Media, University of California Santa Cruz, USA

More than ever, designing effective human-computer interactions is crucial for modern technological systems.
As digital devices become smaller, faster and smarter, the interface and interaction challenges become ever more
complex. Vast quantities of data are often accessed on handheld screens, or with no screens at all through voice commands; and AI systems have interfaces that “bite back” with sophisticated dialogue structures. What are the best interaction metaphors for these technologies? What are the best tools for creating interfaces that are enjoyable and universally accessible? How do we ensure emerging technologies remain relevant and respectful of human values? In this book, you’ll find detailed analysis of these questions and much more. It is a valuable resource for both the mature student and the reflective professional.

Frank Vetere, Professor of Interaction Design, School of Computing and Information Systems,
University of Melbourne, Australia

This is at the top of my recommended reading list for undergraduate and master’s students as well as professionals
looking to change career paths. Core issues to interaction design are brought to life through compelling vignettes
and contemporary case examples from leading experts. What has long been a comprehensive resource for interac-
tion design now incorporates timely topics in computing, such as data at scale, artificial intelligence, and ethics,
making it essential reading for anyone entering the field of interaction design.

Anne-Marie Piper, PhD, Associate Professor, Departments of Communication Studies,
Electrical Engineering and Computer Science, Northwestern University, USA

I have been using Interaction Design as a textbook since its first edition for both my undergraduate and graduate
introductory HCI courses. This is a must-read seminal book which provides a thorough coverage of the discipline
of HCI and the practice of user-centered design. The fifth edition lives up to its phenomenal reputation by includ-
ing updated content on the process of interaction design, the practice of interaction design (e.g., technical debt in
UX, Lean UX), design ethics, new types of interfaces, etc. I always recommend Interaction Design to students and
practitioners who want to gain a comprehensive overview of the fields of HCI and UX.

Olivier St-Cyr, Assistant Professor, Teaching Stream, University of Toronto, Canada

Interaction design is a practice that spans many domains. The authors acknowledge this by providing a tremen-
dous amount of information across a wide spectrum of disciplines. This book has evolved from a simple textbook
for HCI students to an encyclopedia of design practices, examples, discussions of related topics, suggestions for
further reading, exercises, interviews with practitioners, and even a bit of interesting history here and there. I see it
as one of the few sources effectively bridging the gulf between theory and practice. A copy has persistently occu-
pied my desk since the first edition, and I regularly find myself revisiting various sections for inspiration on how to
communicate the reasoning behind my own decisions to colleagues and peers.

William R. Hazlewood, PhD, Principal Design Technologist, Retail Experience
Design Concept Lab, Amazon, USA

For years Interaction Design has been my favourite book, not only for supporting my classes but also as my primary source for preparing UX studies in industrial and academic settings. The chapters engage readers with easy-to-read content while harmoniously presenting theories, examples, and case studies that touch on multidisciplinary aspects of the construction and evaluation of interactive products. The fifth edition again maintains the tradition of being an up-to-date book on HCI, and it includes new discussions on Lean UX, emotional interaction, social and cognitive aspects, and ethics in human studies, which are certainly contemporary topics of utmost relevance for practitioners and academics in interaction design.

Luciana Zaina, Senior Lecturer, Federal University of São Carlos, Brazil

This book is always my primary recommendation for newcomers to human-computer interaction. It addresses the
subject from several perspectives: understanding of human behaviour in context, the challenges of ever-changing
technology, and the practical processes involved in interaction design and evaluation. The new edition again shows
the authors’ dedication to keeping both the primary content and representative examples up to date.

Robert Biddle, Professor of Human–Computer Interaction, Carleton University, Ottawa, Canada

This fifth edition provides a timely update to one of the must-have classics on interaction design. The changes in
our field, including how to deal with emerging sensing technology and the volumes of data it provides, are well
addressed in this volume. This is a book for those new to and experienced in interaction design.

Jodi Forlizzi, Professor and Geschke Director, Human-Computer Interaction Institute,
The School of Computer Science, CMU, USA

The milieu of digital life surrounds us. However, how we choose to design and create our experiences and
interactions with these emerging technologies remains a significant challenge. This book provides both a road-
map of essential skills and methodologies to tackle these designs confidently as well as the critical deeper history,
literature, and poetry of interaction design. You will return to this book throughout your career to operationalize,
ground and inspire your creative practice of interaction design.

Eric Paulos, Professor, Electrical Engineering and Computer Sciences, UC Berkeley, USA

Preece, Sharp, and Rogers offer once again an engaging excursion through the world of interaction design. This series is always up to date and offers a fresh view on a broad range of topics needed by students in the fields of interaction design, human-computer interaction, information design, web design, and ubiquitous computing. It should be the book every student has in their backpack. It is a “survival guide”! It guides one through the jungle of information and the dark technological forests of our digital age. It also helps to develop a critical view of novel technologies, as our computing research community needs to confront much more seriously the negative impacts of our innovations. The online resources are a great help to me in creating good classes and in removing some weight from the backpacks of my students.

Johannes Schöning, Professor of Computer Science, Bremen University, Germany

Nearly 20 years have passed since the release of the first edition of Interaction Design, with massive changes to
technology and thus the science and practice of interaction design. The new edition combines the brilliance of the
first book with the wisdom of the lessons learned in the meantime, and the excitement of new technological fron-
tiers. Complex concepts are elegantly and beautifully explained, and the reader is left with little doubt as to how
to put them into practice. The book is an excellent resource for those new to interaction design, or as a guidebook
or reference to practitioners.

Dana McKay, UX Researcher, Practitioner and Academic, University of Melbourne, Australia

Computers are ubiquitous and embedded in virtually every new device and system, ranging from the omnipresent
cellphone to the complex web of sociotechnical systems that envelop most every sphere of personal and profes-
sional life. They connect our activities to ever-expanding information resources with previously unimaginable
computational power. To ensure interface design respects human needs and augments our abilities is an intellectual
challenge of singular importance. It involves not only complex theoretical and methodological issues of how to
design effective representations and mechanisms of interaction but also confronts complex social, cultural, and
political issues such as those of privacy, control of attention, and ownership of information. The new edition of
Interaction Design continues to be the introductory book I recommend to my students and to anyone interested in
this crucially important area.

Jim Hollan, Distinguished Professor of Cognitive Science, University of California San Diego, USA

Interaction Design continues to be my favorite textbook on HCI. We even named our undergraduate and postgraduate programmes at Aalborg University after it. In its fifth edition, it captures the newest developments in the field’s cumulative body of knowledge and continues to be the most up-to-date and accessible work available. As always, it serves as a clear pointer to emerging trends in interactive technology design and use.

Jesper Kjeldskov, Professor and Head of Department of Computer Science, Aalborg University, Denmark

I got to learn about the field of HCI and interaction design when I came across the first edition of this book at the library in my junior year of college. As an HCI researcher and educator, I have had the pleasure of introducing the subject to undergraduates and professional master’s students using the previous editions. I thank the authors for their studious efforts to update the book and add new content that is relevant for students, academics, and professionals, helping them learn the ever-evolving field of HCI and interaction design in a delightful manner.

Eun Kyoung Choe, Professor of Human-Computer Interaction, College of Information Studies,
University of Maryland, USA

This new edition is, without competition, the most comprehensive and authoritative source in the field when it
comes to modern interaction design. It is highly accessible and it is a pleasure to read. The authors of this book
have once again delivered what the field needs!

Erik Stolterman, Professor in Informatics, School of Informatics and Computing,
Indiana University, Bloomington, USA

This book illuminates the interaction design field like no other. Interaction design is such a vast, multidisciplinary field that you might think it would be impossible to synthesize the most relevant knowledge in one book. This book not only does that but goes even further: it eloquently brings in contemporary examples and diverse voices to make the knowledge concrete and actionable, so it is useful for students, researchers, and practitioners alike. This new edition includes invaluable discussions of the current challenges we now face with data at scale, embracing the ethical design concerns that our society so urgently needs addressed in this era.

Simone D. J. Barbosa, Professor of Computer Science, PUC-Rio,
and Co-Editor-in-Chief of ACM Interactions, Brazil

My students like this book a lot! It provides comprehensive coverage of the essential aspects of HCI/UX, which is key to the success of any software application. I also like many aspects of the book, particularly the examples
and videos (some of which are provided as hyperlinks) because they not only help to illustrate the HCI/UX con-
cepts and principles, but also relate very well to readers. I highly recommend this book to anyone who wants to
learn more about HCI/UX.

Fiona Fui-Hoon Nah, Editor-in-Chief of AIS Transactions on Human-Computer Interaction,
Professor of Business and Information Technology, Missouri University of Science and Technology, Rolla, Missouri, USA

I have been using the book for several years in my Human-Computer Interaction class. It helps me not only with teaching but also with thesis supervision. I really appreciate the authors’ efforts in keeping the Interaction Design book relevant and up to date. For example, they have added Data at Scale and AgileUX to the new edition. I really love the book!

Harry B. Santoso, PhD, Instructor of Interaction System (HCI) course at Faculty of Computer Science,
Universitas Indonesia, Indonesia

Already during my PhD, the first edition of Interaction Design: beyond human-computer interaction (2002) quickly became my preferred reference book. Seventeen years later, and now in its fifth edition, I commend the authors for their meticulous and consistent effort in updating and enriching what has become the field’s standard introductory textbook. No longer just about objects and artefacts, design today is increasingly recognized as a sophisticated and holistic approach to systems thinking. Similarly, Preece, Sharp, and Rogers have kept the book’s coverage with the times by providing comprehensive, compelling, and accessible coverage of the concepts, methods, and cases of interaction design across many domains, such as experience design, ubiquitous computing, and urban informatics.

Marcus Foth, Professor of Urban Informatics, QUT Design Lab, Brisbane, Australia

“Interaction Design” has long been my textbook of choice for general HCI courses. The latest edition has introduced a stronger practitioner focus that should add value for students transitioning into practice, for practitioners, and also for others interested in interaction design and its role in product development. It manages to be an engaging read while also being “snackable,” covering the basics and also inspiring. I still find it a great read, and I believe others will too.

Ann Blandford, Professor of Human-Computer Interaction, University College London, UK

Very clear style, with plenty of active learning material and pointers to further reading. I found that it works very
well with engineering students.

Albert Ali Salah, Professor at Utrecht University, the Netherlands

INTERACTION DESIGN
beyond human-computer interaction

Fifth Edition

Interaction Design: beyond human-computer interaction, Fifth Edition

Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com

Copyright © 2019 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-119-54725-9
ISBN: 978-1-119-54735-8 (ebk)
ISBN: 978-1-119-54730-3 (ebk)
Manufactured in the United States of America

10 9 8 7 6 5 4 3 2 1

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except
as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either
the prior written permission of the Publisher, or authorization through payment of the appropriate
per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978)
750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201)
748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or
warranties with respect to the accuracy or completeness of the contents of this work and specifically
disclaim all warranties, including without limitation warranties of fitness for a particular purpose.
No warranty may be created or extended by sales or promotional materials. The advice and strategies
contained herein may not be suitable for every situation. This work is sold with the understanding
that the publisher is not engaged in rendering legal, accounting, or other professional services. If
professional assistance is required, the services of a competent professional person should be sought.
Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an
organization or Web site is referred to in this work as a citation and/or a potential source of further
information does not mean that the author or the publisher endorses the information the organization
or website may provide or recommendations it may make. Further, readers should be aware that
Internet websites listed in this work may have changed or disappeared between when this work was
written and when it is read.

For general information on our other products and services please contact our Customer Care
Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993
or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material
included with standard print versions of this book may not be included in e-books or in print-on-
demand. If this book refers to media such as a CD or DVD that is not included in the version you
purchased, you may download this material at http://booksupport.wiley.com. For more information
about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2019932998

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley &
Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without
written permission. All other trademarks are the property of their respective owners. John Wiley &
Sons, Inc. is not associated with any product or vendor mentioned in this book.

About the Authors

The authors are senior academics with a background in teaching, researching, and consulting
in the United Kingdom, United States, Canada, India, Australia, South Africa, and Europe.
Having worked together on four previous editions of this book, as well as an earlier textbook
on human-computer interaction, they bring considerable experience in curriculum develop-
ment using a variety of media for online learning as well as face-to-face teaching. They have
considerable knowledge in creating learning texts and websites that motivate and support
learning for a range of students. All three authors are specialists in interaction design and
human-computer interaction (HCI). In addition, they bring skills from other disciplines; for
instance, Yvonne Rogers started off as a cognitive scientist, Helen Sharp is a software engi-
neer, and Jenny Preece works in information systems. Their complementary knowledge and
skills enable them to cover the breadth of concepts in interaction design and HCI to produce
an interdisciplinary text and website.

Helen Sharp is a Professor of Software Engineering and Associate Dean in the Faculty of Science, Technology, Engineering, and Mathematics at the Open University. Originally trained as a software engineer, she was inspired to investigate HCI, user-centered design, and the other related disciplines that now underpin the field of interaction design by watching the frustration of users and the clever “workarounds” they developed. Her research
focuses on the study of professional software practice and the effect of human and social
aspects on software development, leveraging her expertise in the intersection between inter-
action design and software engineering and working closely with practitioners to support
practical impact. She is active in both the software engineering and CHI communities, and
she has had a long association with practitioner-related conferences. Helen is on the editorial
board of several software engineering journals, and she is a regular invited speaker at aca-
demic and practitioner venues.

Yvonne Rogers is the Director of the Interaction Centre at University College London, a
Professor of Interaction Design, and a deputy head of department for Computer Science.
She is internationally renowned for her work in HCI and ubiquitous computing and, in
particular, for her pioneering approach to innovation and ubiquitous learning. Yvonne is
widely published, and she is the author of two recent books: Research in the Wild (2017,
co-authored with Paul Marshall) and The Secrets of Creative People (2014). She is also a
regular keynote speaker at computing and HCI conferences worldwide. Former positions
include Professor of Interaction Design at the Open University (2006–2011), Professor of
Human-Computer Interaction at the School of Informatics and Computing at Indiana
University (2003–2006), and Professor in the former School of Cognitive and Computing
Sciences at Sussex University (1992–2003). She has also been a Visiting Professor at UCSC,
University of Cape Town, Melbourne University, Stanford, Apple, Queensland University,
and UCSD. She has been elected as a Fellow of the ACM, the British Computer Society, and
the ACM’s CHI Academy.


Jennifer Preece is Professor and Dean Emerita in the College of Information Studies—
Maryland’s iSchool—at the University of Maryland. Jenny’s research focuses on the intersec-
tion of information, community, and technology. She is particularly interested in community
participation online and offline. She has researched ways to support empathy and social
support online, patterns of online participation, reasons for not participating (for example,
lurking and infrequent participation), strategies for supporting online communication, devel-
opment of norms, and the attributes of successful technology-supported communities.
Currently, Jenny focuses on how technology can be used to educate and motivate citizens to
engage and contribute quality data to citizen science projects. This research contributes to the
broader need for the collection of data about the world’s flora and fauna at a time when
many species are in rapid decline due to habitat loss, pollution, and climate change. She was the author of one of the first books on online communities, Online Communities: Designing Usability, Supporting Sociability (2000), published by John Wiley & Sons Ltd, as well as several other HCI texts. Jenny is also widely published, a regular keynote speaker, and a member of
the ACM’s CHI Academy.

Credits

Associate Publisher
Jim Minatel

Editorial Manager
Pete Gaughan

Production Manager
Katie Wisor

Project Editor
Gary Schwartz

Production Editor
Barath Kumar Rajasekaran

Technical Editors
Danelle Bailey
Jill L. H. Reed

Copy Editor
Kim Wimpsett

Proofreader
Nancy Bell

Indexer
Johnna VanHoose Dinse

Cover Designer
Wiley

Cover Image
© Wiley; Jennifer Preece photo courtesy of
Craig Allan Taylor


Acknowledgments

Many people have helped us over the years in writing the four previous editions of this book.
We have benefited from the advice and support of our many professional colleagues across
the world and from our students, friends, and families. We especially would like to thank
everyone who generously contributed their ideas and time to help make all of the editions of
this book successful.

These include our colleagues and students at the College of Information Studies—
“Maryland’s iSchool”—at University of Maryland, the Human-Computer Interaction
Laboratory (HCIL) and Center for the Advanced Study of Communities and Information
(CASCI), the Open University, and University College London. We would especially like to
thank (in alphabetical first name order) all of the following individuals who have helped us
over the years:

Alex Quinn, Alice Robbin, Alice Siempelkamp, Alina Goldman, Allison Druin, Ana
Javornik, Anijo Mathew, Ann Blandford, Ann Jones, Anne Adams, Ben Bederson, Ben Shnei-
derman, Blaine Price, Carol Boston, Cathy Holloway, Clarisse Sieckenius de Souza, Connie
Golsteijn, Dan Green, Dana Rotman, danah boyd, Debbie Stone, Derek Hansen, Duncan
Brown, Edwin Blake, Eva Hornecker, Fiona Nah, Gill Clough, Godwin Egbeyi, Harry Brignull,
Janet van der Linden, Jeff Rick, Jennifer Ferreira, Jennifer Golbeck, Jeremy Mayes, Joh Hunt,
Johannes Schöning, Jon Bird, Jonathan Lazar, Judith Segal, Julia Galliers, Kent
Norman, Laura Plonka, Leeann Brumby, Leon Reicherts, Mark Woodroffe, Michael Wood,
Nadia Pantidi, Nick Dalton, Nicolai Marquardt, Paul Cairns, Paul Marshall, Philip “Fei”
Wu, Rachael Bradley, Rafael Cronin, Richard Morris, Richie Hazlewood, Rob Jacob, Rose
Johnson, Stefan Kreitmayer, Steve Hodges, Stephanie Wilson, Tamara Clegg, Tammy Toscos,
Tina Fuchs, Tom Hume, Tom Ventsias, Toni Robertson, and Youn-Kyung Lim.

In addition, we wish to thank the many students, instructors, researchers, and practitioners who have contacted us over the years with stimulating comments, positive feedback, and provocative questions.

We are particularly grateful to Vikram Mehta, Nadia Pantidi, and Mara Balestrini for
filming, editing, and compiling a series of on-the-spot “talking heads” videos, where they
posed probing questions to the diverse set of attendees at CHI’11, CHI’14, and CHI’18,
including a variety of CHI members from across the globe. The questions included asking
about the future of interaction design and whether HCI has gone too wild. There are about
75 of these videos, which can be viewed on our website at www.id-book.com. We are also
indebted to danah boyd, Harry Brignull, Leah Buechley, Albrecht Schmidt, Ellen Gottesdie-
ner, and Jon Froehlich for generously contributing in-depth, text-based interviews in the
book. We would like to thank Rien Sach, who has been our webmaster for several years, and Deb Yuill, who did a thoughtful and thorough job of editing the old reference list.

Danelle Bailey and Jill Reed provided thoughtful critiques and suggestions on all the
chapters in the fifth edition, and we thank them.

Finally, we would like to thank our editor and the production team at Wiley who have
been very supportive and encouraging throughout the process of developing this fifth edition:
Jim Minatel, Pete Gaughan, Gary Schwartz, and Barath Kumar Rajasekaran.


Contents

What’s Inside?

1 WHAT IS INTERACTION DESIGN?
1.1 Introduction
1.2 Good and Poor Design
1.3 What Is Interaction Design?
1.4 The User Experience
1.5 Understanding Users
1.6 Accessibility and Inclusiveness
1.7 Usability and User Experience Goals
Interview with Harry Brignull

2 THE PROCESS OF INTERACTION DESIGN
2.1 Introduction
2.2 What Is Involved in Interaction Design?
2.3 Some Practical Issues

3 CONCEPTUALIZING INTERACTION
3.1 Introduction
3.2 Conceptualizing Interaction
3.3 Conceptual Models
3.4 Interface Metaphors
3.5 Interaction Types
3.6 Paradigms, Visions, Theories, Models, and Frameworks
Interview with Albrecht Schmidt

4 COGNITIVE ASPECTS
4.1 Introduction
4.2 What Is Cognition?
4.3 Cognitive Frameworks

5 SOCIAL INTERACTION
5.1 Introduction
5.2 Being Social
5.3 Face-to-Face Conversations
5.4 Remote Conversations
5.5 Co-presence
5.6 Social Engagement

6 EMOTIONAL INTERACTION
6.1 Introduction
6.2 Emotions and the User Experience
6.3 Expressive Interfaces and Emotional Design
6.4 Annoying Interfaces
6.5 Affective Computing and Emotional AI
6.6 Persuasive Technologies and Behavioral Change
6.7 Anthropomorphism

7 INTERFACES
7.1 Introduction
7.2 Interface Types
7.3 Natural User Interfaces and Beyond
7.4 Which Interface?
Interview with Leah Buechley

8 DATA GATHERING
8.1 Introduction
8.2 Five Key Issues
8.3 Data Recording
8.4 Interviews
8.5 Questionnaires
8.6 Observation
8.7 Choosing and Combining Techniques

9 DATA ANALYSIS, INTERPRETATION, AND PRESENTATION
9.1 Introduction
9.2 Quantitative and Qualitative
9.3 Basic Quantitative Analysis
9.4 Basic Qualitative Analysis
9.5 Which Kind of Analytic Framework to Use?
9.6 Tools to Support Data Analysis
9.7 Interpreting and Presenting the Findings

10 DATA AT SCALE
10.1 Introduction
10.2 Approaches to Collecting and Analyzing Data
10.3 Visualizing and Exploring Data
10.4 Ethical Design Concerns

11 DISCOVERING REQUIREMENTS
11.1 Introduction
11.2 What, How, and Why?
11.3 What Are Requirements?
11.4 Data Gathering for Requirements
11.5 Bringing Requirements to Life: Personas and Scenarios
11.6 Capturing Interaction with Use Cases
Interview with Ellen Gottesdiener

12 DESIGN, PROTOTYPING, AND CONSTRUCTION
12.1 Introduction
12.2 Prototyping
12.3 Conceptual Design
12.4 Concrete Design
12.5 Generating Prototypes
12.6 Construction
Interview with Jon Froehlich

13 INTERACTION DESIGN IN PRACTICE
13.1 Introduction
13.2 AgileUX
13.3 Design Patterns
13.4 Open Source Resources
13.5 Tools for Interaction Design

14 INTRODUCING EVALUATION
14.1 Introduction
14.2 The Why, What, Where, and When of Evaluation
14.3 Types of Evaluation
14.4 Evaluation Case Studies
14.5 What Did We Learn from the Case Studies?
14.6 Other Issues to Consider When Doing Evaluation

15 EVALUATION STUDIES: FROM CONTROLLED TO NATURAL SETTINGS
15.1 Introduction
15.2 Usability Testing
15.3 Conducting Experiments
15.4 Field Studies
Interview with danah boyd

16 EVALUATION: INSPECTIONS, ANALYTICS, AND MODELS
16.1 Introduction
16.2 Inspections: Heuristic Evaluation and Walk-Throughs
16.3 Analytics and A/B Testing
16.4 Predictive Models

References

Index

What’s Inside?

Welcome to the fifth edition of Interaction Design: beyond human-computer interaction and
our interactive website at www.id-book.com. Building on the success of the previous edi-
tions, we have substantially updated and streamlined the material in this book to provide a
comprehensive introduction to the fast-growing and multi-disciplinary field of interaction
design. Rather than let the book expand, however, we have again made a conscious effort to
keep it at the same size.

Our textbook is aimed at both professionals who want to find out more about inter-
action design and students from a range of backgrounds studying introductory classes in
human-computer interaction, interaction design, information and communications technol-
ogy, web design, software engineering, digital media, information systems, and information
studies. It will appeal to practitioners, designers, and researchers who want to discover what
is new in the field or to learn about a specific design approach, method, interface, or topic.
It is also written to appeal to a general audience interested in design and technology.

It is called Interaction Design: beyond human-computer interaction because interaction
design has traditionally been concerned with a broader scope of issues, topics, and methods than
was originally the scope of human-computer interaction (HCI)—although nowadays, the two
increasingly overlap in scope and coverage of topics. We define interaction design as follows:

Designing interactive products to support the way people communicate and interact in
their everyday and working lives.

Interaction design requires an understanding of the capabilities and desires of people
and the kinds of technology that are available. Interaction designers use this knowledge to
discover requirements and develop and manage them to produce a design. Our textbook pro-
vides an introduction to all of these areas. It teaches practical techniques to support develop-
ment as well as discussing possible technologies and design alternatives.

The number of different types of interfaces and applications available to today’s interaction designers continues to increase steadily, so our textbook, likewise, has been expanded to cover these new technologies. For example, we discuss and provide examples of brain, smart, robotic, wearable, shareable, augmented reality, and multimodal interfaces, as well as more traditional desktop, multimedia, and web-based interfaces. Interaction design in practice is changing fast, so we cover a range of processes, issues, and examples throughout the book.

The book has 16 chapters, and it includes discussion of the different design approaches
in common use; how cognitive, social, and affective issues apply to interaction design; and
how to gather, analyze, and present data for interaction design. A central theme is that design
and evaluation are interwoven, highly iterative processes, with some roots in theory but that
rely strongly on good practice to create usable products. The book has a hands-on orienta-
tion and explains how to carry out a variety of techniques used to design and evaluate the
wide range of new applications coming onto the market. It has a strong pedagogical design
and includes many activities (with detailed comments) and more complex activities that can
form the basis for student projects. There are also “Dilemmas,” which encourage readers to
weigh the pros and cons of controversial issues.


The style of writing throughout the book is intended to be accessible to a range of read-
ers. It is largely conversational in nature and includes anecdotes, cartoons, and case studies.
Many of the examples are intended to relate to readers’ own experiences. The book and the
associated website are also intended to encourage readers to be active when reading and to
think about seminal issues. The goal is for readers to understand that much of interaction
design needs consideration of the issues, and that they need to learn to weigh the pros and
cons and be prepared to make trade-offs. There is rarely a right or wrong answer, although
there is a world of difference between a good design and a poor design.

This book is accompanied by a website (www.id-book.com), which provides a variety of
resources, including slides for each chapter, comments on chapter activities, and a number
of in-depth case studies written by researchers and designers. There are video interviews
with a wide range of experts from the field, including professional interaction designers and
university professors. Pointers to respected blogs, online tutorials, YouTube videos, and other
useful materials are also provided.

Tasters

We address topics and questions about the what, why, and how of interaction design. These include the following:
• Why some interfaces are good and others are poor
• Whether people can really multitask
• How technology is transforming the way people communicate with one another
• What users’ needs are and how we can design for them
• How interfaces can be designed to change people’s behavior
• How to choose between the many different kinds of interactions that are now available (for example, talking, touching, and wearing)
• What it means to design accessible and inclusive interfaces
• The pros and cons of carrying out studies in the lab versus in the field and in the wild
• When to use qualitative and quantitative methods
• How to construct informed consent forms
• How the type of interview questions posed affects the conclusions that can be drawn from the answers given
• How to move from a set of scenarios and personas to initial low-fidelity prototypes
• How to visualize the results of data analysis effectively
• How to collect, analyze, and interpret data at scale
• Why what people say can be different from what they do
• The ethics of monitoring and recording people’s activities
• What AgileUX and Lean UX are and how they relate to interaction design
• How AgileUX can be practically integrated with interaction design throughout different stages of the design process


Changes from Previous Editions

To reflect the dynamic nature of the field, the fifth edition has been thoroughly updated, and
new examples, images, case studies, dilemmas, and so on, have been included to illustrate the
changes. Included in this edition is a new chapter called “Data at Scale.” Collecting data has
never been easier. However, knowing what to do with it when designing new user experiences
is much more difficult. The chapter introduces key methods for collecting data at scale, dis-
cusses how to transform data at scale to be meaningful, and reviews a number of methods for
visualizing and exploring data at scale while introducing fundamental design principles
for making data at scale ethical. The chapter is positioned just after the two chapters on data gathering and data analysis, which cover the fundamental methods.

In this edition, the chapter on the Process of Interaction Design has been relocated to Chapter 2 in order to better frame the discussion of interaction design. It has been updated
with new process models and modified to fit its new location in the book structure. This means
that the other chapters have been renumbered to accommodate this and the new chapter.

Chapter 13, “Interaction Design in Practice,” has been updated to reflect recent devel-
opments in the use of practical UX methods. Old examples and methods no longer used in
the field have been removed to make way for the new material. Some chapters have been
completely rewritten, while others have been extensively revised. For example, Chapters 4, 5,
and 6 have been substantially updated to reflect new developments in social media and emo-
tional interaction, while also covering the new interaction design issues they raise, such as
privacy and addiction. Many examples of new interfaces and technologies have been added
to Chapter 7. Chapter 8 and Chapter 9 on data gathering and analysis have also been sub-
stantially updated. New case studies and examples have been added to Chapters 14–16 to
illustrate how evaluation methods have changed for use with the continuously evolving tech-
nology that is being developed for today’s users. The interviews accompanying the chapters have been updated, and two new ones with leading figures involved in innovative research, state-of-the-art design, and contemporary practice have been added.

We have decided to continue to provide both a print-based version of the book and an
e-book. Both are in full color. The e-book supports note sharing, annotating, contextualized
navigating, powerful search features, inserted videos, links, and quizzes.

Chapter 1

WHAT IS INTERACTION DESIGN?

1.1 Introduction
1.2 Good and Poor Design
1.3 What Is Interaction Design?
1.4 The User Experience
1.5 Understanding Users
1.6 Accessibility and Inclusiveness
1.7 Usability and User Experience Goals

Objectives
The main goals of this chapter are to accomplish the following:

• Explain the difference between good and poor interaction design.
• Describe what interaction design is and how it relates to human-computer interaction and other fields.
• Explain the relationship between the user experience and usability.
• Introduce what is meant by accessibility and inclusiveness in relation to human-computer interaction.
• Describe what and who is involved in the process of interaction design.
• Outline the different forms of guidance used in interaction design.
• Enable you to evaluate an interactive product and explain what is good and bad about it in terms of the goals and core principles of interaction design.

1.1 Introduction

How many interactive products are there in everyday use? Think for a minute about what you
use in a typical day: a smartphone, tablet, computer, laptop, remote control, coffee machine,
ticket machine, printer, GPS, smoothie maker, e-reader, smart TV, alarm clock, electric tooth-
brush, watch, radio, bathroom scales, fitness tracker, game console . . . the list is endless. Now
think for a minute about how usable they are. How many are actually easy, effortless, and
enjoyable to use? Some, like the iPad, are a joy to use, where tapping an app and flicking
through photos is simple, smooth, and enjoyable. Others, like working out how to buy the
cheapest train ticket from a ticket machine that does not recognize your credit card after
completing a number of steps and then makes you start again from scratch, can be very frus-
trating. Why is there a difference?

Many products that require users to interact with them, such as smartphones and fitness trackers, have been designed primarily with the user in mind. They are generally easy and enjoyable to use. Others have not necessarily been designed with their users in mind; rather, they have been engineered primarily as software systems to perform set functions. An example is setting the time on a stove that requires a combination of button presses, where it is not obvious which buttons to press together and which separately. While such products may work effectively, this can be at the expense of how easily they can be learned and therefore used in a real-world context.

Alan Cooper (2018), a well-known user experience (UX) guru, bemoans the fact that
much of today’s software suffers from the same interaction errors that were around 20 years
ago. Why is this still the case, given that interaction design has been in existence for more
than 25 years and that there are far more UX designers now in industry than ever before?
He points out how many interfaces of new products do not adhere to the interaction design
principles validated in the 1990s. For example, he notes that many apps do not follow even
the most basic of UX principles, such as offering an “undo” option. He exclaims that it is
“inexplicable and unforgivable that these violations continue to resurface in new prod-
ucts today.”

How can we rectify this situation so that the norm is that all new products are designed
to provide good user experiences? To achieve this, we need to be able to understand how
to reduce the negative aspects (such as frustration and annoyance) of the user experience
while enhancing the positive ones (for example, enjoyment and efficacy). This entails devel-
oping interactive products that are easy, effective, and pleasurable to use from the users’
perspective.

In this chapter, we begin by examining the basics of interaction design. We look at the
difference between good and poor design, highlighting how products can differ radically in
how usable and enjoyable they are. We then describe what and who is involved in the process
of interaction design. The user experience, which is a central concern of interaction design,
is then introduced. Finally, we outline how to characterize the user experience in terms of
usability goals, user experience goals, and design principles. An in-depth activity is presented
at the end of the chapter in which you have the opportunity to put into practice what you
have read by evaluating the design of an interactive product.

1.2 Good and Poor Design

A central concern of interaction design is to develop interactive products that are usable. By
this we mean products that are generally easy to learn, effective to use, and provide an enjoy-
able user experience. A good place to start thinking about how to design usable interactive
products is to compare examples of well-designed and poorly designed ones. Through identi-
fying the specific weaknesses and strengths of different interactive products, we can begin to


understand what it means for something to be usable or not. Here, we describe two examples
of poorly designed products that have persisted over the years—a voice-mail system used in
hotels and the ubiquitous remote control—and contrast these with two well-designed products that perform the same functions.

1.2.1 Voice-Mail System
Imagine the following scenario. You are staying at a hotel for a week while on a business
trip. You see a blinking red light on the landline phone beside the bed. You are not sure what
this means, so you pick up the handset. You listen to the tone and it goes “beep, beep, beep.”
Maybe this means that there is a message for you. To find out how to access the message,
you have to read a set of instructions next to the phone. You read and follow the first step:

1. Touch 41.
The system responds: “You have reached the Sunny Hotel voice message center. Please
enter the room number for which you would like to leave a message.”
You wait to hear how to listen to a recorded message. But there are no further instruc-
tions from the phone. You look down at the instruction sheet again and read:
2. Touch*, your room number, and #.
You do so and the system replies: “You have reached the mailbox for room 106. To leave
a message, type in your password.”

You type in the room number again, and the system replies: “Please enter room number
again and then your password.”

You don’t know what your password is. You thought it was the same as your room num-
ber, but clearly it is not. At this point, you give up and call the front desk for help. The person at
the desk explains the correct procedure for listening to messages. This involves typing in,
at the appropriate times, the room number and the extension number of the phone (the latter is
the password, which is different from the room number). Moreover, it takes six steps to access
a message. You give up.

What is problematic with this voice-mail system?

• It is infuriating.
• It is confusing.
• It is inefficient, requiring you to carry out a number of steps for basic tasks.
• It is difficult to use.
• It has no means of letting you know at a glance whether any messages have been left or
how many there are. You have to pick up the handset to find out and then go through a
series of steps to listen to them.

• It is not obvious what to do: The instructions are provided partially by the system and
partially by a card beside the phone.

Now compare it to the phone answering machine shown in Figure 1.1. The illustration
shows a small sketch of a phone answering machine. Incoming messages are represented
using marbles. The number of marbles that have moved into the pinball-like chute indicates
the number of messages. Placing one of these marbles into a dent on the machine causes the
recorded message to play. Dropping the same marble into a different dent on the phone dials
the caller who left the message.


How does the marble answering machine differ from the voice-mail system?

• It uses familiar physical objects that indicate visually at a glance how many messages have
been left.

• It is aesthetically pleasing and enjoyable to use.
• It requires only one-step actions to perform core tasks.
• It is a simple but elegant design.
• It offers less functionality and allows anyone to listen to any of the messages.

The marble answering machine is considered a design classic. It was created by Durrell
Bishop while he was a student at the Royal College of Art in London (described by Crampton
Smith, 1995). One of his goals was to design a messaging system that represented its basic
functionality in terms of the behavior of everyday objects. To do this, he capitalized on people’s
everyday knowledge of how the physical world works. In particular, he made use of the ubiq-
uitous everyday action of picking up a physical object and putting it down in another place.
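
As a purely illustrative exercise (all class and method names below are hypothetical, not from the book), the mapping that Bishop exploited can be sketched in code: each marble stands for one message, and what the machine does depends only on where the marble is placed:

# Toy model of the marble answering machine's conceptual mapping.
# One marble per message; placing a marble in a dent triggers one action.

class Message:
    def __init__(self, audio, caller_number):
        self.audio = audio
        self.caller_number = caller_number

class MarbleMachine:
    def __init__(self):
        # The chute of marbles makes the number of messages visible at a glance.
        self.chute = []

    def record(self, message):
        self.chute.append(message)  # a new marble rolls into the chute

    def place_in_play_dent(self, index):
        print("Playing:", self.chute[index].audio)

    def place_in_dial_dent(self, index):
        print("Dialing:", self.chute[index].caller_number)

The one-step, one-object mapping is the point: no menus, passwords, or instruction cards stand between the user and the message.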

This is an example of an interactive product designed with the users in mind. The focus is
on providing them with a pleasurable experience, but one that also makes the activity of receiving messages efficient. However, it is important to note that although the marble answering
machine is an elegant and usable design, it would not be practical in a hotel setting. One
of the main reasons is that it is not robust enough to be used in public places; for instance,
the marbles could easily get lost or be taken as souvenirs. Also, the need to identify the user
before allowing the messages to be played is essential in a hotel setting.

Therefore, when considering the design of an interactive product, it is important to con-
sider where it is going to be used and who is going to use it. The marble answering machine
would be more suitable in a home setting—provided that there were no children around who
might be tempted to play with the marbles!

Video Durrell Bishop’s answering machine: http://vimeo.com/19930744.

Figure 1.1 The marble answering machine
Source: Adapted from Crampton Smith (1995)


1.2.2 Remote Control
Every home entertainment system, be it the smart TV, set-top box, stereo system, and so
forth, comes with its own remote control. Each one is different in terms of how it looks and
works. Many have been designed with a dizzying array of small, multicolored, and double-
labeled buttons (one on the button and one above or below it) that often seem arbitrarily
positioned in relation to one another. Many viewers, especially when sitting in their living
rooms, find it difficult to locate the right ones, even for the simplest of tasks, such as pausing
or finding the main menu. It can be especially frustrating for those who need to put on their
reading glasses each time to read the buttons. The remote control appears to have been put
together very much as an afterthought.

In contrast, much effort and thought went into the design of the classic TiVo remote con-
trol with the user in mind (see Figure 1.2). TiVo is a digital video recorder that was originally
developed to enable the viewer to record TV shows. The remote control was designed with
large buttons that were clearly labeled and logically arranged, making them easy to locate
and use in conjunction with the menu interface that appeared on the TV screen. In terms of
its physical form, the remote device was designed to fit into the palm of a hand, with a peanut shape. It also has a playful look and feel: distinctive, colorful buttons and cartoon icons that are easy to identify.

Figure 1.2 The TiVo remote control
Source: https://business.tivo.com/


How was it possible to create such a usable and appealing remote device where so many
others have failed? The answer is simple: TiVo invested the time and effort to follow a user-
centered design process. Specifically, TiVo’s director of product design at the time involved
potential users in the design process, getting their feedback on everything from the feel of the
device in the hand to where best to place the batteries, making them easy to replace but not
prone to falling out. He and his design team also resisted the trap of “buttonitis” to which so
many other remote controls have fallen victim; that is, one where buttons breed like rabbits—
a button for every new function. They did this by restricting the number of control buttons
embedded in the device to the essential ones. Other functions were then represented as part of
the menu options and dialog boxes displayed on the TV screen, which could then be selected
via the core set of physical control buttons. The result was a highly usable and pleasing device
that has received much praise and numerous design awards.

DILEMMA
What Is the Best Way to Interact with a Smart TV?

A challenge facing smart TV providers is how to enable users to interact with online
content. Viewers can select a whole range of content via their TV screens, but it involves
scrolling through lots of menus and screens. In many ways, the TV interface has become
more like a computer interface. This raises the question of whether the remote control is
the best input device to use for someone who sits on a sofa or chair that is some distance
from the wide TV screen. Smart TV developers have addressed this challenge in a num-
ber of ways.

An early approach was to provide an on-screen keyboard and numeric keypad that pre-
sented a grid of alphanumeric characters (see Figure 1.3a), which were selected by pressing
a button repeatedly on a remote control. However, entering the name of a movie or an email
address and password using this method can be painstakingly slow; it is also easy to overshoot
and select the wrong letter or number when holding a button down on the remote to reach a
target character.

More recent remote controls, such as those provided by Apple TV, incorporate a
touchpad to enable swiping akin to the control commonly found on laptops. While this
form of touch control expedites skipping through a set of letters displayed on a TV screen,
it does not make it any easier to type in an email address and password. Each letter,
number, or special character still has to be selected. Swiping is also prone to overshoot-
ing when aiming for a target letter, number, or character. Instead of providing a grid, the
Apple TV interface displays two single lines of letters, numbers, and special characters
to swipe across (see Figure 1.3b). While this can make it quicker for someone to reach a
character, it is still tedious to select a sequence of characters in this way. For example, if
you select a Y and the next letter is an A, you have to swipe all the way back to the begin-
ning of the alphabet.


1.2.1 What to Design
Designing interactive products requires considering who is going to be using them, how
they are going to be used, and where they are going to be used. Another key concern is
to understand the kind of activities people are doing when interacting with these prod-
ucts. The appropriateness of different kinds of interfaces and arrangements of input and
output devices depends on what kinds of activities are to be supported. For example,
if the activity is to enable people to bank online, then an interface that is secure, trust-
worthy, and easy to navigate is essential. In addition, an interface that allows the user to
find out information about new services offered by the bank without it being intrusive
would be useful.

Might there be a better way to interact with a smart TV while sitting on the sofa? An
alternative is to use voice control. Remote controls, such as Apple TV's Siri Remote or TiVo's
voice remote, have a speech button that, when pressed, allows viewers to ask for movies by
name or, more generally, by category, for instance, "What are the best sci-fi movies on Netflix?"
Smart speakers, such as Amazon Echo, can also be linked to a smart TV (for example, via a
streaming device plugged into an HDMI port), and, similarly, the user can ask for something
general or more specific, for example, "Alexa, play Big Bang Theory, Season 6, Episode 5, on
the TV." On recognizing the command, the system will switch on the TV, select the right HDMI
input, open Netflix, and begin streaming the specific episode. Some TV content, however,
requires the viewer to confirm that they are over a certain age by checking a box on the TV
display. If the TV could ask the viewer and verify that they are over 18, that would be really
smart! Also, if the TV needs the viewer to provide a password to access on-demand content,
they won't want to say it out loud, character by character, especially in front of others who
might also be in the room with them. The use of biometrics, then, may be the answer.
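To make the recognition step concrete, the following minimal Python sketch shows one way a recognized utterance could be mapped to a structured playback intent before the TV is switched on and the app is opened. The pattern and the function name parse_play_command are illustrative only, not any vendor's actual API.

import re
from typing import Optional

# Matches utterances like "play <title>, Season 6, Episode 5"
PLAY_PATTERN = re.compile(
    r"play (?P<title>.+?), season (?P<season>\d+), episode (?P<episode>\d+)",
    re.IGNORECASE,
)

def parse_play_command(utterance: str) -> Optional[dict]:
    """Map a recognized 'play' utterance to a structured intent, or None."""
    match = PLAY_PATTERN.search(utterance)
    if match is None:
        return None
    return {
        "action": "play",
        "title": match.group("title"),
        "season": int(match.group("season")),
        "episode": int(match.group("episode")),
    }

print(parse_play_command("Alexa, play Big Bang Theory, Season 6, Episode 5, on the TV"))
# {'action': 'play', 'title': 'Big Bang Theory', 'season': 6, 'episode': 5}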

Figure 1.3 Typing on a TV screen (a) by selecting letters and numbers from a square matrix
and (b) by swiping along a single line of letters and numbers
Source: (b) https://support.apple.com/en-us/HT200107

1.2.1 What to Design
Designing interactive products requires considering who is going to be using them, how
they are going to be used, and where they are going to be used. Another key concern is
to understand the kind of activities people are doing when interacting with these products.
The appropriateness of different kinds of interfaces and arrangements of input and
output devices depends on what kinds of activities are to be supported. For example,
if the activity is to enable people to bank online, then an interface that is secure, trustworthy,
and easy to navigate is essential. In addition, an interface that allows the user to
find out information about new services offered by the bank without being intrusive
would be useful.


The world is becoming suffused with technologies that support increasingly diverse
activities. Just think for a minute about what you can currently do using digital technology:
send messages, gather information, write essays, control power plants, program, draw, plan,
calculate, monitor others, and play games—to name but a few. Now think about the
types of interfaces and interactive devices that are available. They too are equally diverse:
multitouch displays, speech-based systems, handheld devices, wearables, and large interactive
displays—again, to name but a few. There are also many ways of designing how users can
interact with a system, for instance, via the use of menus, commands, forms, icons, gestures,
and so on. Furthermore, ever more innovative everyday artifacts are being created using
novel materials, such as e-textiles and wearables (see Figure 1.4).

The Internet of Things (IoT) now means that many products and sensors can be con-
nected to each other via the Internet, which enables them to talk to each other. Popular
household IoT-enabled products include smart heating and lighting and home security sys-
tems where users can change the controls from an app on their phone or check out who is
knocking on their door via a doorbell webcam. Other apps aim to make everyday life
easier, such as helping drivers find a parking space in busy areas.
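As a sketch of what such remote control involves, the following Python fragment sends a temperature command to a cloud endpoint. The URL, device ID, and token are hypothetical; real smart-home platforms each define their own APIs.

import json
import urllib.request

def set_target_temperature(device_id: str, celsius: float, token: str) -> None:
    """Send a temperature command to a (hypothetical) cloud IoT endpoint."""
    url = f"https://api.example-iot.com/v1/devices/{device_id}/commands"
    body = json.dumps({"command": "set_temperature", "celsius": celsius}).encode()
    request = urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    # The phone app fires the request; the cloud relays it to the device.
    with urllib.request.urlopen(request) as response:
        print("Device replied:", response.status)

# Example call (commented out because the endpoint is invented):
# set_target_temperature("living-room-thermostat", 21.5, token="...")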

The interfaces for everyday consumer items, such as cameras, microwave ovens, toasters,
and washing machines, which used to be physical and the realm of product design, are now
predominantly digital (the realm of consumer electronics), requiring interaction design. The
move toward transforming human-human transactions into solely interface-based ones has
also introduced a new kind of customer interaction. Self-checkouts, where customers scan
and pay for their own goods or books, are now the norm at grocery stores and libraries, as
are airport kiosks where passengers check in their own luggage. While more cost-effective
and efficient, these systems are impersonal and put the onus on the person to interact with
them. Furthermore, accidentally pressing the wrong button or standing in the wrong place at
a self-service checkout can result in a frustrating, and sometimes mortifying, experience.

Figure 1.4 Turn signal biking jacket using e-textiles developed by Leah Buechley
Source: Used courtesy of Leah Buechley


What this all amounts to is a multitude of choices and decisions that interaction design-
ers have to make for an ever-increasing range of products. A key question for interaction
design is this: “How do you optimize the users’ interactions with a system, environment, or
product so that they support the users' activities in effective, useful, usable, and pleasurable
ways?” One could use intuition and hope for the best. Alternatively, one can be more prin-
cipled in deciding which choices to make by basing them on an understanding of the users.
This involves the following:

• Considering what people are good and bad at
• Considering what might help people with the way they currently do things
• Thinking through what might provide quality user experiences
• Listening to what people want and getting them involved in the design
• Using user-centered techniques during the design process

The aim of this book is to cover these aspects with the goal of showing you how to carry
out interaction design. In particular, it focuses on how to identify users’ needs and the context
of their activities. From this understanding, we move on to consider how to design usable,
useful, and pleasurable interactive products.

1.3 What Is Interaction Design?

By interaction design, we mean the following:

Designing interactive products to support the way people communicate and interact in their
everyday and working lives

Put another way, it is about creating user experiences that enhance and augment the
way people work, communicate, and interact. More generally, Terry Winograd originally
described it as “designing spaces for human communication and interaction” (1997, p. 160).
John Thackara viewed it as “the why as well as the how of our daily interactions using com-
puters” (2001, p. 50), while Dan Saffer emphasized its artistic aspects: “the art of facilitating
interactions between humans through products and services” (2010, p. 4).

A number of terms have been used since to emphasize different aspects of what is being
designed, including user interface design (UI), software design, user-centered design, product
design, web design, user experience design, and interactive system design. Interaction design
is generally used as the overarching term to describe the field, including its methods, theories,
and approaches. UX is used more widely in industry to refer to the profession. In practice,
however, the terms are often used interchangeably; which one an organization adopts also
depends on its ethos and brand.

1.3.1 The Components of Interaction Design
We view interaction design as fundamental to many disciplines, fields, and approaches that
are concerned with researching and designing computer-based systems for people. Figure 1.5
presents the core ones along with interdisciplinary fields that comprise one or more of these,
such as cognitive ergonomics. It can be confusing to try to work out the differences between
them as many overlap. The main differences between interaction design and the other
approaches referred to in the figure come largely down to which methods, philosophies, and
lenses they use to study, analyze, and design products. Another way they vary is in terms of


the scope and problems they address. For example, information systems is concerned with the
application of computing technology in domains such as business, health, and education,
whereas ubiquitous computing is concerned with the design, development, and deployment
of pervasive computing technologies (for example, IoT) and how they facilitate social inter-
actions and human experiences.

BOX 1.1
Is Interaction Design Beyond HCI?

We see the main difference between interaction design (ID) and human-computer interaction
(HCI) as one of scope. Historically, HCI had a narrow focus on the design and usability of
computing systems, while ID was seen as being broader, concerned with the theory, research,
and practice of designing user experiences for all manner of technologies, systems, and prod-
ucts. That is one of the reasons why we chose to call our book Interaction Design: beyond
human-computer interaction, to reflect this wider range. However, nowadays, HCI has greatly
expanded in its scope (Churchill et al., 2013), so much so that it overlaps much more with ID
(see Figure 1.6).

Figure 1.6 HCI out of the box: broadening its reach to cover more areas

[Figure 1.5 shows interaction design at the center of three clusters: academic disciplines
(ergonomics, psychology/cognitive science, informatics, design, engineering, computer
science/software engineering, and social sciences such as sociology and anthropology);
design practices (graphic design, product design, artist-design, industrial design, and the
film industry); and interdisciplinary overlapping fields (ubiquitous computing, human
factors, cognitive engineering, human-computer interaction, cognitive ergonomics,
information systems, and computer-supported cooperative work).]

Figure 1.5 Relationship among contributing academic disciplines, design practices, and interdisci-
plinary fields concerned with interaction design (double-headed arrows mean overlapping)


1.3.2 Who Is Involved in Interaction Design?
Figure 1.5 also shows that many people are involved in performing interaction design, rang-
ing from social scientists to movie-makers. This is not surprising given that technology has
become such a pervasive part of our lives. But it can all seem rather bewildering to the
onlooker. How does the mix of players work together?

Designers need to know many different things about users, technologies, and the interac-
tions among them to create effective user experiences. At the least, they need to understand
how people act and react to events and how they communicate and interact with each other.
To be able to create engaging user experiences, they also need to understand how emotions
work, what is meant by aesthetics and desirability, and the role of narrative in human
experience. They also need to understand the business, technical, manufacturing, and
marketing sides of product development. Clearly, it is difficult for one person to be well versed in all of these diverse
areas and also know how to apply the different forms of knowledge to the process of interac-
tion design.

Interaction design is ideally carried out by multidisciplinary teams, where the skill sets
of engineers, designers, programmers, psychologists, anthropologists, sociologists, marketing
people, artists, toy makers, product managers, and others are drawn upon. It is rarely the case,
however, that a design team would have all of these professionals working together. Who to
include in a team will depend on a number of factors, including a company’s design philoso-
phy, size, purpose, and product line.

One of the benefits of bringing together people with different backgrounds and training
is the potential of many more ideas being generated, new methods developed, and more crea-
tive and original designs being produced. However, the downside is the costs involved. The
more people there are with different backgrounds in a design team, the more difficult it can
be to communicate and make progress with the designs being generated. Why? People with
different backgrounds have different perspectives and ways of seeing and talking about the
world. What one person values as important others may not even see (Kim, 1990). Similarly,
a computer scientist’s understanding of the term representation is often very different from
that of a graphic designer or psychologist.

What this means in practice is that confusion, misunderstanding, and communication
breakdowns can surface in a team. The various team members may have different ways
of talking about design and may use the same terms to mean quite different things. Other
problems can arise when a group of people who have not previously worked as a team are
thrown together. For example, Aruna Balakrishnan et al. (2011) found that integration across
different disciplines and expertise is difficult in many projects, especially when it comes to
agreeing on and sharing tasks. The more disparate the team members—in terms of culture,
background, and organizational structures—the more complex this is likely to be.

ACTIVITY 1.1
In practice, the makeup of a given design team depends on the kind of interactive product
being built. Who do you think should be involved in developing
• A public kiosk providing information about the exhibits available in a science museum?
• An interactive educational website to accompany a TV series?

Comment
Ideally, each team will have a number of different people with different skill sets. For example,
the first interactive product would include the following individuals:
• Graphic and interaction designers, museum curators, educational advisers, software engi-
neers, software designers, and ergonomists

The second project would include these types of individuals:
• TV producers, graphic and interaction designers, teachers, video experts, software engi-
neers, and software designers

In addition, as both systems are being developed for use by the general public, representa-
tive users, such as school children and parents, should be involved.

In practice, design teams often end up being quite large, especially if they are working on
a big project to meet a fixed deadline. For example, it is common to find teams of 15 or more
people working on a new product like a health app. This means that a number of people from
each area of expertise are likely to be working as part of the project team.


1.3.3 Interaction Design Consultancies
Interaction design is now widespread in product and service development. In particular,
web consultancies and the computing industries have realized the pivotal role it plays in
successful interactive products. But it is not just IT companies that are realizing the benefits
of having UXers on board. Financial services, retail, government, and the public sector have
also realized the value of interaction design. The presence or absence of good interaction design can
make or break a company. Getting noticed in the highly competitive field of web products
requires standing out. Being able to demonstrate that your product is easy, effective, and
engaging to use is seen as central to this. Marketing departments are also realizing how
branding, the number of hits, the customer return rate, and customer satisfaction are greatly
affected by the usability of a website.

There are many interaction design consultancies now. These include established companies,
such as Cooper, Nielsen Norman Group, and IDEO, and more recent ones that specialize
in a particular area, such as job board software (for example, Madgex), digital media (Cogapp),
or mobile design (CXpartners). Smaller consultancies, such as Bunnyfoot and Dovetailed,
promote diversity, interdisciplinarity, and scientific user research, having psychologists,
researchers, interaction designers, and usability and customer experience specialists on board.

Many UX consultancies have impressive websites, providing case studies, tools, and
blogs. For example, Holition publishes an annual glossy booklet as part of its UX Series
(Javornik et al., 2017) to disseminate the outcomes of their in-house research to the wider
community, with a focus on the implications for commercial and cultural aspects. This shar-
ing of UX knowledge enables them to contribute to the discussion about the role of technol-
ogy in the user experience.

1.4 The User Experience

The user experience refers to how a product behaves and is used by people in the real world.
Jakob Nielsen and Don Norman (2014) define it as encompassing “all aspects of the end-
user’s interaction with the company, its services, and its products.” As stressed by Jesse Gar-
rett (2010, p. 10), “Every product that is used by someone has a user experience: newspapers,
ketchup bottles, reclining armchairs, cardigan sweaters.” More specifically, it is about how
people feel about a product and their pleasure and satisfaction when using it, looking at it,
holding it, and opening or closing it. It includes their overall impression of how good it is
to use, right down to the sensual effect small details have on them, such as how smoothly a
switch rotates or the sound of a click and the touch of a button when pressing it. An impor-
tant aspect is the quality of the experience someone has, be it a quick one, such as taking a
photo; a leisurely one, such as playing with an interactive toy; or an integrated one, such as
visiting a museum (Law et al., 2009).

It is important to point out that one cannot design a user experience, only design for a
user experience. In particular, one cannot design a sensual experience, but only create the
design features that can evoke it. For example, the outside case of a smartphone can be
designed to be smooth, silky, and fit in the palm of a hand; when held, touched, looked at,
and interacted with, that can provoke a sensual and satisfying user experience. Conversely, if
it is designed to be heavy and awkward to hold, it is much more likely to end up providing a
poor user experience—one that is uncomfortable and unpleasant.


Designers sometimes refer to UX as UXD. The addition of the D to UX is meant to
encourage design thinking that focuses on the quality of the user experience rather than
on the set of design methods to use (Allanwood and Beare, 2014). As Don Norman (2004)
has stressed for many years, “It is not enough that we build products that function, that are
understandable and usable, we also need to build joy and excitement, pleasure and fun, and
yes, beauty to people’s lives.”

ACTIVITY 1.2

The iPod Phenomenon
Apple’s classic (and subsequent) generations of portable music players, called iPods, including
the iPod Touch, Nano, and Shuffle, released during the early 2000s were a phenomenal success.
Why do you think this occurred? Has there been any other product that has matched this quality
of experience? With the exception of the iPod Touch, Apple stopped production of them in 2017.
Playing music via a smartphone became the norm, superseding the need for a separate device.

Comment
Apple realized early on that successful interaction design involves creating interactive prod-
ucts that have a quality user experience. The sleek appearance of the iPod music player (see
Figure 1.7), its simplicity of use, its elegance in style, its distinct family of rainbow colors, a
novel interaction style that many people discovered was a sheer pleasure to learn and use,
and the catchy naming of its product and content (iTunes, iPod), among many other design
features, led to it becoming one of the greatest products of its kind and a must-have fashion
item for teenagers, students, and adults alike. While there were many competing players on
the market at the time—some with more powerful functionality, others that were cheaper and
easier to use, or still others with bigger screens, more memory, and so forth—the overall user
experience they provided paled in comparison to that of the iPod.

Figure 1.7 The iPod Nano
Source: David Paul Morris / Getty Images

The nearest overall user experience that has all of the above is not so much a product
as a physical store. The design of the Apple Store as a completely new customer experience
for buying technology has been very successful in how it draws people in and what they
do when browsing, discovering, and purchasing goods in the store. The products are laid out
in a way that encourages interaction.

There are many aspects of the user experience that can be considered and many ways of
taking them into account when designing interactive products. Of central importance are the
usability, functionality, aesthetics, content, look and feel, and emotional appeal. In addition,
Jack Carroll (2004) stresses other wide-reaching aspects, including fun, health, social capital
(the social resources that develop and are maintained through social networks, shared values,
goals, and norms), and cultural identity, such as age, ethnicity, race, disability, family status,
occupation, and education.

Several researchers have attempted to describe the experiential aspect of a user experi-
ence. Kasper Hornbæk and Morten Hertzum (2017) note how it is often described in terms
of the way that users perceive a product, such as whether a smartwatch is seen as sleek or
chunky, and their emotional reaction to it, such as whether people have a positive experi-
ence when using it. Marc Hassenzahl’s (2010) model of the user experience is the most well-
known, where he conceptualizes it in terms of pragmatic and hedonic aspects. By pragmatic,
it is meant how simple, practical, and obvious it is for the user to achieve their goals. By
hedonic, it is meant how evocative and stimulating the interaction is to them. In addition
to a person’s perceptions of a product, John McCarthy and Peter Wright (2004) discuss the
importance of their expectations and the way they make sense of their experiences when
using technology. Their Technology as Experience framework accounts for the user experi-
ence largely in terms of how it is felt by the user. They recognize that defining experience
is incredibly difficult because it is so nebulous and ever-present to us, just as swimming in
water is to a fish. Nevertheless, they have tried to capture the essence of human experience
by describing it in both holistic and metaphorical terms. These comprise a balance of sensual,
cerebral, and emotional threads.

How does one go about producing quality user experiences? There is no secret sauce
or magical formula that can be readily applied by interaction designers. However, there are
numerous conceptual frameworks, tried and tested design methods, guidelines, and relevant
research findings, which are described throughout the book.

1.5 Understanding Users

A main reason for having a better understanding of people in the contexts in which they live,
work, and learn is that it can help designers understand how to design interactive products
that provide good user experiences or match a user’s needs. A collaborative planning tool
for a space mission, intended to be used by teams of scientists working in different parts of
the world, will have quite different needs from one targeted at customer and sales agents,
to be used in a furniture store to draw up kitchen layout plans. Understanding individual
differences can also help designers appreciate that one size does not fit all; what works for
one user group may be totally inappropriate for another. For example, children have different
expectations than adults about how they want to learn or play. They may find having inter-
active quizzes and cartoon characters helping them along to be highly motivating, whereas
most adults find them annoying. Conversely, adults often like talking-head discussions about
topics, but children find them boring. Just as everyday objects like clothes, food, and games
are designed differently for children, teenagers, and adults, so too should interactive products
be designed for different kinds of users.

Learning more about people and what they do can also reveal incorrect assumptions
that designers may have about particular user groups and what they need. For example, it
is often assumed that because of deteriorating vision and dexterity, old people want things
to be big—be it text or graphical elements appearing on a screen or the physical controls,
like dials and switches, used to control devices. This may be true for some elderly people,
but studies have shown that many people in their 70s, 80s, and older are perfectly capa-
ble of interacting with standard-size information and even small interfaces, for example,
smartphones, just as well as those in their teens and 20s, even though, initially, some might
think they will find it difficult (Siek et al., 2005). It is increasingly the case that as people
get older, they do not like to consider themselves as lacking in cognitive and manual skills.
Being aware of people’s sensitivities, such as aging, is as important as knowing how to
design for their capabilities (Johnson and Finn, 2017). In particular, while many older adults
now feel comfortable with and use a range of technologies (for instance, email, online shop-
ping, online games, or social media), they may resist adopting new technologies. This is not
because they don’t perceive them as being useful to their lives but because they don’t want
to waste their time getting caught up by the distractions that digital life brings (Knowles and
Hanson, 2018), for example, not wanting to be “glued to one’s mobile phone” like younger
generations.

Being aware of cultural differences is also an important concern for interaction design,
particularly for products intended for a diverse range of user groups from different countries.
An example of a cultural difference is how dates and times are written in different countries.
In the United States, for example, the date is written as month, day, year (05/21/20), whereas in
other countries, it is written in the sequence of day, month, year (21/05/20). This can cause
problems for designers when deciding on the format of online forms, especially if intended
for global use. It is also a concern for products that have time as a function, such as operating
systems, digital clocks, or car dashboards. To which cultural group do they give preference?
How do they alert users to the format that is set as default? This raises the question of how
easily an interface designed for one user group can be used and accepted by another. Why is
it that certain products, like a fitness tracker, are universally accepted by people from all parts
of the world, whereas websites are designed differently and reacted to differently by people
from different cultures?
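One common way designers sidestep the day/month ambiguity is to store and exchange dates in an unambiguous format (ISO 8601) and only render them per locale at the interface. A minimal Python sketch, using the same example date as above:

from datetime import date

d = date(2020, 5, 21)

# Unambiguous interchange format, safe to store and transmit globally.
print(d.isoformat())           # 2020-05-21

# Locale-specific renderings, chosen only at the user interface.
print(d.strftime("%m/%d/%y"))  # 05/21/20 (US convention)
print(d.strftime("%d/%m/%y"))  # 21/05/20 (common elsewhere)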

To understand more about users, we have included three chapters (Chapters 4–6) that
explain in detail how people act and interact with one another, with information, and with
various technologies, together with describing their abilities, emotions, needs, desires, and
what causes them to get annoyed, frustrated, lose patience, and get bored. We draw upon
relevant psychological theory and social science research. Such knowledge enables designers
to determine which solutions to choose from the many design alternatives available and how to
develop and test these further.


1.6 Accessibility and Inclusiveness

Accessibility refers to the extent to which an interactive product is accessible by as many
people as possible. Companies like Google and Apple provide tools for their developers to
promote this. The focus is on people with disabilities. For example, Android OS provides a
range of tools for those with disabilities, from hearing aid compatibility to a built-in screen
reader, while Apple's VoiceOver tells users what is happening on their device so that they
can easily navigate and even, by listening to the phone, know who is in a selfie they have just taken.
Inclusiveness means being fair, open, and equal to everyone. Inclusive design is an over-
arching approach where designers strive to make their products and services accommodate
the widest possible number of people. An example is ensuring that smartphones are being
designed for all and made available to everyone—regardless of their disability, education,
age, or income.

Whether or not a person is considered to be disabled changes over time with age, or
as recovery from an accident progresses throughout their life. In addition, the severity and
impact of an impairment can vary over the course of a day or in different environmental
conditions. Disability can result because technologies are often designed in such a way as to
necessitate a certain type of interaction that is impossible for someone with an impairment.
Disability in this context is viewed as the result of poor interaction design between a user and
the technology, not the impairment alone. Accessible design, on the other hand, opens up
experiences to everyone. Technologies that are now mainstream once started out
as solutions to accessibility challenges. For example, SMS was designed for hearing-impaired
people before it became a mainstream technology. Furthermore, designing for accessibility
inherently results in inclusive design for all.

Accessibility can be achieved in two ways: first, through the inclusive design of
technology, and second, through the design of assistive technology. When designing for
accessibility, it is essential to understand the types of impairments that can lead to dis-
ability as they come in many forms. They are often classified by the type of impairment,
for example:

• Sensory impairment (such as loss of vision or hearing)
• Physical impairment (having loss of functions to one or more parts of the body, for exam-
ple, after a stroke or spinal cord injury)

• Cognitive (for instance, learning impairment or loss of memory/cognitive function due to
old age or a condition such as Alzheimer’s disease)

Within each type is a complex mix of people and capabilities. For example, a person
might have only peripheral vision, be color blind, or have no light perception (and be regis-
tered blind). All are forms of visual impairment, and all require different design approaches.
Color blindness can be overcome by an inclusive design approach. Designers can choose
colors that will appear as separate colors to everyone. However, peripheral vision loss or
complete blindness will often need an assistive technology to be designed.
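One measurable check that supports such an inclusive approach is the WCAG 2.x contrast ratio between foreground and background colors. The formulas below are the standard ones, though luminance contrast alone does not capture every hue confusion; it is one piece of an inclusive color strategy. A minimal Python sketch:

def relative_luminance(rgb: tuple) -> float:
    """WCAG 2.x relative luminance for an 8-bit sRGB color."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Ratio of the lighter to the darker luminance, offset per WCAG."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0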

Impairment can also be categorized as follows:

• Permanent (for example, long-term wheelchair user)
• Temporary (such as after an accident or illness)
• Situational (for instance, a noisy environment means a person can’t hear)


The number of people living with permanent disability increases with age. Fewer than
20 percent of people are born with a disability, whereas 80 percent of people will have a
disability once they reach 85. As people age, their functional abilities diminish. For exam-
ple, people older than 50 often find it difficult to hear conversations in rooms with hard
surfaces and lots of background noise. This is a disability that will come to most of us at
some point.

People with permanent disabilities often use assistive technology in their everyday life,
which they consider to be life-essential and an extension of their self (Holloway and Dawes,
2016). Examples include wheelchairs (people now refer to “wearing their wheels,” rather
than “using a wheelchair”) and augmented and alternative communication aids. Much cur-
rent HCI research into disability explores how new technologies, such as IoT, wearables, and
virtual reality, can be used to improve upon existing assistive technologies.

Aimee Mullins is an athlete, actor, and fashion model who has shown how prosthetics
can be designed to move beyond being purely functional (and often ugly) to being desirable
and highly fashionable. She became a bilateral below-the-knee amputee at the age of one.
She has done much to blur the boundary between disabled and
nondisabled people, and she uses fashion as a tool to achieve this. Several prosthetic compa-
nies now incorporate fashion design into their products, including striking leg covers that are
affordable by all (see Figure 1.8).

Figure 1.8 Fashionable leg cover designed by Alleles Design Studio
Source: https://alleles.ca/. Used courtesy of Alison Andersen



1.7 Usability and User Experience Goals

Part of the process of understanding users is to be clear about the primary objective of devel-
oping an interactive product for them. Is it to design an efficient system that will allow them
to be highly productive in their work? Is it to design a learning tool that will be challenging
and motivating? Or, is it something else? To help identify the objectives, we suggest classify-
ing them in terms of usability and user experience goals. Traditionally, usability goals are
concerned with meeting specific usability criteria, such as efficiency, whereas user experience
goals are concerned with explicating the nature of the user experience, for instance, to be
aesthetically pleasing. It is important to note, however, that the distinction between the two
types of goals is not clear-cut since usability is often fundamental to the quality of the user
experience and, conversely, aspects of the user experience, such as how it feels and looks, are
inextricably linked with how usable the product is. We distinguish between them here to help
clarify their roles but stress the importance of considering them together when designing for
a user experience. Also, historically HCI was concerned primarily with usability, but it has
since become concerned with understanding, designing for, and evaluating a wider range of
user experience aspects.

1.7.1 Usability Goals
Usability refers to ensuring that interactive products are easy to learn, effective to use, and
enjoyable from the user’s perspective. It involves optimizing the interactions people have with
interactive products to enable them to carry out their activities at work, at school, and in
their everyday lives. More specifically, usability is broken down into the following six goals:

• Effective to use (effectiveness)
• Efficient to use (efficiency)
• Safe to use (safety)
• Having good utility (utility)
• Easy to learn (learnability)
• Easy to remember how to use (memorability)

Usability goals are typically operationalized as questions. The purpose is to provide the
interaction designer with a concrete means of assessing various aspects of an interactive
product and the user experience. Through answering the questions, designers can be alerted
very early on in the design process to potential design problems and conflicts that they might
not have considered. However, simply asking “Is the system easy to learn?” is not going to be
very helpful. Asking about the usability of a product in a more detailed way—for example,
“How long will it take a user to figure out how to use the most basic functions for a new
smartwatch; how much can they capitalize on from their prior experience; and how long
would it take the user to learn the whole set of functions?”—will elicit far more information.

The following are descriptions of the usability goals and a question for each one:

(i) Effectiveness is a general goal, and it refers to how good a product is at doing what it is
supposed to do.
Question: Is the product capable of allowing people to learn, carry out their work efficiently,
access the information that they need, or buy the goods that they want?


(ii) Efficiency refers to the way a product supports users in carrying out their tasks. The
marble answering machine described earlier in this chapter was considered efficient in
that it let the user carry out common tasks, for example, listening to messages, through
a minimal number of steps. In contrast, the voice-mail system was considered inefficient
because it required the user to carry out many steps and learn an arbitrary set of sequences
for the same common task. This implies that an efficient way of supporting common
tasks is to let the user carry them out with single button or key presses. An example of where this kind of
efficiency mechanism has been employed effectively is in online shopping. Once users
have entered all of the necessary personal details in an online form to make a purchase,
they can let the website save all of their personal details. Then, if they want to make
another purchase at that site, they don’t have to re-enter all of their personal details. A
highly successful mechanism patented by Amazon.com is the one-click option, which
requires users to click only a single button when they want to make another purchase.
Question: Once users have learned how to use a product to carry out their tasks, can they
sustain a high level of productivity?
(iii) Safety involves protecting the user from dangerous conditions and undesirable
situations. In relation to the first ergonomic aspect, it refers to the external conditions where
people work. For example, where there are hazardous conditions—such as X-ray
machines or toxic chemicals—operators should be able to interact with and control
computer-based systems remotely. The second aspect refers to helping any kind of user
in any kind of situation to avoid the dangers of carrying out unwanted actions acciden-
tally. It also refers to the perceived fears that users might have of the consequences of
making errors and how this affects their behavior. Making interactive products safer in
this sense involves (1) preventing the user from making serious errors by reducing the
risk of wrong keys/buttons being mistakenly activated (an example is not placing the
quit or delete-file command right next to the save command on a menu) and (2) provid-
ing users with various means of recovery should they make errors, such as an undo func-
tion. Safe interactive systems should engender confidence and allow the user the
opportunity to explore the interface to try new operations (see Figure 1.9a). Another
safety mechanism is the confirmation dialog box, which gives users another chance to consider
their intentions (a well-known example is the appearance of a dialog box after issuing
the command to delete everything in the trash, saying: “Are you sure you want to remove
the items in the Trash permanently?”) (see Figure 1.9b).
Question: What is the range of errors that are possible using the product, and what
measures are there to permit users to recover easily from them? (A minimal sketch of
these safeguards appears after this list.)

Figure 1.9 (a) A safe and an unsafe menu. Which is which, and why? (b) A warning dialog
box for Mac OS X

(iv) Utility refers to the extent to which the product provides the right kind of functionality
so that users can do what they need or want to do. An example of a product with high
utility is an accounting software package that provides a powerful computational tool
that accountants can use to work out tax returns. An example of a product with low
utility is a software drawing tool that does not allow users to draw freehand but forces
them to use a mouse to create their drawings, using only polygon shapes.
Question: Does the product provide an appropriate set of functions that will enable users
to carry out all of their tasks in the way they want to do them?
(v) Learnability refers to how easy a system is to learn to use. It is well known that people
don't like spending a long time learning how to use a system. They want to get started
right away and become competent at carrying out tasks without too much effort. This is



especially true for interactive products intended for everyday use (for example, social media,
email, or a GPS) and those used only infrequently (for instance, online tax forms). To a
certain extent, people are prepared to spend a longer time learning more complex systems
that provide a wider range of functionality, such as web authoring tools. In these situations,
pop-up tutorials can help by providing contextualized step-by-step material with hands-on
exercises. A key concern is determining how much time users are prepared to spend learn-
ing a product. It seems like a waste if a product provides a range of functionality that the
majority of users are unable or unprepared to spend the time learning how to use.
Question: Is it possible for the user to work out how to use the product by exploring the
interface and trying certain actions? How hard will it be to learn the whole set of functions
in this way?

(vi) Memorability refers to how easy a product is to remember how to use, once learned. This
is especially important for interactive products that are used infrequently. If users haven’t
used an operation for a few months or longer, they should be able to remember or at least
rapidly be reminded how to use it. Users shouldn't have to keep relearning how to carry
out tasks. Unfortunately, this tends to happen when the operations required to be learned
are obscure, illogical, or poorly sequenced. Users need to be helped to remember how to
do tasks. There are many ways of designing the interaction to support this. For example,
users can be helped to remember the sequence of operations at different stages of a task
through contextualized icons, meaningful command names, and menu options. Also,
structuring options and icons so that they are placed in relevant categories of options, for
example, placing all of the drawing tools in the same place on the screen, can help the
user remember where to look to find a particular tool at a given stage of a task.
Question: What types of interface support have been provided to help users remember
how to carry out tasks, especially for products and operations they use infrequently?
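The sketch below, in Python, illustrates the two safety mechanisms described under goal (iii): a confirmation step before a destructive action and an undo buffer for recovery. The class and method names are illustrative.

class TrashCan:
    def __init__(self) -> None:
        self.items = ["draft.txt", "old_photo.jpg"]
        self._last_emptied = []

    def empty(self, confirmed: bool) -> None:
        """Remove items only after the user explicitly confirms."""
        if not confirmed:
            print("Are you sure you want to remove the items "
                  "in the Trash permanently?")
            return
        self._last_emptied = self.items
        self.items = []

    def undo(self) -> None:
        """Recover from an accidental empty."""
        self.items = self._last_emptied
        self._last_emptied = []

trash = TrashCan()
trash.empty(confirmed=False)  # prompts instead of deleting
trash.empty(confirmed=True)   # deletes
trash.undo()                  # recovers both items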

In addition to being couched in terms of specific questions, usability goals are turned into
usability criteria. These are specific objectives that enable the usability of a product to be
assessed in terms of how it can improve (or not improve) a user’s performance. Examples
of commonly used usability criteria are time to complete a task (efficiency), time to learn a
task (learnability), and the number of errors made when carrying out a given task over time
(memorability). These can provide quantitative indicators of the extent to which productivity
has increased, or how work, training, or learning have been improved. They are also useful
for measuring the extent to which personal, public, and home-based products support leisure
and information gathering activities. However, they do not address the overall quality of the
user experience, which is where user experience goals come into play.
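As a minimal sketch of how such criteria become numbers, the following Python fragment computes time-on-task (efficiency) and errors per attempt (memorability, when tracked over repeated sessions) from logged test sessions. The session data here is invented.

from statistics import mean

sessions = [
    {"user": "p1", "task_seconds": 94, "errors": 3},
    {"user": "p2", "task_seconds": 61, "errors": 1},
    {"user": "p3", "task_seconds": 78, "errors": 2},
]

# Time to complete a task is a common efficiency criterion.
print("Mean time on task:", mean(s["task_seconds"] for s in sessions), "s")

# Errors per attempt, compared across sessions, indicates memorability.
print("Mean errors per attempt:", mean(s["errors"] for s in sessions))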

1.7.2 User Experience Goals
A diversity of user experience goals has been articulated in interaction design, which covers
a range of emotions and felt experiences. These include desirable and undesirable ones, as
shown in Table 1.1.

Desirable aspects: satisfying, enjoyable, engaging, pleasurable, exciting, entertaining,
helpful, motivating, challenging, enhancing sociability, supporting creativity, cognitively
stimulating, fun, provocative, surprising, rewarding, emotionally fulfilling, and
experiencing flow

Undesirable aspects: boring, frustrating, making one feel guilty, annoying, childish,
unpleasant, patronizing, making one feel stupid, cutesy, and gimmicky

Table 1.1 Desirable and undesirable aspects of the user experience


Many of these are subjective qualities and are concerned with how a system feels to
a user. They differ from the more objective usability goals in that they are concerned with
how users experience an interactive product from their perspective, rather than assessing how
useful or productive a system is from its own perspective. Whereas the terms used to describe
usability goals comprise a small distinct set, many more terms are used to describe the mul-
tifaceted nature of the user experience; these terms also overlap in what they refer to. In
so doing, they offer subtly different options for expressing the way an experience varies for
the same activity over time, technology, and place. For example, we may describe listening to
music in the shower as highly pleasurable, but consider it more apt to describe listening
to music in the car as enjoyable. Similarly, listening to music on a high-end powerful music
system may invoke exciting and emotionally fulfilling feelings, while listening to it on a
smartphone that has a shuffle mode may be serendipitously enjoyable, especially not know-
ing what tune is next. The process of selecting terms that best convey a user’s feelings, state
of being, emotions, sensations, and so forth when using or interacting with a product at a
given time and place can help designers understand the multifaceted and changing nature of
the user experience.

The concepts can be further defined in terms of elements that contribute to making
a user experience pleasurable, fun, exciting, and so on. They include attention, pace, play,
interactivity, conscious and unconscious control, style of narrative, and flow. The concept of
flow (Csikszentmihalyi, 1997) is popular in interaction design for informing the design of
user experiences for websites, video games, and other interactive products. It refers to a state
of intense emotional involvement that comes from being completely involved in an activity,
like playing music, and where time flies. Instead of designing web interfaces to cater to visi-
tors who know what they want, they can be designed to induce a state of flow, leading the
visitor to some unexpected place, where they become completely absorbed. In an interview
with Wired magazine, Mihaly Csikszentmihalyi (1996) uses the analogy of a gourmet meal
to describe how a user experience can be designed to be engrossing, “starting off with the
appetizers, moving on to the salads and entrées, and building toward dessert and not know-
ing what will follow.”

The quality of the user experience may also be affected by single actions performed
at an interface. For example, people can get much pleasure from turning a knob that has
the perfect level of gliding resistance; they may enjoy flicking their finger from the bottom
of a smartphone screen to reveal a new menu, with the effect that it appears by magic, or
enjoy the sound of trash being emptied from the trashcan on a screen. These one-off actions
can be performed infrequently or several times a day—which the user never tires of doing.
Dan Saffer (2014) has described these as micro-interactions and argues that designing these
moments of interaction at the interface—despite being small—can have a big impact on the
user experience.

ACTIVITY 1.3
There are more desirable than undesirable aspects of the user experience listed in Table 1.1.
Why do you think this is so? Should you consider all of these when designing a product?



Comment
The two lists we have come up with are not meant to be exhaustive. There are likely to be
more—both desirable and undesirable—as new products surface. The reason for there being
more of the former is that a primary goal of interaction design is to create positive experi-
ences. There are many ways of achieving this.

Not all usability and user experience goals will be relevant to the design and evaluation
of an interactive product being developed. Some combinations will also be incompatible. For
example, it may not be possible or desirable to design a process control system that is both
safe and fun. Recognizing and understanding the nature of the relationship between usability
and user experience goals is central to interaction design. It enables designers to become aware
of the consequences of pursuing different combinations when designing products and to
highlight potential trade-offs and conflicts. As suggested by Jack Carroll (2004), articulating
the interactions of the various components of the user’s experience can lead to a deeper and
more significant interpretation of the role of each component.

BOX 1.3
Beyond Usability: Designing to Persuade

Eric Schaffer (2009) argues that we should be focusing more on the user experience and less
on usability. He points out how many websites are designed to persuade or influence rather
than enable users to perform their tasks in an efficient manner. For example, many online
shopping sites are in the business of selling services and products, where a core strategy is to
entice people to buy what they might not have thought they needed. Online shopping experi-
ences are increasingly about persuading people to buy rather than being designed to make
shopping easy. This involves designing for persuasion, emotion, and trust, which may or may
not be compatible with usability goals.

This entails determining what customers will do, whether it is to buy a product or renew
a membership, and it involves encouraging, suggesting, or reminding the user of things that
they might like or need. Many online travel sites try to lure visitors to purchase additional
items (such as hotels, insurance, car rental, car parking, or day trips) besides the flight they
originally wanted to book, and they will add a list full of tempting graphics to the visitor’s
booking form, which then has to be scrolled through before being able to complete the trans-
action. These opportunities need to be designed to be eye-catching and enjoyable, in the same
way that an array of products are attractively laid out in the aisles of a grocery store that one
is required to walk past before reaching one’s desired product.

Some online sites, however, have gone too far, for example, adding items to the cus-
tomer’s shopping basket (for example, insurance, special delivery, and care and handling)
that the shopper has to deselect if not desired or start all over again. This sneaky add-on
approach can often result in a negative experience. More generally, this deceptive approach


to UX has been described by Harry Brignull as dark patterns (see http://darkpatterns.org/).
Shoppers often become annoyed if they notice decisions that add cost to their purchase
have been made on their behalf without even being asked. For example, on clicking the
unsubscribe button on the website of a car rental company, as indicated in Figure 1.10,
the user is taken to another page where they have to uncheck additional boxes and then
Update. They are then taken to yet another page where they are asked for their reason.
The next screen says “Your email preferences have been updated. Do you need to hire a
vehicle?” without letting the user know whether they have been unsubscribed from their
mailing list.

The key is to nudge people in subtle and pleasant ways that they can trust and feel
comfortable with. Natasha Lomas (2018) points out how dark pattern design is "deception and
dishonesty by design." She describes in a TechCrunch article the many kinds of dark patterns
that are now used to deceive users. A well-known example that most of us have experienced
is unsubscribing from a marketing mailing list. Many sites go to great lengths to make it
difficult for you to leave; you think you have unsubscribed, but then you discover that you need
to type in your email address and click several more buttons to reaffirm that you really want
to quit. Then, just when you think you are safe, they post a survey asking you to answer a few
questions about why you want to leave. Like Harry Brignull, she argues that companies
should adopt fair and ethical design where users have to opt in to any actions that benefit the
company at the expense of the users' interests.

Figure 1.10 Dark pattern for a car rental company: the unsubscribe flow forces the user
through repeated "Update" screens, pre-checked preference boxes, and a feedback survey
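The following Python sketch contrasts that deceptive opt-out default with the fair, opt-in default that Brignull and Lomas argue for: every add-on starts unselected, so anything chosen reflects an explicit user action. The field names are illustrative.

from dataclasses import dataclass

@dataclass
class BookingForm:
    flight: str
    # Fair design: add-ons default to False (opt-in), never True (opt-out).
    insurance: bool = False
    special_delivery: bool = False

booking = BookingForm(flight="LHR-SFO")
extras = [name for name, chosen in vars(booking).items()
          if isinstance(chosen, bool) and chosen]
print("Add-ons the user explicitly chose:", extras or "none")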


1.7.3 Design Principles
Design principles are used by interaction designers to aid their thinking when designing for
the user experience. These are generalizable abstractions intended to orient designers toward
thinking about different aspects of their designs. A well-known example is feedback: Products
should be designed to provide adequate feedback to the users that informs them about what
has already been done so that they know what to do next in the interface. Another one that is
important is findability (Morville, 2005). This refers to the degree to which a particular object
is easy to discover or locate—be it navigating a website, moving through a building, or finding
the delete image option on a digital camera. Related to this is the principle of navigability: Is
it obvious what to do and where to go in an interface; are the menus structured in a way that
allows the user to move smoothly through them to reach the option they want?

Design principles are derived from a mix of theory-based knowledge, experience, and com-
mon sense. They tend to be written in a prescriptive manner, suggesting to designers what to
provide and what to avoid at the interface—if you like, the dos and don’ts of interaction design.
More specifically, they are intended to help designers explain and improve their designs (Thim-
bleby, 1990). However, they are not intended to specify how to design an actual interface, for
instance, telling the designer how to design a particular icon or how to structure a web portal, but
to act more like triggers for designers, ensuring that they provide certain features in an interface.

A number of design principles have been promoted. The best known are concerned with
how to determine what users should see and do when carrying out their tasks using an
interactive product. Here we briefly describe the most common ones: visibility, feedback,
constraints, consistency, and affordance.

Visibility
The importance of visibility is exemplified by our contrasting examples at the beginning of
the chapter. The voice-mail system made the presence and number of waiting messages invis-
ible, while the answering machine made both aspects highly visible. The more visible functions
are, the more likely it is that users will be able to know what to do next. Don Norman (1988)
describes the controls of a car to emphasize this point. The controls for different operations are
clearly visible, such as indicators, headlights, horn, and hazard warning lights, indicating what
can be done. The relationship between the way the controls have been positioned in the car and
what they do makes it easy for the driver to find the appropriate control for the task at hand.

In contrast, when functions are out of sight, it makes them more difficult to find and
to know how to use. For example, devices and environments that have become automated
through the use of sensor technology (usually for hygiene and energy-saving reasons)—like
faucets, elevators, and lights—can sometimes be more difficult for people to know how to
control, especially how to activate or deactivate them. This can result in people getting caught
short and frustrated. Figure 1.11 shows a sign that explains how to use the automatically
controlled faucet for what is normally an everyday and well-learned activity. It also states
that the faucets cannot be operated if wearing black clothing. It does not explain, however,
what to do if you are wearing black clothing! Increasingly, highly visible controlling devices,
like knobs, buttons, and switches, which are intuitive to use, have been replaced by invisible
and ambiguous activating zones where people have to guess where to move their hands, bod-
ies, or feet—on, into, or in front of—to make them work.

Figure 1.11 A sign in the restrooms at the Cincinnati airport
Source: http://www.baddesigns.com

Feedback
Related to the concept of visibility is feedback. This is best illustrated by an analogy to what everyday life would be like without it. Imagine trying to play a guitar, slice bread using a knife, or write using a pen if none of the actions produced any effect for several seconds. There would be an unbearable delay before the music was produced, the bread was cut, or the words appeared on the paper, making it almost impossible for the person to continue with the next strum, cut, or stroke.

Feedback involves sending back information about what action has been done and what
has been accomplished, allowing the person to continue with the activity. Various kinds of
feedback are available for interaction design—audio, tactile, verbal, visual, and combinations
of these. Deciding which combinations are appropriate for different types of activities and
interactivities is central. Using feedback in the right way can also provide the necessary vis-
ibility for user interaction.
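
By way of a rough illustration (ours, not an example from any particular product), the following TypeScript sketch shows immediate visual and verbal feedback for a save action in a web interface. The element IDs and the saveDocument function are hypothetical:

// Acknowledge a click at once, so the user is never left wondering
// whether the action registered.
declare function saveDocument(): Promise<void>; // assumed application function

const saveButton = document.getElementById('save') as HTMLButtonElement;
const statusLine = document.getElementById('status') as HTMLElement;

saveButton.addEventListener('click', async () => {
  saveButton.disabled = true;          // visual feedback: the control reacts immediately
  statusLine.textContent = 'Saving…';  // verbal feedback: say what is happening
  await saveDocument();
  statusLine.textContent = 'All changes saved';  // confirm what has been accomplished
  saveButton.disabled = false;
});

Even this small amount of feedback tells users that their action has been received and when it is complete, so they can continue with the activity.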

Constraints
The design concept of constraining refers to determining ways of restricting the kinds of user inter-
action that can take place at a given moment. There are various ways that this can be achieved.
A common design practice in graphical user interfaces is to deactivate certain menu options by
shading them gray, thereby restricting the user only to actions permissible at that stage of the
activity (see Figure 1.12). One of the advantages of this form of constraining is that it prevents
the user from selecting incorrect options and thereby reduces the chance of making a mistake.
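
As a minimal sketch of this practice (ours, with a hypothetical element ID), a menu item can be kept visible but deactivated until choosing it would be permissible:

// A "Paste" menu item stays visible but grayed out until the clipboard
// holds something, so the user cannot select an invalid option.
const pasteItem = document.getElementById('menu-paste') as HTMLButtonElement;

function updatePasteItem(clipboardHasContent: boolean): void {
  // Disabling the element grays it out and makes it unselectable,
  // constraining users to actions permissible at this stage.
  pasteItem.disabled = !clipboardHasContent;
}

updatePasteItem(false); // nothing copied yet: Paste is shown but cannot be chosen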

Figure 1.12 A menu showing restricted availability of options as an example of logical constraining. Gray text indicates deactivated options.
Source: https://www.ucl.ac.uk

The use of different kinds of graphical representations can also constrain a person's interpretation of a problem or information space. For example, flow chart diagrams show which objects are related to which, thereby constraining the way that the information can be perceived. The physical design of a device can also constrain how it is used; for example, the external slots in a computer have been designed to allow a cable or card to be inserted in a certain way only. Sometimes, however, the physical constraint is ambiguous, as shown in Figure 1.13. The figure shows part of the back of a computer. There are two sets of connectors; the two on the right are for a mouse and a keyboard. They look identical and are physically constrained in the same way. How do you know which is which? Do the labels help?

Figure 1.13 Ambiguous constraints on the back of a computer
Source: http://www.baddesigns.com

Consistency
This refers to designing interfaces to have similar operations and use similar elements for achiev-
ing similar tasks. In particular, a consistent interface is one that follows rules, such as using the
same operation to select all objects. For example, a consistent operation is using the same input
action to highlight any graphical object on the interface, such as always clicking the left mouse
button. Inconsistent interfaces, on the other hand, allow exceptions to a rule. An example is where certain graphical objects (for example, email messages presented in a table) can be highlighted only by using the right mouse button, while all other objects are highlighted using the left mouse button. The problem with this kind of inconsistency is that it is quite arbitrary, making it difficult for users to remember and making its use more prone to mistakes.

One of the benefits of consistent interfaces, therefore, is that they are easier to learn and
use. Users have to learn only a single mode of operation that is applicable to all objects. This
principle works well for simple interfaces with limited operations, such as a portable radio
with a small number of operations mapped onto separate buttons. Here, all the user has to
do is to learn what each button represents and select accordingly. However, it can be more
problematic to apply the concept of consistency to more complex interfaces, especially when
many different operations need to be designed. For example, consider how to design an interface for an application that offers hundreds of operations, such as a word-processing application. There is simply not enough space for hundreds of buttons, each mapping to an individual operation. Even if there were, it would be extremely difficult and time-consuming
for the user to search through all of them to find the desired operation. A much more effec-
tive design solution is to create categories of commands that can be mapped into subsets of
operations that can be displayed at the interface, for instance, via menus.
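
One way to picture a consistent selection rule is as a single handler applied uniformly to every kind of object. The following TypeScript sketch is our illustration; it assumes a CSS class named selectable marks every selectable object and a class named selected styles the highlight:

// One selection rule for every kind of object on the interface: whether
// the target is an icon, a message row, or a file, the same left-click
// highlights it, so users learn a single operation.
document.querySelectorAll<HTMLElement>('.selectable').forEach((item) => {
  item.addEventListener('click', () => {
    document.querySelectorAll('.selected')
      .forEach((prev) => prev.classList.remove('selected'));
    item.classList.add('selected');
  });
});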


Affordance
This is a term used to refer to an attribute of an object that allows people to know how to
use it. For example, a mouse button invites pushing (in so doing, activating clicking) by the
way it is physically constrained in its plastic shell. At a simple level, to afford means "to give a clue" (Norman, 1988). When the affordances of a physical object are perceptually obvious,
it is easy to know how to interact with it. For example, a door handle affords pulling, a cup
handle affords grasping, and a mouse button affords pushing. The term has since been much
popularized in interaction design, being used to describe how interfaces should make it obvi-
ous as to what can be done when using them. For example, graphical elements like buttons,
icons, links, and scrollbars are discussed with respect to how to make it appear obvious how
they should be used: icons should be designed to afford clicking, scrollbars to afford moving
up and down, and buttons to afford pushing.

Don Norman (1999) suggests that there are two kinds of affordance: perceived and real.
Physical objects are said to have real affordances, like grasping, that are perceptually obvious
and do not have to be learned. In contrast, user interfaces that are screen-based are virtual and
do not have these kinds of real affordances. Using this distinction, he argues that it does
not make sense to try to design for real affordances at the interface, except when designing
physical devices, like control consoles, where affordances like pulling and pressing are help-
ful in guiding the user to know what to do. Screen-based interfaces, by contrast, are better conceptualized as offering perceived affordances, which are essentially learned conventions. However,
watching a one-year-old swiping smartphone screens, zooming in and out on images with
their finger and thumb, and touching menu options suggests that kind of learning comes
naturally.
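
As an illustration of designing for perceived affordances, the following sketch (ours, with a hypothetical element ID) gives a plain screen element the learned conventions that signal it can be "pushed":

// Give a plain element the perceived affordances of a button: the pointer
// cursor, keyboard focus, and key activation users have learned to expect.
const tile = document.getElementById('photo-tile') as HTMLElement;

tile.setAttribute('role', 'button'); // announced as a button to assistive technology
tile.tabIndex = 0;                   // reachable by keyboard, like a real button
tile.style.cursor = 'pointer';       // the familiar visual cue for "clickable"
tile.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') {
    tile.click();                    // buttons respond to Enter and Space
  }
});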

Applying Design Principles in Practice
One of the challenges of applying more than one of the design principles in interaction
design is that trade-offs can arise among them. For example, the more you try to constrain
an interface, the less visible information becomes. The same can also happen when trying
to apply a single design principle. For example, the more an interface is designed to afford
through trying to resemble the way physical objects look, the more it can become clut-
tered and difficult to use. It can also be the case that the more an interface is designed to
be aesthetic, the less usable it becomes. Consistency can be a problematic design principle;
trying to design an interface to be consistent with something can make it inconsistent with
something else. Furthermore, sometimes inconsistent interfaces are actually easier to use
than consistent interfaces. This is illustrated by Jonathan Grudin's classic (1989) analogy of where knives are stored in a house. Knives come in a variety of forms, including
butter knives, steak knives, table knives, and fish knives. An easy place to put them all and
subsequently locate them is in the top drawer by the sink. This makes it easy for everyone
to find them and follows a simple consistent rule. But what about the knives that don’t fit
or are too sharp to put in the drawer, like carving knives and bread knives? They are placed
in a wooden block. And what about the best knives kept only for special occasions? They
are placed in the cabinet in another room for safekeeping. And what about other knives
like putty knives and paint-scraping knives used in home improvement projects (kept in the
garage) and jack-knives (kept in one’s pockets or backpack)? Very quickly, the consistency
rule begins to break down.


Jonathan Grudin notes how, in extending the number of places where knives are kept,
inconsistency is introduced, which in turn increases the time needed to learn where they are
all stored. However, the placement of the knives in different places often makes it easier to
find them because they are at hand for the context in which they are used and are also next
to the other objects used for a specific task; for instance, all of the home improvement project
tools are stored together in a box in the garage. The same is true when designing interfaces:
introducing inconsistency can make it more difficult to learn an interface, but in the long run
it can make it easier to use.

ACTIVITY 1.4
One of the main design principles for website design is simplicity. Jakob Nielsen (1999) pro-
posed that designers go through all of their design elements and remove them one by one. If
a design works just as well without an element, then remove it. Do you think this is a good
design principle? If you have your own website, try doing this and seeing what happens. At
what point does the interaction break down?

Comment
Simplicity is certainly an important design principle. Many designers try to cram too much
into a screenful of space, making it unwieldy for people to find the element in which they are
interested. Removing design elements to see what can be discarded without affecting the over-
all function of the website can be a salutary lesson. Unnecessary icons, buttons, boxes, lines,
graphics, shading, and text can be stripped, leaving a cleaner, crisper, and easier-to-navigate
website. However, graphics, shading, coloring, and formatting can make a site aesthetically
pleasing and enjoyable to use. Plain vanilla sites consisting solely of lists of text and a few links
may not be as appealing and may put certain visitors off, never to return. Good interaction
design involves getting the right balance between aesthetic appeal and the optimal amount
and kind of information per page.

In-Depth Activity
This activity is intended for you to put into practice what you have studied in this chapter.
Specifically, the objective is to enable you to define usability and user experience goals and to
transform these and other design principles into specific questions to help evaluate an inter-
active product.

Find an everyday handheld device, for example, a remote control, digital camera, or smartphone, and examine how it has been designed, paying particular attention to how the user is meant to interact with it.


(a) From your first impressions, write down what is good and bad about the way the device works.
(b) Give a description of the user experience resulting from interacting with it.
(c) Outline some of the core micro-interactions that are supported by it. Are they pleasurable, easy, and obvious?
(d) Based on your reading of this chapter and any other material you have come across about interaction design, compile a set of usability and user experience goals that you think will be most relevant in evaluating the device. Decide which are the most important ones and explain why.
(e) Translate each of your sets of usability and user experience goals into two or three specific questions. Then use them to assess how well your device fares.
(f) Repeat steps (c) and (d), but this time use the design principles outlined in the chapter.
(g) Finally, discuss possible improvements to the interface based on the answers obtained in steps (d) and (e).

Summary
In this chapter, we have looked at what interaction design is and its importance when developing
apps, products, services, and systems. To begin, a number of good and bad designs were pre-
sented to illustrate how interaction design can make a difference. We described who and what
is involved in interaction design and the need to understand accessibility and inclusiveness. We
explained in detail what usability and user experience are, how they have been characterized,
and how to operationalize them to assess the quality of a user experience resulting from interact-
ing with an interactive product. The increasing emphasis on designing for the user experience
and not just products that are usable was stressed. A number of core design principles were also
introduced that provide guidance for helping to inform the interaction design process.

Key Points
• Interaction design is concerned with designing interactive products to support the way
people communicate and interact in their everyday and working lives.

• Interaction design is multidisciplinary, involving many inputs from wide-ranging disciplines
and fields.

• The notion of the user experience is central to interaction design.
• Optimizing the interaction between users and interactive products requires consideration
of a number of interdependent factors, including context of use, types of activity, UX goals,
accessibility, cultural differences, and user groups.

• Identifying and specifying relevant usability and user experience goals can help lead to the
design of good interactive products.

• Design principles, such as feedback and simplicity, are useful heuristics for informing, ana-
lyzing, and evaluating aspects of an interactive product.


Further Reading

Here we recommend a few seminal readings on interaction design and the user experience
(in alphabetical order).

COOPER, A., REIMANN, R., CRONIN, D. AND NOESSEL, C. (2014) About Face: The
Essentials of Interaction Design (4th ed.). John Wiley & Sons Inc. This fourth edition of
About Face provides an updated overview of what is involved in interaction design, and it is
written in a personable style that appeals to practitioners and students alike.

GARRETT, J. J. (2010) The Elements of User Experience: User-Centered Design for the Web
and Beyond (2nd ed.). New Riders Press. This is the second edition of the popular coffee-
table introductory book to interaction design. It focuses on how to ask the right questions
when designing for a user experience. It emphasizes the importance of understanding how
products work on the outside, that is, when a person comes into contact with those products
and tries to work with them. It also considers a business perspective.

LIDWELL, W., HOLDEN, K. AND BUTLER, J. (2010) Universal Principles of Design, Revised and Updated: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach Through Design. Rockport Publishers, Inc. This book presents classic design principles
such as consistency, accessibility, and visibility in addition to some lesser-known ones, such as
constancy, chunking, and symmetry. They are alphabetically ordered (for easy reference) with
a diversity of examples to illustrate how they work and can be used.

NORMAN, D.A. (2013) The Design of Everyday Things: Revised and Expanded Edition.
MIT Press. This book was first published in 1988 and became an international best seller,
introducing the world of technology to the importance of design and psychology. It covers
the design of everyday things, such as refrigerators and thermostats, providing much food for
thought in relation to how to design interfaces. This latest edition is comprehensively revised
showing how principles from psychology apply to a diversity of old and new technologies.
The book is highly accessible with many illustrative examples.

SAFFER, D. (2014) Microinteractions: Designing with Details. O’Reilly. This highly acces-
sible book provides many examples of the small things in interaction design that make a big
difference between a pleasant experience and a nightmare one. Dan Saffer describes how to
design them to be efficient, understandable, and enjoyable user actions. He goes into detail
about their structure and the different kinds, including many examples with lots of illustra-
tions. The book is a joy to dip into and enables you to understand right away why and how
it is important to get the micro-interactions right.


INTERVIEW with Harry Brignull

Harry Brignull is a user experience con-
sultant based in the United Kingdom. He
has a PhD in cognitive science, and his
work involves building better experiences
by blending user research and interaction
design. In his work, Harry has consulted
for companies including Spotify, Smart
Pension, The Telegraph, British Airways,
Vodafone, and many others. In his spare
time, Harry also runs a blog on interaction
design that has attracted a lot of eyeballs. It
is called 90percentofeverything.com, and it
is well worth checking out.

What are the characteristics of a good
interaction designer?
I think of interaction design, user expe-
rience design, service design, and user
research as a combined group of disci-
plines that are tricky to tease apart. Every
company has slightly different terminol-
ogy, processes, and approaches. I’ll let you
into a secret, though. They’re all making
it up as they go along. When you see any
organization portraying its design and
research publicly, they’re showing you
a fictionalized view of it for recruitment
and marketing purposes. The reality of the
work is usually very different. Research
and design is naturally messy. There’s a
lot of waste, false assumptions, and blind
alleys you have to go down before you
can define and understand a problem well
enough to solve it. If an employer doesn’t
understand this and they don’t give you
the space and time you need, then you
won’t be able to do a good job, regardless
of your skills and training.

A good interaction designer has skills
that work like expanding foam. You expand
to fill the skill gaps in your team. If you
don’t have a writer present, you need to be
able to step up and do it yourself, at least
to the level of a credible draft. If you don’t
have a researcher, you’ll need to step up
and do it yourself. The same goes for devel-
oping code-based prototypes, planning the
user journeys, and so on. You’ll soon learn
to become used to working outside of your
comfort zone and relish the new challenges
that each project brings.

How has interaction design changed in the
past few years?
In-housing of design teams is a big trend
at the moment. When I started my con-
sultancy career in the mid-2000s, the
main route to getting a career in industry
was to get a role at an agency, like a UX
consultancy, a research agency, or a full-
service agency. Big organizations didn’t
even know where to start with hiring and
building their own teams, so they paid
enormous sums to agencies to design and
build their products. This turned out to
be a pretty ineffective model—when the
agencies finish a project, they take all
the acquired expertise away with them to
their next clients.

These days, digital organizations have
wised up, and they’ve started building their
own in-house teams. This means that a big
theme in design these days is organizational
change. You can’t do good design in an
organization that isn’t set up for it. In fact,
in old, large organizations, the political

structure often seems to be set up to sab-
otage good design and development prac-
tices. It sounds crazy, but it’s very common
to walk into an organization to find a proj-
ect manager brandishing a waterfall Gantt
chart while ranting obsessively about Agile
(which is a contradiction in terms) or to
find a product owner saying in one breath
they value user research yet in the next
breath getting angry with researchers for
bringing them bad news. As well as “leg-
acy technology,” organizations naturally
end up with “legacy thinking.” It’s really
tricky to change it. Design used to be just
a department. Nowadays it’s understood
that good design requires the entire organi-
zation to work together in a cohesive way.

What projects are you working on now?
I’m currently head of UX at a FinTech start-
up called Smart Pension in London. Pen-
sions pose a really fascinating user-centered
design challenge. Consumers hate thinking
about pensions, but they desperately need
them. In a recent research session, one of
the participants said something that really
stuck with me: “Planning your pension
is like planning for your own funeral.”
Humans are pretty terrible at long-term
planning over multiple decades. Nobody
likes to think about their own mortality.
But this is exactly what you need to do if
you want to have a happy retirement.

The pension industry is full of jar-
gon and off-putting technical complexity.
Even fundamental financial concepts like
risk aren’t well understood by many con-
sumers. In some recent research, one of
our participants got really tongue-tied try-
ing to understand the idea that since they
were young, it would be “high risk” (in the
loose nontechnical definition of the word)
to put their money into a “low-risk” fund

(in the technical definition of the word)
since they’d probably end up with lower
returns when they got older. Investment
is confusing unless you’ve had training.
Then, there’s the problem that “a little
knowledge can hurt.” Some consumers
who think they know what they’re doing
can end up suffering when they think
they can beat the market by moving their
money around between funds every week.

Self-service online pension (retirement
plans) platforms don’t do anything to help
people make the right decisions because
that would count as advice, which they’re
not able to give because of the way it’s reg-
ulated. Giving an average person a self-
service platform and telling them to go sort
out their pension is like giving them a Unix
terminal and telling them to sort out their
own web server. A few PDF fact sheets just
aren’t going to help. If consumers want
advice, they have to go to a financial advi-
sor, which can be expensive and doesn’t
make financial sense unless you have a lot
of money in the first place. There’s a gap in
the market, and we’re working these sorts
of challenges in my team at Smart Pension.

What would you say are the biggest chal-
lenges facing you and other consultants
doing interaction design these days?
A career in interaction design is one of con-
tinual education and training. The biggest
challenge is to keep this going. Even if you
feel that you’re at the peak of your skills,
the technology landscape will be shifting
under your feet, and you need to keep an
eye on what’s coming next so you don’t get
left behind. In fact, things move so quickly
in interaction design that by the time you
read this interview, it will already be dated.

If you ever find yourself in a “com-
fortable” role doing the same thing every

(Continued)

I N T E R V I E W W I T H H A R R Y B R I G N U L L

1 W H AT I S I N T E R A C T I O N D E S I G N ?36

day, then beware—you’re doing yourself a
disservice. Get out there, stretch yourself,
and make sure you spend some time every
week outside your comfort zone.

If you’re asked to evaluate a prototype ser-
vice or product and you discover it is really
bad, how do you break the news?
It depends what your goal is. If you want
to just deliver the bad news and leave, then
by all means be totally brutal and don’t
pull any punches. But if you want to build
a relationship with the client, you’re going

to need to help them work out how to
move forward.

Remember, when you deliver bad news
to a client, you’re basically explaining to
them that they’re in a dark place and it’s
their fault. It can be quite embarrassing
and depressing. It can drive stakeholders
apart when really you need to bring them
together and give them a shared vision to
work toward. Discovering bad design is an
opportunity for improvement. Always pair
the bad news with a recommendation of
what to do next.

NOTE
We use the term interactive products generically to refer to all classes of interactive
systems, technologies, environments, tools, applications, services, and devices.

Chapter 2

THE PROCESS OF INTERACTION DESIGN

Objectives
The main goals of this chapter are to accomplish the following:

• Reflect on what interaction design involves.
• Explain some of the advantages of involving users in development.
• Explain the main principles of a user-centered approach.
• Introduce the four basic activities of interaction design and how they are related in a
simple lifecycle model.

• Ask some important questions about the interaction design process and provide the
answers.

• Consider how interaction design activities can be integrated into other development
lifecycles.

2.1 Introduction

Imagine that you have been asked to design a cloud-based service to enable people to share
and curate their photos, movies, music, chats, documents, and so on, in an efficient, safe, and
enjoyable way. What would you do? How would you start? Would you begin by sketching
how the interface might look, work out how the system architecture should be structured, or
just start coding? Or, would you start by asking users about their current experiences with
sharing files and examine the existing tools, for example, Dropbox and Google Drive, and
based on this begin thinking about how you were going to design the new service? What
would you do next? This chapter discusses the process of interaction design, that is, how to
design an interactive product.

There are many fields of design, such as graphic design, architectural design, industrial
design, and software design. Although each discipline has its own approach to design, there are commonalities. The Design Council of the United Kingdom captures these in the double-diamond of design, as shown in Figure 2.1. This approach has four phases, which are iterated:

• Discover: Designers try to gather insights about the problem.
• Define: Designers develop a clear brief that frames the design challenge.
• Develop: Solutions or concepts are created, prototyped, tested, and iterated.
• Deliver: The resulting project is finalized, produced, and launched.

Interaction design also follows these phases, and it is underpinned by the philosophy of
user-centered design, that is, involving users throughout development. Traditionally, interac-
tion designers begin by doing user research and then sketching their ideas. But who are the
users to be researched, and how can they be involved in development? Will they know what
they want or need if we just ask them? From where do interaction designers get their ideas,
and how do they generate designs?

In this chapter, we raise and answer these kinds of questions, discuss user-centered
design, and explore the four basic activities of the interaction design process. We also intro-
duce a lifecycle model of interaction design that captures these activities and the relationships
among them.

2.2 What Is Involved in Interaction Design?

Interaction design has specific activities focused on discovering requirements for the prod-
uct, designing something to fulfill those requirements, and producing prototypes that are
then evaluated. In addition, interaction design focuses attention on users and their goals.

(The figure depicts two diamonds spanning four phases: Discover, insight into the problem; Define, the area to focus upon; Develop, potential solutions; and Deliver, solutions that work. The first diamond runs from the problem to a problem definition, the second from a design brief to the solution.)

Figure 2.1 The double diamond of design
Source: Adapted from https://www.designcouncil.org.uk/news-opinion/design-process-what-double-diamond



For example, the artifact’s use and target domain are investigated by taking a user-centered
approach to development, users’ opinions and reactions to early designs are sought, and
users are involved appropriately in the development process itself. This means that users’
concerns direct the development rather than just technical concerns.

Design is also about trade-offs—about balancing conflicting requirements. One common
form of trade-off when developing a system to offer advice, for example, is deciding how
much choice will be given to the user and how much direction the system should offer. Often,
the division will depend on the purpose of the system, for example, whether it is for playing
music tracks or for controlling traffic flow. Getting the balance right requires experience, but
it also requires the development and evaluation of alternative solutions.

Generating alternatives is a key principle in most design disciplines and one that is also
central to interaction design. Linus Pauling, twice a Nobel Prize winner, once said, “The best
way to get a good idea is to get lots of ideas.” Generating lots of ideas is not necessarily hard,
but choosing which of them to pursue is more difficult. For example, Tom Kelley (2016)
describes seven secrets for successful brainstorms, including sharpening the focus (having a
well-honed problem statement), having playful rules (to encourage ideas), and getting physi-
cal (using visual props).

Involving users and others in the design process means that the designs and potential
solutions will need to be communicated to people other than the original designer. This
requires the design to be captured and expressed in a form that allows review, revision, and
improvement. There are many ways of doing this, one of the simplest being to produce a
series of sketches. Other common approaches are to write a description in natural language,
to draw a series of diagrams, and to build a prototype, that is, a limited version of the final
product. A combination of these techniques is likely to be the most effective. When users are
involved, capturing and expressing a design in a suitable format is especially important since
they are unlikely to understand jargon or specialist notations. In fact, a form with which
users can interact is most effective, so building prototypes is an extremely powerful approach.

ACTIVITY 2.1
This activity asks you to apply the double diamond of design to produce an innovative inter-
active product for your own use. By focusing on a product for yourself, the activity deliber-
ately de-emphasizes issues concerned with involving other users, and instead it emphasizes the
overall process.

Imagine that you want to design a product that helps you organize a trip. This might be
for a business or vacation trip, to visit relatives halfway around the world, or for a bike ride
on the weekend—whatever kind of trip you like. In addition to planning the route or booking
tickets, the product may help to check visa requirements, arrange guided tours, investigate the
facilities at a location, and so on.
1. Using the first three phases of the double diamond of design, produce an initial design using a sketch or two, showing its main functionality and its general look and feel. This
activity omits the fourth phase, as you are not expected to deliver a working solution.

2. Now reflect on how your activities fell into these phases. What did you do first? What was
your instinct to do first? Did you have any particular artifacts or experiences upon which
to base your design?


Comment
1. The first phase focuses on discovering insights about the problem, but is there a problem? If so, what is it? Although most of us manage to book trips and travel to destinations
with the right visas and in comfort, upon reflection the process and the outcome can be
improved. For example, dietary requirements are not always fulfilled, and the accommoda-
tion is not always in the best location. There is a lot of information available to support
organizing travel, and there are many agents, websites, travel books, and tourist boards
that can help. The problem is that it can be overwhelming.

The second phase is about defining the area on which to focus. There are many rea-
sons for travelling—both individual and family—but in my experience organizing business
trips to meetings worldwide is stressful, and minimizing the complexity involved in these
would be worthwhile. The experience would be improved if the product offers advice from
the many possible sources of information and tailors that advice to individual preferences.

The third phase focuses on developing solutions, which in this case is a sketch of the
design itself. Figure 2.2 shows an initial design. This has two versions of the product—one
as an app to run on a mobile device and one to run on a larger screen. The assumptions
underlying the choice to build two versions are based on my experience; I would normally
plan the details of the trip at my desk, while requiring updates and local information while
traveling. The mobile app has a simple interaction style that is easy to use on the go, while
the larger-screen version is more sophisticated and shows a lot of information and the vari-
ous choices available.


Figure 2.2 Initial sketches of the trip organizer showing (a) a large screen covering the entire
journey from home to Beerwah in Australia and (b) the smartphone screen available for the leg
of the journey at Paris (Charles de Gaulle) airport


2. Initially, it wasn't clear that there was a problem to address, but on reflection the complexity of the available information and the benefit of tailoring choices became clearer. The second phase guided me toward thinking about the area on which to focus. Worldwide business trips are the most difficult, and reducing the complexity of information sources through customization would definitely help. It would be good if the product learned about my preferences, for example, recommending flights from my favorite airline and finding places to have a vegan meal.

Developing solutions (the third phase) led me to consider how to interact with the product: seeing detail on a large screen would be useful, but a summary that can be shown on a mobile device is also needed. The type of support also depends on where the meeting is being held. Planning a trip abroad requires both a high-level view to check visas, vaccinations, and travel advice, as well as a detailed view about the proximity of accommodation to the meeting venue and specific flight times. Planning a local trip is much less complicated.

The exact steps taken to create a product will vary from designer to designer, from product to product, and from organization to organization (see Box 2.1). Capturing concrete ideas, through sketches or written descriptions, helps to focus the mind on what is being designed, the context of the design, and what user experience is to be expected. The sketches can capture only some elements of the design, however, and other formats are needed to capture everything intended. Throughout this activity, you have been making choices between alternatives, exploring requirements in detail, and refining your ideas about what the product will do.

2.2.1 Understanding the Problem Space
Deciding what to design is key, and exploring the problem space is one way in which to decide. This is the first phase in the double diamond, but it can be overlooked by those new to interaction design, as you may have discovered in Activity 2.1. In the process of creating an interactive product, it can be tempting to begin at the nuts-and-bolts level of design. By this we mean working out how to design the physical interface and what technologies and interaction styles to use, for example, whether to use multitouch, voice, graphical user interface, heads-up display, augmented reality, gesture-based interaction, and so forth. The problem with starting here is that potential users and their context can be misunderstood, and usability and user experience goals can be overlooked, both of which were discussed in Chapter 1, "What Is Interaction Design?"

For example, consider the augmented reality displays and holographic navigation systems that are available in some cars nowadays (see Figure 2.3). They are the result of decades of research into the human factors of information displays (for instance, Campbell et al., 2016), the driving experience itself (Perterer et al., 2013; Lee et al., 2005), and the suitability of different technologies (for example, Jose et al., 2016), as well as improvements in technology. Understanding the problem space has been critical in arriving at workable solutions that are safe and trusted. Having said that, some people may not be comfortable using a holographic navigation system and choose not to have one installed.

Figure 2.3 (a) Example of the holographic navigation display from WayRay, which overlays GPS navigation instructions onto the road ahead and gathers and shares driver statistics; (b) an augmented reality navigation system available in some cars today
Sources: (a) Used courtesy of WayRay, (b) Used courtesy of Muhammad Saad

While it is certainly necessary at some point to choose which technology to employ and
decide how to design the physical aspects, it is better to make these decisions after articulat-
ing the nature of the problem space. By this we mean understanding what is currently the
user experience or the product, why a change is needed, and how this change will improve
the user experience. In the previous example, this involves finding out what is problem-
atic with existing support for navigating while driving. An example is ensuring that drivers
can continue to drive safely without being distracted when looking at a small GPS display
mounted on the dashboard to figure out on which road it is asking them to “turn left.” Even
when designing for a new user experience, designers still need to understand the context in which the product will be used and users' current expectations.

The process of articulating the problem space is typically done as a team effort. Invari-
ably, team members will have differing perspectives on it. For example, a project manager
is likely to be concerned about a proposed solution in terms of budgets, timelines, and
staffing costs, whereas a software engineer will be thinking about breaking it down into
specific technical concepts. The implications of pursuing each perspective need to be con-
sidered in relation to one another. Although time-consuming and sometimes resulting in
disagreements among the design team, the benefits of this process can far outweigh the
associated costs: there will be much less chance of incorrect assumptions and unsupported claims creeping into a design solution that later turn out to be unusable or unwanted.
Spending time enumerating and reflecting upon ideas during the early stages of the design
process enables more options and possibilities to be considered. Furthermore, designers
are increasingly expected to justify their choice of problems and to be able to present
clearly and convincingly their rationale in business as well as design language. Being able
to think and analyze, present, and argue is valued as much as the ability to create a product
(Kolko, 2011).

2.2.2 The Importance of Involving Users
Chapter  1 stressed the importance of understanding users, and the previous description
emphasizes the need to involve users in interaction design. Involving users in development is
important because it’s the best way to ensure that the end product is usable and that it indeed
will be used. In the past, it was common for developers to talk only to managers, experts, or

BOX 2.1
Four Approaches to Interaction Design

Dan Saffer (2010) suggests four main approaches to interaction design, each of which is based
on a distinct underlying philosophy: User-centered design, Activity-centered design, Systems
design, and Genius design.

Dan Saffer acknowledges that the purest form of any of these approaches is unlikely to
be realized, and he takes an extreme view of each in order to distinguish among them. In user-
centered design, the user knows best and is the guide to the designer; the designer’s role is to
translate the users’ needs and goals into a design solution.

Activity-centered design focuses on the behavior surrounding particular tasks. Users still
play a significant role, but it is their behavior rather than their goals and needs that is impor-
tant. Systems design is a structured, rigorous, and holistic design approach that focuses on
context and is particularly appropriate for complex problems. In systems design, it is the system (that is, the people, computers, objects, devices, and so on) that is the center of attention, while the users' role is to set the goals of the system.

Finally, genius design is different from the other three approaches because it relies largely
on the experience and creative flair of a designer. Jim Leftwich, an experienced interaction
designer interviewed by Dan Saffer (2010, pp. 44–45), prefers the term rapid expert design. In
this approach, the users’ role is to validate ideas generated by the designer, and users are not
involved during the design process itself. Dan Saffer points out that this is not necessarily by
choice, but it may be because of limited or no resources for user involvement.

Different design problems lend themselves more easily to different approaches, and dif-
ferent designers will tend to gravitate toward using the approach that suits them best. Although
an individual designer may prefer a particular approach, it is important that the approach for
any one design problem is chosen with that design problem in mind.


proxy users, or even to use their own judgment without reference to anyone else. While oth-
ers involved in designing the product can provide useful information, they will not have the
same perspective as the target user who performs the activity every day or who will use
the intended product on a regular basis.

In commercial projects, a role called the product owner is common. The product owner’s
job is to filter user and customer input to the development cycle and to prioritize require-
ments or features. This person is usually someone with business and technical knowledge, but
not interaction design knowledge, and they are rarely (if ever) a direct user of the product.
Although the product owner may be called upon to assess designs, they are a proxy user at
best, and their involvement does not avoid the need for user involvement.

The best way to ensure that developers gain a good understanding of users’ goals, lead-
ing to a more appropriate, more usable product, is to involve target users throughout devel-
opment. However, two other aspects unrelated to functionality are equally important if the
product is to be usable and used: expectation management and ownership.

Expectation management is the process of making sure that the users’ expectations of
the new product are realistic. Its purpose is to ensure that there are no surprises for users
when the product arrives. If users feel they have been cheated by promises that have not been
fulfilled, then this will cause resistance and even rejection. Marketing of the new arrival must
be careful not to misrepresent the product, although it may be particularly difficult to achieve
with a large and complex system (Nevo and Wade, 2007). How many times have you seen an
advertisement for something that you thought would be really good to have, but when you
actually see one, you discover that the marketing hype was a little exaggerated? We expect
that you felt quite disappointed and let down. This is the kind of feeling that expectation
management tries to avoid.

Involving users throughout development helps with expectation management because they
can see the product’s capabilities from an early stage. They will also understand better how it
will affect their jobs and lives and why the features are designed that way. Adequate and timely
training is another technique for managing expectations. If users have the chance to work with
the product before it is released through training or hands-on demonstrations of a prerelease
version, then they will understand better what to expect when the final product is available.

A second reason for user involvement is ownership. Users who are involved and feel that
they have contributed to a product’s development are more likely to feel a sense of ownership
toward it and support its use (Bano et al., 2017).

How to involve users, in what roles, and for how long, needs careful planning, as dis-
cussed in the next Dilemma box.

DILEMMA
Too Much of a Good Thing?

Involving users in development is a good thing, but what evidence is there that user involve-
ment is productive? How much should users be involved and in what role(s)? Is it appropriate
for users to lead a technical development project, or is it more beneficial for them to focus on
evaluating prototypes?

Ulrike Abelein et al. (2013) performed a detailed review of the literature in this area and concluded that, overall, the evidence indicates that user involvement has a positive effect on user satisfaction and system use. However, they also found that even though the data clearly indicates this positive effect, some links show a large variation, suggesting that there is still no clear way to measure the effects consistently. In addition, they found that most studies reporting negative correlations between user involvement and system success were published more than 10 years previously.

Ramanath Subramanyam et al. (2010) investigated the impact of user participation on levels of satisfaction with the product by both developers and users. They found that for new products, developer satisfaction increased as user participation increased. On the other hand, user satisfaction was higher if their participation was low, and satisfaction dropped as their participation increased. They also identified that high levels of user involvement can generate conflicts and increased reworking. For maintenance projects, both developers and users were most satisfied with a moderate level of participation (approximately 20 percent of overall project development time). Based just on user satisfaction as an indicator of project success, then, it seems that low user participation is most beneficial.

The kind of product being developed, the kind of user involvement possible, the activities in which they are involved, and the application domain all have an impact on the effectiveness of user input (Bano and Zowghi, 2015). Peter Richard et al. (2014) investigated the effect of user involvement in transport design projects. They found that involving users at later stages of development mainly resulted in suggestions for service improvement, whereas users involved at earlier stages of innovation suggested more creative ideas.

Recent moves toward an agile way of working (see Chapter 13, "Interaction Design in Practice") have emphasized the need for feedback from customers and users, but this also has its challenges. Kurt Schmitz et al. (2018) suggest that, in tailoring their methods, teams consider the distinction between frequent participation in activities and effective engagement.

User involvement is undoubtedly beneficial, but the levels and types of involvement require careful consideration and balance.

2.2.3 Degrees of User Involvement
Different degrees of user involvement are possible, ranging from fully engaged throughout all iterations of the development process to targeted participation in specific activities, and from small groups of individual users in face-to-face contexts to hundreds of thousands of potential users and stakeholders online. Where available, individual users may be co-opted onto the design team so that they are major contributors to the development. This has pros and cons. On the downside, full-time involvement may mean that they become out of touch with their user community, while part-time involvement might result in a high workload for them. On the positive side, having a user engaged full- or part-time does mean that their input is available continually throughout development. Alternatively, users may take part in specific activities to inform the development or to evaluate designs once they are available. This is a valuable form of involvement, but the users' input is limited to that particular activity. Where the circumstances around a project limit user involvement in this way, there are techniques to keep users' concerns uppermost in developers' minds, such as through personas (see Chapter 11, "Discovering Requirements").


Initially, user involvement took the form of small groups or individuals taking part in
face-to-face information-gathering, design, or evaluation sessions, but increasing online con-
nectivity has led to a situation in which many thousands of potential users can contribute
to product development. There is still a place for face-to-face user involvement and in situ
studies, but the range of possibilities for user involvement is now much wider. One example
of this is online feedback exchange (OFE) systems, which are increasingly used to test design
concepts with millions of target users before going to market (Foong et al., 2017).

In fact, design is becoming increasingly participative, for instance through crowdsourcing design ideas and examples (Yu et al., 2016). Where crowdsourcing is used, a range of differ-
ent people are encouraged to contribute, and this can include any and all of the stakeholders.
This wide participation helps to bring different perspectives to the process, which enhances
the design itself, produces more user satisfaction with the final product, and engenders a
sense of ownership. Another example of involving users at scale is citizen engagement, the goal of which is to engage a population, civic or otherwise, in order to promote empowerment through technology. The underlying aim is to involve members of the public in making a change in their lives, with technology often viewed as an integral part of the process.

Participatory design, also sometimes referred to as cooperative design or co-design, is
an overarching design philosophy that places those for whom systems, technologies, and services are being designed as central actors in creation activities. The idea is that instead
of being passive receivers of new technological or industrial artifacts, end users and stake-
holders are active participants in the design process. Chapter 12, “Design, Prototyping, and
Construction,” provides more information on participatory design.

The individual circumstances of the project affect what is realistic and appropriate. If
the end-user groups are identifiable, for example, the product is for a particular company,
then it is easier to involve them. If, however, the product is intended for the open market, it
is unlikely that users will be available to join the design team. In this case, targeted activities
and online feedback systems may be employed. Box 2.2 outlines an alternative way to obtain
user input from an existing product, and Box 2.5 discusses A/B testing, which draws on user
feedback to choose between alternative designs.
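
As a flavor of how A/B testing chooses between alternative designs, here is a minimal sketch (our illustration, not from the book; the user ID and variant names are hypothetical). Each user is assigned stably to one of two designs so that outcomes can later be compared across the two groups:

// Each user is assigned stably (via a hash of their ID) to one of two
// design variants; the variant shown is logged so that outcomes can later
// be compared across the two groups.
function hashToUnit(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple unsigned 32-bit hash
  }
  return (h % 1000) / 1000; // a value in [0, 1)
}

function assignVariant(userId: string): 'A' | 'B' {
  return hashToUnit(userId) < 0.5 ? 'A' : 'B';
}

const variant = assignVariant('user-42'); // hypothetical user ID
console.log(`Showing design variant ${variant}`); // later joined with behavior data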

BOX 2.2
User Involvement After Product Release

Once a product has been released, a different kind of user involvement is possible—one that
captures data and user feedback based on day-to-day use of the product. The prevalence of
customer reviews has grown considerably in recent years, and they significantly affect the
popularity and success of a product (Harman et  al., 2012). These reviews provide useful
and far-ranging user feedback. For example, Hammad Khalid et al. (2015) studied reviews
of mobile apps to see what reviewers complained about. They identified 12 complaint types,
including privacy and ethics, interface, and feature removal. Customer reviews can provide
useful insight to help improve products, but detailed analysis of feedback gathered this way
is time-consuming.

Error reporting systems (ERSs, also called online crashing analysis) automatically collect
information from users that is used to improve applications in the longer term. This is done
with users’ permission, but with a minimal reporting burden. Figure 2.4 shows two dialog
boxes for the Windows error reporting system that is built into Microsoft operating systems.
This kind of reporting can have a significant effect on the quality of applications. For example,
29 percent of the errors fixed by the Windows XP (Service Pack 1) team were based on infor-
mation collected through their ERS (Kinshumann et  al., 2011). While Windows XP is no
longer being supported, this statistic illustrates the impact ERSs can have. The system uses a
sophisticated approach to error reporting based on five strategies: automatic aggregation of
error reports; progressive data collection so that the data collected (such as abbreviated or full
stack and memory dumps) varies depending on the level of data needed to diagnose the error;
minimal user interaction; preserving user privacy; and providing solutions directly to users
where possible. By using these strategies, plus statistical analysis, effort can be focused on the
bugs that have the highest impact on the most users.
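
As a rough sketch of the progressive data collection idea (our illustration only; the fields and endpoint are hypothetical, not Microsoft's actual protocol), a client might send a small, privacy-preserving report first and attach richer data only when it is needed to diagnose the error:

// Sketch of progressive data collection: a small, privacy-preserving report
// is sent first; richer data (such as a full stack trace) is attached only
// when that level of detail is needed to diagnose the error.
interface ErrorReport {
  app: string;
  version: string;
  errorName: string;
  stack?: string; // present only at the higher collection level
}

async function reportError(err: Error, needFullData: boolean): Promise<void> {
  const report: ErrorReport = {
    app: 'ExampleApp',          // hypothetical application name
    version: '1.0.0',
    errorName: err.name,
  };
  if (needFullData) {
    report.stack = err.stack;   // progressive step: more data only when required
  }
  await fetch('https://errors.example.com/report', {  // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(report),
  });
}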

Figure 2.4 Two typical dialog boxes from the Windows error reporting system

2.2.4 What Is a User-Centered Approach?
Throughout this book, we emphasize the need for a user-centered approach to development. By this we mean that the real users and their goals, not just technology, are the driving force behind product development. As a consequence, a well-designed system will make the most of human skill and judgment, will be directly relevant to the activity in hand, and will support rather than constrain the user. This is less of a technique and more of a philosophy.

When the field of HCI was being established, John Gould and Clayton Lewis (1985) laid
down three principles that they believed would lead to a “useful and easy to use computer
system.” These principles are as follows:

1. Early focus on users and tasks. This means first understanding who the users will be by directly studying their cognitive, behavioral, anthropometric, and attitudinal characteristics. This requires observing users doing their normal tasks, studying the nature of those tasks, and then involving users in the design process.
tasks, and then involving users in the design process.

2. Empirical measurement. Early in development, the reactions and performance of intended
users to printed scenarios, manuals, and so forth, are observed and measured. Later,
users interact with simulations and prototypes, and their performance and reactions are
observed, recorded, and analyzed.

3. Iterative design. When problems are found in user testing, they are fixed, and then more
tests and observations are carried out to see the effects of the fixes. This means that design
and development are iterative, with cycles of design-test-measure-redesign being repeated
as often as necessary.

These three principles are now generally accepted as the basis for a user-centered
approach. When this paper was written, however, they were not accepted by most develop-
ers. We discuss these principles in more detail in the following sections.

Early Focus on Users and Tasks
This principle can be expanded and clarified through the following five further principles:

1. Users’ tasks and goals are the driving force behind the development.
While technology will inform design options and choices, it is not the driving force.
Instead of saying “Where can we deploy this new technology?” say “What technologies
are available to provide better support for users’ goals?”

2. Users’ behavior and context of use are studied, and the system is designed to support them.
This is not just about capturing users’ tasks and goals. How people perform their tasks
is also significant. Understanding behavior highlights priorities, preferences, and implicit
intentions.

3. Users’ characteristics are captured and designed for.
When things go wrong with technology, people often think it is their fault. People are
prone to making errors and have certain limitations, both cognitive and physical. Prod-
ucts designed to support people should take these limitations into account and try to
prevent mistakes from being made. Cognitive aspects, such as attention, memory, and per-
ception issues are introduced in Chapter 4, “Cognitive Aspects.” Physical aspects include
height, mobility, and strength. Some characteristics are general, such as color blindness,
which affects about 4.5 percent of the population, but some characteristics are associated
with a particular job or task. In addition to general characteristics, those traits specific to
the intended user group also need to be captured.

4. Users are consulted throughout development from earliest phases to the latest.
As discussed earlier, there are different levels of user involvement, and there are different
ways in which to consult users.

5. All design decisions are taken within the context of the users, their activities, and their
environment.
This does not necessarily mean that users are actively involved in design decisions, but
that is one option.


Empirical Measurement
Where possible, specific usability and user experience goals should be identified, clearly doc-
umented, and agreed upon at the beginning of the project. They can help designers choose
between alternative designs and check on progress as the product is developed. Identifying
specific goals up front means that the product can be empirically evaluated at regular stages
throughout development.

Iterative Design
Iteration allows designs to be refined based on feedback. As users and designers engage with
the domain and start to discuss requirements, needs, hopes, and aspirations, then different
insights into what is needed, what will help, and what is feasible will emerge. This leads to a
need for iteration—for the activities to inform each other and to be repeated. No matter how
good the designers are and however clear the users may think their vision is of the required
artifact, ideas will need to be revised in light of feedback, likely several times. This is particu-
larly true when trying to innovate. Innovation rarely emerges whole and ready to go. It takes
time, evolution, trial and error, and a great deal of patience. Iteration is inevitable because
designers never get the solution right the first time (Gould and Lewis, 1985).

ACTIVITY 2.2
Assume you are involved in developing a novel online experience for buying garden plants.
Although many websites exist for buying plants online, you want to produce a distinct expe-
rience to increase the organization’s market share. Suggest ways of applying the previous
principles in this task.

Comment
To address the first three principles, you would need to find out about the tasks and goals,
behavior, and characteristics of potential customers of the new experience, together with any
different contexts of use. Studying current users of existing online plant shops will provide
some information, and it will also identify some challenges to be addressed in the new experi-
ence. However, as you want to increase the organization’s market share, consulting existing
users alone would not be enough. Alternative avenues of investigation include physical shop-
ping situations—for example, shopping at the market, in the local corner shop, and so on,
and local gardening clubs, radio programs, or podcasts. These alternatives will help you find
the advantages and disadvantages of buying plants in different settings, and you will observe
different behaviors. By looking at these options, a new set of potential users and contexts can
be identified.

For the fourth principle, the set of new users will emerge as investigations progress, but
people who are representative of the user group may be accessible from the beginning. Work-
shops or evaluation sessions could be run with them, possibly in one of the alternative shop-
ping environments such as the market. The last principle could be supported through the
creation of a design room that houses all of the data collected, and it is a place where the devel-
opment team can go to find out more about the users and the product goals.


2.2.5 Four Basic Activities of Interaction Design
The four basic activities for interaction design are as follows:

1. Discovering requirements for the interactive product.
2. Designing alternatives that meet those requirements.
3. Prototyping the alternative designs so that they can be communicated and assessed.
4. Evaluating the product and the user experience it offers throughout the process.

Discovering Requirements
This activity covers the left side of the double diamond of design, and it is focused on dis-
covering something new about the world and defining what will be developed. In the case of
interaction design, this includes understanding the target users and the support an interactive
product could usefully provide. This understanding is gleaned through data gathering and
analysis, which are discussed in Chapters 8–10. It forms the basis of the product’s require-
ments and underpins subsequent design and development. The requirements activity is dis-
cussed further in Chapter 11.

Designing Alternatives
This is the core activity of designing and is part of the Develop phase of the double diamond:
proposing ideas for meeting the requirements. For interaction design, this activity can be viewed
as two subactivities: conceptual design and concrete design. Conceptual design involves pro-
ducing the conceptual model for the product, and a conceptual model describes an abstraction
outlining what people can do with a product and what concepts are needed to understand how
to interact with it. Concrete design considers the detail of the product including the colors,
sounds, and images to use, menu design, and icon design. Alternatives are considered at every
point. Conceptual design is discussed in Chapter 3, and more design issues for specific interface
types are in Chapter 7; more detail about how to design an interactive product is in Chapter 12.

Prototyping
Prototyping is also part of the Develop phase of the double diamond. Interaction design involves
designing the behavior of interactive products as well as their look and feel. The most effec-
tive way for users to evaluate such designs is to interact with them, and this can be achieved
through prototyping. This does not necessarily mean that a piece of software is required. There
are different prototyping techniques, not all of which require a working piece of software. For
example, paper-based prototypes are quick and cheap to build and are effective for identifying
problems in the early stages of design, and through role-playing users can get a real sense of
what it will be like to interact with the product. Prototyping is covered in Chapter 12.

Evaluating
Evaluating is also part of the Develop phase of the double diamond. It is the process of deter-
mining the usability and acceptability of the product or design measured in terms of a variety
of usability and user-experience criteria. Evaluation does not replace activities concerned
with quality assurance and testing to make sure that the final product is fit for its intended
purpose, but it complements and enhances them. Chapters 14–16 cover evaluation.

The activities to discover requirements, design alternatives, build prototypes, and evalu-
ate them are intertwined: alternatives are evaluated through the prototypes, and the results
are fed back into further design or to identify alternative requirements.


2.2.6 A Simple Lifecycle Model for Interaction Design
Understanding what activities are involved in interaction design is the first step to being able
to do it, but it is also important to consider how the activities are related to one another.
The term lifecycle model (or process model) refers to a model that captures a set
of activities and how they are related. Existing models have varying levels of sophistication
and complexity and are often not prescriptive. For projects involving only a few experienced
developers, a simple process is adequate. However, for larger systems involving tens or hun-
dreds of developers with hundreds or thousands of users, a simple process just isn’t enough
to provide the management structure and discipline necessary to engineer a usable product.

Many lifecycle models have been proposed in fields related to interaction design. For
example, software engineering lifecycle models include the waterfall, spiral, and V models
(for more information about these models, see Pressman and Maxim [2014]). HCI has been
less associated with lifecycle models, but two well-known ones are the Star (Hartson and
Hix, 1989) and an international standard model ISO 9241-210. Rather than explaining the
details of these models, we focus on the classic lifecycle model shown in Figure 2.5. This
model shows how the four activities of interaction design are related, and it incorporates the
three principles of user-centered design discussed earlier.

Many projects start by discovering requirements from which alternative designs are gen-
erated. Prototype versions of the designs are developed and then evaluated. During prototyp-
ing or based on feedback from evaluations, the team may need to refine the requirements
or to redesign. One or more alternative designs may follow this iterative cycle in parallel.
Implicit in this cycle is that the final product will emerge in an evolutionary fashion from
an initial idea through to the finished product or from limited functionality to sophisticated
functionality. Exactly how this evolution happens varies from project to project. However
many times through the cycle the product goes, development ends with an evaluation activity
that ensures that the final product meets the prescribed user experience and usability criteria.
This evolutionary production is part of the Delivery phase of the double diamond.
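
The control flow implied by this cycle can be summarized in a short sketch. This is an illustrative Python outline only; the four activity functions are placeholders to be supplied by a project team, not a real framework.

def lifecycle(discover, design, prototype, evaluate, criteria_met):
    """Run the four activities of Figure 2.5 until the evaluation
    shows that the usability and user experience criteria are met."""
    requirements = discover(feedback=None)
    while True:
        alternatives = design(requirements)
        prototypes = [prototype(a) for a in alternatives]
        feedback = evaluate(prototypes)
        if criteria_met(feedback):
            # Development ends with an evaluation activity.
            return prototypes, feedback
        # Otherwise the feedback refines the requirements and drives a redesign.
        requirements = discover(feedback)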

In recent years, a wide range of lifecycle models has emerged, all of which encom-
pass these activities but with different emphases on activities, relationships, and outputs.
For example, Google Design Sprints (Box 2.3) emphasize problem investigation, solution
development, and testing with customers all in one week. This does not result in a robust
final product, but it does make sure that the solution idea is acceptable to customers. The
in-the-wild approach (Box 2.4) emphasizes the development of novel technologies that are
not necessarily designed for specific user needs but to augment people, places, and settings.
Further models are discussed in Chapter 13.



Figure 2.5 A simple interaction design lifecycle model, relating the four activities: discovering requirements, designing alternatives, prototyping, and evaluating, with iteration among them leading to the final product

BOX 2.3
Google Design Sprints (Adapted from Knapp et al. (2016))

Google Ventures has developed a structured approach to design that supports rapid idea-
tion and testing of potential solutions to a design challenge. This is called the Google Design
Sprint. A sprint is divided into five phases, and each phase is completed in a day. This means
that in five days, you can go from a design challenge to a solution that has been tested with
customers. As the authors say, “You won’t finish with a complete, detailed, ready-to-ship
product. But you will make rapid progress, and know for sure if you’re headed in the right
direction” (Knapp et al., 2016, pp. 16–17). Teams are encouraged to iterate on the last two
phases and to develop and re-test prototypes. If necessary, the first idea can be thrown away
and the process started again at Phase 1. There is preparation to be done before the sprint
begins. This preparation and the five phases are described next (see Figure 2.6).

Figure 2.6 The five phases of the Google Design Sprint: (1) Unpack, (2) Sketch, (3) Decide, (4) Prototype, (5) Test, with iteration back through the later phases
Source: www.agilemarketing.net/google-design-sprints. Used courtesy of Agile Marketing


Setting the Stage
This time is used to choose the right design challenge, gather the right team, and organize the
time and space to run the sprint (that is, full-time for everyone for five days). The sprint can
help in high-stakes challenges, when you’re running out of time, or if you’re just stuck. The
team composition depends on the product, but it has about seven people including a decider
(who chooses the design to show to the customer), customer expert, technical expert, and
anyone who will bring a disruptive perspective.

Unpack
Day 1 focuses on making a map of the challenge and choosing a target, that is, a part of the
challenge that can be achieved in a week.

Sketch Competing Solutions
Day 2 focuses on generating solutions, with an emphasis on sketching and individual creativ-
ity rather than group brainstorming.

Decide on the Best
Day 3 focuses on critiquing the solutions generated on Day 2, choosing the one most likely
to meet the sprint’s challenge, and producing a storyboard. Whichever solution is chosen, the
decider needs to support the design.

Build a Realistic Prototype
Day 4 focuses on turning the storyboard into a realistic prototype, that is, something on which
customers can provide feedback. We discuss prototyping further in Chapter 12.

Test with Target Customers
Day 5 focuses on getting feedback from five customers and learning from their reactions.

The Google Design Sprint is a process for answering critical business questions through design,
prototyping, and testing ideas with customers. Marta Rey-Babarro, who works at Google as
a staff UX researcher and was the cofounder of Google’s internal Sprint Academy, describes
how they used a sprint to improve the experience of traveling for business.

We wanted to see if we could improve the business travel experience. We started
by doing research with Googlers to find out what experiences and what needs
they had when they traveled. We discovered that there were some Googlers who
traveled over 300 days a year and others who traveled only once or twice a
year. Their travel experiences and needs were very different. After this research,
some of us did a sprint in which we explored the whole travel experience, from
the planning phase to coming back home and submitting receipts. Within five
days we came up with a vision of what that experience could be. On the fifth
day of the sprint, we presented that vision to higher-level execs. They loved it
and sponsored the creation of a new team at Google that has developed new
tools and experiences for the traveling Googler. Some of those internal online
experiences made it also to our external products and services outside of Google.

Marta Rey-Babarro

To see a more detailed description of the Google Design Sprint and to
access a set of five videos that describe what happens on each day of
the sprint, go to www.gv.com/sprint/#book.

BOX 2.4
Research in the Wild (Adapted from Rogers and Marshall (2017))

Research in the wild (RITW) develops technology solutions in everyday living by creating
and evaluating new technologies and experiences in situ. The approach supports designing
prototypes in which researchers often experiment with new technological possibilities that
can change and even disrupt behavior, rather than ones that fit in with existing practices. The
results of RITW studies can be used to challenge assumptions about technology and human
behavior in the real world and to inform the re-thinking of HCI theories. The perspective
taken by RITW studies is to observe how people react to technology and how they change and
integrate it into their everyday lives.

Figure 2.7 shows the framework for RITW studies. In terms of the four activities intro-
duced earlier, this framework focuses on designing, prototyping, and evaluating technology
and ideas and is one way in which requirements may be discovered. It also considers relevant
theory since often the purpose of an RITW study is to investigate a theory, idea, concept, or
observation. Any one RITW study may emphasize the elements of the framework to a differ-
ent degree.
Technology: Concerned with appropriating existing infrastructures/devices (e.g., Internet of
Things toolkit, mobile app) in situ or developing new ones for a given setting (e.g., a novel
public display).

Design: Covers the design space of an experience (e.g., iteratively creating a collaborative
travel planning tool for families to use or an augmented reality game for playing outdoors).

In situ study: Concerned with evaluating in situ an existing device/tool/service or novel
research-based prototype when placed in various settings or given to someone to use over a
period of time.

Theory: Investigating a theory, idea, concept, or observation about a behavior, setting, or
other phenomenon, whether by using an existing theory, extending one, or developing a
new one.

Figure 2.7 A framework for research in the wild studies, relating its four elements: Technology, Design, In Situ Studies, and Theory
Source: Rogers and Marshall (2017), p. 6. Used courtesy of Morgan & Claypool



2.3 Some Practical Issues

The discussion so far has highlighted some issues about the practical application of user-
centered design and the simple lifecycle of interaction design introduced earlier. These issues
are listed here:

• Who are the users?
• What are the users’ needs?
• How to generate alternative designs
• How to choose among alternatives
• How to integrate interaction design activities with other lifecycle models

2.3.1 Who Are the Users?
Identifying users may seem like a straightforward activity, but it can be harder than you
think. For example, Sha Zhao et al. (2016) found a more diverse set of users for smartphones
than most manufacturers recognize. Based on an analysis of one month’s smartphone app
usage, they discovered 382 distinct types of users, including Screen Checkers and Young
Parents. Charlie Wilson et al. (2015) found that little is understood about who the users of
smart homes in general are expected to be, beyond those focused on health-related condi-
tions. In part, this is because many products nowadays are being developed for use by large
sections of the population, and so it can be difficult to determine a clear description. Some
products (such as a system to schedule work shifts) have more constrained user communities,
for example a specific role (shop assistant) within a particular industrial sector (retail). In this
case, there may be a range of users with different roles who relate to the product in differ-
ent ways. Examples are those who manage direct users, those who receive outputs from the
system, those who test the system, those who make the purchasing decision, and those who
use competitive products (Holtzblatt and Jones, 1993).

There is a surprisingly wide collection of people who all have a stake in the development
of a successful product. These people are called stakeholders. Stakeholders are the individu-
als or groups that can influence or be influenced by the success or failure of a project. Alan
Dix et al. (2004) made an observation that is pertinent to a user-centered view of development: “It will
frequently be the case that the formal ‘client’ who orders the system falls very low on the list
of those affected. Be very wary of changes which take power, influence or control from some
stakeholders without returning something tangible in its place.”

The group of stakeholders for a particular product will be larger than the group of users.
It will include customers who pay for it; users who interact with it; developers who design,
build, and maintain it; legislators who impose rules on the development and operation of it;
people who may lose their jobs because of its introduction; and so on (Sharp et al., 1999).

Identifying the stakeholders for a project helps to decide who to involve as users and to
what degree, but identifying relevant stakeholders can be tricky. Ian Alexander and Suzanne
Robertson (2004) suggest using an onion diagram to model stakeholders and their involve-
ment. This diagram shows concentric circles of stakeholder zones with the product being
developed sitting in the middle. Soo Ling Lim and Anthony Finkelstein (2012) developed a
method called StakeRare and supporting tool called StakeNet that relies on social networks
and collaborative filtering to identify and prioritize relevant stakeholders.
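
The following sketch loosely illustrates the idea behind such network-based prioritization. It is a deliberately simplified stand-in, not the published StakeRare algorithm, and the stakeholder names and scores are invented.

from collections import defaultdict

# Hypothetical data: each stakeholder recommends others, with a salience score of 1-5.
recommendations = {
    "facilities_manager": [("cleaner", 5), ("security_guard", 4)],
    "security_guard": [("facilities_manager", 3), ("student", 4)],
    "student": [("security_guard", 2), ("cleaner", 3)],
}

def rank_stakeholders(recs):
    """Rank stakeholders by total recommendation weight, a crude proxy
    for the social-network measures that StakeNet actually computes."""
    score = defaultdict(float)
    for recommender, entries in recs.items():
        for person, weight in entries:
            score[person] += weight
    return sorted(score.items(), key=lambda item: item[1], reverse=True)

print(rank_stakeholders(recommendations))
# [('cleaner', 8.0), ('security_guard', 6.0), ('student', 4.0), ('facilities_manager', 3.0)]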

ACTIVITY 2.3
Who are the stakeholders for an electricity smart meter for use in the home to help households
control their energy consumption?

Comment
First, there are the people who live in the house, such as older adults and young children,
with a range of abilities and backgrounds. To varying degrees, they will be users of the meter,
and their stake in its success and usability is fairly clear and direct. Householders want to
make sure that their bills are controlled, that they can easily access suppliers if they want to,
and that their electricity supply is not interrupted. On the other hand, the entire family will
want to continue to live in the house in comfort, for example, with enough heat and light.
Then there are the people who install and maintain the meter. They make sure that the meter
is installed correctly and that it continues to work effectively. Installers and maintainers want
the meter to be straightforward to install and to be robust and reliable to reduce the need for
return visits or maintenance calls. Outside of these groups are electricity suppliers and
distributors who also want to provide a competitive service so that the householders are satisfied
and to minimize maintenance costs. They also don’t want to lose customers and money
because the meters are faulty or are providing inaccurate information. Other people who will
be affected by the success of the meter include those who work on the powerlines and at
electricity generation plants, those who work in other energy industries, and ultimately the
government of the country that will want to maintain steady supply for its industry and
population.
2.3.2 What Are the Users’ Needs?
If you had asked someone in the street in the late 1990s what they needed, their answer
probably wouldn’t have included a smart TV, a ski jacket with an integrated smartphone, or
a robot pet. If you presented the same person with these possibilities and asked whether they
would buy them if they were available, then the answer may have been more positive. Deter-
mining what product to build is not simply a question of asking people “What do you need?”
and then supplying it, because people don’t necessarily know what is possible. Suzanne and
James Robertson (2013) refer to “un-dreamed-of” needs, which are those that users are una-
ware they might have. Instead of asking users, this is approached by exploring the problem
space, investigating the users and their activities to see what can be improved, or trying out
ideas with potential users to see whether the ideas are successful. In practice, a mixture of
these approaches is often taken—trying ideas in order to discover requirements and decide
what to build, but with knowledge of the problem space, potential users, and their activities.

If a product is a new invention, then identifying the users and representative tasks for
them may be harder. This is where in-the-wild studies or rapid design sprints that provide
authentic user feedback on early ideas are valuable. Rather than imagining who might want
to use a product and what they might want to do with it, it’s more effective to put it out there
and find out—the results might be surprising!

It may be tempting for designers simply to design what they would like to use them-
selves, but their ideas would not necessarily coincide with those of the target user group,
because they have different experiences and expectations. Several practitioners and com-
mentators have observed that it’s an “eye-opening experience” when developers or designers
see a user struggling to complete a task that seemed so clear to them (Ratcliffe and McNeill,
2012, p. 125).

Focusing on people’s goals, usability goals and user experience goals is a more promising
approach to interaction design than simply expecting stakeholders to be able to articulate the
requirements for a product.



2.3.3 How to Generate Alternative Designs
A common human tendency is to stick with something that works. While recognizing that a
better solution may exist, it is easy to accept the one that works as being “good enough.” Set-
tling for a solution that is good enough may be undesirable because better alternatives may
never be considered, and considering alternative solutions is a crucial step in the process of
design. But where do these alternative ideas come from?

One answer to this question is that they come from the individual designer’s flair and cre-
ativity (the genius design described in Box 2.1). Although it is certainly true that some people
are able to produce wonderfully inspired designs while others struggle to come up with any
ideas at all, very little in this world is completely new. For example, the steam engine, com-
monly regarded as an invention, was inspired by the observation that steam from a kettle
boiling on the stove lifted the lid. An amount of creativity and engineering was needed to
make the jump from a boiling kettle to a steam engine, but the kettle provided inspiration to
translate this experience into a set of principles that could be applied in a different context.
Innovations often arise through cross-fertilization of ideas from different perspectives, indi-
viduals, and contexts; the evolution of an existing product through use and observation; or
straightforward copying of other, similar products.

Cross-fertilization may result from discussing ideas with other designers, while Bill Bux-
ton (2007) reports that different perspectives from users generated original ideas about alter-
native designs. As an example of evolution, consider the cell phone and its descendant, the
smartphone. The capabilities of the phone in your pocket have increased steadily since these
devices first appeared. Initially, the cell phone simply made and received phone calls and texts,
but now the smartphone supports a myriad of interactions: it can take photos, record audio,
play movies and games, and record your exercise routine.

Creativity and invention are often wrapped in mystique, but a lot has been uncovered
about the creative process and how creativity can be enhanced or inspired (for example, see
Rogers, 2014). For instance, browsing a collection of designs will inspire designers to con-
sider alternative perspectives and hence alternative solutions. As Roger Schank (1982, p. 22)
puts it, “An expert is someone who gets reminded of just the right prior experience to help
him in processing his current experiences.” And while those experiences may be the designer’s
own, they can equally well be others’.

Another approach to creativity has been adopted by Neil Maiden et al. (2007). They
ran creativity workshops to generate innovative requirements in an air traffic management
(ATM) application domain. Their idea was to introduce experts in different fields into the
workshop and then invite stakeholders to identify analogies between their own field and this
new one. For example, they invited an Indian textile expert, a musician, a TV program
scheduler, and a museum exhibit designer. Although not all of these are obviously analogous
domains, they sparked creative ideas for the air traffic management application. For instance, partici-
pants reported that one textile design was elegant, that is, simple, beautiful, and symmetrical.
They then transferred these properties to a key area of the ATM domain—that of aircraft
conflict resolution. They explored the meaning of elegance within this context and realized
that elegance is perceived differently by different controllers. From this they generated the
requirement that the system should be able to accommodate different air traffic controller
styles during conflict resolution.


A more pragmatic answer to this question, then, is that alternatives come from seeking
different perspectives and looking at other designs. The process of inspiration and creativ-
ity can be enhanced by prompting a designer’s own experience and studying others’ ideas
and suggestions. Deliberately seeking out suitable sources of inspiration is a valuable step
in any design process. These sources may be very close to the intended new product, such as
competitors’ products; they may be earlier versions of similar systems; or they may be from
a completely different domain.

Under some circumstances, the scope to consider alternative designs is limited. Design is
a process of balancing constraints and trading off one set of requirements with another, and
the constraints may mean that there are few viable alternatives available. For example, when
designing software to run under the Windows operating system, the design must conform to
the Windows look and feel and to other constraints intended to make Windows programs
consistent for the user. When producing an upgrade to an existing system, keeping familiar
elements of it to retain the same user experience may be prioritized.

2.3.4 How to Choose Among Alternative Designs
Choosing among alternatives is mostly about making design decisions: Will the device use
keyboard entry or a touch screen? Will the product provide an automatic memory function
or not? These decisions will be informed by the information gathered about users and their
tasks and by the technical feasibility of an idea. Broadly speaking, though, the decisions
fall into two categories: those that are about externally visible and measurable features
and those that are about characteristics internal to the system that cannot be observed
or measured without dissecting it. For example, in a photocopier, externally visible and
measurable factors include the physical size of the machine, the speed and quality of copying,
the different sizes of paper it can use, and so on. Underlying each of these factors are
other considerations that cannot be observed or studied without dissecting the machine. For
example, the choice of materials used in a photocopier may depend on its friction rating and
how much it deforms under certain conditions. In interaction design, the user experience
is the driving force behind the design, and so externally visible and measurable behavior is
the main focus. Detailed internal workings are still important to the extent that they affect
external behavior or features.

ACTIVITY 2.4
Consider the product introduced in Activity 2.1. Reflecting on the process again, what inspired
your initial design? Are there any innovative aspects to it?

Comment
For our design, existing sources of information and their flaws were influential. For example,
there is so much information available about travel, destinations, hotel comparisons, and so
forth, that it can be overwhelming. However, travel blogs contain useful and practical insights,
and websites that compare alternative options are informative. We were also influenced by
some favorite mobile and desktop applications such as the United Kingdom’s National Rail
smartphone app for its real-time updating and by the Airbnb website for its mixture of sim-
plicity and detail.

Perhaps you were inspired by something that you use regularly, like a particularly enjoy-
able game or a device that you like to use? I’m not sure how innovative our ideas were, but
the main goal was for the application to tailor its advice for the user’s preferences. There are
probably other aspects that make your design unique and that may be innovative to a greater
or lesser degree.

BOX 2.5
A Box Full of Ideas

The innovative product design company IDEO was mentioned in Chapter 1. Underlying
some of its creative flair is a collection of weird and wonderful engineering housed in
a large flatbed filing cabinet called the TechBox. The TechBox holds hundreds of gizmos
and interesting materials, divided into categories such as Amazing Materials, Cool Mech-
anisms, Interesting Manufacturing Processes, Electronic Technologies, and Thermal and
Optical. Each item has been placed in the box because it represents a neat idea or a new
process. The staff at IDEO take along a selection of items from the TechBox to brainstorming
meetings. The items may be chosen because they provide useful visual props or possible solu-
tions to a particular issue or simply to provide some light relief.

Each item is clearly labeled with its name and category, but further information can be
found by accessing the TechBox’s online catalog. Each item has its own page detailing what
the item is, why it is interesting, where it came from, and who has used it or knows more
about it. Items in the box include an example of metal-coated wood and materials with and
without holes that stretch, bend, and change shape or color at different temperatures.

Each of IDEO’s offices has a TechBox, and each TechBox has its own curator who is
responsible for maintaining and cataloging the items and for promoting its use within
the office. Anyone can submit a new item for consideration. As items become common-
place, they are removed from the TechBox to make way for the next generation of
fascinating curios.


DILEMMA
Copying for Inspiration: Is It Legal?

Designers draw on their experience of design when approaching a new project. This includes
the use of previous designs that they know work—both designs that they have created them-
selves and those that others have created. Others’ creations often spark inspiration that also
leads to new ideas and innovation. This is well known and understood. However, the expres-
sion of an idea is protected by copyright, and people who infringe on that copyright can be
taken to court and prosecuted. Note that copyright covers the expression of an idea and not
the idea itself. This means, for example, that while there are numerous smartphones all with
similar functionality, this does not represent an infringement of copyright as the idea has
been expressed in different ways and it is the expression that has been copyrighted. Copy-
right is free and is automatically invested in the author, for instance, the writer of a book or
a programmer who develops a program, unless they sign the copyright over to someone else.
Employment contracts often include a statement that the copyright relating to anything pro-
duced in the course of that employment is automatically assigned to the employer and does
not remain with the employee.

Patenting is an alternative to copyright that does protect the idea rather than the expres-
sion of the idea. There are various forms of patenting, each of which is designed to allow the
inventor to capitalize on their idea. For example, Amazon patented its one-click purchasing
process, which allows regular users simply to choose a purchase and buy it with one mouse
click (US Patent No. 5960411, September 29, 1999). This is possible because the system stores
its customers’ details and recognizes them when they access the Amazon site again.

In recent years, the creative commons community (https://creativecommons.org/) has
suggested more flexible licensing arrangements that allow others to reuse and extend a piece
of created work, thereby supporting collaboration. In the open source software development
movement, for example, software code is freely distributed and can be modified, incorporated
into other software, and redistributed under the same open source conditions. No royalty fees
are payable on any use of open source code. These movements do not replace copyright or
patent law, but they provide an alternative route for the dissemination of ideas.

So, the dilemma comes in knowing when it is OK to use someone else’s work as a source
of inspiration and when you are infringing copyright or patent law. The issues are complex
and detailed and well beyond the scope of this book, but Bainbridge (2014) is a good resource
to understand this area better.



One answer to the question of how to choose among alternatives is that the choice is
informed by letting users and stakeholders interact with the designs and by discussing their expe-
riences, preferences, and suggestions for improvement. To do this, the designs must be in
a form that can be reasonably evaluated by users, not in technical jargon or notation that
seems impenetrable to them. Documentation is one traditional way to communicate a design,
for example, a diagram showing the product’s components or a description of how it works.
But a static description cannot easily capture the dynamics of behavior, and for an interactive
product this needs to be communicated so that users can see what it will be like to operate it.

Prototyping is often used to overcome potential client misunderstandings and to test the
technical feasibility of a suggested design and its production. It involves producing a limited
version of the product with the purpose of answering specific questions about the design’s
feasibility or appropriateness. Prototypes give a better impression of the user experience
than simple descriptions; different kinds of prototyping are suitable for different stages of
development and for eliciting different kinds of feedback. When a deployable version of the
product is available, another way to choose between alternative designs is to deploy two dif-
ferent variations and collect data from actual use that is then used to inform the choice. This
is called A/B testing, and it is often used for alternative website designs (see Box 2.6).

Another basis for choosing between alternatives is quality, but that requires a clear
understanding of what quality means, and people’s views of quality vary. Everyone has a
notion of the level of quality that is expected, wanted, or needed from a product. Whether
this is expressed formally, informally, or not at all, it exists and informs the choice between
alternatives. For example, one smartphone design might make it easy to access a popular
music channel but restrict sound settings, while another requires more complicated key
sequences to access the channel but has a range of sophisticated sound settings. One user’s
view of quality may lean toward ease of use, while another may lean toward sophisticated
sound settings.

Most projects involve a range of different stakeholder groups, and it is common for each
of them to define quality differently and to have different acceptable limits for it. For exam-
ple, although all stakeholders may agree on goals for a video game such as “characters will be
appealing” or “graphics will be realistic,” the meaning of these statements can vary between
different groups. Disputes will arise if, later in development, it transpires that “realistic”
to a stakeholder group of teenage players is different from “realistic” to a group of parent
stakeholders or to developers. Capturing these different views explicitly clarifies expectations,
provides a benchmark against which products and prototypes can be compared, and forms a
basis on which to choose among alternatives.

The process of writing down formal, verifiable—and hence measurable—usability crite-
ria is a key characteristic of an approach to interaction design called usability engineering.
This has emerged over many years and with various proponents (Whiteside et al., 1988;
Nielsen, 1993). Most recently, it is often applied in health informatics (for example, see
Kushniruk et al., 2015). Usability engineering involves specifying quantifiable measures of
product performance, documenting them in a usability specification, and assessing the prod-
uct against them.
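
In its simplest form, a usability specification is a table of attributes, the metric used to measure each one, and worst-acceptable and target levels. The sketch below shows the idea; the attributes and numbers are invented examples in that spirit, not values from any published specification.

from dataclasses import dataclass

@dataclass
class UsabilityCriterion:
    attribute: str   # what is being assessed, e.g. learnability
    metric: str      # how it is measured
    worst: float     # worst acceptable level
    target: float    # planned target level

    def assess(self, measured):
        # Assumes lower is better, as for task times and error counts.
        if measured <= self.target:
            return "target met"
        return "acceptable" if measured <= self.worst else "fail"

# A hypothetical specification for the travel organizer of Activity 2.5.
spec = [
    UsabilityCriterion("learnability", "minutes for a novice to book a hotel",
                       worst=20, target=10),
    UsabilityCriterion("efficiency", "seconds to obtain a route recommendation",
                       worst=30, target=5),
]

for criterion in spec:
    print(criterion.attribute, "->", criterion.assess(measured=12))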


BOX 2.6
A/B Testing

A/B testing is an online method to inform the choice between two alternatives. It is most
commonly used for comparing different versions of web pages or apps, but the principles
and mathematics behind it came about in the 1920s (Gallo, 2017). In an interaction design
context, different versions of web pages or apps are released for use by users performing their
everyday tasks. Typically, users are unaware that they are contributing to an evaluation. This
is a powerful way to involve users in choosing between alternatives because a huge number of
users can be involved and the situations are authentic.

On the one hand, it’s a simple idea—give one set of users one version and a second set the
other version and see which set scores more highly against the success criteria. But dividing up
the sets, choosing the success criteria, and working out the metrics to use are nontrivial (for
example, see Deng and Shi, 2016). Pushing this idea further, it is common to have “multivari-
ate” testing in which several options are tried at once, so you end up doing A/B/C testing or
even A/B/C/D testing.
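
A minimal sketch of the arithmetic behind comparing two versions is a two-proportion z-test, shown below with invented numbers. Real A/B platforms add considerable care around sample sizing, stopping rules, and metric definition (see Deng and Shi, 2016).

from math import erf, sqrt

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is version B's conversion rate different from A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

# Hypothetical data: version B converts 120 of 4,800 visitors; version A, 100 of 5,000.
p_a, p_b, z, p = ab_test(100, 5000, 120, 4800)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")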

ACTIVITY 2.5
Consider your product from Activity 2.1. Suggest some usability criteria that could be applied
to determine its quality. Use the usability goals introduced in Chapter 1—effectiveness, effi-
ciency, safety, utility, learnability, and memorability. Be as specific as possible. Check the crite-
ria by considering exactly what to measure and how to measure its performance.

Then try to do the same thing for some of the user experience goals introduced in
Chapter 1. (These relate to whether a system is satisfying, enjoyable, motivating, rewarding,
and so on.)

Comment
Finding measurable characteristics for some of these is not easy. Here are some suggestions,
but there are others. Where possible, criteria that are measurable and specific are preferable.
• Effectiveness: Identifying measurable criteria for this goal is particularly difficult since it is
a combination of the other goals. For example, does the system support travel organiza-
tion, choosing transport routes, booking accommodation, and so on? In other words, is the
product used?

• Efficiency: Is it clear how to ask for recommendations from the product? How quickly does
it identify a suitable route or destination details?

• Safety: How often does data get lost or is the wrong option chosen? This may be measured,
for example, as the number of times this happens per trip.

• Utility: How many functions offered are used for every trip, how many every other trip, and
how many are not used at all? How many tasks are difficult to complete in a reasonable
time because functionality is missing or the right subtasks aren’t supported?

• Learnability: How long does it take for a novice user to be able to do a series of set tasks,
for example, to book a hotel room in Paris near the meeting venue for the meeting dates,
identify appropriate flights from Sydney to Wellington, or find out whether a visa is needed
to go to China?

• Memorability: If the product isn’t used for a month, how many functions can the user
remember how to perform? How long does it take to remember how to perform the most
frequent task?

Finding measurable characteristics for the user experience criteria is harder. How do you
measure satisfaction, fun, motivation, or aesthetics? What is entertaining to one person may
be boring to another; these kinds of criteria are subjective and so cannot be measured as
objectively.

2.3.5 How to Integrate Interaction Design Activities Within Other
Lifecycle Models
As illustrated in Chapter  1 (Figure 1.4), many other disciplines contribute to interaction
design, and some of these disciplines have lifecycles of their own. Prominent among them
are those associated with software development, and integrating interaction design activities
within software development has been discussed for many years; for example, see Carmelo
Ardito et al. (2014) and Ahmed Seffah et al. (2005).

The latest attempts to integrate these practices focus on agile software development.
Agile methods began to emerge in the late 1990s. The most well-known of these are eXtreme
Programming (Beck and Andres, 2005), Scrum (Schwaber and Beedle, 2002), and Kanban
(Anderson, 2010). The Dynamic Systems Development Method (DSDM) (DSDM, 2014),
although established before the current agile movement, also belongs to the agile family as
it adheres to the agile manifesto. These methods differ, but they all stress the importance of
iteration, early and repeated user feedback, being able to handle emergent requirements, and
striking a good balance between flexibility and structure. They also all emphasize collabora-
tion, face-to-face communication, streamlined processes to avoid unnecessary activities, and
the importance of practice over process, that is, of getting work done.

The opening statement for the Manifesto for Agile Software Development
(www.agilemanifesto.org) reads as follows:

We are uncovering better ways of developing software by doing it and helping others do
it. Through this work we have come to value:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan



This manifesto is underpinned by a series of principles, which range from communica-
tion with the business to excellence of coding and maximizing the amount of work not done. The
agile approach to development is particularly interesting from the point of view of interac-
tion design because it incorporates tight iterations and feedback and collaboration with the
customer. For example, in Scrum, each sprint is between one and four weeks, with a product
of value being delivered at the end of each sprint. Also, eXtreme Programming (XP)1 stipu-
lates that the customer should be on-site with developers. In practice, the customer role is
usually taken by a team rather than by one person (Martin et al., 2009), and integration is
far from straightforward (Ferreira et al., 2012). Many companies have integrated agile meth-
ods with interaction design practices to produce a better user experience and business value
(Loranger and Laubheimer, 2017), but it is not necessarily easy, as discussed in Chapter 13,
“Interaction Design in Practice.”

In-Depth Activity
These days, timepieces (such as clocks, wristwatches, and so on) have a variety of functions.
Not only do they tell the time and date, but they can speak to you, remind you when it’s time
to do something, and record your exercise habits among other things. The interface for these
devices, however, shows the time in one of two basic ways: as a digital number such as 11:40
or through an analog display with two or three hands—one to represent the hour, one for the
minutes, and one for the seconds.

This in-depth activity is to design an innovative timepiece. This could be in the form of
a wristwatch, a mantelpiece clock, a sculpture for a garden or balcony, or any other kind
of timepiece you prefer. The goal is to be inventive and exploratory by following these steps:

(a) Think about the interactive product that you are designing: What do you want it to do?
Find three to five potential users, and ask them what they would want. Write a list of
requirements for the clock, together with some usability criteria and user experience cri-
teria based on the definitions in Chapter 1.

(b) Look around for similar devices and seek out other sources of inspiration that you might
find helpful. Make a note of any findings that are interesting, useful, or insightful.

(c) Sketch some initial designs for the timepiece. Try to develop at least two distinct alterna-
tives that meet your set of requirements.

(d) Evaluate the two designs by using your usability criteria and by role-playing an interac-
tion with your sketches. Involve potential users in the evaluation, if possible. Does it do
what you want? Is the time or other information being displayed always clear? Design is
iterative, so you may want to return to earlier elements of the process before you choose
one of your alternatives.

1 The method is called extreme because it pushes a key set of good practices to the limit; that is, it is good practice
to test often, so in XP the development is test-driven, and a complete set of tests is executed many times a day. It is
good practice to talk to people about their requirements, so rather than having weighty documentation, XP reduces
documentation to a minimum, thus forcing communication, and so on.



Summary
In this chapter, we looked at user-centered design and the process of interaction design. That
is, what is user-centered design, what activities are required in order to design an interactive
product, and how are these activities related? A simple interaction design lifecycle model
consisting of four activities was introduced, and issues surrounding the involvement and
identification of users, generating alternative designs, evaluating designs, and integrating user-
centered concerns with other lifecycles were discussed.

Key Points
• Different design disciplines follow different approaches, but they have commonalities that
are captured in the double diamond of design.

• It is important to have a good understanding of the problem space before trying to
build anything.

• The interaction design process consists of four basic activities: discover requirements,
design alternatives that meet those requirements, prototype the designs so that they can be
communicated and assessed, and evaluate them.

• User-centered design rests on three principles: early focus on users and tasks, empirical
measurement, and iterative design. These principles are also key for interaction design.

• Involving users in the design process assists with expectation management and feelings of
ownership, but how and when to involve users requires careful planning.

• There are many ways to understand who users are and what their goals are in using a prod-
uct, including rapid iterations of working prototypes.

• Looking at others’ designs and involving other people in design provides useful inspiration
and encourages designers to consider alternative design solutions, which is key to effec-
tive design.

• Usability criteria, technical feasibility, and users’ feedback on prototypes can all be used to
choose among alternatives.

• Prototyping is a useful technique for facilitating user feedback on designs at all stages.
• Interaction design activities are becoming better integrated with lifecycle models from other
related disciplines such as software engineering.

Further Reading

ASHMORE, S. and RUNYAN, K. (2015) Introduction to Agile Methods, Addison Wesley.
This book introduces the basics of agile software development and the most popular agile
methods in an accessible way. It touches on usability issues and the relationship between
agile and marketing. It is a good place to start for someone new to the agile way of working.

KELLEY, T., with LITTMAN, J. (2016) The Art of Innovation, Profile Books. Tom Kelley is
a partner at IDEO. In this book, Kelley explains some of the innovative techniques used at
IDEO, but more importantly he talks about the culture and philosophy underlying IDEO’s
success. There are some useful practical hints in here as well as an informative story about
building and maintaining a successful design company.

PRESSMAN, R.S. and MAXIM, B.R. (2014) Software Engineering: A Practitioner’s Approach
(Int’l Ed), McGraw-Hill Education. If you are interested in pursuing the software engineering
aspects of the lifecycle models section, then this book provides a useful overview of the main
models and their purpose.

SIROKER, D. and KOOMEN, P. (2015) A/B Testing: The Most Powerful Way to Turn Clicks
into Customers, John Wiley. This book is written by two experienced practitioners who have
been using A/B testing with a range of organizations. It is particularly interesting because of the
example cases that show the impact that applying A/B testing successfully can have.

ROGERS, Y. (2014) Secrets of Creative People (PDF available from www.id-book.com/).
This short book summarizes the findings from a two-year research project into creativity.
It emphasizes the importance of different perspectives to creativity and describes how suc-
cessful creativity arises from sharing, constraining, narrating, connecting, and even sparring
with others.


Chapter 3

CONCEPTUALIZING INTERACTION

Objectives
The main goals of this chapter are to accomplish the following:

• Explain how to conceptualize interaction.
• Describe what a conceptual model is and how to begin to formulate one.
• Discuss the use of interface metaphors as part of a conceptual model.
• Outline the core interaction types for informing the development of a conceptual
model.

• Introduce paradigms, visions, theories, models, and frameworks informing interaction
design.

3.1 Introduction

When coming up with new ideas as part of a design project, it is important to conceptualize
them in terms of what the proposed product will do. Sometimes, this is referred to as creat-
ing a proof of concept. In relation to the double diamond framework, it can be viewed as an
initial pass to help define the area and also when exploring solutions. One reason for needing
to do this is as a reality check where fuzzy ideas and assumptions about the benefits of the
proposed product are scrutinized in terms of their feasibility: How realistic is it to develop
what they have suggested, and how desirable and useful will it actually be? Another reason is
to enable designers to begin articulating what the basic building blocks will be when develop-
ing the product. From a user experience (UX) perspective, it can lead to better clarity, forcing
designers to explain how users will understand, learn about, and interact with the product.



For example, consider the bright idea that a designer has of creating a voice-assisted
mobile robot that can help waiters in a restaurant take orders and deliver meals to customers
(see Figure 3.1). The first question to ask is: why? What problem would this address? The
designer might say that the robot could help take orders and entertain customers by having
a conversation with them at the table. They could also make recommendations that can be
customized to different customers, such as restless children or fussy eaters. However, none of
these addresses an actual problem. Rather, they are couched in terms of the putative benefits
of the new solution. In contrast, an actual problem identified might be the following: “It is
difficult to recruit good wait staff who provide the level of customer service to which we have
become accustomed.”

Having worked through a problem space, it is important to generate a set of research
questions that need to be addressed when considering how to design a robot voice interface
to wait on customers. These might include the following: How intelligent does it have to be?
How would it need to move to appear to be talking? What would the customers think of it?
Would they think it is too gimmicky and get easily tired of it? Or, would it always be a pleas-
ure for them to engage with the robot, not knowing what it would say on each new visit to
the restaurant? Could it be designed to be a grumpy extrovert or a funny waiter? What might
be the limitations of this voice-assisted approach?

Many unknowns need to be considered in the initial stages of a design project, especially
if it is a new product that is being proposed. As part of this process, it can be useful to show
where your novel ideas came from. What sources of inspiration were used? Is there any
theory or research that can be used to inform and support the nascent ideas?

Asking questions, reconsidering one’s assumptions, and articulating one’s concerns and
standpoints are central aspects of the early ideation process. Expressing ideas as a set of con-
cepts greatly helps to transform blue-sky and wishful thinking into more concrete models of
how a product will work, what design features to include, and the amount of functionality
that is needed. In this chapter, we describe how to achieve this through considering the dif-
ferent ways of conceptualizing interaction.

3.2 Conceptualizing Interaction

When beginning a design project, it is important to be clear about the underlying assump-
tions and claims. By an assumption, we mean taking something for granted that requires fur-
ther investigation; for example, people now want an entertainment and navigation system in
their cars. By a claim, we mean stating something to be true when it is still open to question.
For instance, a multimodal style of interaction for controlling this system—one that involves
speaking or gesturing while driving—is perfectly safe.

Writing down your assumptions and claims and then trying to defend and support them
can highlight those that are vague or wanting. In so doing, poorly constructed design ideas
can be reformulated. In many projects, this process involves identifying human activities and
interactivities that are problematic and working out how they might be improved through
being supported with a different set of functions. In others, it can be more speculative, requir-
ing thinking through how to design for an engaging user experience that does not exist.

Box 3.1 presents a hypothetical scenario of a team working through their assumptions
and claims; this shows how, in so doing, problems are explained and explored and leads to a
specific avenue of investigation agreed on by the team.

BOX 3.1
Working Through Assumptions and Claims

This is a hypothetical scenario of early design highlighting the assumptions and claims (itali-
cized) made by different members of a design team.

A large software company has decided that it needs to develop an upgrade of its web
browser for smartphones because its marketing team has discovered that many of the com-
pany’s customers have switched over to using another mobile browser. The marketing people
assume that something is wrong with their browser and that their rivals have a better product.
But they don’t know what the problem is with their browser.

The design team put in charge of this project assumes that they need to improve the
usability of a number of the browser’s functions. They claim that this will win back users by
making features of the interface simpler, more attractive, and more flexible to use.

The user researchers on the design team conduct an initial user study investigating how
people use the company’s web browser on a variety of smartphones. They also look at other
mobile web browsers on the market and compare their functionality and usability. They
observe and talk to many different users. They discover several things about the usability of
their web browser, some of which they were not expecting. One revelation is that many of their
customers have never actually used the bookmarking tool. They present their findings to the
rest of the team and have a long discussion about why each of them thinks it is not being used.

One member claims that the web browser’s function for organizing bookmarks is tricky
and error-prone, and she assumes that this is the reason why many users do not use it. Another
member backs her up, saying how awkward it is to use this method when wanting to move
bookmarks between folders. One of the user experience architects agrees, noting how several
of the users with whom he spoke mentioned how difficult and time-consuming they found it
when trying to move bookmarks between folders and how they often ended up accidentally
putting them into the wrong folders.

A software engineer reflects on what has been said, and he makes the claim that the book-
mark function is no longer needed since he assumes that most people do what he does, which
is to revisit a website by flicking through their history of previously visited pages. Another
member of the team disagrees with him, claiming that many users do not like to leave a trail
of the sites they have visited and would prefer to be able to save only the sites that they think
they might want to revisit. The bookmark function provides them with this option. Another
option discussed is whether to include most-frequently visited sites as thumbnail images or as
tabs. The software engineer agrees that providing all of the options could be a solution but
worries how this might clutter the small screen interface.

After much discussion on the pros and cons of bookmarking versus history lists, the team
decides to investigate further how to support effectively the saving, ordering, and retrieving of
websites using a mobile web browser. All agree that the format of the existing web browser’s
structure is too rigid and that one of their priorities is to see how they can create a simpler way
of revisiting websites on a smartphone.


Explaining people’s assumptions and claims about why they think something might be
a good idea (or not) enables the design team as a whole to view multiple perspectives on
the problem space and, in so doing, reveals conflicting and problematic ones. The following
framework is intended to provide a set of core questions to aid design teams in this process:

• Are there problems with an existing product or user experience? If so, what are they?
• Why do you think there are problems?
• What evidence do you have to support the existence of these problems?
• How do you think your proposed design ideas might overcome these problems?

ACTIVITY 3.1
Use the framework in the previous list to guess what the main assumptions and claims were
behind 3D TV. Then do the same for curved TV, whose concave screen was designed to
make the viewing experience more immersive. Are the assumptions similar? Why were they
problematic?

Comment
There was much hype and fanfare about the enhanced user experience 3D and curved TVs
would offer, especially when watching movies, sports events, and dramas (see Figure 3.2).



However, both never really took off. Why was this? One assumption for 3D TV was that
people would not mind wearing the glasses that were needed to see in 3D, nor would they
mind paying a lot more for a new 3D-enabled TV screen. A claim was that people would really
enjoy the enhanced clarity and color detail provided by 3D, based on the favorable feedback
received worldwide when viewing 3D films, such as Avatar, at a cinema. Similarly, an assump-
tion made about curved TV was that it would provide more flexibility for viewers to optimize
the viewing angles in someone’s living room.

The unanswered question for both concepts was this: Could the enhanced cinema view-
ing experience that both claimed to offer become an actual desired living room experience? There was
no existing problem to overcome—what was being proposed was a new way of experiencing
TV. The problem they might have assumed existed was that the experience of viewing TV at
home was inferior to that of the cinema. The claim could have been that people would be
prepared to pay more for a better-quality viewing experience more akin to that of the cinema.

But were people prepared to pay extra for a new TV because of this enhancement? A
number of people did. However, a fundamental usability problem was overlooked—many
people complained of motion sickness when watching 3D TV. The glasses were also easily lost.
Moreover, wearing them made it difficult to do other things such as flicking through multiple
channels, texting, and tweeting. (Many people simultaneously use additional devices, such as
smartphones and tablets, while watching TV.) Most people who bought 3D TVs stopped
watching them after a while because of these usability problems. While curved TV didn’t
require viewers to wear special glasses, it also failed because the actual benefits were not that
significant relative to the cost. While for some the curve provided a cool aesthetic look and an
improved viewing angle, for others it was simply an inconvenience.

Figure 3.2 A family watching 3D TV
Source: Andrey Popov/Shutterstock


Making clear what one’s assumptions are about a problem and the claims being made
about potential solutions should be carried out early on and throughout a project. Design
teams also need to work out how best to conceptualize the design space. Primarily, this
involves articulating the proposed solution as a conceptual model with respect to the user
experience. The benefits of conceptualizing the design space in this way are as follows:

Orientation Enabling the design team to ask specific kinds of questions about how the
conceptual model will be understood by the targeted users.

Open-Mindedness Allowing the team to explore a range of different ideas to address the
problems identified.

Common Ground Allowing the design team to establish a set of common terms that all can
understand and agree upon, reducing the chance of misunderstandings and confusion
arising later.

Once formulated and agreed upon, a conceptual model can then become a shared blue-
print leading to a testable proof of concept. It can be represented as a textual description and/
or in a diagrammatic form, depending on the preferred lingua franca used by the design team.
It can be used not just by user experience designers but also to communicate ideas to busi-
ness, engineering, finance, product, and marketing units. The conceptual model is used by the
design team as the basis from which they can develop more detailed and concrete aspects of
the design. In doing so, design teams can produce simpler designs that match up with users’
tasks, allow for faster development time, result in improved customer uptake, and need less
training and customer support (Johnson and Henderson, 2012).

3.3 Conceptual Models

A model is a simplified description of a system or process that helps describe how it works.
In this section, we look at a particular kind of model used in interaction design intended
to articulate the problem and design space—the conceptual model. In a later section, we
describe more generally how models have been developed to explain phenomena in human-
computer interaction.

Jeff Johnson and Austin Henderson (2002) define a conceptual model as “a high-level
description of how a system is organized and operates” (p. 26). In this sense, it is an abstrac-
tion outlining what people can do with a product and what concepts are needed to under-
stand how to interact with it. A key benefit of conceptualizing a design at this level is that
it enables “designers to straighten out their thinking before they start laying out their widg-
ets” (p. 28).

In a nutshell, a conceptual model provides a working strategy and a framework of gen-
eral concepts and their interrelations. The core components are as follows:

• Metaphors and analogies that convey to people how to understand what a product is used
for and how to use it for an activity (for example browsing and bookmarking).

• The concepts to which people are exposed through the product, including the task-domain
objects they create and manipulate, their attributes, and the operations that can be per-
formed on them (such as saving, revisiting, and organizing).

• The relationships between those concepts (for instance, whether one object contains another).


• The mappings between the concepts and the user experience the product is designed to
support or invoke (for example, one can revisit a page through looking at a list of visited
sites, most-frequently visited, or saved websites).

How the various metaphors, concepts, and their relationships are organized determines
the user experience. By explaining these, the design team can debate the merits of providing
different methods and how they support the main concepts, for example, saving, revisiting,
categorizing, reorganizing, and their mapping to the task domain. They can also begin dis-
cussing whether a new overall metaphor may be preferable that combines the activities of
browsing, searching, and revisiting. In turn, this can lead the design team to articulate the
kinds of relationships between them, such as containership. For example, what is the best
way to sort and revisit saved pages, and how many and what types of containers should
be used (for example, folders, bars, or panes)? The same enumeration of concepts can be
repeated for other functions of the web browser—both current and new. In so doing, the
design team can begin to work out systematically what will be the simplest and most effective
and memorable way of supporting users while browsing the Internet.
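
To make this enumeration concrete, here is a minimal sketch, in Python, of how a design
team might write down the browser’s task-domain concepts, their attributes and operations,
the containership relationship, and the mapping to the three revisiting methods. Every name
in it is our own illustration, not part of any published conceptual model or real browser.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Page:
    # A task-domain object that users create and manipulate
    url: str
    title: str
    last_visited: Optional[datetime] = None  # an attribute of the concept

@dataclass
class Folder:
    # A container: folders hold pages (the containership relationship)
    name: str
    pages: List[Page] = field(default_factory=list)

    def save(self, page: Page) -> None:
        # An operation performed on the concepts
        self.pages.append(page)

    def move(self, page: Page, destination: "Folder") -> None:
        # Reorganizing: moving a page between containers
        self.pages.remove(page)
        destination.save(page)

def ways_to_revisit(history: List[Page], most_visited: List[Page],
                    folders: List[Folder]) -> List[List[Page]]:
    # The mapping to the user experience: the same activity (revisiting)
    # is supported by three methods: history, most visited, and bookmarks
    saved = [page for folder in folders for page in folder.pages]
    return [history, most_visited, saved]

Sketches like this are throwaway; their value is in forcing the team to agree on what the
concepts are before any widgets are laid out.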

The best conceptual models are often those that appear obvious and simple; that is,
the operations they support are intuitive to use. However, sometimes applications can
end up being based on overly complex conceptual models, especially if they are the result
of a series of upgrades, where more and more functions and ways of doing something
are added to the original conceptual model. While tech companies often provide videos
showing what new features are included in an upgrade, users may not pay much attention
to them or skip them entirely. Furthermore, many people prefer to stick to the methods
they have always used and trusted and, not surprisingly, become annoyed when they find
one or more have been removed or changed. For example, when Facebook rolled out its
revised newsfeed a few years back, many users were unhappy, as evidenced by their post-
ings and tweets, preferring the old interface that they had gotten used to. A challenge for
software companies, therefore, is how best to introduce new features that they have added
to an upgrade—and explain their assumed benefits to users—while also justifying why
they removed others.

BOX 3.2
Design Concept

Another term that is sometimes used is a design concept. Essentially, it is a set of ideas for a
design. Typically, it is composed of scenarios, images, mood boards, or text-based documents.
For example, Figure 3.3 shows the first page of a design concept developed for an ambient
display that was aimed at changing people’s behavior in a building, that is, to take the stairs
instead of the elevator. Part of the design concept was envisioned as an animated pattern of
twinkly lights that would be embedded in the carpet near the entrance of the building with the
intention of luring people toward the stairs (Hazlewood et al., 2010).

Figure 3.3 The first page of a design concept for an ambient display


Most interface applications are actually based on well-established conceptual models.
For example, a conceptual model based on the core aspects of the customer experience when
at a shopping mall underlies most online shopping websites. These include the placement of
items that a customer wants to purchase into a shopping cart or basket and proceeding to
checkout when they’re ready to make the purchase. Collections of patterns are now readily
available to help design the interface for these core transactional processes, together with
many other aspects of a user experience, meaning interaction designers do not have to start
from scratch every time they design or redesign an application. Examples include patterns for
online forms and navigation on mobile phones.

It is rare for completely new conceptual models to emerge that transform the way daily
and work activities are carried out at an interface. Those that did fall into this category
include the following three classics: the desktop (developed by Xerox in the late 1970s), the
digital spreadsheet (developed by Dan Bricklin and Bob Frankston in the late 1970s), and
the World Wide Web (developed by Tim Berners-Lee in the early 1990s). All of these inno-
vations made what was previously limited to a few skilled people accessible to all, while
greatly expanding what is possible. The graphical desktop dramatically changed how office
tasks could be performed (including creating, editing, and printing documents). Perform-
ing these tasks using the computers prevalent at the time was significantly more arduous,
having to learn and use a command language (such as DOS or UNIX). Digital spreadsheets
made accounting highly flexible and easier to accomplish, enabling a diversity of new com-
putations to be performed simply through filling in interactive boxes. The World Wide Web
allowed anyone to browse a network of information remotely. Since then, e-readers and digi-
tal authoring tools have introduced new ways of reading documents and books online, sup-
porting associated activities such as annotating, highlighting, linking, commenting, copying,
and tracking. The web has also enabled and made many other kinds of activities easier, such
as browsing for news, weather, sports, and financial information, as well as banking, shop-
ping, and learning online among other tasks. Importantly, all of these conceptual models
were based on familiar activities.

BOX 3.3
A Classic Conceptual Model: The Xerox Star

The Star interface, developed by Xerox in 1981 (see Figure 3.4), revolutionized the way that
interfaces were designed for personal computing (Smith et al., 1982; Miller and Johnson,
1996) and is viewed as the forerunner of today’s Mac and Windows desktop interfaces. Origi-
nally, it was designed as an office system, targeted at workers not interested in computing per
se, and it was based on a conceptual model that included the familiar knowledge of an office.
Paper, folders, filing cabinets, and mailboxes were represented as icons on the screen and were
designed to possess some of the properties of their physical counterparts. Dragging a docu-
ment icon across the desktop screen was seen as equivalent to picking up a piece of paper
in the physical world and moving it (but this, of course, is a very different action). Similarly,
dragging a digital document into a digital folder was seen as being analogous to placing a
physical document into a physical cabinet. In addition, new concepts that were incorporated
as part of the desktop metaphor were operations that could not be performed in the physical
world. For example, digital files could be placed onto an icon of a printer on the desktop,
resulting in the computer printing them out.

Figure 3.4 The Xerox Star
Source: Used courtesy of Xerox

Video The history of the Xerox Star at http://youtu.be/Cn4vC80Pv6Q.


3.4 Interface Metaphors

Metaphors are considered to be a central component of a conceptual model. They provide
a structure that is similar in some way to aspects of a familiar entity (or entities), but they
also have their own behaviors and properties. More specifically, an interface metaphor is one
that is instantiated in some way as part of the user interface, such as the desktop metaphor.
Another well-known one is the search engine, originally coined in the early 1990s to refer
to a software tool that indexed and retrieved files remotely from the Internet using various
algorithms to match terms selected by the user. The metaphor invites comparisons between
a mechanical engine, which has several working parts, and the everyday action of looking in
different places to find something. The functions supported by a search engine also include
other features besides those belonging to an engine that searches, such as listing and prior-
itizing the results of a search. It also does these actions in quite different ways from how a
mechanical engine works or how a human being might search a library for books on a given
topic. The similarities implied by the use of the term search engine, therefore, are at a general
level. They are meant to conjure up the essence of the process of finding relevant information,
enabling the user to link these to less familiar aspects of the functionality provided.

Interface metaphors are intended to provide familiar entities that enable people read-
ily to understand the underlying conceptual model and know what to do at the interface.
However, they can also contravene people’s expectations about how things should be, such
as the recycle bin (trash can) that sits on the desktop. Logically and culturally (meaning, in
the real world), it should be placed under the desk. But users would not have been able to see
it because it would have been hidden by the desktop surface. So, it needed to go on the desk-
top. While some users found this irksome, most did not find it to be a problem. Once they
understood why the recycle bin icon was on the desktop, they simply accepted it being there.

ACTIVITY 3.2
Go to a few online stores and see how the interface has been designed to enable the customer
to order and pay for an item. How many use the “add to shopping cart/basket” followed by the
“checkout” metaphor? Does this make it straightforward and intuitive to make a purchase?

Comment
Making a purchase online usually involves spending money by inputting one’s credit/debit
card details. People want to feel reassured that they are doing this correctly and do not get
frustrated with lots of forms to fill in. Designing the interface to have a familiar metaphor
(with an icon of a shopping cart/basket, not a cash register) makes it easier for people to know
what to do at the different stages of making a purchase. Most important, placing an item in
the basket does not commit the customer to purchase it there and then. It also enables them
to browse further and select other items, as they might in a physical store.

An interface metaphor that has become popular in the last few years is the card. Many
of the social media apps, such as Facebook, Twitter, and Pinterest, present their content on
cards. Cards have a familiar form, having been around for a long time. Just think of how
many kinds there are: playing cards, business cards, birthday cards, credit cards, and post-
cards to name a few. They have strong associations, providing an intuitive way of organizing
limited content that is “card sized.” They can easily be flicked through, sorted, and themed.
They structure content into meaningful chunks, similar to how paragraphs are used to chunk
a set of related sentences into distinct sections (Babich, 2016). In the context of the smart-
phone interface, the Google Now card provides short snippets of useful information. This
appears on and moves across the screen in the way people would expect a real card to do—in
a lightweight, paper-based sort of way. The elements are also structured to appear as if they
were on a card of a fixed size, rather than, say, in a scrolling web page (see Figure 3.5).

Figure 3.5 Google Now card for restaurant recommendation in Germany
Source: Used courtesy of Johannes Schöning


In many cases, new interface metaphors rapidly become integrated into common par-
lance, as witnessed by the way people talk about them. For example, parents talk about how
much screen time children are allowed each day in the same way they talk more generally
about spending time. As such, the interface metaphors are no longer talked about as familiar
terms to describe less familiar computer-based actions; they have become everyday terms
in their own right. Moreover, it is hard not to use metaphorical terms when talking about
technology use, as they have become so ingrained in the language that we use to express our-
selves. Just ask yourself or someone else to describe Twitter and Facebook and how people
use them. Then try doing it without using a single metaphor.

Albrecht Schmidt (2017) suggests a pair of glasses as a good metaphor for thinking
about future technologies, helping us think more about how to amplify human cognition.
Just as they are seen as an extension of ourselves that we are not aware of most of the time
(except when they steam up!), he asks whether we can design new technologies that enable users
to do things without having to think about how to use them. He contrasts this “amplify” meta-
phor with the “tool” metaphor of a pair of binoculars that is used for a specific task—where
someone consciously has to hold them up against their eyes while adjusting the lens to bring
what they are looking at into focus. Current devices, like mobile phones, are designed more
like binoculars, where people have to interact with them explicitly to perform tasks.

BOX 3.4
Why Are Metaphors So Popular?

People frequently use metaphors and analogies (here we use the terms interchangeably) as
a source of inspiration for understanding and explaining to others what they are doing, or
trying to do, in terms that are familiar to them. They are an integral part of human language
(Lakoff and Johnson, 1980). Metaphors are commonly used to explain something that is
unfamiliar or hard to grasp by way of comparison with something that is familiar and easy
to grasp. For example, they are frequently employed in education, where teachers use them
to introduce something new to students by comparing the new material with something they
already understand. An example is the comparison of human evolution with a game. We are
all familiar with the properties of a game: there are rules, each player has a goal to win (or
lose), there are heuristics to deal with situations where there are no rules, there is the propen-
sity to cheat when the other players are not looking, and so on. By conjuring up these proper-
ties, the analogy helps us begin to understand the more difficult concept of evolution—how it
happens, what rules govern it, who cheats, and so on.

It is not surprising, therefore, to see how widely metaphors have been used in interaction
design to conceptualize abstract, hard-to-imagine, and difficult-to-articulate computer-based
concepts and interactions in more concrete and familiar terms and as graphical visualizations
at the interface level. Metaphors and analogies are used in these three main ways:
• As a way of conceptualizing what we are doing (for instance, surfing the web)
• As a conceptual model instantiated at the interface level (for example, the card metaphor)
• As a way of visualizing an operation (such as an icon of a shopping cart into which items

are placed that users want to purchase on an online shopping site)


3.5 Interaction Types

Another way of conceptualizing the design space is in terms of the interaction types that will
underlie the user experience. Essentially, these are the ways a person interacts with a product
or application. Originally, we identified four main types: instructing, conversing, manipulating,
and exploring (Preece et al., 2002). A fifth type has since been proposed by Christopher Lueg
et al. (2019) that we have added to ours, which they call responding. This refers to proactive
systems that initiate a request in situations to which a user can respond, for example, when
Netflix pauses a person’s viewing to ask them whether they would like to continue watching.

Deciding upon which of the interaction types to use, and why, can help designers formu-
late a conceptual model before committing to a particular interface in which to implement
them, such as speech-based, gesture-based, touch-based, menu-based, and so on. Note that
we are distinguishing here between interaction types (which we discuss in this section) and
interface types (which are discussed in Chapter 7, “Interfaces”). While cost and other product
constraints will often dictate which interface style can be used for a given application, consid-
ering the interaction type that will best support a user experience can highlight the potential
trade-offs, dilemmas, and pros and cons.

Here, we describe in more detail each of the five types of interaction. It should be noted
that they are not meant to be mutually exclusive (for example, someone can interact with a
system based on different kinds of activities); nor are they meant to be definitive. Also, the
label used for each type refers to the user’s action even though the system may be the active
partner in initiating the interaction.

• Instructing: Where users issue instructions to a system. This can be done in a number of
ways, including typing in commands, selecting options from menus in a windows environ-
ment or on a multitouch screen, speaking aloud commands, gesturing, pressing buttons, or
using a combination of function keys.

• Conversing: Where users have a dialog with a system. Users can speak via an interface or
type in questions to which the system replies via text or speech output.

• Manipulating: Where users interact with objects in a virtual or physical space by manipu-
lating them (for instance, opening, holding, closing, and placing). Users can hone their
familiar knowledge of how to interact with objects.

• Exploring: Where users move through a virtual environment or a physical space. Virtual
environments include 3D worlds and augmented and virtual reality systems. They enable
users to hone their familiar knowledge by physically moving around. Physical spaces that
use sensor-based technologies include smart rooms and ambient environments, also ena-
bling people to capitalize on familiarity.

• Responding: Where the system initiates the interaction and the user chooses whether to
respond. For example, proactive mobile location-based technology can alert people to
points of interest. They can choose to look at the information popping up on their phone
or ignore it. An example is the Google Now Card, shown in Figure 3.5, which pops up a
restaurant recommendation for the user to contemplate when they are walking nearby.

Besides these core activities of instructing, conversing, manipulating, exploring, and
responding, it is possible to describe the specific domain and context-based activities in which
users engage, such as learning, working, socializing, playing, browsing, writing, problem-
solving, decision-making, and searching—to name but a few. Malcolm McCullough (2004)


suggests describing them as situated activities, organized by work (for example, presenting to
groups), home (such as resting), in town (for instance, eating), and on the road (for example,
walking). The rationale for classifying activities in this way is to help designers be more sys-
tematic when thinking about the usability of technology-modified places in the environment.
In the following sections we illustrate in more detail the five core interaction types and how
to design applications for them.

3.5.1 Instructing
This type of interaction describes how users carry out their tasks by telling the system what
to do. Examples include giving instructions to a system to perform operations such as tell the
time, print a file, and remind the user of an appointment. A diverse range of products has been
designed based on this model, including home entertainment systems, consumer electronics,
and computers. The way in which the user issues instructions can vary from pressing buttons
to typing in strings of characters. Many activities are readily supported by giving instructions.

In Windows and other graphical user interfaces (GUIs), control keys or the selection of
menu options via a mouse, touch pad, or touch screen are used. Typically, a wide range of
functions are provided from which users have to select when they want to do something to
the object on which they are working. For example, a user writing a report using a word
processor will want to format the document, count the number of words typed, and check
the spelling. The user instructs the system to do these operations by issuing appropriate
commands. Typically, commands are carried out in a sequence, with the system responding
appropriately (or not) as instructed.

One of the main benefits of designing an interaction based on issuing instructions is that
the interaction is quick and efficient. It is particularly fitting where there is a frequent need
to repeat actions performed on multiple objects. Examples include the repetitive actions of
saving, deleting, and organizing files.

ACTIVITY 3.3
There are many different kinds of vending machines in the world. Each offers a range of
goods, requiring users to part with some of their money. Figure 3.6 shows photos of two dif-
ferent types of vending machines: one that provides soft drinks and the other that delivers a
range of snacks. Both machines use an instructional mode of interaction. However, the way
they do so is quite different.

What instructions must be issued to obtain a soda from the first machine and a bar of
chocolate from the second? Why has it been necessary to design a more complex mode of inter-
action for the second vending machine? What problems can arise with this mode of interaction?

Comment
The first vending machine has been designed using simple instructions. There is a small num-
ber of drinks from which to choose, and each is represented by a large button displaying the
label of each drink. The user simply has to press one button, and this will have the effect of
delivering the selected drink. The second machine is more complex, offering a wider range
of snacks. The trade-off for providing more options, however, is that the user can no longer


3.5.2 Conversing
This form of interaction is based on the idea of a person having a conversation with a system,
where the system acts as a dialogue partner. In particular, the system is designed to respond in
a way that another human being might when having a conversation. It differs from the activ-
ity of instructing insofar as it encompasses a two-way communication process, with the sys-
tem acting like a partner rather than a machine that obeys orders. It has been most commonly
used for applications where the user needs to find out specific kinds of information or wants
to discuss issues. Examples include advisory systems, help facilities, chatbots, and robots.


The kinds of conversation that are currently supported range from simple voice-
recognition, menu-driven systems, to more complex natural language–based systems that
involve the system parsing and responding to queries typed in or spoken by the user. Examples
of the former include banking, ticket booking, and train-time inquiries, where the user talks to
the system in single-word phrases and numbers, that is, yes, no, three, and so on, in response
to prompts from the system. Examples of the latter include help systems, where the user
types in a specific query, such as “How do I change the margin widths?” to which the system
responds by giving various answers. Advances in AI during the last few years have resulted
in a significant improvement in speech recognition to the extent that many companies now
routinely employ speech-based and chatbot-based interaction for their customer queries.

A main benefit of developing a conceptual model that uses a conversational style of inter-
action is that it allows people to interact with a system in a way that is familiar to them. For
example, Apple’s speech system, Siri, lets you talk to it as if it were another person. You can
ask it to do tasks for you, such as make a phone call, schedule a meeting, or send a message.
You can also ask it indirect questions that it knows how to answer, such as “Do I need an
umbrella today?” It will look up the weather for where you are and then answer with some-
thing like, “I don’t believe it’s raining” while also providing a weather forecast (see Figure 3.7).

Figure 3.7 Siri’s response to the question “Do I need an umbrella today?”


A problem that can arise from using a conversational-based interaction type is that certain
kinds of tasks are transformed into cumbersome and one-sided interactions. This is especially
true for automated phone-based systems that use auditory menus to advance the interaction.
Users have to listen to a voice providing several options, then make a selection, and repeat
through further layers of menus before accomplishing their goal, for example, reaching a real
human or paying a bill. Here is the beginning of a dialogue between a user who wants to find
out about car insurance and an insurance company’s phone reception system:

“Welcome to St. Paul’s Insurance Company. Press 1 if you are a new customer; 2 if you
are an existing customer.”

“Thank you for calling St. Paul’s Insurance Company. If you require house insurance,
say 1; car insurance, say 2; travel insurance, say 3; health insurance, say 4; other, say 5.”

“You have reached the car insurance division. If you require information about fully
comprehensive insurance, say 1; third-party insurance, say 2. …”
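
To see how quickly the layers stack up, here is a small hypothetical sketch of such a
menu-driven dialogue, written in Python. The menu wording paraphrases the example above;
the structure and names are our own, not any real phone system’s.

# Each menu maps a spoken/keyed answer to the next menu layer
MENUS = {
    "start": ("Press 1 if you are a new customer; 2 if existing.",
              {"1": "products", "2": "products"}),
    "products": ("House insurance, say 1; car, say 2; travel, say 3.",
                 {"2": "car"}),
    "car": ("Fully comprehensive, say 1; third-party, say 2.",
            {"1": "done", "2": "done"}),
}

def run_dialog(answers):
    # Walk the caller through one fixed question per layer
    state, layers = "start", 0
    for answer in answers:
        prompt, options = MENUS[state]
        print(f"System: {prompt}")
        print(f"Caller: {answer}")
        state = options.get(answer, state)  # an unrecognized answer repeats the layer
        layers += 1
        if state == "done":
            break
    print(f"Goal reached after {layers} layers of menus.")

run_dialog(["2", "2", "1"])  # existing customer -> car -> fully comprehensive

Even in this stripped-down form, the caller must sit through three scripted layers before
reaching their goal, which is exactly what makes such interactions feel one-sided.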

3.5.3 Manipulating
This form of interaction involves manipulating objects, and it capitalizes on users’ knowledge
of how they do so in the physical world. For example, digital objects can be manipulated by
moving, selecting, opening, and closing. Extensions to these actions include zooming in and
out, stretching, and shrinking—actions that are not possible with objects in the real world.
Human actions can be imitated through the use of physical controllers (for example, the
Wii) or gestures made in the air, such as the gesture control technology now used in some
cars. Physical toys and robots have also been embedded with technology that enable them to
act and react in ways depending on whether they are squeezed, touched, or moved. Tagged
physical objects (such as balls, bricks, or blocks) that are manipulated in a physical world
(for example, placed on a surface) can result in other physical and digital events occurring,
such as a lever moving or a sound or animation being played.

A framework that has been highly influential (originating from the early days of HCI) in
guiding the design of GUI applications is direct manipulation (Shneiderman, 1983). It pro-
poses that digital objects be designed at the interface level so that they can be interacted with
in ways that are analogous to how physical objects in the physical world are manipulated.



In so doing, direct manipulation interfaces are assumed to enable users to feel that they are
directly controlling the digital objects represented by the computer. The three core principles
are as follows:

• Continuous representation of the objects and actions of interest
• Rapid reversible incremental actions with immediate feedback about the object of interest
• Physical actions and button pressing instead of issuing commands with complex syntax

According to these principles, an object on the screen remains visible while a user per-
forms physical actions on it, and any actions performed on it are immediately visible. For
example, a user can move a file by dragging an icon that represents it from one part of the
desktop to another. The benefits of direct manipulation include the following:

• Helping beginners learn basic functionality rapidly
• Enabling experienced users to work rapidly on a wide range of tasks
• Allowing infrequent users to remember how to carry out operations over time
• Preventing the need for error messages, except rarely
• Showing users immediately how their actions are furthering their goals
• Reducing users’ experiences of anxiety
• Helping users gain confidence and mastery and feel in control
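
Returning to the second principle listed earlier, rapid reversible incremental actions with
immediate feedback, the following minimal sketch models it for icon dragging. It is purely
illustrative; the class and method names are invented for this example rather than taken
from any real toolkit.

class Desktop:
    # A toy model of a direct-manipulation desktop
    def __init__(self):
        self.positions = {}  # icon name -> (x, y), the always-visible state
        self.history = []    # stack of (icon, previous position)

    def move_icon(self, icon: str, x: int, y: int) -> None:
        # One incremental action, with immediate feedback
        self.history.append((icon, self.positions.get(icon)))
        self.positions[icon] = (x, y)
        print(f"{icon} is now at {(x, y)}")

    def undo(self) -> None:
        # Every incremental action can be reversed
        if not self.history:
            return
        icon, previous = self.history.pop()
        if previous is None:
            del self.positions[icon]
            print(f"undo: {icon} removed")
        else:
            self.positions[icon] = previous
            print(f"undo: {icon} back at {previous}")

desk = Desktop()
desk.move_icon("report.txt", 40, 60)  # drag the file icon across the desktop
desk.undo()                           # drag it straight back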

Many apps have been developed based on some form of direct manipulation, includ-
ing word processors, video games, learning tools, and image editing tools. However, while
direct manipulation interfaces provide a versatile mode of interaction, they do have their
drawbacks. In particular, not all tasks can be described by objects, and not all actions can
be undertaken directly. Some tasks are also better achieved through issuing commands. For
example, consider how you edit a report using a word processor. Suppose that you had ref-
erenced work by Ben Shneiderman but had spelled his name as Schneiderman throughout.
How would you correct this error using a direct manipulation interface? You would need to
read the report and manually select the c in every Schneiderman, highlight it, and then delete
it. This would be tedious, and it would be easy to miss one or two. By contrast, this opera-
tion is relatively effortless and also likely to be more accurate when using a command-based
interaction. All you need to do is instruct the word processor to find every Schneiderman
and replace it with Shneiderman. This can be done by selecting a menu option or using a
combination of command keys and then typing the changes required into the dialog box
that pops up.
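
The contrast can be made concrete in a few lines of code. The toy example below, using
invented report text, shows the command-style fix: a single instruction corrects every
occurrence, so none can be missed.

# Invented report text containing the recurring misspelling
report = ("Schneiderman (1983) proposed direct manipulation. "
          "Later work by Schneiderman extended these principles.")

# One command-style instruction fixes every occurrence at once
corrected = report.replace("Schneiderman", "Shneiderman")
print(corrected)
assert corrected.count("Shneiderman") == 2  # nothing was missed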

3.5.4 Exploring
This mode of interaction involves users moving through virtual or physical environments.
For example, users can explore aspects of a virtual 3D environment, such as the interior of a
building. Physical environments can also be embedded with sensing technologies that, when
they detect the presence of someone or certain body movements, respond by triggering cer-
tain digital or physical events. The basic idea is to enable people to explore and interact with
an environment, be it physical or digital, by exploiting their knowledge of how they move
and navigate through existing spaces.

Many 3D virtual environments have been built that comprise digital worlds designed for
people to move between various spaces to learn (for example, virtual campuses) and fantasy
worlds where people wander around different places to socialize (for instance, virtual parties)


or play video games (such as Fortnite). Many virtual landscapes depicting cities, parks, build-
ings, rooms, and datasets have also been built, both realistic and abstract, that enable users
to fly over them and zoom in and out of different parts. Other virtual environments that have
been built include worlds that are larger than life, enabling people to move around them,
experiencing things that are normally impossible or invisible to the eye (see Figure 3.8a);
highly realistic representations of architectural designs, allowing clients and customers to
imagine how they will use and move through planned buildings and public spaces; and visu-
alizations of complex datasets that scientists can virtually climb inside and experience (see
Figure 3.8b).

Figure 3.8 (a) A CAVE that enables the user to stand near a huge insect, for example, a beetle, be
swallowed, and end up in its abdomen; and (b) NCSA’s CAVE being used by a scientist to move
through 3D visualizations of the datasets
Source: (a) Used courtesy of Alexei Sharov (b) Used courtesy of Kalev Leetaru, National Center for Supercom-
puting Applications, University of Illinois.

3.5.5 Responding
This mode of interaction involves the system taking the initiative to alert, describe, or show
the user something that it “thinks” is of interest or relevance to the context the user is pres-
ently in. It can do this through detecting the location and/or presence of someone in a vicinity
(for instance, a nearby coffee bar where friends are meeting) and notifying them about it on
their phone or watch. Smartphones and wearable devices are becoming increasingly proactive
in initiating user interaction in this way, rather than waiting for the user to ask, command,
explore, or manipulate. An example is a fitness tracker that notifies the user of a milestone
they have reached for a given activity, for example, having walked 10,000 steps in a day.
The fitness tracker does this automatically without any requests made by the user; the user
responds by looking at the notification on their screen or listening to an audio announce-
ment that is made. Another example is when the system automatically provides some funny
or useful information for the user, based on what it has learned from their repeated behaviors
when carrying out particular actions in a given context. For example, after taking a photo
of a friend’s cute dog in the park, Google Lens will automatically pop up information that
identifies the breed of the dog (see Figure 3.9).

Figure 3.9 Google Lens in action, providing pop-up information about a Pembroke Welsh Corgi,
having recognized the image as one
Source: https://lens.google.com

For some people, this kind of system-initiated interaction—where additional informa-
tion is provided which has not been requested—might get a bit tiresome or frustrating, espe-
cially if the system gets it wrong. The challenge is knowing when the user will find it useful
and interesting and how much and what kind of contextual information to provide without
overwhelming or annoying them. Also, it needs to know what to do when it gets it wrong.
For example, if it thinks the dog is a teddy bear, will it apologize? Will the user be able to
correct it and tell it what the photo actually is? Or will the system be given a second chance?
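
As a minimal sketch of the responding interaction type, the hypothetical tracker logic below
initiates a notification when a milestone is reached and otherwise stays silent. The function
names, the threshold, and the notify-only-once rule are all our own assumptions.

DAILY_STEP_GOAL = 10_000  # an assumed milestone, as in the 10,000-step example

def notify(message: str) -> None:
    # Stand-in for a real push notification; the user can look at it or ignore it
    print(f"[notification] {message}")

def on_steps_updated(steps_today: int, already_notified: bool) -> bool:
    # Called whenever the step count changes; the system, not the user,
    # decides whether to speak up. Returns True if a notification was sent.
    if steps_today >= DAILY_STEP_GOAL and not already_notified:
        notify("Milestone reached: 10,000 steps today!")
        return True
    return False  # stay quiet: unrequested alerts can easily annoy

sent = on_steps_updated(10_042, already_notified=False)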

3.6 Paradigms, Visions, Theories, Models,
and Frameworks

Other sources of conceptual inspiration and knowledge that are used to inform design and
guide research are paradigms, visions, theories, models, and frameworks (Carroll, 2003). These
vary in terms of their scale and specificity to a particular problem space. A paradigm refers
to a general approach that has been adopted by a community of researchers and designers
for carrying out their work in terms of shared assumptions, concepts, values, and practices.
A vision is a future scenario that frames research and development in interaction design—
often depicted in the form of a film or a narrative. A theory is a well-substantiated explana-
tion of some aspect of a phenomenon; for example, the theory of information processing that
explains how the mind, or some aspect of it, is assumed to work. A model is a simplification
of some aspect of human-computer interaction intended to make it easier for designers to
predict and evaluate alternative designs. A framework is a set of interrelated concepts and/or
a set of specific questions that are intended to inform a particular domain area (for example,
collaborative learning), or an analytic method (for instance, ethnographic studies).

3.6.1 Paradigms
Following a particular paradigm means adopting a set of practices upon which a community
has agreed. These include the following:

• The questions to be asked and how they should be framed
• The phenomena to be observed
• The way in which findings from studies are to be analyzed and interpreted (Kuhn, 1962)

In the 1980s, the prevailing paradigm in human-computer interaction was how to design
user-centered applications for the desktop computer. Questions about what and how to
design were framed in terms of specifying the requirements for a single user interacting with
a screen-based interface. Task analytic and usability methods were developed based on an
individual user’s cognitive capabilities. Windows, Icons, Menus, and Pointers (WIMP) was
used as a way of characterizing the core features of an interface for a single user. This was
later superseded by the graphical user interface (GUI). Now many interfaces have touch
screens that users tap, press and hold, pinch, swipe, slide, and stretch.

A big influence on the paradigm shift that took place in HCI in the 1990s was Mark
Weiser’s (1991) vision of ubiquitous technology. He proposed that computers would become
part of the environment, embedded in a variety of everyday objects, devices, and displays.
He envisioned a world of serenity, comfort, and awareness, where people were kept perpetu-
ally informed of what was happening around them, what was going to happen, and what
had just happened. Ubiquitous computing devices would enter a person’s center of attention
when needed and move to the periphery of their attention when not, enabling the person to
switch calmly and effortlessly between activities without having to figure out how to use a
computer when performing their tasks. In essence, the technology would be unobtrusive and
largely disappear into the background. People would be able to get on with their everyday
and working lives, interacting with information and communicating and collaborating with
others without being distracted or becoming frustrated with technology.

This vision was successful at influencing the computing community’s thinking; inspiring
them especially regarding what technologies to develop and problems to research (Abowd,
2012). Many HCI researchers began to think beyond the desktop and design mobile and perva-
sive technologies. An array of technologies was developed that could extend what people could
do in their everyday and working lives, such as smart glasses, tablets, and smartphones.

The next big paradigm shift that took place in the 2000s was the emergence of Big Data
and the Internet of Things (IoT). New and affordable sensor technologies enabled masses of
data to be collected about people’s health, well-being, and real-time changes happening in the
environment (for example, air quality, traffic congestion, and business). Smart buildings were
also built, where an assortment of sensors were embedded and experimented with in homes,
hospitals, and other public buildings. Data science and machine-learning algorithms were
developed to analyze the amassed data to draw new inferences about what actions to take
on behalf of people to optimize and improve their lives. This included introducing variable
speed limits on highways, notifying people via apps of dangerous pollution levels, crowds at
an airport, and so on. In addition, it became the norm for sensed data to be used to automate
mundane operations and actions—such as turning lights or faucets on and off or flushing
toilets automatically—replacing conventional knobs, buttons, and other physical controls.


3.6.2 Visions
Visions of the future, like Mark Weiser’s vision of ubiquitous technology, provide a powerful
driving force that can lead to a paradigm shift in terms of what research and development is
carried out in companies and universities. A number of tech companies have produced videos
about the future of technology and society, inviting audiences to imagine what life will be
like in 10, 15, or 20 years’ time. One of the earliest was Apple’s 1987 Knowledge Navigator,
which presented a scenario of a professor using a touchscreen tablet with a speech-based
intelligent assistant reminding him of what he needed to do that day while answering the
phone and helping him prepare his lectures. It was 25 years ahead of its time—set in 2011—
the actual year that Apple launched its speech system, Siri. It was much viewed and discussed,
inspiring widespread research into and development of future interfaces.

A current vision that has become pervasive is AI. Both utopian and dystopian visions
are being bandied about on how AI will make our lives easier on the one hand and how
it will take our jobs away on the other. This time, it is not just computer scientists who
are extolling the benefits or dangers of AI advances for society but also journalists, social
commentators, policy-makers, and bloggers. AI is now replacing the user interface for an
increasing number of applications where the user had to make choices, for example, smart-
phones learning your music preferences and home heating systems deciding when to turn
the heating on and off and what temperature you prefer. One objective is to reduce the
stress of people having to make decisions; another is to improve upon what they would
choose. For example, in the future instead of having to agonize over which clothes to buy,
or vacation to select, a personal assistant will be able to choose on your behalf. Another
example depicts what a driverless car will be like in a few years, where the focus is not so
much on current concerns with safety and convenience but more on improving comfort
and life quality in terms of the ultimate personalized passenger experience (for example, see
VW’s video). More and more everyday tasks will be transformed through AI learning what
choices are best in a given situation.

You can watch a video about the Apple Knowledge Navigator here: http://youtu.be/hGYFEI6uLy0.

VW’s vision of its future car can be seen in this video: https://youtu.be/AyihacflLto.

Video IBM’s Internet of Things: http://youtu.be/sfEbMV295kk.



While there are many benefits of letting machines make decisions for us, we may feel a
loss of control. Moreover, we may not understand why the AI system chose to drive the car
along a particular route or why our voice-assisted home robot keeps ordering too much milk.
There are increasing expectations that AI researchers find ways of explaining the rationale
behind the decisions that AI systems make on the user’s behalf. This need is often referred
to as transparency and accountability—which we discuss further in Chapter 10. It is an area
that is of central concern to interaction design researchers, who have started conducting user
studies on transparency and developing explanations that are meaningful and reassuring to
the user (e.g., Rader et al., 2018).

Another challenge is to develop new kinds of interfaces and conceptual models that can
support the synergy of humans and AI systems, which will amplify and extend what they
can do currently. This could include novel ways of enhancing group collaboration, creative
problem-solving, forward planning, policy-making, and other areas that can become intrac-
table, complex, and messy, such as divorce settlements.

Science fiction has also become a source of inspiration in interaction design. By this, we
mean movies, writing, plays, and games that envision what role technology may play in
the future. Dan Russell and Svetlana Yarosh (2018) discuss the pros and cons of using dif-
ferent kinds of science fiction for inspiration in HCI design, arguing that they can provide a
good grounding for debate but are often not a great source of accurate predictions of future
technologies. They point out how, although the visions can be impressively futuristic, their
embellishments and what they actually look like are often limited by the author’s ability to
extend and build upon the ideas and the cultural expectations associated with the current
era. For example, the starship portrayed in the Star Trek TV series had 3D-bubble indicator
lights and push-button designs on its bridge, with the sound of a teletype in the background.
This is the case to such an extent that Russell and Yarosh even argue that the priorities and
concerns of the author’s time and their cultural upbringing can bias the science fiction toward
telling narratives from the perspective of the present, rather than providing new insights and
paving the way to future designs.

The different kinds of future visions provide concrete scenarios of how people might use
the next generation of imagined technologies to make their lives more comfortable, safe,
informative, and efficient. Furthermore, they also raise many questions concerning privacy,
trust, and what we want as a society. They provide much food for thought for research-
ers, policy-makers, and developers, challenging them to consider both positive and negative
implications.

Many new challenges, themes, and questions have been articulated through such visions
(see, for example, Rogers, 2006; Harper et al., 2008; Abowd, 2012), including the following:

• How to enable people to access and interact with information in their work, social, and
everyday lives using an assortment of technologies

• How to design user experiences for people using interfaces that are part of the environ-
ment but where there are no obvious controlling devices

• How and in what form to provide contextually relevant information to people at appropri-
ate times and places to support them while on the move

• How to ensure that information that is passed around via interconnected displays, devices,
and objects is secure and trustworthy


3.6.3 Theories
Over the past 30 years, numerous theories have been imported into human-computer interac-
tion, providing a means of analyzing and predicting the performance of users carrying out
tasks for specific types of computer interfaces and systems (Rogers, 2012). These have been
primarily cognitive, social, affective, and organizational in origin. For example, cognitive
theories about human memory were used in the 1980s to determine the best ways of repre-
senting operations, given people’s memory limitations. One of the main benefits of applying
such theories in interaction design is to help identify factors (cognitive, social, and affective)
relevant to the design and evaluation of interactive products. Some of the most influential
theories in HCI, including distributed cognition, will be covered in the next chapter.

3.6.4 Models
We discussed earlier why a conceptual model is important and how to generate one when
designing a new product. The term model has also been used more generally in interaction
design to describe, in a simplified way, some aspect of human behavior or human-computer
interaction. Typically, it depicts how the core features and processes underlying a phenom-
enon are structured and related to one another. It is usually abstracted from a theory coming
from a contributing discipline, like psychology. For example, Don Norman (1988) developed
a number of models of user interaction based on theories of cognitive processing, arising out
of cognitive science, which were intended to explain the way users interacted with interactive
technologies. These include the seven stages of action model that describes how users move
from their plans to executing physical actions that they need to perform to achieve them
to evaluating the outcome of their actions with respect to their goals. More recent models
developed in interaction design are user models, which predict what information users want
in their interactions and models that characterize core components of the user experience,
such as Marc Hassenzahl’s (2010) model of experience design.

3.6.5 Frameworks
Numerous frameworks have been introduced in interaction design to help designers con-
strain and scope the user experience for which they are designing. In contrast to a model,
a framework offers advice to designers as to what to design or look for. This can come in
a variety of forms, including steps, questions, concepts, challenges, principles, tactics, and
dimensions. Frameworks, like models, have traditionally been based on theories of human
behavior, but they are increasingly being developed from the experiences of actual design
practice and the findings arising from user studies.

Many frameworks have been published in the HCI/interaction design literature, cover-
ing different aspects of the user experience and a diversity of application areas. For example,
there are frameworks for helping designers think about how to conceptualize learning, work-
ing, socializing, fun, emotion, and so on, and others that focus on how to design particular
kinds of technologies to evoke certain responses, for example, persuasive technologies (see
Chapter 6, “Emotional Interaction”). There are others that have been specifically developed
to help researchers analyze the qualitative data they collect in a user study, such as Dis-
tributed Cognition (Rogers, 2012). One framework, called DiCoT (Furniss and Blandford,
2006), was developed to analyze qualitative data at the system level, allowing researchers
to understand how technologies are used by teams of people in work or home settings.
(Chapter 9, “Data Analysis,” describes DiCoT in more detail.)


A classic example of a conceptual framework that has been highly influential in HCI is
Don Norman’s (1988) explanation of the relationship between the design of a conceptual
model and a user’s understanding of it. The framework comprises three interacting compo-
nents: the designer, the user, and the system. Behind each of these are the following:

Designer’s Model The model the designer has of how the system should work
System Image How the system actually works, which is portrayed to the user through the

interface, manuals, help facilities, and so on
User’s Model How the user understands how the system works

The framework makes explicit the relationship between how a system should function,
how it is presented to users, and how it is understood by them. In an ideal world, users should
be able to carry out activities in the way intended by the designer by interacting with the sys-
tem image that makes it obvious what to do. If the system image does not make the designer’s
model clear to the users, it is likely that they will end up with an incorrect understanding of
the system, which in turn will increase the likelihood of their using the system ineffectively
and making errors. This has been found to happen often in the real world. By drawing atten-
tion to this potential discrepancy, designers can be made aware of the importance of trying
to bridge the gap more effectively.

To summarize, paradigms, visions, theories, models, and frameworks are not mutually
exclusive, but rather they overlap in their way of conceptualizing the problem and design
space, varying in their level of rigor, abstraction, and purpose. Paradigms are overarching
approaches that comprise a set of accepted practices and framing of questions and phe-
nomena to observe; visions are scenarios of the future that set up challenges, inspirations,
and questions for interaction design research and technology development; theories tend to
be comprehensive, explaining human-computer interactions; models are an abstraction that
simplify some aspect of human-computer interaction, providing a basis for designing and
evaluating systems; and frameworks provide a set of core concepts, questions, or principles
to consider when designing for a user experience or analyzing data from a user study.

DILEMMA
Who Is in Control?

A recurrent theme in interaction design, especially in the current era of AI-based systems, is
who should be in control at the interface level. The different interaction types vary in terms
of how much control a user has and how much the computer has. While users are primarily
in control for instructing direct manipulation interfaces, they are less so in responding type
interfaces, such as sensor-based and context-aware environments where the system takes the
initiative to act. User-controlled interaction is based on the premise that people enjoy mastery
and being in control. It assumes that people like to know what is going on, be involved in the
action, and have a sense of power over the computer.


In contrast, autonomous and context-aware control assumes that having the environ-
ment monitor, recognize, and detect deviations in a person’s behavior can enable timely, help-
ful, and even critical information to be provided when considered appropriate (Abowd and
Mynatt, 2000). For example, elderly people’s movements can be detected in the home and
emergency or care services alerted if something untoward happens to them that might other-
wise go unnoticed, for instance, if they fall over and are unable to sound the alarm. But what
happens if a person chooses to take a rest in an unexpected area (on the carpet), which the
system detects as a fall? Will the emergency services be called out unnecessarily and cause
caregivers undue worry? Will the person who triggered the alarm be mortified at triggering
a false alarm? And how will it affect their sense of privacy, knowing that their every move is
constantly being monitored?

Another concern is what happens when the locus of control switches between user and
system. For example, consider who is in control when using a GPS for vehicle navigation. At
the beginning, the driver is very much in control, issuing instructions to the system as to where
to go and what to include, such as highways, gas stations, and traffic alerts. However, once on
the road, the system takes over and is in control. People often find themselves slavishly follow-
ing what the GPS tells them to do, even though common sense suggests otherwise.

To what extent do you need to be in control in your everyday and working life? Are you
happy to let technology monitor and decide what you need or might be interested in know-
ing, or do you prefer to tell it what you want to do? What do you think of autonomous cars
that drive for you? While they might be safer and more fuel-efficient, will they take the
pleasure out of driving?

A tongue-in-cheek video made by Superflux, called Uninvited Guests, shows who is very
much in control when a man is given lots of smart gadgets by his children for his birthday to
help him live more healthily: https://vimeo.com/128873380.



In-depth Activity
The aim of this in-depth activity is for you to think about the appropriateness of differ-
ent kinds of conceptual models that have been designed for similar physical and digital
information artifacts.
Compare the following:
• A paperback book and an ebook
• A paper-based map and a smartphone map app

What are the main concepts and metaphors that have been used for each? (Think about the
way time is conceptualized for each of them.) How do they differ? What aspects of the paper-
based artifact have informed the digital app? What is the new functionality? Are any aspects
of the conceptual model confusing? What are the pros and cons?

Summary
This chapter explained the importance of conceptualizing the problem and design spaces before
trying to build anything. It stressed throughout the need to be explicit about the claims and
assumptions behind design decisions that are suggested. It described an approach to formu-
lating a conceptual model and explained the evolution of interface metaphors that have been
designed as part of the conceptual model. Finally, it considered other ways of conceptualizing
interaction in terms of interaction types, paradigms, visions, theories, models, and frameworks.

Key Points
• A fundamental aspect of interaction design is to develop a conceptual model.
• A conceptual model is a high-level description of a product in terms of what users can do
with it and the concepts they need to understand how to interact with it.

• Conceptualizing the problem space in this way helps designers specify what it is they are
doing, why, and how it will support users in the way intended.

• Decisions about conceptual design should be made before commencing physical design
(such as choosing menus, icons, dialog boxes).

• Interface metaphors are commonly used as part of a conceptual model.
• Interaction types (for example, conversing or instructing) provide a way of thinking about
how best to support the activities users will be doing when using a product or service.

• Paradigms, visions, theories, models, and frameworks provide different ways of framing
and informing design and research.


Further Reading

Here we recommend a few seminal readings on interaction design and the user experience
(in alphabetical order).

DOURISH, P. (2001) Where the Action Is. MIT Press. This book presents an approach for
thinking about the design of user interfaces and user experiences based on the notion of
embodied interaction. The idea of embodied interaction reflects a number of trends that have
emerged in HCI, offering new sorts of metaphors.

JOHNSON, J. and HENDERSON, A. (2012) Conceptual Models: Core to Good Design.
Morgan and Claypool Publishers. This short book, in the form of a lecture, provides a
comprehensive overview of what a conceptual model is, using detailed examples. It outlines how
to construct one and why it is necessary to do so. It is cogently argued and shows how and
where this design activity can be integrated into interaction design.

JU, W. (2015) The Design of Implicit Interactions. Morgan and Claypool Publishers. This
short book, in the form of a lecture, provides a new theoretical framework to help design
smart, automatic, and interactive devices by examining the small interactions that we engage
in during our everyday lives, often without any explicit communication. It puts forward the idea of
implicit interaction as a central concept for designing future interfaces.


INTERVIEW with Albrecht Schmidt

Albrecht Schmidt is professor of human-
centered ubiquitous media in the com-
puter science department of the Ludwig-
Maximilians-Universität München in Ger-
many. He studied computer science in Ulm
and Manchester and received a PhD from
Lancaster University, United Kingdom,
in 2003. He held several prior academic
positions at different universities, includ-
ing Stuttgart, Cambridge, Duisburg-Essen,
and Bonn. He also worked as a researcher
at the Fraunhofer Institute for Intelligent
Analysis and Information Systems (IAIS)
and at Microsoft Research in Cambridge.
In his research, he investigates the inherent
complexity of human-computer interaction
in ubiquitous computing environments,
particularly in view of increasing computer
intelligence and system autonomy. Albrecht
has actively contributed to the scientific
discourse in human-computer interaction
through the development, deployment, and
study of functional prototypes of interac-
tive systems and interface technologies in
different real world domains. Most recently,
he focuses on how information technol-
ogy can provide cognitive and perceptual
support to amplify the human mind.

How do you think future visions inspire
research in interaction design? Can you
give an example from your own work?

Envisioning the future is key to research in
human-computer interaction. In contrast
to traditional fields that discover phenom-
ena (such as physics or sociology), research
in interaction design is constructive and
creates new things that potentially change
our world. Research in interaction design
also analyzes the world and aims to under-
stand phenomena but mainly as a means
to inspire and guide innovations. A major
aspect of research is then to create concrete
designs, build concepts and prototypes,
and evaluate them.

Future visions are an excellent way to
describe the big picture of a future where
we still have to invent and implement the
details. A vision enables us to communicate
the overall goal for which we are aiming.
In formulating the vision, we have to con-
textualize our ideas, link them to practices
in our lives, and describe the anticipated
impact on individuals and society. A pre-
requisite for formulating a coherent future
vision is a good understanding of the prob-
lems that we want to address. If formulated
well, the vision shows a clear direction,
but it leaves room for researchers in the
community to make their own interpreta-
tion. A well-formulated future vision also
leaves room for individuals to align their
research efforts with the goal or to criticize
it fundamentally through their research.


We have proposed the vision of ampli-
fying human perception and cognition
through digital technologies (see Schmidt,
2017a; 2017b). This vision emerged from
our various concrete research prototypes
over the last 10 years. We realized that
many of the prototypes and technologies
we developed were pointed in a similar
direction: enabling superhuman abilities
through devices and applications. At the
same time, we demonstrate why amplify-
ing human abilities is a timely endeavor,
in particular given the current advances in
artificial intelligence, in sensing technol-
ogies, as well as in personal display devices.
For our group and for colleagues with
whom we work, this vision has become a
means for inspiring new ideas, for investi-
gating relevant areas for potential innova-
tion systematically, and for assessing ideas
early on.

Why do metaphors persist in HCI?
Good metaphors allow people to transfer
their understanding and skills from another
domain in the real world to interaction
with computers and data. Good metaphors
are abstract enough to persist over time,
but concrete enough to simplify the use
of computers. Early metaphors included the computer as an advanced typewriter and the
computer as an intelligent assistant. Such
metaphors help in the design process to
create understandable interaction concepts
and user interfaces. A designer can take
their idea for a user interface or an inter-
action concept and evaluate it in the light
of the metaphor. They can assess if the
interaction is understandable for people
familiar with the concept or technologies
on which the metaphor is based. Meta-
phors often suggest interaction styles and
hence can help to create interfaces that are
more consistent and interaction designs

that can be used without explanation using
intuition (which in this case is the implicit
understanding of the metaphor).

Metaphors for which the underlying
concept has disappeared from everyday
usage may still persist. In many cases, users
will not know the original concept from
their own experience (for example, a type-
writer), but have grown up with technol-
ogies using the metaphor. For a metaphor
to persist, it must remain conducive and
helpful over time for new users as well as
for experienced ones.

What do you think of the rise of AI and
automation? Do you think there is a role
for HCI, and if so, what?
Advancements in AI and in automation
are exciting. They have the potential to
empower humans to do things, to think
things, and to experience things we cannot
even imagine right now. However, the key
to unlocking this potential is to create effi-
cient ways for interacting with artificial
intelligence. Meaningful automation and
intelligent systems always have boundaries
and intersections with human action. For
example, an autonomous car will transport
a human, a drone will deliver a parcel to a
person, an automated kitchen will prepare
a meal for a family, and large-scale data
analytics in companies will lead to better
services for their customers. With intelli-
gent systems and smart services taking a
more active role through artificial intelli-
gence, the way in which interaction and
interfaces are designed becomes even more
crucial. Creating a positive user experience
in the presence of artificial intelligence is
a challenge where new visions and meta-
phors are required.

One concept that we suggested is the
notion of intervention user interfaces
(Schmidt and Herrmann, 2017). The basic


expectation is that in the future many
intelligent systems in our environment will
work just fine, without any human inter-
action. However, to stay in control and to
tailor the system to current and unforesee-
able needs, as well as to customize the user
experience, human interventions should be
easily possible. Designing interaction con-
cepts for interventions and user interfaces
that empower humans to make the most
of a system driven by artificial intelligence
is a huge challenge and includes many ba-
sic research questions. Getting the interac-
tion with AI right, basically finding ways
for humans to harness the power of AI for
what they want to do, is as important as
developing the underlying algorithms. One
without the other is of very limited value.

What do you see are the challenges ahead
for HCI at scale?
There are many challenges ahead in the
context of autonomous systems and
artificial intelligence as outlined earlier.
Closely related to this is human data inter-
action beyond interactive visualization.
How can we empower humans to work
with big and unstructured data? Here is a
concrete example: I had a discussion with
medical professionals this morning. For a
specific cancer type, there are several thou-
sand publications readily available. Many
of them may have similar results, and
others may have conflicting ones. Reading
all of the publications in their current form
is an intractable problem for a human
reader, as it would take too long and
would overload a person’s working mem-
ory. The simple question that resulted from

the discussion is: what would a system and
interface look like that uses AI to prepro-
cess 10,000 papers, allows interactive pre-
sentation of relevant content, and enables
humans to make sense of the state of the
art and come up with their own hypotheses?
Preferably, the interface would support the
person to do this in a few hours, rather
than in their entire lifetime.

Another challenge at the societal scale is
to understand the long-term impact of inter-
active systems that we create. So far, this has been
very much trial and error. Providing unlim-
ited and easy-to-use mass communication
to individuals without journalistic training
has changed how we read news. Personal
communication devices and instant mes-
saging have altered communication pat-
terns in families and classrooms. Working
in the office using a computer in order to
create texts is reducing our physical move-
ments. The way that we design interactive
systems, the things we make easy or hard
to use, and the modalities that we choose
in our interaction design have inevitably
resulted in long-term impacts on people.
With the current methods and tools in
HCI, we are well equipped to do a great job
in developing easy-to-use systems with an
amazing short-term user experience for the
individual. However, looking at upcoming
major innovations in mobility and health-
care technologies, the interfaces we design
may have many more consequences. One
major challenge at scale is to design for a
longer-term user experience (months to
years) on a societal scale. Here we still first
have to research and invent methods and
tools.


Chapter 4

COGNITIVE ASPECTS

Objectives
The main goals of the chapter are to accomplish the following:

• Explain what cognition is and why it is important for interaction design.
• Discuss what attention is and its effects on our ability to multitask.
• Describe how memory can be enhanced through technology aids.
• Show the difference between various cognitive frameworks that have been applied
to HCI.

• Explain what mental models are.
• Enable you to elicit a mental model and understand what it means.

4.1 Introduction

Imagine that it is getting late in the evening and you are sitting in front of your laptop. You
have a report to complete by tomorrow morning, but you are not getting very far with it.
You begin to panic and start biting your nails. You see two text messages flash up on your
smartphone. You instantly abandon your report and cradle your smartphone to read them.
One is from your mother, and the other is from your friend asking if you want to go out for a
drink. You reply immediately to both of them. Before you know it, you’re back on Facebook
to see whether any of your friends have posted anything about the party that you wanted to
go to but had to turn down. Your phone rings, and you see that it's your dad calling. You answer it, and
he asks if you have been watching the football game. You say that you are too busy working
toward a deadline, and he tells you that your team has just scored. You chat with him for a
while and then say you have to get back to work. You realize 30 minutes have passed, and
you return your attention to your report. But before you realize it, you click your favorite
sports site to check the latest score of the football game and discover that your team has just
scored again. Your phone starts buzzing. Two new WhatsApp messages are waiting for you.
And on it goes. You glance at the time on your laptop. It is midnight. You really are in a panic
now and finally close everything down except your word processor.



In the past 10–15 years, it has become increasingly common for people to switch their
attention constantly among multiple tasks. The study of human cognition can help us
understand the impact of multitasking on human behavior. By examining human abilities
and limitations, it can also provide insights into other types of digital behavior, such as
decision-making, searching, and designing when using computer technologies.

This chapter covers these aspects by examining the cognitive aspects of interaction design.
It considers what humans are good and bad at, and it shows how this knowledge can inform
the design of technologies that both extend human capabilities and compensate for human
weaknesses. Finally, relevant cognitive theories, which have been applied in HCI to inform
technology design, are described. (Other ways of conceptualizing human behavior that focus
on the social and emotional aspects of interaction are presented in the following two chapters.)

4.2 What Is Cognition?

There are many different kinds of cognition, such as thinking, remembering, learning, day-
dreaming, decision-making, seeing, reading, writing, and talking. A well-known way of dis-
tinguishing between different modes of cognition is in terms of whether it is experiential or
reflective (Norman, 1993). Experiential cognition is a state of mind where people perceive,
act, and react to events around them intuitively and effortlessly. It requires reaching a certain
level of expertise and engagement. Examples include driving a car, reading a book, hav-
ing a conversation, and watching a video. In contrast, reflective cognition involves mental
effort, attention, judgment, and decision-making, which can lead to new ideas and creativity.
Examples include designing, learning, and writing a report. Both modes are essential for eve-
ryday life. Another popular way of describing cognition is in terms of fast and slow thinking
(Kahneman, 2011). Fast thinking is similar to Don Norman's experiential mode insofar as it
is instinctive, reflexive, and effortless, and it has no sense of voluntary control. Slow thinking,
as the name suggests, takes more time and is considered to be more logical and demanding,
and it requires greater concentration. The difference between the two modes is easy to see
when asking someone to give answers to the following two arithmetic equations:

2 + 2 = ?
21 × 19 = ?

The former can be done by most adults in a split second without thinking, while the lat-
ter requires much mental effort; many people need to externalize the task to be able to com-
plete it by writing it down on paper and using the long multiplication method. Nowadays,
many people simply resort to fast thinking by typing the numbers to be added or multiplied
into a calculator app on a smartphone or computer.
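
Written out, the two calculations make the contrast concrete. The second requires holding intermediate products in mind or on paper, which is exactly what the long multiplication method externalizes:

```latex
\begin{aligned}
2 + 2 &= 4 \\
21 \times 19 &= (21 \times 10) + (21 \times 9) \\
             &= 210 + 189 \\
             &= 399
\end{aligned}
```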

Other ways of describing cognition are in terms of the context in which it takes place,
the tools that are employed, the artifacts and interfaces that are used, and the people involved
(Rogers, 2012). Depending on when, where, and how it happens, cognition can be distrib-
uted, situated, extended, and embodied. Cognition has also been described in terms of spe-
cific kinds of processes (Eysenck and Brysbaert, 2018). These include the following:

• Attention
• Perception


• Memory
• Learning
• Reading, speaking, and listening
• Problem-solving, planning, reasoning, and decision-making

It is important to note that many of these cognitive processes are interdependent: several
may be involved for a given activity. It is rare for one to occur in isolation. For example, when
reading a book one has to attend to the text, perceive and recognize the letters and words,
and try to make sense of the sentences that have been written.

In the following sections we describe the main kinds of cognitive processes in more
detail, followed by a summary box highlighting the core design implications for each. The
most relevant for interaction design are attention and memory, which we describe in the
greatest detail.

4.2.1 Attention
Attention is central to everyday life. It enables us to cross the road without being hit by a
car or bicycle, notice when someone is calling our name, and be able to text while at the
same time watching TV. It involves selecting things on which to concentrate, at a point in
time, from the range of possibilities available, allowing us to focus on information that is
relevant to what we are doing. The extent to which this process is easy or difficult depends
on (1) whether someone has clear goals and (2) whether the information they need is salient
in the environment.

4.2.1.1 Clear Goals
If someone knows exactly what they want to find out, they try to match this with the infor-
mation that is available. For example, when someone has just landed at an airport after a
long flight, which did not have Wi-Fi onboard, and they want to find out who won the World
Cup, they might scan the headlines on their smartphone or look at breaking news on a public
TV display inside the airport. When someone is not sure exactly what they are looking for,
they may browse through information, allowing it to guide their attention to interesting or
salient items. For example, when going to a restaurant, someone may have the general goal of
eating a meal but only a vague idea of what they want to eat. They peruse the menu to find
things that whet their appetite, letting their attention be drawn to the imaginative descrip-
tions of various dishes. After scanning through the possibilities and imagining what each
dish might be like, as well as considering other factors (such as cost, who they are with, what
the specials are, what the waiter recommends, and whether they want a two- or three-course
meal, and so on), they then decide.

4.2.1.2 Information Presentation
The way information is displayed can also greatly influence how easy or difficult it is to
comprehend appropriate pieces of information. Look at Figure 4.1, and try the activity
(based on Tullis, 1997). Here, the information-searching tasks are precise, requiring spe-
cific answers.


[Figure 4.1 content: (a) South Carolina hotels in Charleston and Columbia arranged in aligned, labeled columns (City, Motel/Hotel, Area code, Single and Double rates, Phone); (b) the equivalent Pennsylvania hotel information for Bedford, Bradley, and Breezewood presented as continuous, run-together lines of text.]

Figure 4.1 Two different ways of structuring the same information at the interface level. One makes
it much easier to find information than the other.
Source: Used courtesy of Dr. Tom Tullis


4.2.1.3 Multitasking and Attention
As mentioned in the introduction to this chapter, many people now multitask, frequently
switching their attention among different tasks. For example, in a study of teenage multitask-
ing, the majority of teenagers were found to multitask most or some of the time while
listening to music, watching TV, using a computer, or reading (Rideout et al., 2010). The
proportion is probably even higher now, considering their use of smartphones while walking,
talking, and studying. While attending a presentation at a conference, we witnessed some-
one deftly switch between four ongoing instant message chats (one at the conference, one at
school, one with friends, and one at her part-time job), read, answer, delete, and place all new
messages in various folders of her two email accounts, and check and scan her Facebook and
her Twitter feeds, all while appearing to listen to the talk, take some notes, conduct a search
on the speaker’s background, and open up their publications. When she had a spare moment,
she played the game Patience. It was exhausting just watching her for a few minutes. It was
as if she were capable of living in multiple worlds simultaneously while not letting a moment
go to waste. But how much did she really take in of the presentation?

Is it possible to perform multiple tasks without one or more of them being detrimentally
affected? There has been much research on the effects of multitasking on memory and attention
(Burgess, 2015). The general finding is that it depends on the nature of the tasks and how much
attention each demands. For example, listening to gentle music while working can help people
tune out background noise, such as traffic or other people talking, and help them concentrate
on what they are doing. However, if the music is loud, like heavy metal, it can be distracting.

Individual differences have also been found. For example, the results of a series of
experiments comparing heavy with light multitaskers showed that heavy media multitaskers
(such as the person described above) were more prone to being distracted by the multiple
streams of media they are viewing than those who infrequently multitask. The latter were
found to be better at allocating their attention when faced with competing distractions

ACTIVITY 4.1
Look at the top screen of Figure 4.1 and (1) find the price for a double room at the Quality
Inn in Columbia, South Carolina, and (2) find the phone number of the Days Inn in Charles-
ton, South Carolina. Then look at the bottom screen in Figure 4.1 and (1) find the price of a
double room at the Holiday Inn in Bradley, Pennsylvania, and (2) find the phone number of
the Quality Inn in Bedford, Pennsylvania. Which took longer to do?

In an early study, Tullis found that the two screens produced quite different results: It
took an average of 3.2 seconds to search the top screen, while it took an average of 5.5 sec-
onds to find the same kind of information in the bottom screen. Why is this so, considering
that both displays have the same density of information relative to the background?

Comment
The primary reason for the disparity is the way that the characters are grouped in the display.
In the top screen, they are grouped into vertical categories of information (that is, place, type
of accommodation, phone number, and rates), and this screen has space in between the col-
umns of information. In the bottom screen, the information is bunched together, making it
much more difficult to search.
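
As a rough illustration of this point (a sketch added here, not part of Tullis's study), the following Python snippet prints a few of the hotel records from Figure 4.1 in both layouts: first as vertically grouped columns with space between them, then bunched together into run-on lines.

```python
# Illustrative sketch: the same records rendered in the two layouts of Figure 4.1.
records = [
    ("Charleston", "Days Inn", "803", "$118", "$124", "881-1000"),
    ("Columbia", "Quality Inn", "803", "$134", "$141", "772-0270"),
    ("Columbia", "Ramada Inn", "803", "$136", "$144", "796-2700"),
]

# Layout (a): vertical categories with space between the columns,
# so the eye can scan one category (say, the Double rate) in isolation.
print(f"{'City':<12}{'Motel/Hotel':<14}{'Area':<6}{'Single':<8}{'Double':<8}Phone")
for city, hotel, area, single, double, phone in records:
    print(f"{city:<12}{hotel:<14}{area:<6}{single:<8}{double:<8}{phone}")

print()

# Layout (b): the same information bunched together, forcing the reader
# to parse every line in full to find a single value.
for city, hotel, area, single, double, phone in records:
    print(f"{city} Motel/Hotel: {hotel} ({area}) {phone} S: {single} D: {double}")
```

Running it makes the disparity easy to feel: finding the Quality Inn's double rate takes one glance down a column in the first layout but a line-by-line read in the second.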


(Ophir et al., 2009). This suggests that people who are heavy multitaskers are likely to be those
who are easily distracted and find it difficult to filter out irrelevant information. However, a
more recent study by Danielle Lottridge et al. (2015) found that it may be more complex.
They found that while heavy multitaskers are easily distracted, they can also put this to good
use if the distracting sources are relevant to the task at hand. Lottridge et al. conducted a
study that involved writing an essay under two conditions—either with relevant or irrelevant
information. They found that if the information sources are relevant, they don’t affect the
essay writing. The condition where irrelevant information was provided was found to nega-
tively impact task performance. In summary, they found that multitasking can be both good
and bad—it depends on what you are distracted by and how relevant it is to the task at hand.

The reason why multitasking is thought to be detrimental for human performance is that it
overloads people's capacity to focus. Switching attention away from what someone is working
on to another piece of information requires additional effort to get back into the original task
and to remember where they were in the ongoing activity. Thus, the time to complete a task can be
significantly increased. A study of completion rates of coursework found that students who were
involved in instant messaging took up to 50 percent longer to read a passage from a textbook com-
pared with those who did not instant message while reading (Bowman et al., 2010). Multitasking
can also result in people losing their train of thought, making errors, and needing to start over.

Nevertheless, many people are expected to multitask in the workplace nowadays, such as
in hospitals, as a result of the introduction of ever more technology (for example, multiple
screens in an operating room). The technology is often introduced to provide new kinds of
real-time and changing information. However, this usually requires the constant attention of
clinicians to check whether any of the data is unusual or unexpected. Managing the ever-
increasing information load requires professionals, like clinicians, to develop new attention
and scanning strategies, looking out for anomalies in data visualizations and listening for
audio alarms alerting them to potential dangers. Interaction designers have tried to make this
easier by including the use of ambient displays that come on when something needs atten-
tion—flashing arrows to direct attention to a particular type of data or history logs of recent
actions that can be quickly examined to refresh one’s memory of what has just happened on a
given screen. However, how well clinicians manage to switch and divide their attention among
different tasks in tech-rich environments has barely been researched (Douglas et al., 2017).



DILEMMA
Is It OK to Use a Phone While Driving?

There has been considerable debate about whether drivers should be able to talk or text on
their phones at the same time as driving (see Figure 4.2). People talk on their phones while
walking, so why not be able to do the same thing when driving? The main reasons are that
driving is more demanding, drivers are more prone to being distracted, and there is a greater
chance of causing accidents (however, it is also the case that some people, when using their
phones, walk out into a road without looking to see whether any cars are coming).

A meta-review of research that has investigated mobile phone use in cars has found that
drivers’ reaction times are longer to external events when engaged in phone conversations
(Caird et al., 2018). Drivers who use phones have also been found to be much poorer at stay-
ing in their lane and maintaining the correct speed (Stavrinos et al., 2013). The reason for this
is that drivers on a phone rely more on their expectations about what is likely to happen next
and, as a result, respond much more slowly to unexpected events, such as the car in front of
them stopping (Briggs et al., 2018). Moreover, phone conversations cause the driver visually
to imagine what is being talked about. The driver may also imagine the facial expression of the
person to whom they are speaking. The visual imagery involved competes for the processing
resources also needed to enable the driver to notice and react to what is in front of them on the
road. The idea that using a hands-free device is safer than actually holding the phone to carry
out a conversation is false, as the same type of cognitive processing takes place both ways.


Figure 4.2 How distracting is it to be texting on the phone while driving?
Source: Tetra Images / Alamy Stock Photo


In several contexts, therefore, multitasking can be detrimental to performance, such as text-
ing or speaking on the phone when driving. The cost of switching attention varies from person
to person and depends on which information resources are being switched between. When developing new
technology to provide more information for people in their work settings, it is important to
consider how best to support them so that they can easily switch their attention back and forth

Design Implications
Attention

• Consider context. Make information salient when it requires attention at a given stage
of a task.

• Use techniques to achieve this when designing visual interfaces, such as animated graphics,
color, underlining, ordering of items, sequencing of different information, and spac-
ing of items.

• Avoid cluttering visual interfaces with too much information. This applies especially to the
use of color and graphics: It is tempting to use lots of these attributes, which results in a
mishmash of media that is distracting and annoying rather than helping the user attend to
relevant information.

• Consider designing different ways of supporting effective switching and returning to a
particular interface. This could be done subtly, such as the use of pulsing lights gradually
getting brighter, or abruptly, such as the use of alerting sounds or voice. How much com-
peting visual information or ambient sound is present also needs to be considered.

It has also been found that drivers who engage in conversation with their passengers expe-
rience similar negative effects. However, there is a difference between having a conversation
with a passenger sitting next to the driver and one with a person located remotely. The driver
and front-seat passenger can observe jointly what is happening in front of them on the road and
will moderate or cease their conversation in order to switch their full attention to a potential or
actual hazard. Someone on the other end of a phone, however, is not privy to what the driver
is seeing and will carry on the conversation. They might have just asked “Where did you leave
the spare set of keys?” and caused the driver mentally to search for them in their home, making
it more difficult for them to switch their full attention back to what is happening on the road.

Because of these hazardous problems, many countries have banned the use of phones
while driving. To help drivers resist the temptation to answer a phone that rings or glance at an
incoming notification that pings, smartphone device manufacturers have been asked by some
governments to introduce a driver mode akin to the airplane mode that could automatically
lock down a smartphone, preventing access to apps, while disabling the phone’s keyboard
when it detects a person who is driving. For example, the iPhone has now implemented
this option.


among the multiple displays or devices and be able to return readily to what they were doing after
an interruption (for instance, the phone ringing or people entering their space to ask questions).

4.2.2 Perception
Perception refers to how information is acquired from the environment via the five sense
organs (vision, hearing, taste, smell, and touch) and transformed into experiences of objects,
events, sounds, and tastes (Roth, 1986). In addition, we have the additional sense of kines-
thesia, which relates to the awareness of the position and movement of the parts of the body
through internal sensory organs (known as proprioceptors) located in the muscles and joints.
Perception is complex, involving other cognitive processes such as memory, attention, and
language. Vision is the most dominant sense for sighted individuals, followed by hearing and
touch. With respect to interaction design, it is important to present information in a way that
can be readily perceived in the manner it was intended.

As was demonstrated in Activity 4.1, grouping items together and leaving spaces between
them can aid attention because it breaks up the information. Chunked information is easier
to scan than one long list of text that is all the same. In addi-
tion, many designers recommend using blank space (more commonly known as white space)
when grouping objects, as it helps users to perceive and locate items more easily and quickly
(Malamed, 2009). In a study comparing web pages displaying the same amount of informa-
tion but structured using different graphical methods (see Figure  4.3), it was found that
people took less time to locate items from information that was grouped using a border than

Design Implications
Perception

Representations of information need to be designed to be perceptible and recognizable across different media.
• Design icons and other graphical representations so that users can readily distinguish between them.
• Obvious separators and white space are effective visual methods for grouping information that make it easier to perceive and locate items.
• Design audio sounds to be readily distinguishable from one another so that users can perceive how they differ and remember what each one represents.
• Research proper color contrast techniques when designing an interface, especially when choosing a color for text so that it stands out from the background. For example, it is okay to use yellow text on a black or blue background, but not on a white or green background (see the contrast-ratio sketch after this list).

• Haptic feedback should be used judiciously. The kinds of haptics used should be easily
distinguishable so that, for example, the sensation of squeezing is represented in a tactile
form that is different from the sensation of pushing. Overuse of haptics can cause confu-
sion. Apple iOS suggests providing haptic feedback in response to user-initiated actions,
such as when the action of unlocking a vehicle using a smartwatch has been completed.
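
The color contrast advice above can be made measurable. One widely used yardstick, which goes beyond what this chapter covers, is the contrast ratio defined in the W3C's WCAG guidelines; the minimal Python sketch below implements that published formula and reproduces the yellow-on-black versus yellow-on-white recommendation.

```python
# Minimal sketch of the WCAG 2.x contrast-ratio formula for text/background pairs.
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 channel values."""
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1 (identical colors) to 21 (black on white)."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

yellow, black, white = (255, 255, 0), (0, 0, 0), (255, 255, 255)
print(contrast_ratio(yellow, black))  # ~19.6, so yellow text on black reads well
print(contrast_ratio(yellow, white))  # ~1.07, so yellow text on white does not
```

WCAG suggests a ratio of at least 4.5:1 for body text, which the yellow-on-white pairing fails by a wide margin.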


[Figure 4.3 content: two web pages displaying the same long list of university-related links (departments, buildings, places, and services), one grouping the items with borders and the other distinguishing them with color contrast alone.]

Figure 4.3 Two ways of structuring information on a web page
Source: Weller (2004)


when using color contrast (Weller, 2004). The findings suggest that using contrasting colors
in this manner may not be a good way to group information on a screen, but that using bor-
ders is more effective (Galitz, 1997).

4.2.3 Memory
Memory involves recalling various kinds of knowledge that allow people to act appropri-
ately. For example, it allows them to recognize someone’s face, remember someone’s name,
recall when they last met them, and know what they said to them last.

It is not possible for us to remember everything that we see, hear, taste, smell, or touch,
nor would we want to, as our brains would get overloaded. A filtering process is used to
decide what information gets further processed and memorized. This filtering process, how-
ever, is not without its problems. Often, we forget things that we would like to remember
and conversely remember things that we would like to forget. For example, we may find it
difficult to remember everyday things, like people’s names, or scientific knowledge such as
mathematical formulae. On the other hand, we may effortlessly remember trivia or tunes that
cycle endlessly through our heads.

How does this filtering process work? Initially, encoding takes place, determining
which information is paid attention to in the environment and how it is interpreted. The
extent to which it takes place affects people’s ability to recall that information later.
The more attention that is paid to something and the more it is processed in terms of
thinking about it and comparing it with other knowledge, the more likely it is to be
remembered. For example, when learning about a topic, it is much better to reflect on it,
carry out exercises, have discussions with others about it, and write notes rather than pas-
sively reading a book or watching a video about it. Thus, how information is interpreted
when it is encountered greatly affects how it is represented in memory and how easy it is
to retrieve subsequently.

Another factor that affects the extent to which information can be subsequently retrieved
is the context in which it is encoded. One outcome is that sometimes it can be difficult for
people to recall information that was encoded in a different context from the one in which
they are at present. Consider the following scenario:

You are on a train and someone comes up to you and says hello. You don’t recognize this
person for a few moments, but then you realize it is one of your neighbors. You are only
used to seeing them in the hallway of your apartment building and seeing them out of
context makes this person initially difficult to recognize.

Another well-known memory phenomenon is that people are much better at recognizing
things than recalling things. Furthermore, certain kinds of information are easier to recognize
than others. In particular, people are good at recognizing thousands of pictures even if they
have only seen them briefly before. In contrast, people are not as good at remembering details
about the things they photograph when visiting places, such as museums. It seems that they
remember less about objects when they have photographed them than when they observe
them with the naked eye (Henkel, 2014). The reason for this is that the study participants
appeared to be focusing more on framing the photo and less on the details of the object being
photographed. Consequently, people don’t process as much information about an object
when taking photos of it compared with when they are actually looking at it; hence, they are
unable to remember as much about it later.


Increasingly, people rely on the Internet and their smartphones to act as cognitive pros-
theses. Smartphones with Internet access have become an indispensable extension of the
mind. Sparrow et al. (2011) showed how expecting to have readily available Internet access
reduces the need and hence the extent to which people attempt to remember the information
itself, while enhancing their memory for knowing where to find it online. Many people will
whip out a smartphone to find out who acted in a movie, the name of a book, or what year
a pop song was first released, and so on. Besides search engines, there are a number of other
cognitive prosthetic apps that instantly help people find out or remember something, such as
Shazam.com, the popular music recognition app.

4.2.3.1 Personal Information Management
The number of documents written, images created, music files recorded, videoclips down-
loaded, emails with attachments saved, URLs bookmarked, and so on, increases every day. A
common practice is for people to store these files on a phone, on a computer, or in the cloud
with a view to accessing them later. This is known as personal information management
(PIM). The design challenge here is deciding which is the best way of helping users organize
their content so that it can be easily searched, for example, via folders, albums, or lists. The
solution should help users readily access specific items at a later date, for example, a par-
ticular image, video, or document. This can be difficult, however, especially when there are
thousands or hundreds of thousands of pieces of information available. How does someone
find that photo they took of their dog spectacularly jumping into the sea to chase a seagull,
which they believe was taken two or three years ago? It can take them ages to wade through
the hundreds of folders they have catalogued by date, name, or tag. Do they start by homing
in on folders for a given year, looking for events, places, or faces, or typing in a search term
to find the specific photo?

ACTIVITY 4.2
Try to remember the birthdays of all the members of your family and closest friends. How
many can you remember? Then try to describe the image/graphic of the latest app you
downloaded.

Comment
It is likely that you remembered the image, the colors, and the name of the app you down-
loaded much better than the birthdays of your family and friends—most people now rely on
Facebook or other online apps to remind them about such special dates. People are good at
remembering visual cues about things, for example, the color of items, the location of objects
(for example, a book being on the top shelf), and marks on an object (like a scratch on a
watch, a chip on a cup, and so on). In contrast, people find other kinds of information per-
sistently difficult to learn and remember, especially arbitrary material like phone numbers.



It can become frustrating if an item is not easy to locate, especially when users have to
spend lots of time opening numerous folders when searching for a particular image or an old
document, simply because they can’t remember what they called it or where they stored it.
How can we improve upon this cognitive process of remembering?

Naming is the most common means of encoding content, but trying to remember a name
someone created some time back can be difficult, especially if they have tens of thousands
of named files, images, videos, emails, and so forth. How might such a process be facilitated,
considering individual memory abilities? Ofer Bergman and Steve Whittaker (2016) have
proposed a model for helping people manage their “digital stuff” based on curation. The
model involves three interdependent processes: how to decide what personal information
to keep, how to organize that information when storing it, and which strategies to use to
retrieve it later. The first stage can be assisted by the system they use. For example, email,
texts, music, and photos are stored by default on many devices. Users have to decide whether
to place these in folders or delete them. In contrast, when browsing the web, they have to
make a conscious decision as to whether a site they are visiting is worth bookmarking as one
they might want to revisit later.

A number of ways of adding metadata to documents have been developed, includ-
ing time stamping, categorizing, tagging, and attribution (for example color, text, icon,
sound, or image). Surprisingly, however, the majority of people still prefer the old-
fashioned way of using folders for holding their files and other digital content. One
reason is that folders provide a powerful metaphor (see Chapter  3, “Conceptualizing
Interaction”) that people can readily understand—placing things that have something in
common into a container.
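
To make the folder-versus-metadata contrast concrete, here is a minimal Python sketch (the file names, tags, and dates are invented) of how attaching several kinds of metadata to an item lets it be retrieved along whichever attribute a person happens to remember, rather than through one fixed folder path:

    from datetime import date

    items = [
        {"name": "dog_beach.jpg", "tags": {"dog", "beach", "seagull"},
         "taken": date(2017, 6, 12)},
        {"name": "tax_return.pdf", "tags": {"finance"},
         "taken": date(2019, 1, 31)},
    ]

    def find(tag=None, year=None):
        # Return the names of items matching any combination of attributes.
        return [i["name"] for i in items
                if (tag is None or tag in i["tags"])
                and (year is None or i["taken"].year == year)]

    print(find(tag="dog"))    # ['dog_beach.jpg']
    print(find(year=2019))    # ['tax_return.pdf']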

A folder that is often seen on many users’ desktops is one simply labeled “stuff.” This
is where documents, images, and so forth, that don’t have an obvious place to go are often
placed but that people still want to keep somewhere. It has also been found that there is
a strong preference for scanning across and within folders when looking for something
rather than simply typing a term into a search engine (Bergman and Whittaker, 2016). Part of
the problem with using search engines is that it can be difficult to recall the name of the file
someone is seeking. This process requires more cognitive effort than navigating through a
set of folders.

To help users with searching, a number of search-and-find tools, such as Apple’s Spotlight,
now enable them to type a partial name, or even just the first letter, of a file; the tool then
searches throughout the entire system, including the content inside documents, apps, games,
emails, contacts, images, and calendars. Figure 4.4 shows a partial list of files that Spotlight
matched to the word cognition, categorized in terms of documents, mail and text messages,
PDF documents, and so on.
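
As a rough illustration of what such a tool does (not Apple's actual implementation, which relies on a prebuilt system index), the following Python sketch walks a directory tree and matches a partial query against both file names and the content of plain-text files, grouping the hits by category; the path and the category table are invented:

    import os

    CATEGORIES = {".txt": "Documents", ".md": "Documents", ".pdf": "PDF Documents"}

    def search(root, query):
        # Match a partial query against file names, and against content
        # for plain-text files (binary formats are matched by name only).
        query = query.lower()
        hits = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                ext = os.path.splitext(name)[1].lower()
                matched = query in name.lower()
                if not matched and ext in (".txt", ".md"):
                    try:
                        with open(os.path.join(dirpath, name),
                                  encoding="utf-8", errors="ignore") as f:
                            matched = query in f.read().lower()
                    except OSError:
                        continue
                if matched:
                    hits.setdefault(CATEGORIES.get(ext, "Other"), []).append(name)
        return hits

    # Example: find everything matching the partial word "cogni".
    for category, names in search("/Users/me/Documents", "cogni").items():
        print(category, names)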

4.2.3.2 Memory Load and Passwords
Phone, online, and mobile banking allow customers to carry out financial transactions, such
as paying bills and checking the balance of their accounts, at their convenience. One of the
problems confronting banks that provide these capabilities, however, is how to manage secu-
rity concerns, especially preventing fraudulent transactions.


Figure 4.4 Apple’s Spotlight search tool

BOX 4.1
The Problem with the Magical Number Seven, Plus or Minus Two

Perhaps the best-known finding in psychology (certainly the one that nearly all students
remember many years after they have finished their studies) is George Miller’s (1956) theory
that seven, plus or minus two, chunks of information can be held in short-term memory at any
one time. However, it is also one that has been misapplied in interaction design because several
designers assume that it means they should design user interfaces only to have seven, plus or
minus two, widgets on a screen, such as menus. In fact, however, this is a misapplication of the
phenomenon, as explained here.


By short-term memory, Miller meant a memory store in which information was assumed to
be processed when first perceived. By chunks of information, Miller meant a range of items such
as numbers, letters, or words. According to Miller’s theory, therefore, people’s immediate mem-
ory capacity is very limited. They are able to remember only a few words or numbers that they
have heard or seen. If you are not familiar with this phenomenon, try the following exercise:

Read the first set of numbers here (or get someone to read them to you), cover it up, and
then try to recall as many of the items as possible. Repeat this for the other sets.
• 3, 12, 6, 20, 9, 4, 0, 1, 19, 8, 97, 13, 84
• cat, house, paper, laugh, people, red, yes, number, shadow, broom, rain, plant, lamp, chocolate, radio, one, coin, jet
• t, k, s, y, r, q, x, p, a, z, l, b, m, e

How many did you correctly remember for each set? Between five and nine, as suggested
by Miller’s theory?
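
If you would like to run this exercise on yourself or others, the following minimal Python script sketches the cover-and-recall procedure, using the second set of words from above:

    import random, time

    WORDS = ["cat", "house", "paper", "laugh", "people", "red", "yes", "number",
             "shadow", "broom", "rain", "plant", "lamp", "chocolate", "radio",
             "one", "coin", "jet"]

    def span_test(n=12, seconds=10):
        # Show n items briefly, hide them, then score free recall.
        items = random.sample(WORDS, n)
        print("Memorize:", ", ".join(items))
        time.sleep(seconds)
        print("\n" * 50)  # crude way of 'covering up' the list
        recalled = input("Type the words you remember, separated by spaces: ").split()
        correct = len(set(recalled) & set(items))
        print(f"You recalled {correct} of {n} items")

    span_test()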

Chunks of information can also be combined items that are meaningful. For example, it is
possible to remember the same number of two-word phrases like hot chocolate, banana split,
cream cracker, rock music, cheddar cheese, leather belt, laser printer, tree fern, fluffy duckling,
or cold rain. When these are all jumbled up (that is, split belt, fern crackers, banana laser,
printer cream, cheddar tree, rain duckling, or hot rock), however, it is much harder to remem-
ber as many chunks. This is mainly because the first set contains all meaningful two-word
phrases that have been heard before and that require less time to be processed in short-term
memory, whereas the second set is made up of completely novel phrases that don’t exist in the
real world. You need to spend time linking the two parts of the phrase together while trying
to memorize them. This takes more time and effort to achieve. Of course, it is possible to do if
you have time to spend rehearsing them, but if you are asked to do it having heard them only
once in quick succession, it is most likely that you will remember only a few.

So, how might people’s ability to remember only 7 ± 2 chunks of information that they
have just read or heard be usefully applied to interaction design? According to a survey by
Bob Bailey (2000), several designers have been led to believe in the following guidelines and have
created interfaces based on them:
• Have only seven options on a menu.
• Display only seven icons on a menu bar.
• Never have more than seven bullets in a list.
• Place only seven tabs at the top of a website page.
• Place only seven items on a pull-down menu.

He points out how this is not how the principle should be applied. The reason is that these are
all items that can be scanned and rescanned visually and hence do not have to be recalled from
short-term memory. They don’t just flash up on the screen and disappear, requiring the user to
remember them before deciding which one to select. If you were asked to find an item of food
most people crave in the set of single words listed earlier, would you have any problem? No,
you would just scan the list until you recognized the one (chocolate) that matched the task
and then select it—just as people do when interacting with menus, lists, and tabs, regardless of
whether they consist of three or 30 items. What users are required to do here is not remember
as many items as possible, having only heard or seen them once in a sequence, but instead scan
through a set of items until they recognize the one they want. This is a quite different task.


One solution has been to develop rigorous security measures whereby customers must
provide multiple pieces of information before gaining access to their accounts. This is called
multifactor authentication (MFA). The method requires a user to provide two or more pieces
of evidence that only they know, such as the following:

• Their ZIP code or postal code
• Their mother’s maiden name
• Their birthplace
• The last school they attended
• The first school they attended
• A password of between five and ten letters
• A memorable address (not their home)
• A memorable date (not their birthday)

Many of these are relatively easy to remember and recall since they are familiar to the
specific user. But consider the last two. How easy is it for someone to come up with such
memorable information and then be able to recall it readily? Perhaps the customer can give
the address and birthday of another member of their family as a memorable address and
date. But what about the request for a password? Suppose a customer selects the word inter-
action as a password—fairly easy to remember, yes? The problem is that banks do not ask
for the full password because of the danger that someone in the vicinity might overhear or
oversee. Instead, they ask the customer to provide specific letters or numbers from it, like the
seventh followed by the fifth. Certainly, such information does not spring readily to mind.
Instead, it requires mentally counting each alphanumeric character of the password until
the desired one is reached. How long does it take you to determine the seventh letter of the
password interaction? How did you do it?
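
The following Python sketch mimics this kind of partial-password challenge (the function names are invented, and a real bank would store the password in protected form, not in plain text as here). Note how the lookup that is trivial for the machine is exactly the one that customers find so effortful:

    import random

    def make_challenge(secret, k=2):
        # Pick k random 1-based character positions to ask the customer for.
        return sorted(random.sample(range(1, len(secret) + 1), k))

    def verify(secret, positions, answers):
        # Checking the answers is trivial for the system,
        # but producing them requires mental counting by the customer.
        return all(secret[p - 1] == a for p, a in zip(positions, answers))

    secret = "interaction"
    print(make_challenge(secret))              # e.g. [3, 9]
    print(verify(secret, [5, 7], ["r", "c"]))  # True: 'r' is 5th, 'c' is 7th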

To make things harder, banks also randomize the questions they ask. Again, this is to pre-
vent someone else who is nearby from memorizing the sequence of information. However, it
also means that the customers themselves cannot learn the sequence of information required,
meaning that they have to generate different information each time.

This requirement to remember and recall such information puts a big memory load on cus-
tomers. Some people find such a procedure quite nerve-racking and are prone to forget certain
pieces of information. As a coping strategy, they write down their details on a sheet of paper.
Having such an external representation at hand makes it much easier for them to read off the
necessary information rather than having to recall it from memory. However, it also makes them
vulnerable to the fraud the banks are trying to prevent should anyone else get ahold of that piece
of paper! Software companies have also developed password managers to help reduce memory
load. An example is LastPass (https://www.lastpass.com/), which is designed to remember all of
your passwords, meaning that you only have to remember one master password.

ACTIVITY 4.3
How can banks overcome the problem of providing a secure system while making the mem-
ory load easier for people wanting to use online and mobile phone banking?

Comment
Advances in computer vision and biometrics technology mean that it is now possible to
replace the need for passwords to be typed in each time. For example, facial and touch ID
can be configured on newer smartphones to enable password-free mobile banking. Once
these are set up, a user simply needs to put their face in front of their phone’s camera or their
finger on the fingerprint sensor. These alternative approaches put the onus on the phone to
recognize and authenticate the person rather than the person having to learn and remember
a password.

Much research has been conducted into how to design technology to help people suffer-
ing from memory loss (for instance those with Alzheimer’s disease). An early example was the
SenseCam, which was originally developed by Microsoft Research Labs in Cambridge (UK)
to enable people to remember everyday events. The device they developed was a wearable
camera that intermittently took photos, without any user intervention, while it was worn (see
Figure 4.5). The camera could be set to take pictures at particular times, for example, every
30 seconds, or based on what it sensed (for example, acceleration). The camera employed
a fish-eye lens, enabling nearly everything in front of the wearer to be captured. The digital
images for each day were stored, providing a record of the events that a person experienced.
Several studies were conducted on patients with various forms of memory loss using the
device. For example, Steve Hodges et al. (2006) describe how a patient, Mrs. B, who had
amnesia, was given a SenseCam to wear. The images that were collected were uploaded to a
computer at the end of each day. For the next two weeks, Mrs. B and her husband looked
through these and talked about them. During this period, Mrs. B’s recall of an event nearly
tripled, to a point where she could remember nearly everything about that event. Prior to
using the SenseCam, Mrs. B would have typically forgotten the little that she could initially
remember about an event within a few days.

BOX 4.2
Digital Forgetting

Much of the research on memory and interaction design has focused on developing cogni-
tive aids that help people to remember, for example, reminders, to-do lists, and digital photo
collections. However, there are times when we want to forget a memory. For example, when
someone breaks up with their partner, it can be emotionally painful to be reminded of them
through shared digital images, videos, and Facebook friends. How can technology be designed
to help people forget such memories? How could social media, such as Facebook, be designed
to support this process?

Corina Sas and Steve Whittaker (2013) suggest designing new ways of harvesting digital
materials connected to a broken relationship through using various automatic methods, such
as facial recognition, which dispose of them without the person needing to go through them
personally and be confronted with painful memories. They also suggest that during a separa-
tion, people could create a collage of their digital content connected to the ex, so as to trans-
form them into something more abstract, thereby providing a means for closure and helping
with the process of moving on.


Since this seminal research, a number of digital memory apps have been developed for
people with dementia. For example, RemArc has been designed to trigger long-term memo-
ries in people with dementia using BBC Archive material such as old photos, videos, and
sound clips.

Figure 4.5 The SenseCam device and a digital image taken with it
Source: Used courtesy of Microsoft Research Cambridge

Design Implications
Memory

• Reduce cognitive load by avoiding long and complicated procedures for carrying out tasks.
• Design interfaces that promote recognition rather than recall by using familiar interaction patterns, menus, icons, and consistently placed objects.
• Provide users with a variety of ways of labeling digital information (for example files, emails, and images) to help them easily identify it again through the use of folders, categories, color, tagging, time stamping, and icons.


4.2.4 Learning
Learning is closely connected with memory. It involves the accumulation of skills and knowl-
edge that would be impossible to achieve without memory. Likewise, people would not be
able to remember things unless they had learned them. Within cognitive psychology, learning
is thought to be either incidental or intentional. Incidental learning occurs without any inten-
tion to learn. Examples include learning about the world such as recognizing faces, streets,
and objects, and what you did today. In contrast, intentional learning is goal-directed with
the goal of being able to remember it. Examples include studying for an exam, learning a
foreign language, and learning to cook. This is much harder to achieve. Software develop-
ers, therefore, cannot assume that users will simply be able to learn how to use an app or a
product. It often requires much conscious effort.

Moreover, it is well known that people find it hard to learn by reading a set of
instructions in a manual. Instead, they much prefer to learn through doing. GUIs and
direct manipulation interfaces are good environments for supporting this kind of active
learning by supporting exploratory interaction and, importantly, allowing users to undo
their actions, that is, return to a previous state if they make a mistake by clicking the
wrong option.
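
One simple way of providing such a safety net is to snapshot the state before every action, as in the minimal Python sketch below (a production application would more likely use the command pattern so that large states need not be copied in full):

    class Editor:
        # A minimal sketch of undo support: every action saves the prior state.
        def __init__(self):
            self.text = ""
            self._history = []          # stack of previous states

        def do(self, new_text):
            self._history.append(self.text)
            self.text = new_text

        def undo(self):
            if self._history:           # return to the previous state, if any
                self.text = self._history.pop()

    e = Editor()
    e.do("hello")
    e.do("hello world")
    e.undo()
    print(e.text)  # 'hello' -- the mistaken action has been reversed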

There have been numerous attempts to harness the capabilities of different technologies
to support intentional learning. Examples include online learning, multimedia, and virtual
reality. They are assumed to provide alternative ways of learning through interacting with
information that is not possible with traditional technologies, for example, books. In so
doing, they have the potential of offering learners the ability to explore ideas and concepts in
different ways. For example, multimedia simulations, wearables, and augmented reality (see
Chapter 7, “Interfaces”) have been designed to help teach abstract concepts (such as mathe-
matical formulae, notations, laws of physics, biological processes) that students find difficult
to grasp. Different representations of the same process (for instance, a graph, formula, sound,
or simulation) are displayed and interacted with in ways that make their relationship with
each other clearer to the learner.

People often learn effectively when collaborating together. Novel technologies have also
been designed to support sharing, turn-taking, and working on the same documents. How
these can enhance learning is covered in the next chapter.

Design Implications
Learning

• Design interfaces that encourage exploration.
• Design interfaces that constrain and guide users to select appropriate actions when they are initially learning.


4.2.5 Reading, Speaking, and Listening
Reading, speaking, and listening are three forms of language processing that have some
similar and some different properties. One similarity is that the meaning of sentences or
phrases is the same regardless of the mode in which it is conveyed. For example, the sen-
tence “Computers are a wonderful invention” essentially has the same meaning whether
one reads it, speaks it, or hears it. However, the ease with which people can read, listen,
or speak differs depending on the person, task, and context. For example, many people
find listening easier than reading. Specific differences between the three modes include the
following:

• Written language is permanent while spoken language is transient. It is possible to re-read information if it is not understood the first time around. This is not possible with spoken information that is being broadcast unless it is recorded.

• Reading can be quicker than speaking or listening, as written text can be rapidly scanned
in ways not possible when listening to serially presented spoken words.

• Listening requires less cognitive effort than reading or speaking. Children often prefer to
listen to narratives provided in multimedia or web-based learning material rather than to
read the equivalent text online. The popularity of audiobooks suggests adults also enjoy
listening to novels, and so forth.

• Written language tends to be grammatical, while spoken language is often ungrammati-
cal. For example, people often start talking and stop in midsentence, letting someone else
start speaking.

• Dyslexics have difficulties understanding and recognizing written words, making it hard
for them to write grammatical sentences and spell correctly.

Many applications have been developed either to capitalize on people’s reading, writing,
and listening skills, or to support or replace them where they lack or have difficulty with
them. These include the following:

• Interactive books and apps that help people to read or learn foreign languages.
• Speech-recognition systems that allow people to interact with them by using spoken com-
mands (for example, Dragon Home, Google Voice Search, and home devices, such as Ama-
zon Echo, Google Home, and Home Aware that respond to vocalized requests).

• Speech-output systems that use artificially generated speech (for instance, written text-to-
speech systems for the blind).

• Natural-language interfaces that enable people to type in questions and get written
responses (for example, chatbots).

• Interactive apps that are designed to help people who find it difficult to read, write, or speak.
• Customized input and output devices that allow people with various disabilities to access the web and use word processors and other software packages.

• Tactile interfaces that allow people who are visually impaired to read graphs (for example,
Designboom’s braille maps for the iPhone).

Design Implications
Reading, Speaking, and Listening

• Keep the length of speech-based menus and instructions to a minimum. Research has shown that people find it hard to follow spoken menus with more than three or four options. Likewise, they are bad at remembering sets of instructions and directions that have more than a few parts.
• Accentuate the intonation of artificially generated speech voices, as they are harder to understand than human voices.
• Provide opportunities for making text large on a screen, without affecting the formatting, for people who find it hard to read small text.

4.2.6 Problem-Solving, Planning, Reasoning, and Decision-Making
Problem-solving, planning, reasoning, and decision-making are processes involving reflective
cognition. They include thinking about what to do, what the available options are, and what
the consequences might be of carrying out a given action. They often involve conscious pro-
cesses (being aware of what one is thinking about), discussion with others (or oneself), and
the use of various kinds of artifacts (for example, maps, books, pens, and paper). Reasoning
involves working through different scenarios and deciding which is the best option or solu-
tion to a given problem. For example, when deciding on where to go on a vacation, people
may weigh the pros and cons of different locations, including cost, weather at the location,
availability and type of accommodation, time of flights, proximity to a beach, the size of the
local town, whether there is nightlife, and so forth. When weighing all of the options, they
reason through the advantages and disadvantages of each before deciding on the best one.

There has been a growing interest in how people make decisions when confronted with
information overload, such as when shopping on the web or at a store (Todd et al., 2011).
How easy is it to decide when confronted with an overwhelming choice? Classical rational
theories of decision-making (for instance, von Neumann and Morgenstern, 1944) posit that
making a choice involves weighing up the costs and benefits of different courses of action.
This is assumed to involve exhaustively processing the information and making trade-offs
between features. Such strategies are very costly in computational and informational terms—
not the least because they require the decision-maker to find a way of comparing the differ-
ent options. In contrast, research in cognitive psychology has shown how people tend to use
simple heuristics when making decisions (Gigerenzer et al., 1999). A theoretical explanation
is that human minds have evolved to act quickly, making just good enough decisions by using
fast and frugal heuristics. We typically ignore most of the available information and rely
only on a few important cues. For example, in the supermarket, shoppers make snap judg-
ments based on a paucity of information, such as buying brands that they recognize, that are
low-priced, or that offer attractive packaging—seldom reading other package information.
This suggests that an effective design strategy is to make key information about a product
highly salient. However, what exactly is salient will vary from person to person. It may
depend on the user’s preferences, allergies, or interests. For example, one person might have a
nut allergy and be interested in food miles, while another may be more concerned about the
farming methods used (such as organic, FairTrade, and so on) and a product’s sugar content.
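
A fast-and-frugal rule of this kind can be sketched in a few lines of Python (the cue order, brands, and prices are invented): recognition is consulted first, and only if it fails to discriminate does the shopper fall back on another cue, such as price:

    products = [
        {"brand": "KnownCo", "recognized": True, "price": 3.49},
        {"brand": "NoNameCo", "recognized": False, "price": 2.99},
    ]

    def frugal_choice(options):
        # Stop at the first cue that discriminates; ignore everything else.
        recognized = [p for p in options if p["recognized"]]
        if len(recognized) == 1:
            return recognized[0]["brand"]          # recognition alone decides
        return min(options, key=lambda p: p["price"])["brand"]  # fall back to price

    print(frugal_choice(products))  # 'KnownCo', chosen without reading the labels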

Thus, instead of providing ever more information to enable people to compare products
when making a choice, a better strategy is to design technological interventions that provide
just enough information, and in the right form, to facilitate good choices. One solution is to
exploit new forms of augmented reality and wearable technology that enable information-
frugal decision-making and that have glanceable displays that can represent key information
in an easy-to-digest form (Rogers et al., 2010b). The interface for an AR or wearable app
could be designed to provide certain “food” or other information filters, which could be
switched on or off by the user to match their preferences.
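
The following Python sketch illustrates the idea of switchable filters using the nut-allergy and farming-method examples from above (the field names and product data are invented):

    product = {"name": "Granola Bar", "contains_nuts": True, "sugar_g": 12,
               "organic": False, "food_miles": 5400}

    def glanceable(product, filters):
        # Surface only the cues this user has switched on, plus the name.
        return {k: v for k, v in product.items() if k == "name" or k in filters}

    # One user worries about allergies and food miles ...
    print(glanceable(product, {"contains_nuts", "food_miles"}))
    # ... another about farming methods and sugar content.
    print(glanceable(product, {"organic", "sugar_g"}))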

DILEMMA
Can You Make Up Your Mind Without an App?

In their book The App Generation (Yale University Press, 2014), Howard Gardner and Katie
Davis note how some young people find it hard to make their own decisions because they are
becoming more and more risk averse. The reason for this is that they now rely on using an
increasing number of mobile apps to help them in their decision-making, removing the risk of
having to decide for themselves. Often, they will first read what others have said on social media
sites, blogs, and recommender apps before choosing where to eat or go, what to do or listen
to, and so on. However, relying on a multitude of apps means that young people are becom-
ing increasingly unable to make decisions by themselves. For many, their first big decision is
choosing which college or university to attend. This has become an agonizing and prolonged
experience where both parents and apps play a central role in helping them out. They will read
countless reviews, go on numerous visits to colleges and universities with their parents over
several months, study university rankings that apply different measures, read up on what others
say on social networking sites, and so on. In the end, however, they may finally choose the insti-
tution where their friends attend or the one they liked the look of in the first place.

Design Implications
Problem-Solving, Planning, Reasoning, and Decision-Making

• Provide information and help pages that are easy to access for people who want to under-
stand more about how to carry out an activity more effectively (for example, web searching).

• Use simple and memorable functions to support rapid decision-making and planning.
Enable users to set or save their own criteria or preferences.


4.3 Cognitive Frameworks

A number of conceptual frameworks have been developed to explain and predict user behav-
ior based on theories of cognition. In this section, we outline three that focus primarily on
mental processes and three others that explain how humans interact with and use technologies in
the context in which they occur. These are mental models, gulfs of execution and evaluation,
information processing, distributed cognition, external cognition, and embodied interaction.

4.3.1 Mental Models
Mental models are used by people when needing to reason about a technology, in particular,
to try to fathom what to do when something unexpected happens with it or when encounter-
ing unfamiliar products for the first time. The more someone learns about a product and how
it functions, the more their mental model develops. For example, broadband engineers have a
deep mental model of how Wi-Fi networks work that allows them to work out how to set them
up and fix them. In contrast, an average citizen is likely to have a reasonably good mental model
of how to use the Wi-Fi network in their home but a shallow mental model of how it works.

Within cognitive psychology, mental models have been postulated as internal construc-
tions of some aspect of the external world that are manipulated, enabling predictions and
inferences to be made (Craik, 1943). This process is thought to involve the fleshing out
and the running of a mental model (Johnson-Laird, 1983). This can involve both unconscious and
conscious mental processes, where images and analogies are activated.

ACTIVITY 4.4
To illustrate how we use mental models in our everyday reasoning, imagine the following two
scenarios:
• You arrive home from a vacation on a cold winter’s night to a cold house. You have a small
baby, and you need to get the house warm as quickly as possible. Your house is centrally
heated, but it does not have a smart thermostat that can be controlled remotely. Do you set
the thermostat as high as possible or turn it to the desired temperature (for instance, 70°F)?

• You arrive home after being out all night and you’re starving hungry. You look in the freezer
and find all that is left is a frozen pizza. The instructions on the package say heat the oven
to 375°F and then place the pizza in the oven for 20 minutes. Your oven is electric. How do
you heat it up? Do you turn it to the specified temperature or higher?

Comment
Most people when asked the first question imagine the scenario in terms of what they would do
in their own house and choose the first option. A typical explanation is that setting the tempera-
ture to be as high as possible increases the rate at which the room warms up. While many people
may believe this, it is incorrect. Thermostats work by switching on the heat and keeping it going
at a constant rate until the desired set temperature is reached, at which point the heating cuts out. They
cannot control the rate at which heat is given out from a heating system. Left at a given setting,
thermostats will turn the heat on and off as necessary to maintain the desired temperature.

When asked the second question, most people say they would turn the oven to the specified
temperature and put the pizza in when they think it is at the right temperature. Some people
answer that they would turn the oven to a higher temperature in order to warm it up more
quickly. Electric ovens work on the same principle as central heating, so turning the heat up
higher will not warm it up any quicker. There is also the problem of the pizza burning if the
oven is too hot!

Why do people use erroneous mental models? It seems that in the previous two scenarios,
they are using a mental model based on a general valve theory of the way something works
(Kempton, 1986). This assumes the underlying principle of more is more: the more you turn or
push something, the more it causes the desired effect. This principle holds for a range of phys-
ical devices, such as faucets, where the more you turn them, the more water that comes out.
However, it does not hold for thermostats, which instead function based on the principle of
an on-off switch. What seems to happen is that in everyday life, people develop a core set
of abstractions about how things work and apply these to a range of devices, irrespective of
whether they are appropriate.
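
The difference between the valve model and the on-off model is easy to demonstrate with a toy simulation (the heating rate and temperatures below are arbitrary): the setpoint only determines when the heater switches off, never how fast the room warms up:

    def minutes_to_reach(target, setpoint, room=50.0, rate=1.0):
        # Bang-bang control: the heater is either fully on or fully off.
        minutes = 0
        while room < target:
            if room >= setpoint:   # target above setpoint: never reached
                return None
            room += rate           # heat arrives at a fixed rate, degrees/minute
            minutes += 1
        return minutes

    print(minutes_to_reach(70, setpoint=70))  # 20 minutes
    print(minutes_to_reach(70, setpoint=90))  # still 20 minutes: no faster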

Using incorrect mental models to guide behavior is surprisingly common. Just watch
people at a pedestrian crossing or waiting for an elevator. How many times do they press the
button? A lot of people will press it at least twice. When asked why, a common reason is that
they think it will make the lights change faster or ensure the elevator arrives.

Many people’s understanding of how technologies and services work is poor, for instance,
the Internet, wireless networking, broadband, search engines, computer viruses, the cloud, or
AI. Their mental models are often incomplete, easily confusable, and based on inappropri-
ate analogies and superstition (Norman, 1983). As a consequence, they find it difficult to
identify, describe, or solve a problem, and they lack the words or concepts to explain what
is happening.

How can user experience (UX) designers help people to develop better mental models?
A major obstacle is that people are resistant to spending much time learning about how
things work, especially if it involves reading manuals or other documentation. An alterna-
tive approach is to design technologies to be more transparent, which makes them easier to
understand in terms of how they work and what to do when they don’t. This includes provid-
ing the following:

• Clear and easy-to-follow instructions
• Appropriate online help, tutorials, and context-sensitive guidance for users in the form of
online videos and chatbot windows, where users can ask how to do something

• Background information that can be accessed to let people know how something works
and how to make the most of the functionality provided

• Affordances of what actions an interface allows (for example, swiping, clicking, or
selecting).

The concept of transparency has been used to refer to making interfaces intuitive to use
so that people can simply get on with their tasks, such as taking photos, sending messages,
or talking to someone remotely, without having to worry about long sequences of buttons
to press or options to select. An ideal form of transparency is where the interface simply
disappears from the focus of someone’s attention. Imagine if every time you had to give a
presentation, all you had to do was say, “Upload and start my slides for the talk I prepared
today,” and they would simply appear on the screen for all to see. That would be bliss! Instead, many
AV projector systems persist in being far from transparent, requiring many counterintuitive
steps for someone to get their slides to show. This can include trying to find the right dongle,
setting up the system, typing in a password, setting up audio controls, and so forth, all of
which seems to take forever, especially when there is an audience waiting.

4.3.2 Gulfs of Execution and Evaluation
The gulf of execution and the gulf of evaluation describe the gaps that exist between the user
and the interface (Norman, 1986; Hutchins et al., 1986). The gulfs are intended to show how
to design the latter to enable the user to cope with them. The first one, the gulf of execution,
describes the distance from the user to the physical system while the second one, the gulf of eval-
uation, is the distance from the physical system to the user (see Figure 4.6). Don Norman and
his colleagues suggest that designers and users need to concern themselves with how to bridge
the gulfs to reduce the cognitive effort required to perform a task. This can be achieved, on the
one hand, by designing usable interfaces that match the psychological characteristics of the user
(for example, taking into account their memory limitations) and, on the other hand, by the user
learning to create goals, plans, and action sequences that fit with how the interface works.

The conceptual framework of the gulfs is still considered useful today, as it can help
designers consider whether their proposed interface design is increasing or decreasing cog-
nitive load and whether it makes it obvious as to which steps to take for a given task. For
example, Kathryn Whitenton (2018), who is a digital strategy manager, describes how the
gulfs prevented her from understanding why she could not get her Bluetooth headset to
connect with her computer despite following the steps in the manual. She wasted a whole
hour repeating the steps, getting more and more frustrated while making no progress.
Eventually, she discovered that the system she thought was toggled “on” was actually show-
ing her that it was “off” (see Figure 4.7). She found this out by searching the web to see

Figure 4.6 Bridging the gulfs of execution and evaluation. The user crosses the gulf of execution by working out “How do I use this system?” and the gulf of evaluation by working out “What’s the current system state?”
Source: https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution. Used courtesy of the Nielsen Norman Group


whether someone else could help her. She found a site that showed a screenshot of what the
settings switch looks like when turned on. There was an inconsistency between the labels
of two similar-looking switches, one showing the current status of the interaction (off) and
the other showing what would happen if the interaction were engaged (Add Bluetooth Or
Other Device).

This inconsistency of similar functions illustrated how the gulfs of execution and evalu-
ation were poorly bridged, making it confusing and difficult for the user to know what the
problem was or why they could not get their headset to connect with their computer despite
many attempts. In the article, she explains how the gulfs could be easily bridged by designing
all sliders to give the same information as to what happens when they are moved from one
side to the other. For more details about this situation, see https://www.nngroup.com/articles/
two-ux-gulfs-evaluation-execution/.
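
The flaw can be reduced to two labeling conventions for the same switch, sketched below in Python: one label reports the current state, while the other names the action that engaging the control will perform. Mixing the two conventions on one screen, as in Whitenton's example, leaves users unsure which reading is intended:

    def state_label(is_on):
        # Convention 1: the label reports where the system is now.
        return "On" if is_on else "Off"

    def action_label(is_on):
        # Convention 2: the label names what engaging the control will do.
        return "Turn Bluetooth Off" if is_on else "Turn Bluetooth On"

    bluetooth_on = False
    print(state_label(bluetooth_on))   # 'Off'
    print(action_label(bluetooth_on))  # 'Turn Bluetooth On'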

4.3.3 Information Processing
Another approach to conceptualizing how the mind works has been to use metaphors and
analogies to describe cognitive processes. Numerous comparisons have been made, including
conceptualizing the mind as a reservoir, a telephone network, a digital computer, and a deep
learning network. One prevalent metaphor from cognitive psychology is the idea that the
mind is an information processor. Information is thought to enter and exit the mind through
a series of ordered processing stages (see Figure 4.8). Within these stages, various processes
are assumed to act upon mental representations. Processes include comparing and matching.
Mental representations are assumed to comprise images, mental models, rules, and other
forms of knowledge.

The information processing model provides a basis from which to make predictions
about human performance. Hypotheses can be made about how long someone will take
to perceive and respond to a stimulus (also known as reaction time) and what bottlenecks
occur if a person is overloaded with too much information. One of the first HCI models to

Figure 4.7 An example where the gulfs helped explain how a seemingly trivial design decision led
to much user frustration
Source: https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution. Used courtesy of the Nielsen Norman
Group



be derived from the information processing theory was the human processor model, which
modeled the cognitive processes of a user interacting with a computer (Card et al., 1983).
Cognition was conceptualized as a series of processing stages, where perceptual, cognitive,
and motor processors are organized in relation to one another. The model predicts which
cognitive processes are involved when a user interacts with a computer, enabling calcula-
tions to be made of how long a user will take to carry out various tasks. In the 1980s, it was
found to be a useful tool for comparing different word processors for a range of editing tasks.
Even though it is not often used today to inform interaction design, it is considered to be
an HCI classic.
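
The flavor of these calculations survives in the keystroke-level variant of the model. The Python sketch below predicts a task time by summing standard operator times; the values used are commonly cited averages, so treat the numbers as illustrative rather than definitive:

    # Operator times in seconds, in the spirit of Card et al.'s model.
    OPERATORS = {"K": 0.2,   # press a key
                 "P": 1.1,   # point with a mouse
                 "H": 0.4,   # move hand between keyboard and mouse
                 "M": 1.35}  # mentally prepare

    def predict(sequence):
        # Predict execution time for an operator sequence such as 'MHPKK'.
        return sum(OPERATORS[op] for op in sequence)

    # e.g. think, reach for the mouse, point at a menu, click, type one letter
    print(f"{predict('MHPKK'):.2f} s")  # 3.25 s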

The information processing approach was based on modeling mental activities that hap-
pen exclusively inside the head. Nowadays, it is more common to understand cognitive activ-
ities in the context in which they occur, analyzing cognition as it happens in the wild (Rogers,
2012). A central goal has been to look at how structures in the environment can both aid
human cognition and reduce cognitive load. The three external approaches we consider next
are distributed cognition, external cognition, and embodied cognition.

4.3.4 Distributed Cognition
Most cognitive activities involve people interacting with external kinds of representations,
such as books, documents, and computers and also with each other. For example, when
someone goes home from wherever they have been, they do not need to remember the details
of the route because they rely on cues in the environment (for instance, they know to turn
left at the red house, right when the road comes to a T-junction, and so on). Similarly, when
they are at home, they do not have to remember where everything is because information is
available as needed. They decide what to eat and drink by scanning the items in the fridge,
look out the window to see whether it is raining or not, and so on. Likewise, they are always
creating external representations for a number of reasons, not only to help reduce memory
load and the cognitive cost of computational tasks, but also, importantly, to extend what they
can do and allow people to think more powerfully (Kirsh, 2010).

The distributed cognition approach was developed to study the nature of cognitive phe-
nomena across individuals, artifacts, and internal and external representations (Hutchins,
1995). Typically, it involves describing a cognitive system, which entails interactions among
people, the artifacts they use, and the environment in which they are working. An example
of a cognitive system is an airline cockpit, where the top-level goal is to fly the plane (see
Figure 4.9). This involves all of the following:

• The pilot, captain, and air traffic controller interacting with one another
• The pilot and captain interacting with the instruments in the cockpit
• The pilot and captain interacting with the environment in which the plane is flying (that
is, the sky, runway, and so on)

Figure 4.8 Human information processing model
Source: P. Barber (1998). Applied Cognitive Psychology. London: Methuen. Used courtesy of Taylor & Francis


A primary objective of the distributed cognition approach is to describe these interactions
in terms of how information is propagated through different media. By this we mean how
information is represented and re-represented as it moves across individuals and through the
array of artifacts that are used (for example, maps, instrument readings, scribbles, and spo-
ken word) during activities. These transformations of information are referred to as changes
in representational state.

This way of describing and analyzing a cognitive activity contrasts with other cognitive
approaches, such as the information processing model, in that it focuses not on what is hap-
pening inside the head of an individual but on what is happening across a system of individu-
als and artifacts. For example, in the cognitive system of the cockpit, a number of people and
artifacts are involved in the activity of flying at a higher altitude. The air traffic controller
initially tells the pilot when it is safe to ascend to a higher altitude. The pilot then alerts the
captain, who is flying the plane, by moving a knob on the instrument panel in front of them,
confirming that it is now safe to fly.

Hence, the information concerning this activity is transformed through different media
(over the radio, through the pilot, and via a change in the position of an instrument). This
kind of analysis can be used to derive design recommendations, suggesting how to change or

Figure 4.9 A cognitive system in which information is propagated through different media


redesign an aspect of the cognitive system, such as a display or a socially mediated practice.
In the previous example, distributed cognition could draw attention to the importance of any
new design needing to keep shared awareness and redundancy in the system so that both the
pilot and the captain can be kept aware and also know that the other is aware of the changes
in altitude that are occurring. It is also the basis for the DiCOT analytic framework that has
been developed specifically for understanding healthcare settings and has also been used for
software team interactions (see Chapter 9, “Data Analysis”).

4.3.5 External Cognition
People interact with or create information by using a variety of external representations,
including books, multimedia, newspapers, web pages, maps, diagrams, notes, drawings, and
so on. Furthermore, an impressive range of tools has been developed throughout history to
aid cognition, including pens, calculators, spreadsheets, and software workflows. The com-
bination of external representations and physical tools has greatly extended and supported
people’s ability to carry out cognitive activities (Norman, 2013). Indeed, they are such an
integral part of our cognitive activities that it is difficult to imagine how we would go about
much of our everyday life without them.

External cognition is concerned with explaining the cognitive processes involved when
we interact with different external representations such as graphical images, multimedia, and
virtual reality (Scaife and Rogers, 1996). A main goal is to explain the cognitive benefits of
using different representations for different cognitive activities and the processes involved.
The main ones include the following:

• Externalizing to reduce memory load
• Computational offloading
• Annotating and cognitive tracing

4.3.5.1 Externalizing to Reduce Memory Load
Numerous strategies have been developed for transforming knowledge into external rep-
resentations to reduce memory load. One such strategy is externalizing things that we find
difficult to remember, such as birthdays, appointments, and addresses. Diaries, personal
reminders, and calendars are examples of cognitive artifacts that are commonly used for this
purpose, acting as external reminders of what we need to do at a given time, such as buy a
card for a relative’s birthday.

Other kinds of external representations that people frequently employ are notes, such
as sticky notes, shopping lists, and to-do lists. Where these are placed in the environment
can also be crucial. For example, people often place notes in prominent positions, such as
on walls, on the side of computer screens, by the front door, and sometimes even on their
hands in a deliberate attempt to ensure that the notes remind them of what needs to be done or
remembered. People also place things in piles in their offices and by the front door, indicating
what needs to be done urgently versus what can wait for a while.

Externalizing, therefore, can empower people to trust that they will be reminded without
having to remember themselves, thereby reducing their memory burden in the following ways:

• Reminding them to do something (for example, get something for mother’s birthday)
• Reminding them of what to do (such as buy a card)
• Reminding them of when to do something (for instance, send it by a certain date)


This is an obvious area where technology can be designed to help people remember. Indeed, many
apps have been developed to reduce the burden on people to remember things, including
to-do and alarm-based lists. These can also be used to help improve people’s time manage-
ment and work-life balance.
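
At its core, such an app is simply an external store of what-and-when pairs that the system, rather than the person, checks against the calendar; a minimal Python sketch (with invented dates) is shown below:

    from datetime import date

    reminders = [
        {"what": "Buy a card for Mum's birthday", "when": date(2023, 3, 1)},
        {"what": "Post the card",                 "when": date(2023, 3, 3)},
    ]

    def due(today):
        # The artifact does the remembering; the person only reads the output.
        return [r["what"] for r in reminders if r["when"] <= today]

    print(due(date(2023, 3, 1)))  # ["Buy a card for Mum's birthday"]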

4.3.5.2 Computational Offloading
Computational offloading occurs when we use a tool or device in conjunction with an exter-
nal representation to help us carry out a computation. An example is using pen and paper to
solve a math problem as mentioned in the introduction of the chapter where you were asked
to multiply 21 × 19 in your head versus using a pen and paper. Now try doing the sum again
but using roman numerals: XXI × XVIIII. It is much harder unless you are an expert in using
roman numerals—even though the problem is equivalent under both conditions. The reason
for this is that the two different representations transform the task into one that is easy and
one that is more difficult, respectively. The kind of tool used also can change the nature of
the task to being easier or more difficult.
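
The machine, of course, is indifferent to the representation: once the numerals are parsed into integers, the multiplication is identical. The following Python sketch makes the point (the simple parser also accepts additive forms such as XVIIII):

    VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(s):
        # Add each symbol's value; subtract when a smaller value
        # precedes a larger one (subtractive notation, e.g. IV).
        total = 0
        for i, ch in enumerate(s):
            v = VALUES[ch]
            total += -v if i + 1 < len(s) and VALUES[s[i + 1]] > v else v
        return total

    # The same multiplication in both representations:
    print(21 * 19)                                       # 399
    print(roman_to_int("XXI") * roman_to_int("XVIIII"))  # 399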

4.3.5.3 Annotating and Cognitive Tracing
Another way in which we externalize our cognition is by modifying representations to reflect
changes that are taking place that we want to mark. For example, people often cross things
off a to-do list to indicate tasks that have been completed. They may also reorder objects in
the environment by creating different piles as the nature of the work to be done changes.
These two types of modification are called annotating and cognitive tracing.

• Annotating involves modifying external representations, such as crossing off or under-
lining items.

• Cognitive tracing involves externally manipulating items into different orders or structures.

Annotating is often used when people go shopping. People usually begin their shop-
ping by planning what they are going to buy. This often involves looking in their cupboards
and fridge to see what needs stocking up. However, many people are aware that they won’t
remember all this in their heads, so they often externalize it as a written shopping list. The
act of writing may also remind them of other items that they need to buy, which they may
not have noticed when looking through the cupboards. When they actually go shopping at
the store, they may cross off items on the shopping list as they are placed in the shopping
basket or cart. This provides them with an annotated externalization, allowing them to see at
a glance what items are still left on the list that need to be bought.

There are a number of digital annotation tools that allow people to use pens, styluses, or
their fingers to annotate documents, such as circling data or writing notes. The annotations
can be stored with the document, enabling users to revisit their own or others’ externaliza-
tions at a later date.

Cognitive tracing is useful in conditions where the current situation is in a state of flux
and the person is trying to optimize their position. This typically happens when playing
games, such as the following:

• In a card game, when the continuous rearrangement of a hand of cards into suits, in
ascending order, or collecting same numbers together helps to determine what cards to
keep and which to play as the game progresses and tactics change


• In Scrabble, where shuffling letters around in the tray helps a person work out the best
word given the set of letters (Maglio et al., 1999)

Cognitive tracing has also been used as an interactive function, for example, letting stu-
dents know what they have studied in an online learning package. An interactive diagram can
be used to highlight all of the nodes visited, exercises completed, and units still to be studied.
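
A minimal Python sketch of such a trace is shown below (the unit names are invented): the system, not the learner, keeps the record of what has been visited, and the rendered list shows remaining work at a glance:

    units = ["Intro", "Memory", "Learning", "Frameworks"]
    visited = set()

    def visit(unit):
        visited.add(unit)

    def progress():
        # Mark each unit so remaining work can be seen at a glance.
        return [f"[x] {u}" if u in visited else f"[ ] {u}" for u in units]

    visit("Intro")
    visit("Memory")
    print("\n".join(progress()))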

A general cognitive principle for interaction design based on the external cognition
approach is to provide external representations at an interface that reduce memory load,
support creativity, and facilitate computational offloading. Different kinds of information
visualizations can be developed that reduce the amount of effort required to make inferences
about a given topic (for example, financial forecasting or identifying programming bugs). In
so doing, they can extend or amplify cognition, allowing people to perceive and do activi-
ties that they couldn’t do otherwise. For example, information visualizations (discussed in
Chapter 10) are used to represent big data in a visual form that can make it easier to make
cross-comparisons across dimensions and see patterns and anomalies. Workflow and contex-
tual dialog boxes can also pop up at appropriate times to guide users through their interac-
tions, especially where there are potentially hundreds and sometimes thousands of options
available. This reduces memory load significantly and frees up more cognitive capacity for
enabling people to complete desired tasks.

4.3.6 Embodied Interaction
Another way of describing our interactions with technology and the world is to conceive of it
as embodied. By this we mean the practical engagement with the social and physical environ-
ment (Dourish, 2001). This involves creating, manipulating, and making meaning through
our engaged interaction with physical things, including mundane objects such as cups and
spoons, and technological devices, such as smartphones and robots. Artifacts and technolo-
gies that indicate how they are coupled to the world make it clear how they should be used.
For example, a physical artifact, like a book left open on someone’s desk, can remind
them to complete an unfinished task the next day (Marshall and Hornecker, 2013).

Eva Hornecker et al. (2017) further explain embodied interaction in terms of how
our bodies and active experiences shape how we perceive, feel, and think. They describe
how our ability to think abstractly is thought to be a result of our sensorimotor experiences
with the world. This enables us to learn how to think and talk using abstract concepts, such
as inside-outside, up-down, on top of, and behind. Our numerous experiences of moving
through and manipulating the world since we were born (for example, climbing, walking,
crawling, stepping into, holding, or placing) are what enable us to develop a sense of the
world at both a concrete and abstract level.

Within HCI, the concept of embodied interaction has been used to describe how the body
mediates our various interactions with technology (Klemmer et al., 2006) and also our emo-
tional interactions (Höök, 2018). Theorizing about embodied interaction in these ways has
helped researchers uncover problems that can arise in the use of existing technologies
while also informing the design of new technologies in the context in which they will be used.

David Kirsh (2013) suggests that a theory of embodiment can provide HCI practition-
ers and theorists with new ideas about interaction and new principles for better designs. He
explains how interacting with tools changes the way people think about and perceive their
environments. He also argues that much of the time we think with our bodies and not just with our
brains. He studied choreographers and dancers and observed that they often partially model


a dance (known as marking) through using abbreviated moves and small gestures rather than
doing a full workout or mentally simulating the dance in their heads. This kind of marking
was found to be a better method of practice than the other two methods. The reason for
doing it this way is not that it is saving energy or preventing dancers from getting exhausted
emotionally, but that it enables them to review and explore particular aspects of a phrase
or movement without the mental complexity involved in a full workout. The implication of
how people use embodiment in their lives is that learning new procedures and skills might be
better taught by a process like marking, where learners create little models of things or use
their own bodies to act out. For example, rather than developing fully fledged virtual real-
ity simulations for learning golf, tennis, skiing, and so on, it might be better to teach sets of
abbreviated actions, using augmented reality, as a form of embodied marking.

In-Depth Activity
The aim of this in-depth activity is for you to try to elicit mental models from people. In par-
ticular, the goal is for you to understand the nature of people’s knowledge about an interactive
product in terms of how to use it and how it works.
1. First, elicit your own mental model. Write down how you think contactless cards (see
Figure 4.10) work—where customers place their debit or credit card over a card reader. If
you are not familiar with contactless cards, do the same for a smartphone app like Apple
Pay or Google Pay. Then answer the following questions:

Figure 4.10 A contactless debit card, indicated by the contactless symbol

• What information is sent between the card/smartphone and the card reader when it is placed
in front of it?

• What is the maximum amount you can pay for something using a contactless card, or Apple/
Google Pay?

• Why is there an upper limit?
• How many times can you use a contactless card or Apple/Google Pay in a day?
• What happens if you have two contactless cards in the same wallet/purse?
• What happens when your contactless card is stolen and you report it to the bank?

Next, ask two other people the same set of questions.
2. Now analyze your answers. Do you get the same or different explanations? What do the
findings indicate? How accurate are people’s mental models about the way contactless cards
and smartphone Apple/Google Pay work?

Summary
This chapter explained the importance of understanding the cognitive aspects of interaction. It
described relevant findings and theories about how people carry out their everyday activities
and how to learn from these to help in designing interactive products. It provided illustrations
of what happens when you design systems with the user in mind and what happens when you
don’t. It also presented a number of conceptual frameworks that allow ideas about cognition
to be generalized across different situations.

Key points
• Cognition comprises many processes, including thinking, attention, memory, perception,
learning, decision-making, planning, reading, speaking, and listening.

• The way in which an interface is designed can greatly affect how well people can perceive,
attend, learn, and remember how to carry out their tasks.

• The main benefits of conceptual frameworks based on theories of cognition are that they
can explain user interaction, inform design, and predict user performance.

Further Reading

BERGMAN, O. and WHITTAKER, S. (2016) The Science of Managing Our Digital Stuff.
MIT Press. This very readable book provides a fascinating account of how we manage all of
our digital stuff, which increases by the bucketload each day. It explains why we persist with
seemingly old-fashioned methods when there are alternative, seemingly better approaches
designed by software companies.


ERICKSON, T. D. and MCDONALD, D. W. (2008) HCI Remixed: Reflections on Works
That Have Influenced the HCI Community. MIT Press. This collection of essays from more
than 50 leading HCI researchers describes, in accessible prose, the papers, books, and software
that influenced their approach to HCI and shaped its history. They include some of the classic
papers on cognitive theories, including the psychology of HCI and the power of external
representations.

EYSENCK, M. and BRYSBAERT, M. (2018) Fundamentals of Cognition (3rd ed.). Rout-
ledge. This introductory textbook about cognition provides a comprehensive overview of
the fundamentals of cognition. In particular, it describes the processes that allow us to make
sense of the world around us and to make decisions about how to manage our
everyday lives. It also covers how technology can provide new insights into how the mind
works, for example, revealing how CAPTCHAs tell us more about perception.

GIGERENZER, G. (2008) Gut Feelings. Penguin. This provocative paperback is written by
a psychologist and behavioral expert in decision-making. When confronted with choice in a
variety of contexts, he explains how often “less is more.” He explains why this is so in terms
of how people rely on fast and frugal heuristics when making decisions, which are often
unconscious rather than rational. These revelations have huge implications for interaction
design that are only just beginning to be explored.

JACKO, J. (ed.) (2012) The Human-Computer Interaction Handbook: Fundamentals, Evolv-
ing Technologies and Emerging Applications (3rd ed.). CRC Press. Part 1 is about human
aspects of HCI and includes in-depth chapters on information processing, mental models,
decision-making, and perceptual motor interaction.

KAHNEMAN, D. (2011) Thinking, Fast and Slow. Penguin. This bestseller presents an over-
view of how the mind works, drawing on aspects of cognitive and social psychology. The
focus is on how we make judgments and choices. It proposes that we use two ways of think-
ing: one that is quick and based on intuition and one that is slow and more deliberate and
challenging. The book explores the many facets of life and how and when we use each.

Chapter 5

SOCIAL INTERACTION

5.1 Introduction

5.2 Being Social

5.3 Face-to-Face Conversations

5.4 Remote Conversations

5.5 Co-presence

5.6 Social Engagement

Objectives
The main goals of the chapter are to accomplish the following:

• Explain what is meant by social interaction.
• Describe the social mechanisms that people use to communicate and collaborate.
• Explain what social presence means.
• Give an overview of new technologies intended to facilitate collaboration and group
participation.

• Discuss how social media has changed how we keep in touch, make contacts, and man-
age our social and working lives.

• Outline examples of new social phenomena that are a result of being able to connect
online.

5.1 Introduction

People are inherently social: we live together, work together, learn together, play together,
interact and talk with each other, and socialize. A number of technologies have been developed
specifically to enable us to persist in being social when physically apart from one another,
many of which have now become part of the fabric of society. These include the widespread
use of smartphones, video chat, social media, gaming, messaging, and telepresence. Each of
these affords different ways of supporting how people connect.

There are many ways to study what it means to be social. In this chapter, we focus on
how people communicate and collaborate face-to-face and remotely in their social, work,
and everyday lives—with the goal of providing models, insights, and guidelines to inform
the design of “social” technologies that can better support and extend them. Also examined
is a diversity of communication technologies that have changed the way people live—how
they keep in touch, make friends, and coordinate their social and work networks. The con-
versation mechanisms that have conventionally been used in face-to-face interactions are
described and discussed in relation to how they have been adapted for the various kinds of
computer-based conversations that now take place at a distance. Examples of social phenom-
ena that have emerged as a result of social engagement at scale are also presented.

5.2 Being Social

A fundamental aspect of everyday life is being social, and that entails interacting with each
other. People continually update each other about news, changes, and developments on a
given project, activity, person, or event. For example, friends and families keep each other
posted on what’s happening at work, at school, at a restaurant or club, next door, in reality
shows, and in the news. Similarly, people who work together keep each other informed about
their social lives and everyday events, as well as what is happening at work, for instance
when a project is about to be completed, plans for a new project, problems with meeting
deadlines, rumors about closures, and so on.

While face-to-face conversations remain central to many social interactions, the use of
social media has dramatically increased. People now spend several hours a day communicat-
ing with others online—texting, emailing, tweeting, Facebooking, Skyping, instant messaging,
and so on. It is also common practice for people at work to keep in touch with each other via
WhatsApp groups and other workplace communication tools, such as Slack, Yammer, or Teams.

The almost universal adoption of social media in mainstream life has resulted in most
people now being connected in multiple ways over time and space—in ways that were
unimaginable 25 or even 10 years ago. For example, adults average about 338 Facebook
friends, while it is increasingly common for people to have more than 1,000 connections
on LinkedIn—many more than those made through face-to-face networking. The way that
people make contact, how they stay in touch, who they connect to, and how they maintain
their social networks and family ties have irrevocably changed. During the last 20 or so years,
social media, teleconferencing, and other social-based technologies (often referred to as social
computing) have also transformed how people collaborate and work together globally—
including the rise of flexible and remote working, the widespread use of shared calendars and
collaboration tools (for example Slack, Webex, Trello, and Google Docs), and professional
networking platforms (such as LinkedIn, Twitter, and WhatsApp).

A key question that the universal adoption of social media and other social computing
tools in society raises is how this has affected people’s ability to connect, work, and interact
with one another. Have the conventions, norms, and rules established in face-to-face interac-
tions to maintain social order been adopted in social media interactions, or have new norms
emerged? In particular, are the established conversational rules and etiquette, whose function
it is to let people know how they should behave in social groups, also applicable to online
social behavior? Or, have new conversational mechanisms evolved for the various kinds
of social media? For example, do people greet each other in the same way, depending on
whether they are chatting online, Skyping, or at a party? Do people take turns when online
chatting in the way they do when talking with each other face to face? How do they choose
which technology or app to use from the variety available today for their various work and
social activities, such as SnapChat, text messaging, Skype, or phone calls? Answering these
questions can help us understand how existing tools support communication and collabora-
tive work while helping to inform the design of new ones.

When planning and coordinating social activities, groups often switch from one mode
to another. Most people send texts in preference to calling someone up, but they may switch
to calling or mobile group messaging (such as WhatsApp, GroupMe) at different stages of
planning to go out (Schuler et al., 2014). However, there can be a cost as conversations about
what to do, where to meet, and who to invite multiply across people. Some people might get
left off or others might not reply, and much time can be spent to-ing and fro-ing across the
different apps and threads. Also, some people may not look at their notifications in a timely
manner, while further developments in the group planning have evolved. This is compounded
by the fact that often people don’t want to commit until close to the time of the event, in case
an invitation to do something from another friend appears that is more interesting to them.
Teenagers, especially, often leave it until the last minute to micro-coordinate their arrange-
ments with their friends before deciding on what to do. They will wait and see if a better offer
comes their way rather than deciding for themselves a week in advance, say, to see a movie
with a friend and sticking to it. This can make it frustrating for those who initiate the plan-
ning and are waiting to book tickets before they sell out.

A growing concern that is being raised within society is how much time people spend
looking at their phones—whether interacting with others, playing games, tweeting, and so
forth—and the consequences for people’s well-being (see Ali et al., 2018). A report on the
impact of the “decade of the smartphone” notes that on average a person in the United King-
dom spends more than a day a week online (Ofcom, 2018). Often, it is the first thing they do
upon waking and the last thing they do before going to bed. Moreover, lots of people cannot
go for long without checking their phone. Even when sitting together, they resort to being in
their own digital bubbles (see Figure 5.1). Sherry Turkle (2015) bemoans the negative impact
that this growing trend is having on modern life, especially how it is affecting everyday con-
versation. She points out that many people will admit to preferring texting to talking to oth-
ers, as it is easier, requires less effort, and is more convenient. Furthermore, her research has
shown that when children hear adults talking less, they likewise talk less. This in turn reduces
opportunities to learn how to empathize. She argues that while online communication has its
place in society, it is time to reclaim conversation, where people put down their phones more
often and (re)learn the art and joy of spontaneously talking to each other.

On the other hand, it should be stressed that several technologies have been designed
to encourage social interaction to good effect. For example, voice assistants that come with
smart speakers, such as Amazon’s Echo devices, provide a large number of “skills” intended
to support multiple users taking part at the same time, offering the potential for families to
play together. An example skill is “Open the Magic Door,” which allows group members
(such as families) to choose their path in a story by selecting different options through the
narrative. Social interaction may be further encouraged by the affordance of a smart speaker
when placed on a surface in the home, such as a kitchen counter or mantelpiece. In particular,
its physical presence in this shared location affords joint ownership and use—similar to other
domestic devices, such as the radio or TV. This differs from other virtual voice assistants that
are found on phones or laptops that support individual use.


Figure 5.1 A family sits together, but they are all in their own digital bubbles—including the dog!
Source: Helen Sharp

ACTIVITY 5.1
Think of a time when you enjoyed meeting up with friends to catch up in a cafe. Compare
this social occasion with the experience that you have when texting with them on your smart-
phone. How are the two kinds of conversations different?

Comment
The nature of the conversations is likely to be very different with pros and cons for each.
Face-to-face conversations ebb and flow unpredictably and spontaneously from one topic to
the next. There can be much laughing, gesturing, and merriment among those taking part in the
conversation. Those present pay attention to the person speaking, and then when someone
else starts talking, all eyes move to them. There can be much intimacy through eye contact,
facial expressions, and body language, in contrast to when texters send intermittent messages
back and forth in bursts of time. Texting is also more premeditated; people decide what to say
and can review what they have written. They can edit their message or decide even not to send
it, although sometimes people press the Send button without much thought about its impact
on the interlocutor, which can lead to regret afterward.

Emoticons are commonly used as a form of expressivity to compensate for nonverbal
communication. While they can enrich a message by adding humor, affection, or a personal
touch, they are nothing like a real smile or a wink shared at a key moment in a conversation.
Another difference is that people say things and ask each other things in conversations that
they would never do via text. On the one hand, such confiding and directness may be more
engaging and enjoyable, but on the other hand, it can sometimes be embarrassing. It depends
on the context as to whether conversing face-to-face versus texting is preferable.


5.3 Face-to-Face Conversations

Talking is something that is effortless and comes naturally to most people. And yet holding a
conversation is a highly skilled collaborative achievement, having many of the qualities of a
musical ensemble. In this section we examine what makes up a conversation. Understanding
how conversations start, progress, and finish is useful when designing dialogues that take place
with chatbots, voice assistants, and other communication tools. In particular, it helps research-
ers and developers understand how natural it is, how comfortable people are when conversing
with digital agents, and the extent to which it is important to follow conversation mechanisms
that are found in human conversations. We begin by examining what happens at the beginning.

A: Hi there.
B: Hi!
C: Hi.
A: All right?
C: Good. How’s it going?
A: Fine, how are you?
C: Good.
B: OK. How’s life treating you?

Such mutual greetings are typical. A dialogue may then ensue in which the participants
take turns asking questions, giving replies, and making statements. Then, when one or more of
the participants wants to draw the conversation to a close, they do so by using either implicit or
explicit cues. An example of an implicit cue is when a participant looks at their watch, signal-
ing indirectly to the other participants that they want the conversation to draw to a close. The
other participants may choose to acknowledge this cue or carry on and ignore it. Either way,
the first participant may then offer an explicit signal, by saying, “Well, I have to go now. I got
a lot of work to do” or, “Oh dear, look at the time. I gotta run. I have to meet someone.” Fol-
lowing the acknowledgment by the other participants of such implicit and explicit signals, the
conversation draws to a close, with a farewell ritual. The different participants take turns say-
ing, “Goodbye,” “Bye,” “See you,” repeating themselves several times until they finally separate.

ACTIVITY 5.2
How do you start and end a conversation when (1) talking on the phone and (2) chatting online?
Do you use the same conversational mechanisms that are used in face-to-face conversations?

Comment
The person answering the call will initiate the conversation by saying “hello” or, more formally,
the name of their company/department. Most phones (landline and smartphones) have the facil-
ity to display the name of the caller (Caller ID) so the receiver can be more personal when answer-
ing, for example “Hello, John. How are you doing?” Phone conversations usually start with a
mutual greeting and end with a mutual farewell. In contrast, conversations that take place when
chatting online have evolved new conventions. The use of opening and ending greetings when
joining and leaving is rare; instead, most people simply start their message with what they want to
talk about and then stop when they have gotten an answer, as if in the middle of a conversation.


Many people are now overwhelmed by the number of emails they receive each day and
find it difficult to reply to them all. This has raised the question of which conversational
techniques to use to improve the chances of getting someone to reply. For example, can the
way people compose their emails, especially the choice of opening and ending a conversation,
increase the likelihood that the recipient will respond to it? A study by Boomerang (Brendan
G, 2017) of 300,000 emails taken from mailing list archives of more than 20 different online
communities examined whether the opening or closing phrase that was used affected the
reply rate. They found that the most common opening phrases, “hey,” “hello,” and “hi,”
also got the highest reply rates: 64 percent, 63 percent, and 62 percent, respectively. These
rates were higher than for emails that opened with more formal phrases, like “Dear”
(57 percent) or “Greetings” (56 percent). Among the most popular sign-offs, “thanks” was
associated with a 66 percent reply rate, “regards” with 63 percent, and “cheers” with
58 percent, while “best” fared less well (51 percent). Overall, emails whose closings used a
form of “thank you” got the highest rate of responses.
Hence, the conversational mechanisms someone uses to open and close an email can
influence whether the recipient will reply to it.

Conversational mechanisms enable people to coordinate their talk with one another,
allowing them to know how to start and stop. Throughout a conversation, further turn-
taking rules are followed that enable people to know when to listen, when it is their cue to
speak, and when it is time for them to stop again to allow the others to speak. Sacks et al.
(1978), famous for their work on conversation analysis, describe these in terms of three
basic rules.

Rule 1 The current speaker chooses the next speaker by asking a question, inviting an
opinion, or making a request.

Rule 2 Another person decides to start speaking.
Rule 3 The current speaker continues talking.

The rules are assumed to be applied in this order so that whenever there is an opportu-
nity for a change of speaker to occur, for instance, someone comes to the end of a sentence,
rule 1 is applied. If the listener to whom the question or request is addressed does not accept
the offer to take the floor, rule 2 is applied, and someone else taking part in the conversation
may take up the opportunity and offer a view on the matter. If this does not happen, then
rule 3 is applied, and the current speaker continues talking. The rules are cycled through
recursively until someone speaks again.
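
To make the ordering of the rules concrete, here is a minimal sketch in Python. It is an invented illustration of the cascade, not an implementation from conversation analysis itself; the Participant class and its wants_to_speak flag are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class Participant:
    # Hypothetical stand-in for someone taking part in the conversation.
    name: str
    wants_to_speak: bool = False

def next_speaker(current, others, addressed=None):
    """Apply the three turn-taking rules, in order, at a possible
    turn-transition point (for instance, the end of a sentence)."""
    # Rule 1: the current speaker has selected someone by asking a
    # question, inviting an opinion, or making a request.
    if addressed is not None and addressed.wants_to_speak:
        return addressed
    # Rule 2: another participant self-selects and starts speaking.
    for person in others:
        if person.wants_to_speak:
            return person
    # Rule 3: no one takes the floor, so the current speaker continues.
    return current

alice, bob = Participant("Alice"), Participant("Bob")
print(next_speaker(alice, [bob]).name)   # "Alice": rule 3 applies
bob.wants_to_speak = True
print(next_speaker(alice, [bob]).name)   # "Bob": rule 2 applies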

To facilitate rule following, people use various ways of indicating how long they are
going to talk and on what topic. For example, a speaker might say right at the beginning
of his turn in the conversation that he has three things to say. A speaker may also explic-
itly request a change in speaker by saying to the listeners, “OK, that’s all I want to say
on that matter. So, what do you think?” More subtle cues to let others know that their
turn in the conversation is coming to an end include the lowering or raising of the voice
to indicate the end of a question or the use of phrases like “You know what I mean?” or
simply “OK?” Back channeling (uh-huh, mmm), body orientation (such as moving away
from or closer to someone), gaze (staring straight at someone or glancing away), and
gesturing (for example, raising of arms) are also used in different combinations when
talking in order to signal to others when someone wants to hand over or take up a turn
in the conversation.


Another way in which conversations are coordinated and given coherence is through the use
of adjacency pairs (Schegloff and Sacks, 1973). Utterances are assumed to come in pairs in which
the first part sets up an expectation of what is to come next and directs the way in which what
does come next is heard. For example, A may ask a question to which B responds appropriately.

A: So, shall we meet at 8:00?
B: Um, can we make it a bit later, say 8:30?

Sometimes adjacency pairs get embedded in each other, so it may take some time for a
person to get a reply to their initial request or statement.

A: So, shall we meet at 8:00?
B: Wow, look at them.
A: Yes, what a funny hairdo!
B: Um, can we make it a bit later, say 8:30?
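
One way to picture such embedding is that each first pair part (for example, a question) opens an expectation that remains pending until its second pair part (the answer) arrives, much like nested brackets. The following sketch is a toy abstraction invented for illustration; conversation analysts do not, of course, model talk this way.

def pending_pairs(turns):
    """Track which first pair parts are still awaiting their second
    pair part. Each turn is a (kind, topic) tuple, where kind is
    "first" or "second"; the representation is a simplification."""
    stack = []
    for kind, topic in turns:
        if kind == "first":
            stack.append(topic)    # a new expectation opens
        elif kind == "second" and stack and stack[-1] == topic:
            stack.pop()            # the innermost expectation is met
    return stack                   # anything left is unanswered

# The embedded example above: the "meet at 8:00?" pair stays open while
# a side pair about the hairdo opens and closes inside it.
turns = [("first", "meet at 8:00?"),
         ("first", "look at them"),
         ("second", "look at them"),
         ("second", "meet at 8:00?")]
print(pending_pairs(turns))        # -> [] (both pairs completed)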

For the most part, people are not aware of following conversational mechanisms and
would be hard-pressed to articulate how they can carry on a conversation. Furthermore,
people don’t necessarily abide by the rules all the time. They may interrupt each other or talk
over each other, even when the current speaker has clearly indicated a desire to hold the floor
for the next two minutes to finish an argument. Alternatively, a listener may not take up a
cue from a speaker to answer a question or take over the conversation but instead continue
to say nothing even though the speaker may be making it glaringly obvious that it is the
listener’s turn to say something. Oftentimes, a teacher will try to hand over the conversation
to a student in a seminar by staring at them and asking a specific question, only to see the
student look at the floor and say nothing. The outcome is an embarrassing silence, followed
by either the teacher or another student picking up the conversation again.

Other kinds of breakdowns in conversation arise when someone says something that is
ambiguous, and the interlocutor misinterprets it to mean something else. In such situations,
the participants will collaborate to overcome the misunderstanding by using repair mecha-
nisms. Consider the following snippet of conversation between two people:

A: Can you tell me the way to get to the Multiplex Ranger cinema?
B: Yes, you go down here for two blocks and then take a right (pointing to the right), proceed
until you get to the light, and then it’s on the left.
A: Oh, so I go along here for a couple of blocks and then take a right, and the cinema is at the
light (pointing ahead of him)?
B: No, you go down this street for a couple of blocks (gesturing more vigorously than before to
the street to the right of him while emphasizing the word this).
A: Ahhhh! I thought you meant that one: so it’s this one (pointing in the same direction as the
other person).
B: Uh-hum, yes, that’s right: this one.

Detecting breakdowns in conversation requires that the speaker and listener both pay
attention to what the other says (or does not say). Once they have understood the nature
of the failure, they can then go about repairing it. As shown in the previous example, when
the listener misunderstands what has been communicated, the speaker repeats what they
said earlier, using a stronger voice intonation and more exaggerated gestures. This allows
the speaker to repair the mistake and be more explicit with the listener, allowing them to
understand and follow better what they are saying. Listeners may also signal when they don’t
understand something or want further clarification by using various tokens, like “Huh?” or
“What?” (Schegloff, 1981), together with giving a puzzled look (usually frowning). This is
especially the case when the speaker says something that is vague. For example, they might
say “I want it” to their partner, without saying what it is they want. The partner may reply
using a token or, alternatively, explicitly ask, “What do you mean by it?” Nonverbal com-
munication also plays an important role in augmenting face-to-face conversation, involving
the use of facial expressions, back channeling, voice intonation, gesturing, and other kinds
of body language.

Taking turns also provides opportunities for the listener to initiate repair or request
clarification or for the speaker to detect that there is a problem and initiate repair. The lis-
tener will usually wait for the next turn in the conversation before interrupting the speaker
in order to give the speaker the chance to clarify what is being said by completing the
utterance.

ACTIVITY 5.3
How do people repair breakdowns when conversing via email? Do they do the same when texting?

Comment
As people usually cannot see each other when communicating by email or text, they have to
rely on other means of repairing the conversation when things are left unsaid or are unclear.
For example, when someone proposes an ambiguous meeting time, where the date and day
given don’t match up for the month, the person receiving the message may begin their reply
by asking politely, “Did you mean this month or June?” rather than baldly stating the other
person’s error, for example, “the 13th May is not a Wednesday!”

When someone does not reply to an email or text when the sender is expecting them to
do so, it can put them in a quandary as to what to do next. If someone does not reply to an
email within a few days, then the sender might send them a gentle nudge message that dimin-
ishes any blame, for example, “I am not sure if you got my last email as I was using a different
account” rather than explicitly asking them why they have not answered the email they sent.
When texting, it depends on whether it is a dating, family, or business-related text that has
been sent. When starting to date, some people will deliberately wait a while before replying
to a text as a form of “playing games” and trying not to appear to be overly keen. If they
don’t reply at all, it is a generally accepted notion that they are not interested, and no further
texts should be sent. In contrast, in other contexts, double-texting has become an acceptable
social norm as a way of reminding someone, without sounding too rude, to reply. It implicitly
signals that the sender understands that the recipient has overlooked the first text because
they were too busy or doing something else at the time, thereby saving face.

Emails and texts can also appear ambiguous, especially when things are left unsaid. For
example, the use of an ellipsis (…) at the end of a sentence can make it difficult to work out
what the sender intended when using it. Was it to indicate something was best left unsaid, the
sender is agreeing to something but their heart is not in it, or simply that the sender did not
know what to say? This email or text convention puts the onus on the receiver to decide what
is meant by the ellipsis and not on the sender to explain what they meant.


5.4 Remote Conversations

The telephone was invented in the nineteenth century by Alexander Graham Bell, enabling two
people to talk to one another at a distance. Since then, a number of other technologies have
been developed that support synchronous remote conversations, including videophones that were
developed in the 1960s–1970s (see Figure 5.2). In the late 1980s and 1990s, a range of “media
spaces” were the subjects of experimentation—audio, video, and computer systems were com-
bined to extend the world of desks, chairs, walls, and ceilings (Harrison, 2009). The goal was
to see whether people, distributed over space and different time zones, could communicate
and interact with one another as if they were actually physically present.

An example of an early media space was the VideoWindow (Bellcore, 1989) that was
developed to enable people in different locations to carry on a conversation as they would do
if they were drinking coffee together in the same room (see Figure 5.3). Two lounge areas that
were 50 miles apart were connected via a 3-foot by 5-foot picture window onto which video
images of each location were projected. The large size enabled viewers to see a room of peo-
ple roughly the same size as themselves. A study of its use showed that many of the conver-
sations that took place between the remote conversants were indeed indistinguishable from
similar face-to-face interactions, with the difference being that they spoke a bit louder and
constantly talked about the video system (Kraut et al., 1990). Other research on how people
interact when using videoconferencing has shown that they tend to project themselves more,
take longer conversational turns, and interrupt each other less (O’Connaill et al., 1993).

Since this early research, videoconferencing has come of age. The availability of cheap
webcams and cameras now embedded as a default in tablets, laptops, and phones has

Figure 5.2 One of British Telecom’s early videophones
Source: British Telecommunications Plc


greatly helped make videoconferencing mainstream. There are now numerous platforms
available from which to choose, both free and commercial. Many videoconferencing apps
(for example, Zoom or Meeting Owl) also allow multiple people at different sites to connect
synchronously. To indicate who has the floor, screen effects are often used, such as enlarging
the person who is talking so that they take up most of the screen or highlighting their video
window when they
take the floor. The quality of the video has also improved, making it possible for people to
appear more life-like in most setups. This is most noticeable in high-end telepresence rooms
that use multiple high-definition cameras with eye-tracking features and directional micro-
phones (see Figure 5.4). The effect can be to make remote people appear more present by
projecting their body movements, actions, voice, and facial expressions to the other location.

Another way of describing this development is in terms of the degree of telepresence.
By this we mean the perception of being there when physically remote. Robots, for example,
have been built with telepresence in mind to enable people to attend events and communicate
with others by controlling them remotely. Instead of sitting in front of a screen from their

Figure 5.3 Diagram of VideoWindow system in use

Figure 5.4 A telepresence room
Source: Cisco Systems, Inc.


location and seeing the remote place solely through a fixed camera, they can look around
the remote place through the robot’s “camera” eyes while physically moving it around.
For example, telepresence robots have been devel-
oped to enable children who are in a hospital to attend school by controlling their assigned
robots to roam around the classroom (Rae et al., 2015).

Telepresence robots are also being investigated to determine whether they can help peo-
ple who have developmental difficulties visit places remotely, such as museums. Currently,
several of the activities that are involved in going on such a visit, such as buying a ticket
or using public transport, are cognitively challenging, preventing them from going on such
trips. Natalie Friedman and Alex Cabral (2018) conducted a study with six participants with
developmental difficulties to see whether providing them each with a telepresence robot
would increase their physical and social self-efficacy and well-being. The participants were
taken on a remote tour of two museum exhibits and then asked to rate their experience after-
ward. Their responses were positive, suggesting that this kind of telepresence can open doors
to social experiences that were previously denied to those with disabilities.

BOX 5.1
Facebook Spaces: How Natural Is It to Socialize in a 3D World?

Facebook’s vision of social networking is to immerse people in 3D, where they interact with
their friends in virtual worlds. Figure 5.5 shows what it might look like: Two avatars (Jack
and Diane) are talking at a virtual table beside a lake and with some mountains in the back-
ground. Users experience this by wearing virtual reality (VR) headsets. The goal is to provide
users with a magical feeling of presence, one where they can feel as if they are together, even

though they are apart in the physical world. To make the experience appear more life-like,
users can move their avatar’s arms through controls provided by the Oculus Touch controllers.

While big strides have been made toward improving social presence, there is still a way
to go before the look and feel of socializing with virtual avatars becomes more like the real
thing. For a start, the facial expressions and skin tone of virtual avatars still appear to be
cartoon-like.

Similar to the term telepresence, social presence refers to the feeling of being there with
a real person in virtual reality. Specifically, it refers to the degree of awareness, feeling, and
reaction to other people who are virtually present in an online environment. The term differs
from telepresence, which refers to one party being virtually present with another party who is
present in a physical space, such as a meeting room (note that it is possible for more than one
telepresence robot to be in the same physical space). Imagine if avatars become more convinc-
ing in their appearance to users. How many people would switch from their current use of 2D
media to catch up and chat with friends in this kind of immersive 3D Facebook page? Do you
think it would enhance the experience of how they would interact and communicate with oth-
ers remotely?

How many people would don a VR headset 10 times a day or more to teleport to meet
their friends virtually? (The average number of times that someone looks at Facebook on their
phone is now 14 times each day.) There is also the perennial problem of motion sickness,
which 25–40 percent of people say they have experienced in VR (Mason, 2017).

Figure 5.5 Facebook’s vision of socializing in a 3D world
Source: Facebook


Telepresence robots have also become a regular feature at conferences, including the
ACM CHI conference, enabling people to attend who cannot travel. They are typically about
5 feet tall, have a display at the top that shows the remote person’s head, and have a base
at the bottom holding wheels allowing the robot to move forward, move backward, or turn
around. A commercial example is the Beam+ (https://suitabletech.com/). To help the robot
navigate in its surroundings, two cameras are embedded in the display, one facing outward
to provide the remote person with a view of what is in front of them and the other facing
downward to provide a view of the floor. The robots also contain a microphone and speak-
ers to enable the remote person to be heard and to hear what is being said locally. Remote
users connect via Wi-Fi to the remote site and steer their Beam+ robot using a web interface.

A PhD student from University College London (UCL) attended her first CHI confer-
ence remotely, during which time she gave a demo of her research every day by talking to
the attendees using the Beam+ robot (see Figure 5.6). Aside from a time difference of eight
hours (meaning that she had to stay up through the night to attend), it was an enriching
experience for her. She met lots of new people who were interested not only in her demo but
also in how she felt about attending the conference remotely. Her colleagues at the
conference also dressed up her robot to make it appear more like her, giving the robot a
set of foam-cutout arms with waving hands, and they put a university T-shirt on the robot.
However, she could not see how she appeared to others at the conference, so local attendees
took photos of her Beam+ robot to show her how she looked. She also commented on how she
could not gauge the volume of her voice, and on one occasion she accidentally set the volume
control to be at its highest setting. When speaking to someone, she did not realize how loud
she was until another person across the room told her that she was yelling. (The person she
was talking to was too polite to tell her to lower her voice.)

Another navigation problem that can occur is when the remote person wants to move
from one floor to another in a building. They don’t have a way of pressing the elevator but-
ton to achieve this. Instead, they have to wait patiently beside the elevator for someone to
come along to help them. They also lack awareness of others who are around them. For
example, when moving into a room to get a good spot to see a presentation, they may not
realize that they have obscured the view of people sitting behind them. It can also be a bit sur-
real when their image starts breaking up on the robot “face” as the Wi-Fi signal deteriorates.
For example, Figure 5.7 shows Johannes Schöning breaking up into a series of pixels that
makes him look a bit like David Bowie!

Despite these usability problems, a study of remote users trying a telepresence robot
for the first time at a conference found the experience to be positive (Neustaedter et al.,
2016). Many felt that it provided them with a real sense of being at the conference—quite
different from the experience of watching or listening to talks online—as happens when
connecting via a livestream or a webinar. Being able to move around the venue also ena-
bled them to see familiar faces and to bump into people during coffee breaks. For the

Figure 5.6 Susan Lechelt’s Beam+ robot given a human touch with cut-out foam arms and a uni-
versity logo T-shirt
Source: Used courtesy of Susan Lechelt


conference attendees, the response was also largely positive, enabling them to chat with
those who could not make the conference. However, sometimes the robot’s physical pres-
ence obstructed their view in a room when watching a speaker, and that could be frustrat-
ing. It is difficult to know how to tell a telepresence robot discreetly to move out of the way
while a talk is in progress, and equally difficult for the remote person to know where to move
once they have been told.

Figure 5.7 The image of Johannes Schöning breaking up on the Beam+ robot video display when
the Wi-Fi signal deteriorated
Source: Yvonne Rogers

ACTIVITY 5.4
Watch these two videos about Beam and Cisco’s telepresence. How does the experi-
ence of being at a meeting using a telepresence robot compare with using a telepres-
ence videoconferencing system?

Videos
BeamPro overview of how robotic telepresence works—https://youtu.be/SQCigphfSvc
Cisco TelePresence Room EndPoints MX700 and MX800—https://youtu.be/52lgl0kh0FI

Comment
The BeamPro allows the remote person to move around a workplace as well as sit in on meet-
ings. They can also have one-on-one conversations with someone at their desk. When moving
around, the remote individual can even bump into other remote workers who are also using
a BeamPro, in the corridor, for example. Hence, it supports a range of informal and formal
social interactions. Using a BeamPro also allows someone to feel as if they are at work while
still being at home.

In contrast, the Cisco telepresence room has been designed specifically to support meet-
ings between small remote groups to make them feel more natural. When someone is speak-
ing, the camera zooms in on them to have them fill the screen. From the video, it appears
effortless and allows the remote groups to focus on their meeting rather than worry about
the technology. However, it offers limited flexibility for other kinds of interaction, such as
conducting one-on-one meetings.

BOX 5.2
Simulating Human Mirroring Through Artificial Smiles

A common phenomenon that occurs during face-to-face conversations is mirroring, where
people mimic each other’s facial expressions, gestures, or body movements. Have you ever
noticed that when you put your hands behind your head, yawn, or rub your face during a
conversation with someone, they follow suit? These kinds of mimicking behaviors are
assumed to induce empathy and closeness between those conversing (Stel and Vonk, 2010).
The more people engage in mimicry, the more they view each other as being similar, which
in turn increases the rapport between them (Valdesolo and DeSteno, 2011). Mimicry doesn’t
always occur during a conversation, however; sometimes it requires a conscious effort, and
in other situations it simply does not happen. Might the use of technology increase its occurrence in
conversations?

One way would be to use special video effects. Suppose that an artificial smile could be
superimposed on the video face of someone to make them appear to smile. What might hap-
pen? Would they both begin to smile and in doing so feel closer to each other? To investigate
this possibility of simulating smiling mimicry, Keita Suzuki et al. (2017) developed a technique
called FaceShare. The system was developed so that it could deform the image of someone’s
face to make it appear to smile—even though they were not—whenever their partner’s face
began smiling. The mimicry method used 3D modeling of key feature points of the face,
including the contours, eyes, nose, and mouth to detect where to place the smile. The smile
was created by raising the lower eyelids and both ends of the mouth in conjunction with the
cheeks. The findings from this research showed that FaceShare was effective at making con-
versations appear smoother and that the pseudo smiles appearing on someone’s video face
were judged to be natural.
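
At its core, the deformation amounts to shifting facial feature points and warping the video frame to follow them. The sketch below conveys only the geometric idea; the feature names, pixel offsets, and intensity scaling are assumptions made for this example rather than the published system’s parameters.

def pseudo_smile(landmarks, intensity=1.0):
    """Raise the mouth corners and lower eyelids of a tracked face to
    synthesize a smile, in the spirit of FaceShare (Suzuki et al., 2017).
    `landmarks` maps feature names to (x, y) pixel coordinates, as a
    face tracker might produce; names and offsets are illustrative."""
    lift_pixels = {
        "mouth_corner_left": 6.0 * intensity,
        "mouth_corner_right": 6.0 * intensity,
        "lower_eyelid_left": 2.0 * intensity,
        "lower_eyelid_right": 2.0 * intensity,
    }
    # Image y coordinates grow downward, so raising means subtracting.
    return {name: (x, y - lift_pixels.get(name, 0.0))
            for name, (x, y) in landmarks.items()}

A real pipeline would detect the landmarks on every video frame, apply a shift like this whenever the partner is smiling, and then warp the frame so that the face follows the displaced points.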


5.5 Co-presence

Together with telepresence, there has been much interest in enhancing co-presence, that is,
supporting people in their activities when interacting in the same physical space. A number
of technologies have been developed to enable more than one person to use them at the same
time. The motivation is to enable co-located groups to collaborate more effectively when
working, learning, and socializing. Examples of commercial products that support this kind
of parallel interaction are Smartboards and Surfaces, which use multitouch, and Kinect, which
uses gesture and object recognition. To understand how effective they are, it is important to
consider the coordination and awareness mechanisms already in use by people in face-to-face
interactions and then to see how these have been adapted or replaced by the technology.

5.5.1 Physical Coordination
When people are working closely together, they talk to each other, issuing commands and
letting others know how they are progressing. For example, when two or more people are
collaborating, as when moving a piano, they shout instructions to each other, like “Down
a bit, left a touch, now go straight forward,” to coordinate their actions. A lot of nonverbal
communication is also used, including nods, shakes, winks, glances, and hand-raising in com-
bination with such coordination talk in order to emphasize and sometimes replace it.

For time-critical and routinized collaborative activities, especially where it is difficult
to hear others because of the physical conditions, people frequently use gestures (although
radio-controlled communication systems may also be used). Various types of hand signals
have evolved, with their own set of standardized syntax and semantics. For example, the arm
and baton movements of a conductor coordinate the different players in an orchestra, while
the arm and orange baton movements of ground personnel at an airport signal to a pilot how
to bring the plane into its assigned gate. Universal gestures, such as beckoning, waving, and
halting hand movement, are also used by people in their everyday settings.

The use of physical objects, such as wands and batons, can also facilitate coordination.
Group members can use them as external thinking props to explain a principle, an idea, or a
plan to the others (Brereton and McGarry, 2000). In particular, the act of waving or holding
up a physical object in front of others is very effective at commanding attention. The
persistence of physical artifacts, and the ability to manipulate them, may also result in more
options being explored in a group setting (Fernaeus and Tholander, 2006). They can help collaborators
gain a better overview of the group activity and increase awareness of others’ activities.

5.5.2 Awareness
Awareness involves knowing who is around, what is happening, and who is talking with whom
(Dourish and Bly, 1992). For example, when attending a party, people move around the physi-
cal space, observing what is going on and who is talking to whom, eavesdropping on oth-
ers’ conversations, and passing on gossip to others. A specific kind of awareness is peripheral
awareness. This refers to a person’s ability to maintain and constantly update a sense of what
is going on in the physical and social context, by keeping an eye on what is happening in the
periphery of their vision. This might include noticing whether people are in a good or bad
mood by the way they are talking, how fast the drinks and food are being consumed, who has
entered or left the room, how long someone has been absent, and whether the lonely person in
the corner is finally talking to someone—all while we are having a conversation with someone
else. The combination of direct observations and peripheral monitoring keeps people informed
and updated on what is happening in the world.

Another form of awareness that has been studied is situational awareness. This refers
to being aware of what is happening around you in order to understand how information,
events, and your own actions will affect ongoing and future events. Having good situational
awareness is critical in technology-rich work domains, such as air traffic control or an oper-
ating theater, where it is necessary to keep abreast of complex and continuously changing
information.

People who work closely together also develop various strategies for coordinating their
work, based on an up-to-date awareness of what the others are doing. This is especially so for
interdependent tasks, where the outcome of one person’s activity is needed for others to be
able to carry out their tasks. For example, when putting on a show, the performers will con-
stantly monitor what each other is doing in order to coordinate their performance efficiently.
The metaphorical expression close-knit teams exemplifies this way of collaborating. People
become highly skilled in reading and tracking what others are doing and the information to
which they are paying attention.

A classic study of this phenomenon is of two controllers working together in a control
room in the London Underground subway system (Heath and Luff, 1992). An overriding
observation was that the actions of one controller were tied closely to what the other was
doing. One of the controllers (controller A) was responsible for the movement of trains on
the line, while the other (controller B) was responsible for providing information to passen-
gers about the current service. In many instances, it was found that controller B overheard
what controller A was saying and doing and acted accordingly, even though controller A had
not said anything explicitly to him. For example, on overhearing controller A discussing a
problem with a train driver over the in-cab intercom system, controller B inferred from the
conversation that there was going to be a disruption in the service and so started to announce
this to the passengers on the platform before controller A had even finished talking with the
train driver. At other times, the two controllers kept a lookout for each other, monitoring
the environment for actions and events that they might not have noticed but that could have
been important for them to know about so that they could act appropriately.

ACTIVITY 5.5
What do you think happens when one person in a close-knit team does not see or hear some-
thing, or misunderstands what has been said, while the others in the group assume that person
has seen, heard, or understood what has been said?

Comment
The person who has noticed that someone has not acted in the manner expected may use one
of a number of subtle repair mechanisms, say coughing or glancing at something that needs
to be attended to. If this doesn’t work, they may then resort to stating explicitly aloud what

had previously been signaled implicitly. Conversely, the unaware person may wonder why
the event hasn’t happened and, likewise, look over at the other team members, cough to get
their attention, or explicitly ask them a question. The kind of repair mechanism employed at
a given moment will depend on a number of factors, including the relationship among the
participants (for instance, whether one is more senior than the others, which determines who
can ask what), the perceived fault or responsibility for the breakdown, and the severity of the
outcome of not acting there and then on the new information.


5.5.3 Shareable Interfaces
A number of technologies have been designed to capitalize on existing forms of coordination
and awareness mechanisms. These include whiteboards, large touch screens, and multitouch
tables that enable groups of people to collaborate while interacting at the same time with
content on the surfaces. Several studies have investigated whether different arrangements
of shared technologies can help co-located people work better together (for example, see
Müller-Tomfelde, 2010). An assumption is that shareable interfaces provide more opportuni-
ties for flexible kinds of collaboration compared with single-user interfaces, through enabling
co-located users to interact simultaneously with digital content. The use of fingers or pens
as input on a public display is observable by others, increasing opportunities for building
situational and peripheral awareness. Shareable surfaces are also considered to be more
natural than other technologies, enticing people to touch them without feeling intimidated or
embarrassed by the consequences of their actions. For example, small groups found it more
comfortable working together around a tabletop compared with sitting in front of a PC or
standing in a line in front of a vertical display (Rogers and Lindley, 2004).

BOX 5.3
Playing Together in the Same Place

Augmented reality (AR) sandboxes have been developed for museum visitors to interact with
a landscape, consisting of mountains, valleys, and rivers. The sand is real, while the landscape
is virtual. Visitors can sculpt the sand into different-shaped contours that change their appear-
ance to look like a river or land, depending on the height of the sand piles. Figure 5.8 shows
an AR sandbox that was installed at the V&A Museum in London. On observing two young
children playing at the sandbox, this author overheard one say to the other while flattening a
pile of sand, “Let’s turn this land into sea.” The other replied “OK, but let’s make an island on
that.” They continued to talk about how and why they should change their landscape. It was
a pleasure to watch this dovetailing of explaining and doing.

The physical properties of the sand, together with the real-time changing superimposed
landscape, provided a space for children (and adults) to collaborate in creative ways.


Often in meetings, some people dominate while others say very little. While this is OK
in certain settings, in others it is considered more desirable for everyone to have a say. Is it
possible to design shareable technologies so that people can participate around them more
equally? Much research has been conducted to investigate whether this is possible. Of primary
importance is whether the interface invites people to select, add, manipulate, or remove digi-
tal content from the displays and devices. A user study showed that a tabletop that allowed
group members to add digital content by using physical tokens resulted in more equitable
participation than if only digital input was allowed via touching icons and menus at the
tabletop (Rogers et al., 2009). This suggests that it was easier for people who are normally
shy in groups to contribute to the task. Moreover, people who spoke the least were found to
make the largest contribution to the design task at the tabletop, in terms of selecting, adding,
moving, and removing options. This reveals how changing the way people can interact with a
surface can affect group participation. It shows that it is possible for more reticent members
to contribute without feeling under pressure to speak more.

Figure 5.8 Visitors creating together using an Augmented Reality Sandbox at the V&A
Museum in London
Source: Helen Sharp


Experimentation with real-time feedback presented via ambient displays has also been
shown to provide a new form of awareness for co-located groups. LEDs glowing in tabletops
and abstract visualizations on handheld and wall displays have been designed to represent
how different group members are performing, such as turn-taking. The assumption is that
this kind of real-time feedback can promote self and group regulation and in so doing modify
group members’ contributions to make them more equitable. For example, the Reflect Table
was designed based on this assumption (Bachour et al., 2008). The table monitors and analyzes
ongoing conversations using embedded microphones in front of each person and represents
this in the form of increasing numbers of colored LEDs (see Figure 5.9). A study investigated
whether students became more aware of how much they were speaking during a group meet-
ing when their relative levels of talk were displayed in this manner and, if so, whether they
regulated their levels of participation more effectively. In other words, would the girl in the
bottom right of Figure 5.9 reduce her contributions (as she has clearly been talking the most)
and would the boy in the bottom left increase his (as he has been talking the least)? The findings were mixed:
Some participants changed their level to match the levels of others, while others became frus-
trated and chose simply to ignore the LEDs. Specifically, those who spoke the most changed
their behavior the most (that is, reduced their level) while those who spoke the least changed theirs
the least (in other words, did not increase their level). Another finding was that participants
who believed that it was beneficial to contribute equally to the conversation took more
notice of the LEDs and regulated their conversation level accordingly. For example, one
participant said that she “refrained from talking to avoid having a lot more lights than the
others” (Bachour et al., 2010). Conversely, participants who thought it was not important
took less notice. How do you think you would react?
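The Reflect Table's actual implementation is not reproduced here; the following toy Python sketch (all names and the LED count are invented for illustration) captures the underlying mapping: accumulate detected talk time per speaker and light a number of LEDs proportional to each person's share of the conversation.

    LEDS_PER_PERSON = 20  # assumed size of each person's LED strip

    class TalkMeter:
        def __init__(self, speakers):
            self.talk_time = {s: 0.0 for s in speakers}

        def record(self, speaker, seconds):
            """Add speech time detected by a speaker's microphone."""
            self.talk_time[speaker] += seconds

        def led_levels(self):
            """Return how many LEDs to light for each speaker,
            proportional to their share of the total talk time."""
            total = sum(self.talk_time.values())
            if total == 0:
                return {s: 0 for s in self.talk_time}
            return {s: round(LEDS_PER_PERSON * t / total)
                    for s, t in self.talk_time.items()}

    meter = TalkMeter(["anna", "ben", "chloe", "dev"])
    meter.record("anna", 120)  # Anna has spoken for two minutes
    meter.record("ben", 30)
    meter.record("chloe", 45)
    print(meter.led_levels())  # {'anna': 12, 'ben': 3, 'chloe': 5, 'dev': 0}

A feedback loop like this makes visible at a glance who is dominating; whether people then self-regulate is, as the study found, another matter.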

An implication from the various user studies on co-located collaboration around tab-
letops is that designing shareable interfaces to encourage more equitable participation isn’t
straightforward. Providing explicit real-time feedback on how much someone is speaking
in a group may be a good way of showing everyone who is talking too much, but it may be
intimidating for those who are talking too little. Allowing discreet and accessible ways for
adding and manipulating content to an ongoing collaborative task at a shareable surface may
be more effective at encouraging greater participation from people who normally find it
difficult or who are simply unable to contribute verbally in group settings (for example, those
on the autistic spectrum, those who stutter, or those who are shy or are non-native speakers).

Figure 5.9 The Reflect Table
Source: Used courtesy of Pierre Dillenbourg

How best to represent the activity of online social networks in terms of who is taking
part has also been the subject of much research. A design principle that has been influential is
social translucence (Erickson and Kellogg, 2000). This refers to the importance of designing
communication systems to enable participants and their activities to be visible to one another.
This idea was very much behind the early communication tool, Babble, developed at IBM
by David Smith (Erickson et al., 1999), which provided a dynamic visualization of the par-
ticipants in an ongoing chat room. A large 2D circle was depicted using colored marbles on
each user’s monitor. Marbles inside the circle conveyed those individuals active in the current
conversation. Marbles outside the circle showed users involved in other conversations. The
more active a participant was in the conversation, the more the corresponding marble moved
toward the center of the circle. Conversely, the less engaged a person was in the ongoing con-
versation, the more the marble moved toward the periphery of the circle.
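Babble's original code is not public; as a toy illustration of the marble mapping just described (the constant and function names here are invented), a marble's distance from the circle's center can simply grow with the time since its owner last contributed:

    CIRCLE_RADIUS = 100.0  # display units; an assumed value

    def marble_radius(seconds_since_last_post, decay=600.0):
        """Map inactivity to distance from the circle's center: a
        just-active participant sits near the middle, and the marble
        drifts outward as idle time grows, saturating at the edge
        after `decay` seconds (decay is an invented parameter)."""
        drift = min(seconds_since_last_post / decay, 1.0)
        return CIRCLE_RADIUS * drift

    print(marble_radius(0))    # 0.0, at the center of the conversation
    print(marble_radius(300))  # 50.0, halfway to the periphery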

Since this early work on visualizing social interactions, there have been a number of
virtual spaces developed that provide awareness about what people are doing, where they
are, and their availability, with the intention of helping them feel more connected. Work-
ing in remote teams can be isolating, especially if they rarely get to see their colleagues face
to face. When teams are not co-located, they also miss out on in-person collaboration and
valuable informal conversations that build team alignment. This is where the concept of
the “online office” comes in. For example, Sococo (https://www.sococo.com/) is an online
office platform that is bridging the gap between remote and co-located work. It uses the
spatial metaphor of a floor plan of an office to show where people are situated, who is in
a meeting, and who is chatting with whom. The Sococo map (see Figure 5.10) provides a

bird's-eye view of a team's online office, giving everyone at-a-glance insight into teammates'
availability and what's happening organizationally. Sococo also provides the sense of presence
and virtual "movement" that you get in a physical office—anyone can pop into a room,
turn on their microphone and camera, and meet with another member of their team face to
face. Teams can work through projects, get feedback from management, and collaborate ad
hoc in their online office regardless of physical location. This allows organizations to take
advantage of the benefits of the distributed future of work while still providing a central,
online office for their teams.

Figure 5.10 Sococo floor plan of a virtual office, showing who is where and who is meeting with whom. The figure's callouts highlight key features: searching for colleagues across the workspace to see their status or chat instantly; seeing a team in a meeting, sharing screens and viewing documents in a room; naming a room to reflect the topic of a meeting in progress; knocking on a door to join a meeting or just pop in; sending a link for a guest to join you in your Sococo office; blinking avatars marking colleagues who are collaborating; sharing documents or links on a desk for immediate access by anyone in the room; instantly "getting" colleagues to collaborate spontaneously; and scaling with unlimited floors.
Source: Used courtesy of Leeann Brumby

BOX 5.4
Can Technologies Be Designed to Help People Break the Ice and
Socialize?

Have you ever found yourself at a party, wedding, conference, or other social gathering, stand-
ing awkwardly by yourself, not knowing who to talk to or what to talk about? Social embar-
rassment and self-consciousness affect most of us at such moments, and such feelings are most
acute when one is a newcomer and by oneself, such as a first-time attendee at a conference.
How can conversation initiation be made easier and less awkward for people who do not
know each other?

A number of mechanisms have been employed by organizers of social events, such
as asking old-timers to act as mentors and the holding of various kinds of ice-breaking
activities. Badge-wearing, the plying of drink and food, and introductions by others are also
common ploys. While many of these methods can help, ice-breaking activities require people
to act differently from the way they normally socialize, which they may find just as
uncomfortable or painful. They often require people to agree to join in a collaborative game,
which they may find embarrassing. This can be exacerbated by the fact that once people have
agreed to take part, it is difficult for them to drop out because of the perceived consequences
for the others and themselves (such as being seen as a spoilsport or party-pooper). Having
had one such embarrassing experience, most people will shy away from further ice-breaking
activities.

An alternative approach is to design a physical space where people can enter and exit a
conversation with a stranger in subtler ways, that is, one where people do not feel threatened
or embarrassed and that does not require a high level of commitment. The classic Opinionizer
system was designed along these lines, with the goal of encouraging people in an informal
gathering to share their opinions visually and anonymously (Brignull and Rogers, 2003). The
collective creation of opinions via a public display was intended to provide a talking point
for the people standing beside it. Users submitted their opinions by typing them in at a public
keyboard. To add color and personality to their opinions, a selection of small cartoon avatars
and speech bubbles were available. The screen was also divided into four labeled quadrants
representing different backgrounds, such as techie, softie, designer, or student, to provide a
factor on which people could comment (see Figure 5.11).


When the Opinionizer was placed in various social gatherings, a honey-pot effect was
observed: as more people moved into the area around the Opinionizer, a sociable buzz was
created in its immediate vicinity. Furthermore, by standing in this space and showing an
interest, for example, by visibly facing the screen or reading the text, people gave off a tacit
signal to others that they were open to discussion and interested in meeting new people.

A range of other ambient displays have been developed and placed in physical work
settings with the purpose of encouraging people to socialize and talk more with each other.
For example, the Break-Time Barometer was designed to persuade people to come out of their
offices for a break to meet others they might not otherwise talk with (Kirkham et al., 2013).
An ambient display, based on a clock metaphor, shows how many people are currently in the
common room; if people are present, it also sends an alert that it would be a good time
to join them for a break. While the system nudged some people to go for a break in the staff
room, it had the opposite effect on others, who used it to determine when breaks weren't
happening so that they could take a break without their colleagues being around for company.

There are now a number of commercial ice-breaking phone apps available that use arti-
ficial intelligence (AI) matchmaking algorithms to determine which preferences and interests
shared among people make them suitable conversational partners. Wearable technology is
also being developed as a new form of digital ice-breaker. Limbic Media
(https://limbicmedia.ca/social-wearables/), for example, has developed a novel pendant device
colored with LED lights for this purpose. When two people touch their pendants together,
both pendants vibrate. This coming-together action can break the ice in a fun and playful way.


Figure 5.11 (a) The Opinionizer interface and (b) a photo of it being used at a book launch party
Source: Helen Sharp

This video features Limbic Media’s novel type of social wearable being used at
the 2017 BCT Tech Summit: https://vimeo.com/216045804.



5.6 Social Engagement

Social engagement refers to participation in the activities of a social group (Anderson and
Binstock, 2012). Often it involves some form of social exchange where people give or receive
something from others. Another defining aspect is that it is voluntary and unpaid. Increas-
ingly, different forms of social engagement are mediated by the Internet. For example, there
are many websites now that support pro-social behavior by offering activities intended to
help others. One of the first websites of this ilk was GoodGym (www.goodgym.org/), which
connects runners with isolated older people. While out running, the runners stop for a chat
with an older person who has signed up to the service, and the runner helps them with
their chores. The motivation is to help others in need while getting fit. There is no obliga-
tion, and anyone is welcome to join. Another such website is The Conservation Volunteers
(https://www.tcv.org.uk/), which brings together those who want to help out
with existing conservation activities. By bringing different people together, social cohesion
is also promoted.

Not only has the Internet enabled local people to meet who would not otherwise have done so,
it has also proven to be a powerful way of connecting millions of people with a common interest
in ways unimaginable before. An example is retweeting a photo that resonates with a large
crowd who finds it amusing and wants to pass it on further. For example, in 2014, the most
retweeted selfie was one taken by Ellen DeGeneres (an American comedian and television
host) at the Oscar Academy Awards of her in front of a star-studded, smiling group of actors
and friends. It was retweeted more than 2 million times (more than three-quarters of a mil-
lion in the first half hour of being tweeted)—far exceeding the one taken by Barack Obama
at Nelson Mandela’s funeral the previous year.

There has even been an “epic Twitter battle.” A teenager from Nevada, Carter Wilker-
son, asked Wendy’s fast-food restaurant how many retweets were needed for him to receive
a whole year’s supply of free chicken nuggets. The restaurant replied “18 million” (see
Figure 5.12). From that moment on, his quest went viral, with his tweet being retweeted
more than 2 million times. Ellen’s record was suddenly put in jeopardy, and she intervened,
putting out a series of requests on her show for people to continue to retweet her tweet so
her record would be upheld. Carter, however, surpassed her record at the 3.5 million mark.
During the Twitter battle, he used his newly found fame to create a website that sold T-shirts
promoting his chicken nugget challenge. He then donated all of the proceeds from the sales
toward a charity that was close to his heart. The restaurant also gave him a year’s supply of
free chicken nuggets—even though he didn’t reach the target of 18 million. Not only that, it
also donated $100,000 to the same charity in honor of Carter achieving a new record. It was
a win-win situation (except maybe for Ellen).

Another way that Twitter connects people rapidly and at scale is when unexpected events
and disasters happen. Those who have witnessed something unusual may upload an image
that they have taken of it or retweet what others have posted to inform others about it. Those
who like to reach out in this way are sometimes called digital volunteers. For example, while
writing this chapter, there was a massive thunderstorm overhead that was very dramatic.
I checked out the Twitter hashtag #hove (I was in the United Kingdom) and found that
hundreds of people had uploaded photos of the hailstones, flooding, and minute-by-minute
updates of how public transport and traffic were being affected. It was easy to get a sense


of the scale of the storm before it was picked up by the official media channels, which then
used some of the photos and quotes from Twitter in their coverage (see Figure 5.13). Relying
on Twitter for breaking news has increasingly become the norm. When word came of a huge
explosion in San Bruno, California, the chief of the Federal Emergency Management Agency
in the United States logged on to Twitter and searched for the word explosion. Based on the
tweets coming from that area, he was able to discern that the gas explosion and ensuing fire
were a localized event that would not spread to other communities. He noted that he got
better situational awareness more quickly from reading Twitter than from official sources.

Clearly, the immediacy and global reach of Twitter provide an effective form of com-
munication, providing first responders and those living in the affected areas with up-to-the-
minute information about how a wildfire, storm, or gas plume is spreading. However, the
reliability of the tweeted information can sometimes be a problem. For example, some people
end up obsessively checking and posting, sometimes without realizing that this can start or
fuel rumors by adding news that is old or incorrect. Regulars can go into a frenzy, constantly
adding new tweets about an event, as witnessed when an impending flood was announced
(Starbird et al., 2010). While such citizen-led dissemination and retweeting of information
from disparate sources is well intentioned, it can also flood the Twitter streams, making it
difficult to know what is old news, what is current, and what is hearsay.

Figure 5.12 Carter Wilkerson’s tweet that went viral


Figure 5.13 A weather warning photo tweeted and retweeted about a severe storm in Hove,
United Kingdom

BOX 5.5
Leveraging Citizen Science and Engagement Through Technology

The growth and success of citizen science and citizen engagement has been made possible
by the Internet and mobile technology, galvanizing and coordinating the efforts of millions
of people throughout the world. Websites, smartphone apps, and social media have been
instrumental in leveraging the reach and impact of a diversity of citizen science projects
across time and geographical zones (Preece et al., 2018). Citizen science involves local peo-
ple helping scientists carry out a scientific project at scale. Currently, thousands of such
projects have been set up all over the world, whereby volunteers help out in a number
of research areas, including biodiversity, air quality, astronomy, and environmental issues.
They do so by engaging in scientific activities such as monitoring plants and wildlife, col-
lecting air and water samples, categorizing galaxies, and analyzing DNA sequences. Citizen
engagement involves people helping governments, rather than scientists, to improve pub-
lic services and policies in their communities. Examples include setting up and oversee-
ing a website that offers local services for community disasters and creating an emergency
response team when a disaster occurs.

Why would anyone want to volunteer their time for the benefit of science or government?
Many people want to learn more about a domain, while others want to be recognized for their
contributions (Rotman et al., 2014). Some citizen science apps have developed online mecha-
nisms to support this. For example, iNaturalist (https://www.inaturalist.org/) enables volun-
teers to comment on and help classify others’ contributions.


DILEMMA
Is It OK to Talk with a Dead Person Using a Chatbot?

Eugenia Kuyda, an AI researcher, lost a close friend in a car accident. He was only in his 20s.
She did not want to lose his memory, so she gathered all of the texts he had sent over the
course of his life and made a chatbot from them. The chatbot is programmed to respond
automatically to text messages so that Eugenia can talk to her friend as if he were still alive.
It responds to her questions using his own words.

Do you think this kind of interaction is creepy or comforting to someone who is grieving?
Is it disrespectful of the dead, especially if the dead person has not given their consent? What
if the friend had agreed to having their texts mashed up in this way in a “pre-death digital
agreement”? Would that be more socially acceptable?

In-Depth Activity
The goal of this activity is to analyze how collaboration, coordination, and communication
are supported in online video games involving multiple players.

The video game Fortnite arrived in 2017 to much acclaim. It is an action game designed
to encourage teamwork, cooperation, and communication. Download the game from an
app store (it is free) and try it. You can also watch an introductory video about it at
https://youtu.be/_U2JbFhUPX8.

Answer the following questions.

1. Social issues
(a) What is the goal of the game?
(b) What kinds of conversations are supported?
(c) How is awareness of the others in the game supported?
(d) What kinds of social protocols and conventions are used?
(e) What types of awareness information are provided?
(f) Does the mode of communication and interaction seem natural or awkward?
(g) How do players coordinate their actions in the game?

2. Interaction design issues
(a) What form of interaction and communication is supported, for instance, text, audio, and/or video?
(b) What other visualizations are included? What information do they convey?
(c) How do users switch between different modes of interaction, for example, exploring and chatting? Is the switch seamless?
(d) Are there any social phenomena that occur specific to the context of the game that wouldn't happen in face-to-face settings?

3. Design issues
• What other features might you include in the game to improve communication, coordination, and collaboration?

Summary

Human beings are inherently social. People will always need to collaborate, coordinate, and
communicate with one another, and the diverse range of applications, web-based services,
and technologies that have emerged enable them to do so in more extensive and diverse ways.
In this chapter, we looked at some core aspects of sociality, namely, communication and col-
laboration. We examined the main social mechanisms that people use in different conversa-
tional settings when interacting face to face and at a distance. A number of collaborative and
telepresence technologies designed to support and extend these mechanisms were discussed,
highlighting core interaction design concerns.

Key Points
• Social interaction is central to our everyday lives.
• Social mechanisms have evolved in face-to-face and remote contexts to facilitate conversation, coordination, and awareness.
• Talk and the way it is managed are integral to coordinating social interaction.
• Many kinds of technologies have been developed to enable people to communicate remotely with one another.
• Keeping aware of what others are doing and letting others know what you are doing are important aspects of collaboration and socializing.
• Social media has brought about significant changes in the way people keep in touch and manage their social lives.

Further Reading

boyd, d. (2014) It's Complicated: The Social Lives of Networked Teens. Yale. Based on a
series of in-depth interviews with a number of teenagers, danah boyd offers new insights into
how teenagers across the United States, who have only ever grown up in a world of apps
and media, navigate, use, and appropriate them to grow up and develop their identities. A
number of topics are covered that are central to what it means to grow up in a networked
world, including bullying, addiction, expressiveness, privacy, and inequality. It is insightful
and covers much ground.

CRUMLISH, C. and MALONE, E. (2009) Designing Social Interfaces. O'Reilly. This is a col-
lection of design patterns, principles, and advice for designing social websites, such as online
communities.

GARDNER, H. and DAVIS, K. (2013) The App Generation: How Today's Youth Navigate
Identity, Intimacy, and Imagination in a Digital World. Yale. This book explores the impact
of apps on the young generation, examining how they affect their identity, intimacy, and
imagination. It focuses on what it means to be app-dependent versus app-empowered.

ROBINSON, S., MARSDEN, G. and JONES, M. (2015) There's Not an App for That:
Mobile User Experience Design for Life. Elsevier. This book offers a fresh approach for
designers, students, and researchers to dare to think differently by moving away from the
default framing of technological design in terms of yet another "looking down" app. It asks
the reader instead to look up and around them—to be inspired by how we actually live our
lives when "out there" app-less. The authors also explore what it means to design technologies
to be more mindful.

TURKLE, S. (2016) Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin.
Sherry Turkle has written extensively about the positive and negative effects of digital tech-
nology on everyday lives—at work, at home, at school, and in relationships. This book is a
very persuasive warning about the negative impacts of perpetual use of smartphones. Her
main premise is that as people—both adults and children—become increasingly glued to
their phones instead of talking to one another, they lose the skill of empathy. She argues that
we need to reclaim conversation to relearn empathy, friendship, and creativity.

Chapter 6

EMOTIONAL INTERACTION

6.1 Introduction

6.2 Emotions and the User Experience

6.3 Expressive Interfaces and Emotional Design

6.4 Annoying Interfaces

6.5 Affective Computing and Emotional AI

6.6 Persuasive Technologies and Behavioral Change

6.7 Anthropomorphism

Objectives
The main goals of this chapter are to accomplish the following:

• Explain how our emotions relate to behavior and the user experience.
• Explain what expressive and annoying interfaces are and the effects that they can have on people.

• Introduce the area of emotion recognition and how it is used.
• Describe how technologies can be designed to change people’s behavior.
• Provide an overview of how anthropomorphism has been applied in interaction design.

6.1 Introduction

When you receive some bad news, how does it affect you? Do you feel upset, sad, angry, or
annoyed—or all of these? Does it put you in a bad mood for the rest of the day? How might
technology help? Imagine a wearable technology that could detect how you were feeling and
provide a certain kind of information and suggestions geared toward helping to improve
your mood, especially if it detected that you were having a real downer of a day. Would you
find such a device helpful, or would you find it unnerving that a machine was trying to cheer
you up? Designing technology to detect and recognize someone’s emotions automatically
from sensing aspects of their facial expressions, body movements, gestures, and so forth,


is a growing area of research often called emotional AI or affective computing. There are
many potential applications for using automatic emotion sensing, other than those intended
to cheer someone up, including health, retail, driving, and education. These can be used to
determine if someone is happy, angry, bored, frustrated, and so on, in order to trigger an
appropriate technology intervention, such as making a suggestion to them to stop and reflect
or recommending a particular activity for them to do.

In addition, emotional design is a growing area relating to the design of technology
that can engender desired emotional states, for example, apps that enable people to reflect
on their emotions, moods, and feelings. The focus is on how to design interactive prod-
ucts to evoke certain kinds of emotional responses in people. It also examines why people
become emotionally attached to certain products (for instance, virtual pets), how social
robots might help reduce loneliness, and how to change human behavior through the use of
emotive feedback.

In this chapter, we use the broader term emotional interaction to cover both emotional
design and affective computing. We begin by explaining what emotions
are and how they shape behavior and everyday experiences. We then consider how and
whether an interface’s appearance affects usability and the user experience. In particu-
lar, we look at how expressive and persuasive interfaces can change people’s emotions or
behaviors. How technology can detect human emotions using voice and facial recognition
is then covered. Finally, the way anthropomorphism has been used in interaction design is
discussed.

6.2 Emotions and the User Experience

Consider the different emotions one experiences throughout a common everyday activity—
shopping online for a product, such as a new laptop, a sofa, or a vacation. First, there is the
realization of needing or wanting one and then the desire and anticipation of purchasing it.
This is followed by the joy or frustration of finding out more about what products are avail-
able and deciding which to choose from potentially hundreds or even thousands of them by
visiting numerous websites, such as comparison sites, reviews, recommendations, and social
media sites. This entails matching what is available with what you like or need and whether
you can afford it. The thrill of deciding on a purchase may be quickly followed by the shock
of how much it costs and the disappointment that it is too expensive. The process of having
to revise your decision may be accompanied by annoyance if you discover that nothing is as
good as the first choice. It can become frustrating to keep looking and revisiting sites. Finally,
when you make your decision, a sense of relief is often experienced. Then there is the process
of clicking through the various options (such as color, size, warranty, and so forth) until the
online payment form pops up. This can be tedious, and the requirement to fill in the many
details raises the possibility of making a mistake. Finally, when the order is complete, you
can let out a big sigh. However, doubts can start to creep in—maybe the other one was better
after all… .

This rollercoaster set of emotions is what many of us experience when shopping online,
especially for big-ticket items where there is a myriad of options from which to choose and
where you want to be sure that you make the right choice.


Emotional interaction is concerned with what makes people feel happy, sad, annoyed,
anxious, frustrated, motivated, delirious, and so on, and then using this knowledge to inform
the design of different aspects of the user experience. However, it is not straightforward.
Should an interface be designed to try to keep a person happy when it detects that they are
smiling, or should it try to change them from being in a negative mood to a positive one
when it detects that they are scowling? Having detected an emotional state, a decision has to

ACTIVITY 6.1
Have you seen one of the terminals shown in Figure 6.1 at an airport after you have gone
through security? Were you drawn toward it, and did you respond? If so, which smiley button
did you press?

Comment
The act of pressing one of the buttons can be very satisfying—providing a moment for you to
reflect upon your experience. It can even be pleasurable to express how you feel in this physi-
cal manner. Happyornot designed the feedback terminals that are now used in many airports
throughout the world. The affordances of the large, colorful, slightly raised buttons laid out in
a semicircle, with distinct smileys, make it easy to know what is being asked of the passerby,
enabling them to select among feeling happy, angry, or something in between.

The data collected from the button presses provides statistics for an airport as to when
and where people are happiest and angriest after going through security. Happyornot has
found that it also makes travelers feel valued. The happiest times to travel, from the data they
have collected at various airports, are at 8 a.m. and 9 a.m. The unhappiest times recorded are
in the early hours of the morning, presumably because people are tired and grumpier.
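Happyornot's analytics pipeline is not documented here, but hour-by-hour statistics like these can be derived from a simple aggregation of button presses. A minimal Python sketch, assuming each press is logged as an hour of day and a score from 1 (very unhappy) to 4 (very happy); the data is invented:

    from collections import defaultdict

    presses = [(8, 4), (8, 4), (9, 3), (2, 1), (2, 2), (14, 3)]

    totals = defaultdict(lambda: [0, 0])  # hour -> [sum of scores, count]
    for hour, score in presses:
        totals[hour][0] += score
        totals[hour][1] += 1

    averages = {h: s / n for h, (s, n) in totals.items()}
    happiest_hour = max(averages, key=averages.get)
    print(happiest_hour, averages[happiest_hour])  # 8 4.0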

Figure 6.1 A Happyornot terminal located after security at Heathrow Airport
Source: https://www.rsrresearch.com/research/why-metrics-matter. Used courtesy of Retail Systems
Research



be made as to what or how to present information to the user. Should it try to “smile” back
by using various interface elements, such as emojis, feedback, and icons? How expres-
sive should it be? It depends on whether a given emotional state is viewed as desirable for the
user experience or the task at hand. A happy state of mind might be considered optimal for
when someone goes to shop online if it is assumed that this will make them more willing to
make a purchase.

Advertising agencies have developed a number of techniques to influence people’s emo-
tions. Examples include showing a picture of a cute animal or a child with hungry, big eyes
on a website that “pulls at the heartstrings.” The goal is to make people feel sad or upset at
what they observe and make them want to do something to help, such as by making a dona-
tion. Figure 6.2, for example, shows a web page that has been designed to trigger a strong
emotional response in the viewer.

Our moods and feelings are also continuously changing, making it more difficult to
predict how we feel at different times. Sometimes, an emotion can descend upon us but
disappear shortly afterward. For example, we can become startled by a sudden, unexpected
loud noise. At other times, an emotion can stay with us for a long time; for example, we can
remain annoyed for hours when staying in a hotel room that has a noisy air conditioning
unit. An emotion like jealousy can keep simmering for a long period of time, manifesting
itself on seeing or hearing something about the person or thing that triggered it.

Figure 6.2 A webpage from Crisis (a UK homelessness charity)
Source: https://www.crisis.org.uk

In a series of short videos, Kia Höök talks about affective computing, explaining
how emotion is formed and why it is important to consider when designing user
experiences with technology. See www.interaction-design.org/encyclopedia/
affective_computing.html.



A good place to start understanding how emotions affect behavior and how behavior
affects emotions is to examine how people express themselves and read each other’s expres-
sions. This includes understanding the relationship between facial expressions, body lan-
guage, gestures, and tone of voice. For example, when people are happy, they typically smile,
laugh, and relax their body posture. When they are angry, they might shout, gesticulate, tense
their hands, and screw up their face. A person’s expressions can trigger emotional responses
in others. When someone smiles, it can cause others to feel good and smile back.

Emotional skills, especially the ability to express and recognize emotions, are central to
human communication. Most people are highly skilled at detecting when someone is angry,
happy, sad, or bored by recognizing their facial expressions, way of speaking, and other body
signals. They also usually know what emotions to express in a given situation. For example,
when someone has just heard they have failed an exam, it is not a good time to smile and be
happy for them. Instead, people try to empathize and show that they feel sad, too.

There is an ongoing debate about whether and how emotion causes certain behaviors.
For example, does being angry make us concentrate better? Does being happy make us
take more risks, such as spending too much money? Or is it the other way around, or neither?
It could be that we simply feel happy, sad, or angry, and that this does not affect our behavior. Roy
Baumeister et al. (2007) argue that the role of emotion is more complicated than a simple
cause-and-effect model.

Many theorists, however, argue that emotions cause behavior, for example that fear
brings about flight and that anger initiates the fight response. A widely accepted expla-
nation, derived from evolutionary psychology, is that when something makes someone
frightened or angry, their emotional response is to focus on the problem at hand and try to
overcome or resolve the perceived danger. The physiological responses that accompany this
state usually include a rush of adrenalin through the body and the tensing of muscles. While
the physiological changes prepare people to fight or flee, they also give rise to unpleasant
experiences, such as sweating, butterflies in the stomach, quick breathing, heart pounding,
and even feelings of nausea.

Nervousness is a state of being that is often accompanied by several emotions, includ-
ing apprehension and fear. For example, many people get worried and some feel terrified
before speaking at a public event or a live performance. There is even a name for this kind
of nervousness—stage fright. Andreas Komninos (2017) suggests that it is the autonomic nervous
system "telling" people to avoid these kinds of potentially humiliating or embarrassing expe-
riences. But performers or professors can’t simply run away. They have to cope with the
negative emotions associated with having to be in front of an audience. Some are able to
turn their nervous state to their advantage, using the increase in adrenalin to help them focus
on their performance. Others are only too glad when the performance is over and they can
relax again.

As mentioned earlier, emotions can be simple and short-lived or complex and long-lasting.
To distinguish between the two types of emotion, researchers have described them in terms
of being either automatic or conscious. Automatic emotions (also known as affect) happen
rapidly, typically within a fraction of a second and, likewise, may dissipate just as quickly.
Conscious emotions, on the other hand, tend to be slow to develop and equally slow to dis-
sipate, and they are often the result of a conscious cognitive behavior, such as weighing the
odds, reflection, or contemplation.


Understanding how emotions work provides a way of considering how to design for user
experiences that can trigger affect or reflection. For example, Don Norman (2005) suggests
that being in a positive state of mind can enable people to be more creative as they are less
focused. When someone is in a good mood, it is thought to help them make decisions more
quickly. He also suggests that when people are happy, they are more likely to overlook and
cope with minor problems that they are experiencing with a device or interface. In contrast,
when someone is anxious or angry, they are more likely to be less tolerant. He also suggests
that designers pay special attention to the information required to do the task at hand,
especially when designing apps or devices for serious tasks, such as monitor-
ing a process control plant or driving a car. The interface needs to be clearly visible with

BOx 6.1
How Does Emotion Affect Driving Behavior?

Research investigating the influence of emotions on driving behavior has been extensively
reviewed (Pêcher et al., 2011; Zhang and Chan, 2016). One major finding is that when driv-
ers are angry, their driving becomes more aggressive, they take more risks such as dangerous
overtaking, and they are prone to making more errors. Driving performance has also been
found to be negatively affected when drivers are anxious. People who are depressed are also
more prone to accidents.

What are the effects of listening to music while driving? A study by Christelle Pêcher et al.
(2009) found that people slowed down while driving in a car simulator when they listened to
either happy or sad music, as compared to neutral music. This effect is thought to be due to
the drivers focusing their attention on the emotions and lyrics of the music. Listening to happy
music was also found not only to slow drivers down, but to distract them more by reducing
their ability to stay in their lane. This did not happen with the sad music.

Source: Jonny Hawkins / Cartoon Stock


unambiguous feedback. The bottom line is “things intended to be used under stressful situ-
ations require a lot more care, with much more attention to detail” (Norman, 2005, p. 26).

Don Norman and his colleagues (Ortony et al., 2005) have also developed a model of
emotion and behavior. It is couched in terms of different “levels” of the brain. At the lowest
level are parts of the brain that are prewired to respond automatically to events happening
in the physical world. This is called the visceral level. At the next level are the brain processes
that control everyday behavior. This is called the behavioral level. At the highest level are
brain processes involved in contemplating. This is called the reflective level (see Figure 6.3).
The visceral level responds rapidly, making judgments about what is good or bad, safe or
dangerous, pleasurable or abhorrent. It also triggers the emotional responses to stimuli (for
instance fear, joy, anger, and sadness) that are expressed through a combination of physi-
ological and behavioral responses. For example, many people will experience fear on seeing
a very large hairy spider running across the floor of the bathroom, causing them to scream
and run away. The behavioral level is where most human activities occur. Examples include
well-learned routine operations such as talking, typing, and swimming. The reflective level
entails conscious thought where people generalize across events or step back from their daily
routines. An example is switching between thinking about the narrative structure and spe-
cial effects used in a horror movie and becoming scared at the visceral level when watching
the movie.

Figure 6.3 Anthony Ortony et al.'s (2005) model of emotional design showing three levels: visceral, behavioral, and reflective
Source: Adapted from Norman (2005), Figure 1.1

One way of using the model is to think about how to design products in terms of the
three levels. Visceral design refers to making products look, feel, and sound good. Behavioral
design is about use and equates to the traditional values of usability. Reflective design
is about considering the meaning and personal value of a product in a particular culture.
For example, the design of a Swatch watch (see Figure 6.4) can be viewed in terms of the
three levels. The use of cultural images and graphical elements is designed to appeal to users
at the reflective level, its affordances of use appeal at the behavioral level, and the brilliant
colors, wild designs, and art attract users' attention at the visceral level. Combined, these
create the distinctive Swatch trademark, and they are what draw people to buy and wear
the watches.

6.3 Expressive Interfaces and Emotional Design

Designers use a number of features to make an interface expressive. Emojis, sounds, colors,
shapes, icons, and virtual agents are used to (1) create an emotional connection or feel-
ing with the user (for instance, warmth or sadness) and/or (2) elicit certain kinds of emo-
tional responses in users, such as feeling at ease, comfort, and happiness. In the early days,
emotional icons were used to indicate the current state of a computer or a phone, notably
when it was waking up or being rebooted. A classic from the 1980s was the happy Mac
icon that appeared on the screen of the Apple computer whenever the machine was booted
(see Figure 6.5a). The smiling icon conveyed a sense of friendliness, inviting the user to feel
at ease and even smile back. The appearance of the icon on the screen was also meant to be

Figure 6.4 A Swatch watch called Dip in Color
Source: http://store.swatch.com/suop103-dip-in-color.html



reassuring, indicating that the computer was working. After being in use for nearly 20 years,
the happy and sad Mac icons were laid to rest. Apple now uses more impersonal but aestheti-
cally pleasing forms of feedback to indicate a process for which the user needs to wait, such
as “starting up,” “busy,” “not working,” or “downloading.” These include a spinning colorful
beach ball (see Figure 6.5b) and a moving clock indicator. Similarly, Android uses a spinning
circle to show when a process is loading.

Other ways of conveying expressivity include the following:

• Animated icons (for example, a recycle bin expanding when a file is placed in it and paper
disappearing in a puff of smoke when emptied)

• Sonifications indicating actions and events (such as whoosh for a window closing,
“schlook” for a file being dragged, or ding for a new email arriving)

• Vibrotactile feedback, such as distinct smartphone buzzes that represent specific messages
from friends or family

The style or brand conveyed by an interface, in terms of the shapes, fonts, colors, and
graphical elements used, and the way they are combined, also influence its emotional impact.
Use of imagery at the interface can result in more engaging and enjoyable experiences (Mullet
and Sano, 1995). A designer can also use a number of aesthetic techniques such as clean lines,
balance, simplicity, white space, and texture.

The benefits of having aesthetically pleasing interfaces have become more acknowl-
edged in interaction design. Noam Tractinsky (2013) has repeatedly shown how the aesthet-
ics of an interface can have a positive effect on people’s perception of the system’s usability.
When the look and feel of an interface is pleasing and pleasurable—for example through
beautiful graphics or a nice feel or the way that the elements have been put together—peo-
ple are likely to be more tolerant and prepared to wait a few more seconds for a website to
download. Furthermore, good-looking interfaces are generally more satisfying and pleasur-
able to use.


Figure 6.5 (a) Smiling and sad Apple icons depicted on the classic Mac and (b) the spinning beach
ball shown when an app freezes

Source: (b) https://www.macobserver.com/tmo/article/frozen-how-to-force-quit-an-os-x-app-showing-a-spinningbeachball-of-death


6.4 Annoying Interfaces

In many situations, interfaces may inadvertently elicit negative emotional responses, such as
anger. This typically happens when something that should be simple to use or set turns out to
be complex. The most common examples are remote controls, printers, digital alarm clocks,
and digital TV systems. Getting a printer to work with a new digital camera, trying to switch
from watching a DVD to a TV channel, and changing the time on a digital alarm clock in


BOX 6.2
The Design of the Nest Thermostat Interface

The popular Nest thermostat provides an automatic way of controlling home heating that is
personalized to the habits and needs of the occupants. Where possible, it also works out how
to save money by reducing energy consumption when not needed. The wall-mounted device
does this by learning the occupants' routines: what temperature they prefer and when to
turn the heating on and off in each room.
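Nest's learning algorithm is proprietary; purely to illustrate the idea of learning a schedule from occupants' routines, the toy sketch below derives a daily heating schedule by averaging the setpoints that occupants chose manually at each hour (the data is invented):

    from collections import defaultdict

    # Log an (hour, temperature) pair whenever an occupant adjusts the dial.
    adjustments = [(7, 21.0), (7, 21.5), (9, 19.0), (22, 17.0), (22, 17.5)]

    by_hour = defaultdict(list)
    for hour, temp in adjustments:
        by_hour[hour].append(temp)

    # The "learned" schedule: the average setpoint chosen at each hour.
    schedule = {h: sum(ts) / len(ts) for h, ts in by_hour.items()}
    print(schedule)  # {7: 21.25, 9: 19.0, 22: 17.25}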

The Nest thermostat is more than just a smart meter, however. It was also designed to have
a minimalist and aesthetically pleasing interface (see Figure 6.6a). It elegantly shows the tempera-
ture currently on its round face and to which temperature it has been set. This is very different
from earlier generations of automatic thermostats, which were utilitarian box-shaped designs
with lots of complicated buttons and a dull screen that provided feedback about the setting and
temperature (see Figure 6.6b). It is little wonder that the Nest thermostat has been a success.


Figure 6.6 (a) The Nest thermostat and (b) A traditional thermostat
Source: Nest

For more information about the design of other Nest products, see https://www.wired.com/story/inside-the-second-coming-of-nest/.


a hotel can be very trying. Also, complex actions such as attaching the ends of cables between
smartphones and laptops, or inserting a SIM card into a smartphone, can be irksome, espe-
cially if it is not easy to see which way they should be inserted.

This does not mean that developers are unaware of such usability problems. Several
methods have been devised to help the novice user get set up and become familiarized with
a technology. These methods include pop-up help boxes and contextual videos. Another
approach to helping users has been to make an interface appear friendlier as a way of reas-
suring users—especially those who were new to computers or online banking. One technique
that was first popularized in the 1990s was the use of cartoon-like companions. The assump-
tion was that novices would feel more at ease with a “friend” appearing on the screen and
would be encouraged to try things out after listening, watching, following, and interacting
with it. For example, Microsoft pioneered a class of agent-based software, Bob, aimed at
new computer users (many of whom were viewed as computer-phobic). The agents were pre-
sented as friendly characters, including a pet dog and a cute bunny. An interface metaphor of
a warm, cozy living room, replete with fire and furniture, was also provided (see Figure 6.7),
again intended to convey a comfortable feeling. However, Bob never became a commercial
product. Why do you think that was?

Contrary to the designers’ expectations, many people did not like the idea of Bob, finding
the interface too cute and childish. However, Microsoft did not give up on the idea of making its
interfaces friendlier and developed other kinds of agents, including the infamous Clippy (a paper
clip that had human-like qualities), which shipped as the Office Assistant in Microsoft Office. Clippy
typically appeared at the bottom of a user’s screen whenever the system thought the user needed
help carrying out a particular task (see Figure 6.8a). It, too, was depicted as a cartoon character,
with a warm personality. This time, Clippy was released as a commercial product, but it was not
a success. Many Microsoft users found it too intrusive, distracting them from their work.

Figure 6.7 “At home with Bob” software developed for Windows 95
Source: Microsoft Corporation


A number of online stores and travel agencies also began including automated virtual
agents in the form of cartoon characters who acted as sales agents on their websites. The agents
appeared above or next to a textbox where the user could type in their query. To make them
appear as if they were listening to the user, they were animated in a semi human-like way. An
example of this was Anna from IKEA (see Figure 6.8b) who occasionally nodded, blinked
her eyes, and opened her mouth. These virtual agents, however, have now largely disappeared
from our screens, being replaced by virtual assistants who talk in speech bubbles that have
no physical appearance, or static images of real agents who the user can talk to via LiveChat.

Poorly designed interfaces can sometimes make people feel insulted, stupid, or threat-
ened. The effect can be to annoy them to the point of losing their temper. There are many
situations that cause such negative emotional responses. These include the following:

• When an application doesn’t work properly or crashes
• When a system doesn’t do what the user wants it to do
• When a user’s expectations are not met
• When a system does not provide sufficient information to let the user know what to do
• When error messages pop up that are vague or obtuse
• When the appearance of an interface is too noisy, garish, gimmicky, or patronizing

• When a system requires users to carry out too many steps to perform a task, only to discover a mistake was made somewhere along the line and they need to start all over again
• Websites that are overloaded with text and graphics, making it difficult to locate desired information and resulting in sluggish performance
• Flashing animations, especially flashing banner ads and pop-up ads that cover the user's view and require them to click in order to close them
• The overuse or automatic playing of sound effects and music, especially when selecting options, carrying out actions, running tutorials, or watching website demos
• Featuritis—an excessive number of operations, such as an array of buttons on remote controls
• Poorly laid-out keyboards, touchpads, control panels, and other input devices that cause users to press the wrong keys or buttons persistently

Figure 6.8 Defunct virtual agents: (a) Microsoft's Clippy and (b) IKEA's Anna
Source: Microsoft Corporation

ACTIVITY 6.2
Most people are familiar with the “404 error” message that pops up now and again when a
web page does not upload for the link they have clicked or when they have typed or pasted an
incorrect URL into a browser. What does it mean and why the number 404? Is there a better
way of letting users know when a link to a website is not working? Might it be better for the
web browser to say that it was sorry rather than presenting an error message?

Comment
The number 404 comes from HTTP, the protocol used between web browsers and servers. The first 4 indicates a client error. The
server is telling the user that they have done something wrong, such as misspelling the URL or
requesting a page that no longer exists. The middle 0 refers to a general syntax error, such as
a spelling mistake. The last 4 indicates the specific nature of the error. For the user, however, it
is an arbitrary number. It might even suggest that there are 403 other errors they could make!

Early research by Byron Reeves and Clifford Nass (1996) suggested that computers
should be courteous to users in the same way that people are to one another. They found that
people are more forgiving and understanding when a computer says that it’s sorry after mak-
ing a mistake. A number of companies now provide alternative and more humorous “error”
landing pages that are intended to make light of the embarrassing situation and to take the
blame away from the user (see Figure 6.9).
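As a concrete illustration of this advice, a website can serve an apologetic, helpful page instead of a bare error. A minimal sketch using Flask, a Python web microframework (the wording of the message is invented; the errorhandler hook is part of Flask's API):

    from flask import Flask

    app = Flask(__name__)

    @app.errorhandler(404)
    def friendly_not_found(error):
        # Apologize and suggest a way forward rather than blaming the
        # user with a bare "404 Not Found".
        message = ("Sorry! We couldn't find that page. The link may be "
                   "out of date. Try the search box, or head back to the "
                   "home page.")
        return message, 404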



Figure 6.9 An alternative 404 error message
Source: https://www.creativebloq.com/web-design/best-404-pages-812505

DILEMMA
Should Voice Assistants Teach Kids Good Manners?

Many families now own a smart speaker, such as an Amazon Echo, with a voice assistant like
Alexa running on it. One observation is that young children will often talk to Alexa as if she
was their friend, asking her all sorts of personal questions, such as “Are you my friend?” and
"What is your favorite music?" and "What is your middle name?" They also learn that it is not
necessary to say “please” when asking their questions or “thank you” on receiving a response,
similar to how they talk to other display-based voice assistants, such as Siri or Cortana. Some
parents, however, are worried that this lack of etiquette could develop into a new social norm
that could transfer over to how they talk to real human beings. Imagine the scenario where
Aunt Emma and Uncle Liam come over to visit their young niece for her 5th birthday, and
the first thing that they hear is, “Aunty Emma, get me my drink” or “Uncle Liam, where is
my birthday present?” with nary a “please” uttered. How would you feel if you were treated
like that?

One would hope that parents would continue to teach their children good manners and
the difference between a real human and a voice assistant. However, it is also possible to
configure Alexa and other voice assistants to reward children when they are polite to them,
for example, by saying "By the way, thanks for asking so nicely." Voice assistants could also
be programmed to be much more forceful in how they teach good manners, for example,
saying, "I won't answer you unless you say 'please' each time you ask me a question." Would
this be taking the role of parenting too far? Mike Elgan (2018) cogently argues why voice
assistants should not do this. He questions whether, by extending human social norms to voice
assistants, we are teaching children that technology can have sensibilities and hence should
be thought about in the same way that we consider human feelings. In particular, he wonders
whether, by being polite to a voice assistant, children might begin to think that voice assistants
are capable of feeling appreciated or unappreciated and that they have rights just like humans.
Do you agree with him, or do you think that there is no harm in developing virtual assistants to
teach children good manners and that children will learn? Or do you believe that children will
instinctively know that voice assistants don't have rights or feelings?


6.5 Affective Computing and Emotional AI

Affective computing is concerned with how to use computers to recognize and express
emotions in the same way as humans do (Picard, 1998). It involves designing ways for
people to communicate their emotional states, through using novel, wearable sensors and
creating new techniques to evaluate frustration, stress, and moods by analyzing people’s
expressions and conversations. It also explores how affect influences personal health
(Jacques et al., 2017). More recently, emotional AI has emerged as a research area that
seeks to automate the measurement of feelings and behaviors by using AI technologies
that can analyze facial expressions and voice in order to infer emotions. A number of sens-
ing technologies can be used to achieve this and, from the data collected, predict aspects
of a user’s behavior, for example, forecasting what someone is most likely to buy online
when feeling sad, bored, or happy. The main techniques and technologies that have been
used to do this are as follows:

• Cameras for measuring facial expressions
• Biosensors placed on fingers or palms to measure galvanic skin response, which is used to infer how anxious or nervous someone is, as indicated by an increase in their sweat (see the sketch after this list)

• Affective expression in speech (voice quality, intonation, pitch, loudness, and rhythm)
• Body movement and gestures, as detected by motion capture systems or accelerometer sen-
sors placed on various parts of the body
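As a toy illustration of the biosensor case referred to in the list, galvanic skin response processing typically looks for conductance rising well above a recent baseline. The sketch below flags such spikes; the window and threshold values are invented, and real pipelines filter and normalize the signal first:

    def arousal_events(gsr, window=5, threshold=1.5):
        """Flag samples where skin conductance rises well above the
        running mean of the previous `window` samples, a crude proxy
        for the arousal spikes used to infer anxiety or stress."""
        events = []
        for i in range(window, len(gsr)):
            baseline = sum(gsr[i - window:i]) / window
            if gsr[i] > baseline * threshold:
                events.append(i)
        return events

    samples = [2.0, 2.1, 2.0, 2.2, 2.1, 2.0, 4.5, 2.2]
    print(arousal_events(samples))  # [6]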

The use of automated facial coding is gaining popularity in commercial settings, espe-
cially in marketing and e-commerce. For example, Affdex emotion analytics software from
Affectiva (www.affectiva.com) employs advanced computer vision and machine learning
algorithms to catalog a user’s emotional reactions to digital content, as captured through
a webcam, to analyze how engaged the user is with digital online content, such as movies,
online shopping sites, and advertisements.

Six fundamental emotions are classified based on the facial expressions that Affdex collects:


• Anger
• Contempt
• Disgust
• Fear
• Joy
• Sadness

Each emotion detected is indicated as a percentage beside its label, displayed above the person’s face. For example, Figure 6.10 shows a label
of 100 percent happiness and 0 percent for all the other categories above the woman’s head
on the smartphone display. The white dots overlaying her face are the markers used by the
app when modeling a face. They provide the data that determines the type of facial expres-
sion being shown, in terms of detecting the presence or absence of the following:

• Smiling
• Eye widening
• Brow raising
• Brow furrowing
• Raising a cheek
• Mouth opening
• Upper-lip raising
• Wrinkling of the nose

If a user screws up their face when an ad pops up, this suggests that they feel disgust,
whereas if they start smiling, it suggests that they are feeling happy. The website can then
adapt its ad, movie storyline, or content to what it perceives the person needs at that point, given their emotional state.

Figure 6.10 Facial coding using Affdex software
Source: Affectiva, Inc.
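
To make the adaptation step concrete, here is a minimal Python sketch of how a site might act on facial-coding output. The percentage scores and the choose_content function are hypothetical stand-ins for whatever a commercial SDK such as Affdex actually returns; a real system would weigh many more signals.

    # Hypothetical facial-coding output: emotion label -> confidence (0-100),
    # mirroring the percentage scores shown above the person's face.
    scores = {"anger": 0, "contempt": 0, "disgust": 5,
              "fear": 0, "joy": 85, "sadness": 10}

    def choose_content(scores, threshold=60):
        """Pick a content strategy from the dominant detected emotion."""
        emotion, confidence = max(scores.items(), key=lambda kv: kv[1])
        if confidence < threshold:
            return "keep current content"   # no clear signal; don't adapt
        if emotion == "joy":
            return "show upbeat ad"
        if emotion in ("sadness", "fear"):
            return "show comforting content"
        return "rotate the ad"              # anger, contempt, or disgust

    print(choose_content(scores))           # -> show upbeat ad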


Affectiva has also started to analyze drivers’ facial expressions when on the road with
the goal of improving driver safety. The emotional AI software detects whether a driver is angry and then suggests an intervention. For example, a virtual agent in the car might suggest that the driver take a deep breath, and it might play soothing music to help them relax. In addition to
identifying particular emotions through facial expressions (for example, joy, anger, and sur-
prise), Affectiva uses particular markers to detect drowsiness. These are eye closure, yawning,
and blinking rate. Again, upon detecting when a threshold has been reached for these facial
expressions, the software might trigger an action, such as getting a virtual agent to suggest to
the driver that they pull over where it is safe to do so.
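
A minimal sketch of this kind of threshold rule in Python; the marker names, sample values, and thresholds are all invented for illustration rather than taken from Affectiva’s software:

    # Hypothetical drowsiness markers sampled over the last minute.
    markers = {"eye_closure_secs": 2.4, "yawns": 3, "blink_rate": 34}

    # Illustrative thresholds; a real system would tune these empirically.
    THRESHOLDS = {"eye_closure_secs": 2.0, "yawns": 2, "blink_rate": 30}

    def check_drowsiness(markers, thresholds=THRESHOLDS):
        """Return an intervention once any marker crosses its threshold."""
        exceeded = [name for name, value in markers.items()
                    if value >= thresholds[name]]
        if exceeded:
            return "virtual agent: please pull over where it is safe"
        return None   # no action needed

    print(check_drowsiness(markers))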

Other indirect methods that are used to reveal the emotional state of someone include
eye-tracking, finger pulse, speech, and the words/phrases they use when tweeting, chatting
online, or posting to Facebook (van den Broek, 2013). The level of affect expressed by users,
the language they use, and the frequency with which they express themselves when using
social media can all indicate their mental state, well-being, and aspects of their personality
(for instance, whether they are an extrovert or introvert, neurotic or calm, and so on). Some
companies may try to use a combination of these measures, such as facial expressions and
the language that people use when online, while others may focus on just one aspect, such as the
tone of their voice when answering questions over the phone. This type of indirect emotion
detection is beginning to be used to help infer or predict someone’s behavior, for example,
determining their suitability for a job or how they will vote in an election.

Another application of biometric data is in streaming video games, where spectators watch players, known as streamers, play video games. The most popular site is
Twitch; millions of viewers visit it each day to watch others compete in games, such as Fort-
nite. The biggest streamers have become a new breed of celebrity, like YouTubers. Some even
have millions of dedicated fans. Various tools have been developed to enhance the viewers’
experience. One is called All the Feels, which provides an overlay of biometric and webcam-
derived data of a streamer onto the screen interface (Robinson et al., 2017). A dashboard
provides a visualization of the streamer’s heart rate, skin conductance, and emotions. This
additional layer of data has been found to enhance the spectator experience and improve the
connection between the streamer and spectators. Figure 6.11 shows the emotional state of a
streamer using the All the Feels interface.

Figure 6.11 All the Feels app showing the biometric data of a streamer playing a videogame
Source: Used courtesy of Katherine Isbister
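
As a rough illustration of what such an overlay draws on, the following Python sketch formats one biometric sample for display; the field names and values are invented, not taken from the All the Feels implementation:

    # One hypothetical sample of streamer data attached to a stream timestamp.
    sample = {
        "t": "00:42:17",             # position in the stream
        "heart_rate_bpm": 112,       # from a heart-rate sensor
        "skin_conductance_us": 6.8,  # microsiemens, from a GSR sensor
        "emotion": "excited",        # from webcam-based facial coding
    }

    def render_overlay(s):
        return (f"[{s['t']}] HR {s['heart_rate_bpm']} bpm | "
                f"GSR {s['skin_conductance_us']} uS | {s['emotion']}")

    print(render_overlay(sample))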


6.6 Persuasive Technologies and Behavioral Change

A diversity of techniques has been used at the interface level to draw people’s attention
to certain kinds of information in an attempt to change what they do or think. Pop-up
ads, warning messages, reminders, prompts, personalized messages, and recommendations
are some of the methods that are being deployed on a computer or smartphone interface.
Examples include Amazon’s one-click mechanism that makes it easy to buy something on its
online store and recommender systems that suggest specific books, hotels, restaurants, and so
forth, that a reader might want to try based on their previous purchases, choices, and taste.
The various techniques that have been developed have been referred to as persuasive design
(Fogg, 2009). They include enticing, cajoling, or nudging someone into doing something
through the use of persuasive technology.

Technology interventions have also been developed to change people’s behaviors in other
domains besides commerce, including safety, preventative healthcare, fitness, personal rela-
tionships, energy consumption, and learning. Here the emphasis is on changing someone’s
habits or doing something that will improve an individual’s well-being through monitoring
their behavior. An early example was Nintendo’s Pokémon Pikachu device (see Figure 6.12)
that was designed to motivate children into being more physically active on a consistent
basis. The owner of the digital pet that lives in the device was required to walk, run, or jump
each day to keep it alive. The wearer received credits for each step taken—the currency being
watts that could be used to buy Pikachu presents. Twenty steps on the pedometer rewarded
the player with 1 watt. If the owner did not exercise for a week, the virtual pet became angry
and refused to play anymore. This use of positive rewarding and sulking can be a powerful
means of persuasion, given that children often become emotionally attached to their virtual
pets, especially when they start to care for them.
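
The reward logic is simple enough to sketch in a few lines of Python; the class below is a hypothetical reconstruction of the behavior described above (20 steps per watt, sulking after a week of inactivity), not Nintendo’s actual code:

    class VirtualPet:
        STEPS_PER_WATT = 20   # twenty pedometer steps earn one watt

        def __init__(self):
            self.watts = 0
            self.idle_days = 0

        def log_day(self, steps):
            self.watts += steps // self.STEPS_PER_WATT
            self.idle_days = 0 if steps > 0 else self.idle_days + 1

        @property
        def mood(self):
            # After a week without exercise, the pet refuses to play.
            return "angry, refuses to play" if self.idle_days >= 7 else "happy"

    pet = VirtualPet()
    pet.log_day(4000)                # a day's walking
    print(pet.watts, pet.mood)       # -> 200 happy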

BOX 6.3
Is It OK for Technology to Work Out How You Are Feeling?

Do you think it is ethical that technology is trying to read your emotions from your facial
expressions or from what you write in your tweets and, based on its analysis, filter the online
content that you are browsing, such as ads, news, or a movie to match your mood? Might
some people think it is an invasion of their privacy?

Human beings will suggest things to each other, often based on what they think the other
is feeling. For example, they might suggest a walk in the park to cheer them up. They might
also suggest a book to read or a movie to watch. However, some people may not like the idea
that an app can do the same, for example, suggesting what you should eat, watch, or do based
on how it analyzes your facial expressions.


HAPIfork is a device that was developed to help someone monitor and track their eating
habits (see Figure 6.13). If it detects that they are eating too quickly, it will vibrate (similar
to the way a smartphone does when on silent mode), and an ambient light will appear at the
end of the fork, providing the eater with real-time feedback intended to slow them down.
The assumption is that eating too fast results in poor digestion and poor weight control and
that making people aware that they are gobbling their food down can help them think about

Figure 6.12 Nintendo’s Pokémon Pikachu device
Source: http://nintendo.wikia.com/wiki/File:Pok%C3%A9mon_Pikachu_2_GS_(Device)

ACTIVITY 6.3
Watch these two videos:

The Piano Staircase: http://youtu.be/2lXh2n0aPyw
The Outdoor Bin: http://youtu.be/cbEKAwCoCKw
Do you think that such playful methods are effective at changing people’s behavior?

Comment
Volkswagen sponsored an open competition, called The Fun Theory, asking people to transform mundane artifacts into novel, enjoyable user experiences in an attempt to change people’s
behavior for the better. The idea was to encourage a desired behavior by making it more
fun. The Piano Staircase and the Outdoor Bin are the most well-known examples; the stairs
sounded like piano keys being played as they were climbed, while the bin sounded like a well
echoing when something was thrown into it. Research has shown that using these kinds of
playful methods is very engaging, and they can help people overcome their social inhibition
of taking part in an activity in a public place (Rogers et al., 2010a).



how to eat more slowly at a conscious level. Other data is collected about how long it took
them to finish their meal, the number of fork servings per minute, and the time between them.
These are turned into a dashboard of graphs and statistics so that the user can see each week
whether their fork behavior is improving.
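
A minimal Python sketch of these metrics, computed from the times at which fork servings are detected; the 10-second “too fast” gap is an invented stand-in for whatever interval the real device uses:

    def meal_stats(serving_times, min_gap=10):
        """serving_times: seconds since the meal began, one per fork serving."""
        duration = serving_times[-1] - serving_times[0]
        gaps = [b - a for a, b in zip(serving_times, serving_times[1:])]
        per_minute = 60 * len(gaps) / duration if duration else 0.0
        fast = sum(1 for g in gaps if g < min_gap)  # servings that would vibrate
        return {"duration_s": duration,
                "servings_per_min": round(per_minute, 1),
                "fast_servings": fast}

    print(meal_stats([0, 8, 15, 40, 70, 95]))
    # -> {'duration_s': 95, 'servings_per_min': 3.2, 'fast_servings': 2}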

Nowadays, there are many kinds of mobile apps and personal tracking devices available
that are intended to help people monitor various behaviors and change them based on the
data collected and displayed back to them. These devices include fitness trackers, for exam-
ple, Fitbit, and weight trackers, such as smart scales. Similar to HAPIfork, these devices are
designed to encourage people to change their behavior by displaying dashboards of graphs
showing how much exercise they have done or weight they have lost over a day, week, or
longer period, compared with what they have done in the previous day, week, or month.
These results can also be compared, through online leaderboards and charts, with how well
they have done versus their peers and friends. Other techniques employed to encourage peo-
ple to exercise more or to move when sedentary include goal setting, reminders, and rewards
for good behavior. A survey of how people use such devices in their everyday lives revealed
that people often bought them simply to try them or were given one as a present, rather than
specifically trying to change a particular behavior (Rooksby et al., 2014). How, what, and
when they tracked depended on their interests and lifestyles; some used them as a way of
showing how fast they could run during a marathon or cycle on a course or how they could
change their lifestyle to sleep or eat better.

An alternative approach to collecting quantified data about a behavior automatically is to ask people to write down manually how they are feeling, to rate their mood, and to reflect upon how they felt about themselves in the past. A mobile app called Echo,
for example, asked people to write a subject line, rate their happiness at that moment, and
add a description, photos, and/or videos if they wanted to (Isaacs et al., 2013). Sporadically,
the app then asked them to reflect on previous entries. An assumption was that this type

Figure 6.13 Someone using the HAPIfork in a restaurant
Source: Helen Sharp


of technology-mediated reflection could increase well-being and happiness. Each reflection
was shown as a stacked card with the time and a smiley happiness rating. People who used
the Echo app reported on the many positive effects of doing so, including reliving positive
experiences and overcoming negative experiences by writing them down. The double act of
recording and reflecting enabled them to generalize from the positive experiences and draw
positive lessons from them.
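
The structure of such an entry is easy to imagine; this Python sketch is a hypothetical reconstruction of the kind of record Echo collected and resurfaced, with invented field names and an assumed rating scale:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class MoodEntry:
        subject: str
        happiness: int                 # rating at that moment (scale assumed)
        description: str = ""
        media: list = field(default_factory=list)   # optional photos/videos

    journal = [MoodEntry("Walk in the park", 8, "Sunny, felt relaxed"),
               MoodEntry("Deadline stress", 3)]

    def reflection_prompt(journal):
        """Sporadically resurface a past entry for reflection, as Echo did."""
        past = random.choice(journal)
        return (f"You rated '{past.subject}' {past.happiness} at the time. "
                f"How do you feel about it now?")

    print(reflection_prompt(journal))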

The global concern about climate change has also led a number of HCI researchers to
design and evaluate various energy-sensing devices that display real-time feedback. One goal
is to find ways of helping people reduce their energy consumption, and it is part of a larger
research agenda called sustainable HCI (see Mankoff et al., 2008; DiSalvo et al., 2010; Hazas et al., 2012). The focus is to persuade people to change their everyday habits with respect to
environmental concerns, such as reducing their own carbon footprint, their community’s
footprint (for example, a school or workplace), or an even larger organization’s carbon foot-
print (such as a street, town, or country).

Extensive research has shown that domestic energy use can be reduced by providing
households with feedback on their consumption (Froehlich et al., 2010). The frequency of
feedback is considered important; continuous or daily feedback on energy consumption has
been found to yield higher savings results than monthly feedback. The type of graphical
representation also has an effect. If the image used is too obvious and explicit (for instance,
a finger pointing at the user), it may be perceived as too personal, blunt, or “in your face,”
resulting in people objecting to it. In contrast, simple images (for example, an infographic
or emoticon) that are more anonymous but striking and whose function is to get people’s
attention may be more effective. They may encourage people to reflect more on their energy
use and even promote public debate about what is represented and how it affects them.
However, if the image used is too abstract and implicit, other meanings may be attributed to
it, such as simply being an art piece (such as an abstract painting with colored stripes that
change in response to the amount of energy used), resulting in people ignoring it. The ideal
may be somewhere in between. Peer pressure can also be effective, where peers, parents, or
children chide or encourage one another to turn lights off, take a shower instead of a bath,
and so on.

Another influencing factor is social norms. In a classic study by P. Wesley Schultz et al. (2007), households were shown how their energy consumption compared with their neigh-
borhood average. Households above the average tended to decrease their consumption, but
those using less electricity than average tended to increase their consumption. The study
found that this “boomerang” effect could be counteracted by providing households with an
emoticon along with the numerical information about their energy usage: households using
less energy than average continued to do so if they received a smiley icon; households using more
than average decreased their consumption even more if they were given a sad icon.
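
The study’s corrective rule can be captured in a few lines; this Python sketch illustrates the principle, with made-up consumption figures:

    def energy_feedback(household_kwh, neighborhood_avg_kwh):
        # Below-average households get a smiley so that they keep it up,
        # counteracting the "boomerang" drift back toward the average.
        icon = ":)" if household_kwh <= neighborhood_avg_kwh else ":("
        return (f"You used {household_kwh} kWh this month "
                f"(neighborhood average: {neighborhood_avg_kwh} kWh) {icon}")

    print(energy_feedback(250, 300))   # below average -> smiley
    print(energy_feedback(340, 300))   # above average -> sad face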

In contrast to the Schultz study, where each household’s energy consumption was kept
private, the Tidy Street project (Bird and Rogers, 2010) that was run in Brighton in the
United Kingdom created a large-scale visualization of the street’s electricity usage by spray-
ing a stenciled display on the road surface using chalk (see Figure 6.14). The public display
was updated each day to represent how the average electricity usage of the street compared
to the city of Brighton’s average. The goal was to provide real-time feedback that all of the
homeowners and the general public could see change each day over a period of three weeks.
The street graph also proved to be very effective in getting people who lived on Tidy Street


to talk to each other about their electricity consumption and habits. It also encouraged them to
talk with the many passersby who walked up and down the street. The outcome was to reduce
electricity consumption in the street by 15 percent, which was considerably more than other
projects in this area have been able to achieve.

BOX 6.4
The Darker Side: Deceptive Technology

Technology is increasingly being used to deceive people into parting with their personal
details, which allows Internet fraudsters to access their bank accounts and draw money from
them. Authentic-looking letters, appearing to be sent from eBay, PayPal, and various leading
banks, are spammed across the world, ending up in people’s email in-boxes with messages
such as “During our regular verification of accounts, we couldn’t confirm your information.
Please click here to update and verify your information.” Given that many people have an
account with one of these corporations, there is a good chance that they will be misled and
unwittingly believe what is being asked of them, only to discover a few days later that they are
several thousand dollars worse off. Similarly, letters from supposedly super-rich individuals in
far-away countries, offering a share of their assets if the email recipient provides them with
their bank details, have persistently been spammed worldwide. While many people are becom-
ing increasingly wary of what are known as phishing scams, there are still many vulnerable
individuals who are gullible to such tactics.

The term phishing is a play on the word fishing, referring to the sophisticated way in which fraudsters lure users into giving up their financial information and passwords. Internet fraudsters are becoming smarter and are constantly changing their tactics. While the art of deception is centuries old, the increasing, pervasive, and often ingenious use of the web to trick people into divulging personal information can have catastrophic effects on society as a whole.

Figure 6.14 Aerial view of the Tidy Street public electricity graph
Source: Helen Sharp


6.7 Anthropomorphism

Anthropomorphism is the propensity people have to attribute human qualities to animals
and objects. For example, people sometimes talk to their computers as if they were humans,
treat their robot cleaners as if they were their pets, and give all manner of cute names to their
mobile devices, routers, and so on. Advertisers are well aware of this phenomenon and often
create human-like and animal-like characters out of inanimate objects to promote their prod-
ucts. For example, breakfast cereals, butter, and fruit drinks have all been transmogrified into
characters with human qualities (they move, talk, have personalities, and show emotions),
enticing the viewer to buy them. Children are especially susceptible to this kind of magic, as
witnessed by their love of cartoons where all manner of inanimate objects are brought to life
with human-like qualities.

The finding that people, especially children, have a propensity to accept and enjoy objects
that have been given human-like qualities has led many designers to capitalize on it, most
notably in the design of virtual agents and interactive dolls, robots, and cuddly toys. Early
commercial products like ActiMates were designed to encourage children to learn by playing
with them. One of the first—Barney (a dinosaur)—attempted to motivate play in children
by using human-based speech and movement (Strommen, 1998). The toys were programmed
to react to the child and make comments while watching TV or working together on a
computer-based task. In particular, Barney was programmed to congratulate the child when-
ever they produced a right answer and also to react to the content on-screen with appropriate
emotions, for instance, cheering at good news and expressing concern at bad news. Interac-
tive dolls have also been designed to talk, sense, and understand the world around them,
using sensor-based technologies, speech recognition, and various mechanical servos embed-
ded in their bodies. For example, the interactive doll Luvabella exhibits facial expressions,
such as blinking, smiling, and making baby cooing noises in response to how her owner plays
and looks after her. The more a child plays with her, the more the doll learns to speak, transforming her babble into words and phrases.

Furnishing technologies with personalities and other human-like attributes can make
them more enjoyable and fun to interact with. They can also motivate people to carry out
various activities, such as learning. Being addressed in the first person (for instance, “Hello,
Noah! Nice to see you again. Welcome back. Now what were we doing last time? Oh yes,
Exercise 5. Let’s start again.”) is more appealing than being addressed in the impersonal third
person (“User 24, commence Exercise 5.”), especially for children. It can make them feel
more at ease and reduce their anxiety. Similarly, interacting with screen characters like tutors
and wizards can be more engaging than interacting with a dialog box.

A YouTube video (https://youtu.be/au2Vg9xRZZ0) shows Luvabella in action
and asks viewers to decide whether the interactive doll is creepy or cool. What do
you think?



ACTIVITY 6.4
A Robot or a Cuddly Pet?
Early robot pets, such as Sony’s AIBO, were made of hard materials that made them look shiny
and clunky. In contrast, a more recent trend has been to make them look and feel more like real
pets by covering them up in fur and making them behave in more cute, pet-like ways. Two con-
trasting examples are presented in Figure 6.15a and 6.15b. Which do you prefer and why?

Comment
Most people like stroking pets, so they may prefer a soft pet robot that they can also stroke,
such as the one shown in Figure 6.15b. A motivation for making robot pets cuddly is to
enhance the emotional experience people receive through using their sense of touch. For
example, the Haptic Creature on the right is a robot that mimics a pet that might sit in
your lap, such as a cat or a rabbit (Yohanan and MacLean, 2008). It is made up of a body,
head, and two ears, as well as mechanisms that simulate breathing, a vibrating purr, and the
warmth of a living creature. The robot “detects” the way it is touched by means of an array
of (roughly 60) touch sensors laid out across its entire body and an accelerometer. When the
Haptic Creature is stroked, it responds accordingly, using the ears, breathing, and purring to
communicate its emotional state through touch. On the other hand, the sensors are also used
by the robot to detect the human’s emotional state through touch. Note how the robot has
no eyes, nose, or mouth. Facial expressions are the most common way humans communicate
emotional states. Since the Haptic Creature communicates and senses emotional states solely
through touch, the face was deliberately left off to prevent people from trying to “read” emo-
tion from it.

(a) (b)

Figure 6.15 Robot pets: (a) Aibo and (b) The Haptic Creature
Source: (a) Jennifer Preece, (b) Used courtesy of Steve Yohanan. Photo by Martin Dee


A number of commercial physical robots have been developed specifically to support
care giving for the elderly. Early ones were designed to be about 2 feet tall and were made
from white plastic with colored parts that represented clothing or hair. An example was Zora
(see Figure 6.16), developed in Belgium, that was marketed as a social robot for healthcare.
One was bought by a nursing home in France. Many of the patients developed an emotional
attachment to their Zora robot, holding it, cooing, and even giving it kisses on the head.
However, some people found this kind of robot care a little demeaning. Certainly, it can never
match the human touch and warmth that patients need, but there is no harm in it playing an
entertaining and motivating role alongside human caregivers.

This video demonstrates how the Zora robot was used to entertain seniors and to
help them get some exercise: https://youtu.be/jcMNY5EnQNQ.

Figure 6.16 The Zora robot
Source: http://zorarobotics.be/

In-depth Activity
This in-depth activity requires you to try one of the emotion recognition apps available and
to see how well it fares in recognizing different people’s facial expressions. Download the
AffdexMe app or Age Emotion Detector for Apple or Android. Take a photo of yourself
looking natural and see what emotion it suggests.




1. How many emotions does it recognize?
2. Try to make a face for each of the following: sadness, anger, joy, fear, disgust, and surprise. After making a face for each, see how well the app detects the emotion you were expressing.
3. Ask a couple of other people to try it. See whether you can find someone with a beard and ask them to try, too. Does facial hair make it more difficult for the app to recognize an emotion?
4. What other application areas do you think these kinds of apps could be used for besides advertising?
5. What ethical issues does facial recognition raise? Has the app provided sufficient information as to what it does with the photos taken of people’s faces?
6. How well would the recognition software work when used in a more natural setting where the user is not making a face for the camera?

Summary
This chapter described the different ways that interactive products can be designed (both delib-
erately and inadvertently) to make people respond in certain ways. The extent to which users
will learn, buy a product online, quit a bad habit, or chat with others depends on the believ-
ability of the interface, how comfortable they feel when using a product, and/or how much
they can trust it. If the interactive product is frustrating to use, annoying, or patronizing, users
will easily become angry and despondent and often they stop using it. If, on the other hand,
the product is pleasurable, is enjoyable to use, and makes people feel comfortable and at ease,
then they will continue to use it, make a purchase, return to the website, or continue to learn.

This chapter also described various interaction mechanisms that can be used to elicit
positive emotional responses in users and ways of avoiding negative ones. Further, it described
how new technology has been developed to detect emotional states.

Key Points
• Emotional aspects of interaction design are concerned with how to facilitate certain states (for example, pleasure) or avoid certain reactions (such as frustration) in user experiences.
• Well-designed interfaces can elicit good feelings in people.
• Aesthetically pleasing interfaces can be a pleasure to use.
• Expressive interfaces can provide reassuring feedback to users as well as be informative and fun.
• Badly designed interfaces often make people frustrated, annoyed, or angry.
• Emotional AI and affective computing use AI and sensor technology for detecting people’s emotions by analyzing their facial expressions and conversations.
• Emotional technologies can be designed to persuade people to change their behaviors or attitudes.
• Anthropomorphism is the attribution of human qualities to objects.
• Robots are being used in a variety of settings, including households and assisted-living homes.


Further Reading

CALVO, R. A. and PETERS, D. (2014) Positive Computing. MIT Press. This book discusses how
to design technology for well-being to make a happier and healthier world. As the title sug-
gests, it is positive in its outlook. It covers the psychology of well-being, including empathy,
mindfulness, joy, compassion, and altruism. It also describes the opportunities and chal-
lenges facing interaction designers who want to develop technology that can improve peo-
ple’s well-being.

HÖÖK, K. (2018) Designing with the Body. MIT Press. This book proposes that interaction design
should consider the experiential, felt, and aesthetic stance that encompasses the design and
use cycle. The approach suggested by the author is called soma design, where body and
movements are viewed as very much part of the design process, and where a slow, thoughtful
process is promoted that considers fundamental human values. It is argued that adopting this
stance can yield better products and create healthier, more sustainable companies.

LEDOUX, J. E. (1998) The Emotional Brain: The Mysterious Underpinnings of Emotional
Life. Simon & Schuster. This book explains what causes us to feel fear, love, hate, anger, and
joy, and it explores whether we control our emotions versus them controlling us. The book
also covers the origins of human emotions and explains that many evolved to enable us
to survive.

McDUFF, D. & CZERWINSKI, M. (2018) Designing Emotionally Sentient Agents. Com-
munications of the ACM, Vol. 61 No. 12, pages 74–83. This article provides an accessible
overview of the burgeoning area of emotional agents. It presents the challenges, opportuni-
ties, dilemmas, concerns, and current applications that are now being developed, including
bots, robots, and agents.

NORMAN, D. (2005) Emotional Design: Why We Love (or Hate) Everyday Things. Basic
Books. This book is an easy read while at the same time being thought-provoking. We get to
see inside Dan Norman’s kitchen and learn about the design aesthetics of his collection of
teapots. The book also includes essays on the emotional aspects of robots, computer games,
and a host of other pleasurable interfaces.

WALTER, A. (2011) Designing for Emotion. A Book Apart. This short book is targeted at web designers who want to understand how to design websites that users
will enjoy and want to return to. It covers the classic literature on emotions, and it proposes
practical approaches to emotional web design.


Chapter 7

I N T E R F A C E S

7.1 Introduction

7.2 Interface Types

7.3 Natural User Interfaces and Beyond

7.4 Which Interface?

Objectives
The main goals of the chapter are to accomplish the following:

• Provide an overview of the many different kinds of interfaces.
• Highlight the main design and research considerations for each of the interfaces.
• Discuss what is meant by a natural user interface (NUI).
• Consider which interface is best for a given application or activity.

7.1 Introduction

When considering how to solve a user problem, the default solution that many developers
choose to design is an app that can run on a smartphone. Making this easier still are many
easy-to-use app developer tools that can be freely downloaded. It is hardly surprising, there-
fore, to see just how many apps there are in the world. In December 2018, Apple, for exam-
ple, had a staggering 2 million apps in its store, many of which were games.

Despite the ubiquity of the smartphone app industry, the web continues to proliferate
in offering services, content, resources, and information. A central concern is how to design
them to be interoperable across different devices and browsers, which takes into account the
varying form factors, size, and shape of smart watches, smartphones, laptops, smart TVs, and
computer screens. Besides the app and the web, many other kinds of interfaces have been
developed, including voice interfaces, touch interfaces, gesture interfaces, and multimodal
interfaces.

The proliferation of technological developments has encouraged different ways of think-
ing about interaction design and UX. For example, input can be via mice, touchpads, pens,
remote controllers, joysticks, RFID readers, gestures, and even brain-computer interaction.
Output is equally diverse, appearing in the form of graphical interfaces, speech, mixed reali-
ties, augmented realities, tangible interfaces, wearable computing, and more.


The goal of this chapter is to give you an overview of the diversity of interfaces that can
be developed for different environments, people, places, and activities. We present a catalog
of 20 interface types, starting with command-based and ending with smart ones. For each
interface, we present an overview and outline the key research and design considerations.
Some are only briefly touched upon, while others, which are more established in interaction
design, are described in greater depth.

7.2 Interface Types

Numerous adjectives have been used to describe the different types of interfaces that have
been developed, including graphical, command, speech, multimodal, invisible, ambient, affec-
tive, mobile, intelligent, adaptive, smart, tangible, touchless, and natural. Some of the inter-
face types are primarily concerned with a function (for example, to be intelligent, to be
adaptive, to be ambient, or to be smart), while others focus on the interaction style used
(such as command, graphical, or multimedia), the input/output device used (for instance,
pen-based, speech-based, or gesture-based), or the platform being designed for (for example,
tablet, mobile, PC, or wearable). Rather than cover every possible type that has been devel-
oped or described, we have chosen to select the main types of interfaces that have emerged
over the past 40 years. The interface types are loosely ordered in terms of when they were
developed. They are numbered to make it easier to find a particular one. (See the following
list for the complete set.) It should be noted, however, that this classification is for con-
venience of reference. The interface entries are not mutually exclusive since some products
can appear in two or more categories. For example, a smartphone can be considered to be
mobile, touch, or wearable.

The types of interfaces covered in this chapter include the following:

1. Command
2. Graphical
3. Multimedia
4. Virtual reality
5. Web
6. Mobile
7. Appliance
8. Voice
9. Pen
10. Touch
11. Gesture
12. Haptic
13. Multimodal
14. Shareable


15. Tangible
16. Augmented reality
17. Wearables
18. Robots and drones
19. Brain-computer interaction
20. Smart

NOTE
This chapter is not meant to be read from beginning to end; rather, it should be dipped into as needed to find out about a particular type of interface.

7.2.1 Command-Line Interfaces
Early interfaces required the user to type in commands that were typically abbreviations (for
example, ls) at the prompt symbol appearing on the computer display, to which the system
responded (for example, by listing current files). Another way of issuing commands is by
pressing certain combinations of keys (such as Shift+Alt+Ctrl). Some commands are also a
fixed part of the keyboard, such as delete, enter, and undo, while other function keys can be
programmed by the user to issue specific commands (for instance, mapping F11 to a print action).

Command-line interfaces were largely superseded by graphical interfaces that incor-
porated commands such as menus, icons, keyboard shortcuts, and pop-up/predictive text
commands as part of an application. Where command-line interfaces continue to have an
advantage is when users find them easier and faster to use than equivalent menu-based
systems (Raskin, 2000). Users also prefer command-line interfaces for performing certain
operations as part of a complex software package, such as for CAD environments (such as
Rhino3D and AutoCAD), to allow expert designers to interact rapidly and precisely with the
software. They also provide scripting for batch operations, and they are being increasingly
used on the web, where the search bar acts as a general-purpose command-line facility, for
example, www.yubnub.org.

System administrators, programmers, and power users often find that it is much more
efficient and quicker to use command languages such as Microsoft’s PowerShell. For exam-
ple, it is much easier to delete 10,000 files in one go by using one command rather than
scrolling through that number of files and highlighting those that need to be deleted. Com-
mand languages have also been developed for visually impaired people to allow them to
interact in virtual worlds, such as Second Life (see Box 7.1).
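
To illustrate the kind of batch operation that makes command languages efficient, here is a short Python sketch that removes every file matching a pattern in one scripted step; the directory and pattern are made up, and the call is left commented out since it really would delete files:

    from pathlib import Path

    def delete_matching(directory, pattern="*.tmp"):
        """Delete all files matching the pattern; return how many were removed."""
        count = 0
        for f in Path(directory).glob(pattern):
            f.unlink()      # remove this file
            count += 1
        return count

    # One call instead of scrolling through and highlighting 10,000 files:
    # print(delete_matching("/tmp/build_artifacts"))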

Here is a selection of classic HCI videos on the Internet that demonstrate pioneering interfaces:
The Sketchpad: Ivan Sutherland (1963) describes the first interactive graphical interface: https://youtu.be/6orsmFndx_o
The Mother of All Demos: Douglas Engelbart (1968) describes the first WIMP: http://youtu.be/yJDv-zdhzMy
Put That There (1979): MIT demonstrates the first speech and gesture interface: https://youtu.be/RyBEUyEtxQo
Unveiling the Genius of Multitouch Interface Design: Jeff Han gives a TED talk (2007): http://youtu.be/ac0E6deG4AU
Intel’s Future Technology Vision (2012): See http://youtu.be/g_cauM3kccI



Watch a video demonstration of TextSL at http://youtu.be/0Ba_w7u44MM.

Research and Design Considerations
In the 1980s, much research investigated ways of optimizing command interfaces. The form of
the commands (including use of abbreviations, full names, and familiar names), syntax (such
as how best to combine different commands), and organization (for instance, how to structure
options) are examples of some of the main areas that have been investigated (Shneiderman, 1998).
A further concern was which command names would be the easiest to remember. A number of
variables were tested, including how familiar users were with the chosen names. Findings from
a number of studies, however, were inconclusive; some found specific names were better remem-
bered than general ones (Barnard et al., 1982), others showed that names selected by users them-
selves were preferable (see Ledgard et al., 1981; Scapin, 1981), while yet others demonstrated that
high-frequency words were better remembered than low-frequency ones (Gunther et al., 1986).

The most relevant design principle is consistency (see Chapter 1, “What Is Interaction
Design?”). Therefore, the method used for labeling/naming the commands should be chosen
to be as consistent as possible; for example, always use the first letters of the operation when
using abbreviations.

BOX 7.1
Command Interfaces for Virtual Worlds

Virtual worlds, such as Second Life, have become popular places for learning and social-
izing. Unfortunately, people who are visually impaired cannot interact with them visually.
A command-based interface, called TextSL, was developed to enable them to participate using
a screen reader (Folmer et al., 2009). Commands can be issued to enable the user to move
their avatar around, interact with others, and find out about the environment in which they
are located. Figure 7.1 shows that the user has issued the command for their avatar to smile
and say hello to other avatars who are sitting by a log fire.

Figure 7.1 Second Life command-based interface for visually impaired users

Source: Used courtesy of Eelke Folmer


7.2.2 Graphical User Interfaces
The Xerox Star interface (described in Chapter 3, “Conceptualizing Interaction”) led to the
birth of the graphical user interface (GUI), opening up new possibilities for users to inter-
act with a system and for information to be presented and represented within a graphical
interface. Specifically, new ways of visually designing the interface became possible, which
included the use of color, typography, and imagery (Mullet and Sano, 1995). The original
GUI was called a WIMP (windows, icons, menus, pointer) and consisted of the following:

• Windows: Sections of the screen that can be scrolled, stretched, overlapped, opened, closed, and moved using a mouse
• Icons: Pictograms that represent applications, objects, commands, and tools that are opened or activated when clicked on
• Menus: Lists of options that can be scrolled through and selected in the way a menu is used in a restaurant
• Pointing device: A mouse controlling the cursor as a point of entry to the windows, menus, and icons on the screen

The first generation of WIMP interfaces were primarily boxy in design; user interaction
took place through a combination of windows, scroll bars, checkboxes, panels, palettes,
and dialog boxes that appeared on the screen in various forms (see Figure 7.2). Develop-
ers were largely constrained by the set of widgets available to them, of which the dialog
box was most prominent. (A widget is a standardized display representation of a control,
like a button or scroll bar, that can be manipulated by the user.) Nowadays, GUIs have
been adapted for mobile and touchscreens. Instead of using a mouse and keyboard as
input, the default action for most users is to swipe and touch using a single finger when
browsing and interacting with digital content. (For more on this subject, see the sections on
touch and mobile interfaces.)

Figure 7.2 The boxy look of the first generation of GUIs


The basic building blocks of the WIMP are still part of the modern GUI used as part of
a display, but they have evolved into a number of different forms and types. For example,
there are now many different types of icons and menus, including audio icons and audio
menus, 3D animated icons, and even tiny icon-based menus that can fit onto a smartwatch
screen (see Figure 7.3). Windows have also greatly expanded in terms of how they are used
and what they are used for; for example, a variety of dialog boxes, interactive forms, and
feedback/error message boxes have become pervasive. In addition, a number of graphical
elements that were not part of the WIMP interface have been incorporated into the GUI.
These include toolbars and docks (a row or column of available applications and icons of
other objects such as open files) and rollovers (where text labels appear next to an icon or
part of the screen as the cursor is rolled over it). Here, we give an overview of the design
considerations concerning the basic building blocks of the WIMP/GUI: windows, menus,
and icons.

Window Design
Windows were invented to overcome the physical constraints of a computer display, ena-
bling more information to be viewed and tasks to be performed on the same screen. Multi-
ple windows can be opened at any one time, for example, web browsers, word processing
documents, photos, and slideshows, enabling the user to switch between them when needing
to look at or work on different documents, files, and apps. They can also enable multiple
instances of one app to be opened, such as when using a web browser.

Scroll bars within windows also enable more information to be viewed than is possible on one screen. They can be placed vertically and horizontally in windows to enable upward, downward, and sideways movement through a document and can be controlled
using a touchpad, mouse, or arrow keys. Touch interfaces enable users to scroll content sim-
ply by swiping the screen to the left or right or up or down.

Figure 7.3 Simple smartwatch menus with one, two, or three options
Source: https://developer.apple.com/design/human-interface-guidelines/watchos/interface-elements/menus/

https://developer.apple.com/design/human-interface-guidelines/watchos/interface-elements/menus/


One of the problems of having multiple windows open is that it can be difficult to find
specific ones. Various techniques have been developed to help users locate a particular win-
dow, a common one being to provide a list as part of an app menu. macOS also provides a
function that shrinks all windows that are open for a given application so that they can be
seen side by side on one screen. The user needs only to press one function key and then move
the cursor over each one to see what they are called in addition to a visual preview. This tech-
nique enables users to see at a glance what they have in their workspace, and it also allows
them easily to select one to bring forward. Another option is to display all of the windows
open for a particular application, for example, Microsoft Word. Web browsers, like Firefox,
also show thumbnails of the top sites visited and a selection of sites that you have saved or
visited, which are called highlights (see Figure 7.4).

A particular kind of window that is commonly used is the dialog box. Confirmations,
error messages, checklists, and forms are presented through dialog boxes. Information in the
dialog boxes is often designed to guide user interaction, with the user following the sequence
of options provided. Examples include a sequenced series of forms (such as Wizards) present-
ing the necessary and optional choices that need to be filled in when choosing a PowerPoint
presentation or an Excel spreadsheet. The downside of this style of interaction is that there
is a tendency to cram too much information or data entry fields into one box, making the
interface confusing, crowded, and difficult to read (Mullet and Sano, 1995).

Figure 7.4 Part of the home page for the Firefox browser showing thumbnails of top sites visited
and suggested highlight pages (bottom rows)


BOX 7.2
The Joys of Filling In Forms on the Web

For many of us, shopping on the Internet is generally an enjoyable experience. For exam-
ple, choosing a book on Amazon or flowers from Interflora can be done at our leisure and
convenience. The part that we don’t enjoy, however, is filling in the online form to give the
company the necessary details to pay for the selected items. This can often be a frustrating and
time-consuming experience, especially as there is much variability between sites. Sometimes,
it requires users to create an account and a new password. At other times, guest checkout is
enabled. However, if the site has a record of your email address in its database, it won’t allow
you to use the guest option. If you have forgotten your password, you need to reset it, and this
requires switching from the form to your email account. Once past this hurdle, different kinds
of interactive forms pop up for you to enter your mailing address and credit card details. The
form may provide the option of finding your address by allowing you to enter a postal or ZIP
code. It may also have asterisks that denote fields that must be filled in.

Having so much inconsistency can frustrate the user, as they are unable to use the same
mental model for filling in checkout forms. It is easy to overlook or miss a box that needs to
be filled in, and after submitting the page, an error message may come back from the system
saying it is incomplete. This may require the user to enter sensitive information again,
as it will have been removed in the data processing stage (for example, the user’s credit card
number and the three or four-digit security code on the back or front of the card, respectively).

To add to the frustration, many online forms often accept only fixed data formats, mean-
ing that, for some people whose information does not fit within its constraints, they are unable
to complete the form. For example, one kind of form will accept only a certain type of mailing
address format. Boxes are provided for address line 1 and address line 2, with no extra lines for addresses that have more than two lines; a line for the town/city; and a line for the ZIP
code (if the site is based in the United States) or other postal code (if based in another country).
The format for the codes is different, making it difficult for non-U.S. residents (and U.S. residents
for other country sites) to fill in this part.

Another gripe about online registration forms is the country of residence box that opens
up as a never-ending menu, listing all of the countries in the world in alphabetical order.
Instead of typing in the country in which they reside, users are required to select the one
they are from, which is fine if you happen to live in Australia or Austria but not if you live in
Venezuela or Zambia (see Figure 7.5).

This is an example of where the design principle of recognition over recall (see Chapter 4,
“Cognitive Aspects”) does not apply and where the converse is true. A better design is to have
a predictive text option, where users need only to type in the first one or two letters of their
country to cause a narrowed-down list of choices to appear from which they can select within
the interface. Or, one smart option is for the form to preselect the user’s country of origin by
using information shared from the user’s computer or stored in the cloud. Automating the
filling in of online forms, through providing prestored information about a user (for example,
their address and credit card details), can obviously help reduce usability problems—provided
they are OK with this.


Figure 7.5 A scrolling menu of country names
Source: https://www.jollyflorist.com
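
A minimal Python sketch of the predictive text option suggested in Box 7.2: typing the first letter or two narrows the long country menu to a handful of choices. The country list is truncated for illustration:

    COUNTRIES = ["Australia", "Austria", "Belgium",
                 "Venezuela", "Zambia", "Zimbabwe"]

    def narrow(prefix, countries=COUNTRIES):
        """Return only the countries starting with what the user has typed."""
        prefix = prefix.strip().lower()
        return [c for c in countries if c.lower().startswith(prefix)]

    print(narrow("Au"))   # -> ['Australia', 'Austria']
    print(narrow("z"))    # -> ['Zambia', 'Zimbabwe']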

ACTIVITY 7.1
Go to the Interflora site in the United Kingdom, click the international delivery option, and
then click “select a country.” How are the countries ordered? Is it an improvement over the
scrolling pop-up menu?

Comment
Earlier versions of the full list of countries to which flowers could be sent by interflora.co.uk
listed eight countries at the top, starting with the United Kingdom and then the United States,
France, Germany, Italy, Switzerland, Austria, and Spain. This was followed by the remaining
set of countries listed in alphabetical order. The reason for having this particular ordering is
likely to have been because the top eight are the countries that have most customers, with the
U.K. residents using the service the most. The website has changed now to show top countries
by national flag followed by a table format, grouping all of the countries in alphabetical order
using four columns across the page (see Figure 7.6). Do you think this is an improvement over
the use of a single scrolling list of country names shown in Figure 7.5? The use of letter head-
ings and shading makes searching quicker.



Menu Design
Interface menus are typically ordered across the top row or down the side of a screen using
category headers as part of a menu bar. The contents of the menus are for the most part invisible, only dropping down when the header is selected or rolled over with a mouse.

Research and Design Considerations
A key research concern is window management—finding ways of enabling users to move flu-
idly between different windows (and displays) and to be able to switch their attention rapidly
between windows to find the information they need or to work on the document/task within
each window without getting distracted. Studies of how people use windows and multiple dis-
plays have shown that window activation time (that is, the time during which a window is open and the user is interacting with it) is relatively short—an average of 20 seconds—suggesting
that people switch frequently between different documents and applications (Hutchings et al.,
2004). Widgets like the taskbar are often used for switching between windows.

Another technique is the use of tabs that appear at the top of the web browser that show
the name and logo of the web pages that have been visited. This mechanism enables users
to rapidly scan and switch among the web pages they have visited. However, the tabs can
quickly multiply if a user visits a number of sites. To accommodate new ones, the web browser
reduces the size of the tabs by shortening the information that appears on each. The downside
of doing this, however, is it can make it more difficult to read and recognize web pages when
looking at the smaller tabs. It is possible to reverse this shrinking by removing unwanted tabs
by clicking the delete icon for each one. This has the effect of making more space available
for the remaining tabs.

There are multiple ways that an online form can be designed to obtain details from
someone. It is not surprising, therefore, that there are so many different types that are in
use. Design guidelines are available to help decide which format and widgets are best to use.
For example, see https://www.smashingmagazine.com/printed-books/form-design-patterns/.
Another option is to automate form completion by asking the user to store their personal
details on their machine or in a company’s database, requiring them only to enter security
information. However, many people are becoming leery of storing their personal data in this
way—fearful because of the number of data breaches that are often reported in the news.

Figure 7.6 An excerpt of the listing of countries in alphabetical order from interflora.co.uk
Source: https://www.interflora.co.uk



The various options under each menu are typically ordered from top to bottom in terms of
most frequently used options and grouped in terms of their similarity with one another; for
example, all formatting commands are placed together.
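
That ordering rule is straightforward to express; this Python sketch sorts each related group of commands by how often they are used, with invented usage counts:

    # Hypothetical usage counts for a small Edit menu.
    usage = {"Copy": 120, "Paste": 110, "Cut": 40, "Find": 70, "Replace": 25}
    groups = {"clipboard": ["Copy", "Paste", "Cut"],
              "search": ["Find", "Replace"]}

    def order_menu(groups, usage):
        """Keep similar commands together; list frequent ones first."""
        ordered = []
        for items in groups.values():
            ordered += sorted(items, key=lambda item: -usage[item])
        return ordered

    print(order_menu(groups, usage))
    # -> ['Copy', 'Paste', 'Cut', 'Find', 'Replace']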

There are numerous menu interface styles, including flat lists, drop-down, pop-up, contex-
tual, collapsible, mega, and expanding ones, such as cascading menus. Flat menus are good at
displaying a small number of options at the same time or where the size of the display is small,
for example on smartphones, cameras, and smartwatches. However, they often have to nest the
lists of options within each, requiring several steps to be taken by a user to get to the list with
the desired option. Once deep down in a nested menu, the user then has to take the same number
of steps to get back to the top of the menu. Moving through previous screens can be tedious.

Expanding menus enable more options to be shown on a single screen than is possible with
a single flat menu list. This makes navigation more flexible, allowing for the selection of options
to be done in the same window. An example is the cascading menu, which provides secondary
and even tertiary menus to appear alongside the primary active drop-down menu, enabling
further related options to be selected, such as when selecting track changes from the tools menu
leads to a secondary menu of three options by which to track changes in a Word document. The
downside of using expanding menus, however, is that they require precise control. Users can
often end up making errors, namely, overshooting or selecting the wrong options. In particular,
cascading menus require users to move their cursor over the menu item, while holding the
mouse or touchpad down, and then to move their cursor over to the next menu list when
the cascading menu appears and select the next desired option. This can result in the user undershooting or overshooting a menu option, or sometimes accidentally closing the entire menu. Another
example of an expandable menu is a mega menu, in which many options can be displayed
using a 2D drop-down layout (see Figure 7.7). This type of menu is popular with online shop-
ping sites, where lots of items can be viewed at a glance on the same screen without the need to
scroll. Hovering, tapping, or clicking is used to reveal more details for a selected item.

Figure 7.7 A mega menu
Source: https://www.johnlewis.com



Collapsible menus provide an alternative approach to expanding menus in that they
allow further options to be made visible by selecting a header. The headings appear adjacent
to each other, providing the user with an overview of the content available (see Figure 7.8).
This reduces the amount of scrolling needed. Contextual menus provide access to often-used
commands associated with a particular item, for example, an icon. They provide appropri-
ate commands that make sense in the context of a current task. They appear when the user
presses the Control key while clicking an interface element. For example, clicking a photo
on a website together with holding down the Ctrl key results in a small set of relevant menu
options appearing in an overlapping window, such as open it in a new window, save it, or
copy it. The advantage of contextual menus is that they provide a limited number of options
associated with an interface element, overcoming some of the navigation problems associated
with cascading and expanding menus.

Figure 7.8 A template for a collapsible menu
Source: https://inclusive-components.design/collapsible-sections/. Reproduced with permission of Smashing
Magazine
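
In a web interface, a contextual menu of this kind can be sketched with the standard contextmenu event. The element ids below are hypothetical, and a production version would also need keyboard access:

    // A minimal sketch: suppress the browser's own contextual menu on a photo
    // and show a small custom one (open, save, copy, ...) at the pointer.
    const menu = document.getElementById('photo-menu') as HTMLElement;

    document.getElementById('photo')?.addEventListener('contextmenu', (e: MouseEvent) => {
      e.preventDefault();                  // suppress the default menu
      menu.style.left = `${e.clientX}px`;  // position at the pointer
      menu.style.top = `${e.clientY}px`;
      menu.hidden = false;
    });

    // Dismiss the menu on any click elsewhere.
    document.addEventListener('click', () => { menu.hidden = true; });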

ACTIVITY 7.2
Open an application that you use frequently (for instance, a word processor, email client,
or web browser) on a PC/laptop or tablet and look at the menu header names (but do not
open them just yet). For each menu header—File, Edit, Tools, and so on—write down what
options you think are listed under each. Then look at the contents under each header. How
many options were you able to remember, and how many did you put in the wrong category?
Now try to select the correct menu header for the following options (assuming that they
are included in the application): Replace, Save, Spelling, and Sort. Did you select the correct
header each time, or did you have to browse through a number of them?

Comment
Popular everyday applications, like word processors, have grown enormously in terms of the
functions they offer. The current version (2019) of Microsoft Word, for example, has 8 menu
headers and numerous toolbars. Under each menu header there are on average 15 options,
some of which are hidden under subheadings and appear only when they are rolled over with
the mouse. Likewise, for each toolbar, there is a set of tools available, be it for Drawing, For-
matting, Web, Table, or Borders. Remembering the location of frequently used commands like
Spelling and Replace is often achieved by remembering their spatial location. For infrequently
used commands, like sorting a list of references into alphabetical order, users can spend time
flicking through the menus to find the command Sort. It is difficult to remember that the com-
mand Sort should be under the Table heading, since sorting is not strictly a table operation
but a general tool for organizing sections of a document. It would be more intuitive if the
command were under the Tools header along with similar tools like Spelling. What this example illustrates is
just how difficult it can be to group menu options into clearly defined and obvious categories.
Some fit into several categories, while it can be difficult to group others. The placement of
options in menus can also change between different versions of an application as more func-
tions are added.

Research and Design Considerations
An important design consideration is to decide which terms to use for menu options. Short
phrases like “bring all to front” can be more informative than single words like “front.” How-
ever, the space for listing menu items is often restricted, such that menu names need to be
short. They also need to be distinguishable, that is, not easily confused with one another so
that the user does not choose the wrong one by mistake. Operations such as Quit and Save
should also be clearly separated to avoid the accidental loss of work.

The choice of which type of menu to use will often be determined by the application and
type of device for which it is being designed. Which is best will also depend on the number of
menu options and the size of the display available in which to present them. Flat menus are
best for displaying a small number of options at one time, while expanding and collapsible
menus are good for showing a large number of options, such as those available in file and
document creation/editing applications. Usability testing comparing drop-down menus with
mega menus has shown the latter to be more effective and easier to navigate. The main reason
is that mega menus enable users to readily scan many items at a glance on the same page, and
in doing so find what they are looking for (Nielsen and Li, 2017).

Icon Design
The appearance of icons in an interface came about following the Xerox Star project. They were used to represent objects as part of the desktop metaphor, namely, folders, documents, trashcans, inboxes, and outboxes. The assumption behind using icons instead of text labels is that they are easier to learn and remember, especially for non-expert computer users. They can also be designed to be compact and variably positioned on a screen.

Icons have become a pervasive feature of the interface. They now populate every app
and operating system and are used for all manner of functions besides representing desktop
objects. These include depicting tools (for example, Paint 3D), status (such as, Wi-Fi strength),
categories of apps (for instance, health or personal finance), and a diversity of abstract opera-
tions (including cut, paste, next, accept, and change). They have also gone through many
changes in their look and feel—black and white, color, shadowing, photorealistic images, 3D
rendering, and animation have all been used.

Whereas early icon designers were constrained by the graphical display technology of the
day, current interface developers have much more flexibility. For example, the use of
anti-aliasing techniques enables curves and non-rectilinear lines to be drawn, enabling more
photo-illustrative styles to be developed (anti-aliasing means adding pixels around a jagged
border of an object to smooth its outline visually). App icons are often designed to be both
visually attractive and informative. The goal is to make them inviting, emotionally appealing,
memorable, and distinctive.

Different graphical genres have been used to group and identify different categories of
icons. Figure 7.9 shows how colorful photorealistic images were used in the original Apple
Aqua set, each slanting slightly to the left, for the category of user applications (such as email)
whereas monochrome, straight-on, and simple images were used for the class of utility applications
(for instance, printer setup). The former have a fun feel to them, whereas the latter have a
more serious look about them. While a number of other styles have since been developed, the
use of slanting versus straight-facing icons to signify different icon categories is still in use.

Icons can be designed to represent objects and operations in the interface using concrete
objects and/or abstract symbols. The mapping between the icon and underlying object or opera-
tion to which it refers can be similar (such as a picture of a file to represent the object file), ana-
logical (for instance, a picture of a pair of scissors to represent cut), or arbitrary (for example, the
use of an X to represent delete). The most effective icons are generally those that are isomorphic
since they have a direct mapping between what is being represented and how it is represented.
Many operations in an interface, however, are actions to be performed on objects, making it
more difficult to represent them using direct mapping. Instead, an effective technique is to use
a combination of objects and symbols that capture the salient part of an action by using anal-
ogy, association, or convention (Rogers, 1989). For example, using a picture of a pair of scissors
to represent cut in a word-processing application provides a sufficient clue as long as the user
understands the convention of cut for deleting text.

Figure 7.9 Two styles of Apple icons used to represent different kinds of functions


Another approach that many smartphone designers use is flat 2D icons. These are simple
and use strong colors and pictograms or symbols. The effect is to make them easily recogniz-
able and distinctive. Examples shown in Figure 7.10a include the white ghost on a yellow
background (Snapchat), a white line bubble with a solid white phone handset in a speech
bubble on a lime-green background (WhatsApp), and the sun next to a cloud (weather).

Icons that appear on toolbars or palettes as part of an application or presented on small
device displays (such as digital cameras or smartwatches) have much less screen real estate
available. Because of this, they have been designed to be simple, emphasizing the outline form
of an object or symbol and using only grayscale or one or two colors (see Figure 7.10b). They
tend to convey the status, tool, or action using a concrete object (for example, the airplane
symbol signaling whether the airplane mode is on or off) and abstract symbols (such as three
waves that light up from none to all to convey the strength of the local Wi-Fi signal).


Figure 7.10 2D icons designed for (a) a smartphone and (b) a smartwatch
Source: (a) Helen Sharp (b) https://support.apple.com/en-ca/HT205550

ACTIVITY 7.3
Sketch simple icons to represent the following operations to appear on a digital camera screen:
• Turn the image 90 degrees sideways.
• Crop the image.
• Auto-enhance the image.
• More options.

Show them to someone else, tell them that they are icons for a new digital camera
intended to be really simple to use, and see whether they can understand what each represents.


Comment
Figure 7.11 shows the basic Edit Photo icons on an iPhone that appear at the bottom of the
screen when a user selects the edit function. The box with extended lines and two arrows is
the icon for cropping an image; the three overlapping translucent circles represent "differ-
ent lenses” that can be used, the wand in the top-right corner means “auto-enhance,” and the
circle with three dots in it means more functions.

Figure 7.11 The basic Edit Photo icons that appear at the top and bottom of an iPhone display

Research and Design Considerations
There are many icon libraries available that developers can download for free (for instance, https://thenounproject.com/ or https://fontawesome.com/). Various online tutorials and books on how to design icons are also available (see Hicks, 2012) together with sets of proprietary guidelines and style guides. For example, Apple provides its developers with style guides, explaining why certain designs are preferable to others and how to design icon sets. Style guides are also covered in more depth in Chapter 13, "Interaction Design in Practice." On its developers' website (developer.apple.com), advice is given on how and why certain graphical elements should be used when developing different types of icon. Among the various guidelines, it suggests that different categories of application (for example, Business, Utilities, Entertainment, and so on) should be represented by a different genre, and it recommends displaying a tool to communicate the nature of a task, such as a magnifying glass for searching or a camera for a photo-editing tool. Android and Microsoft also provide extensive guidance and step-by-step procedures for designing icons on their developer websites.

To help disambiguate the meaning of icons, text labels can be used under, above, or to the side of their icons. This method is effective for toolbars that have small icon sets, such as those appearing as part of a web browser, but it is not as good for applications that have large icon sets, for example, photo editing or word processing, since the screen can get cluttered, making it sometimes harder and slower to find an icon. To prevent text/icon clutter on the interface, a hover function can be used, where a text label appears adjacent to or above an icon after the user holds the cursor over it for a second and for as long as the user keeps the cursor on it. This method allows identifying information to be temporarily displayed when needed.
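
The hover technique just described can be sketched with a short delay timer. A minimal TypeScript version, with hypothetical class names (.toolbar-icon, .icon-label):

    // Show an icon's text label about a second after the cursor settles on it,
    // and hide the label again as soon as the cursor leaves.
    document.querySelectorAll<HTMLElement>('.toolbar-icon').forEach((icon) => {
      const label = icon.querySelector<HTMLElement>('.icon-label');
      let timer: number | undefined;

      icon.addEventListener('mouseenter', () => {
        timer = window.setTimeout(() => { if (label) label.hidden = false; }, 1000);
      });
      icon.addEventListener('mouseleave', () => {
        window.clearTimeout(timer);
        if (label) label.hidden = true;
      });
    });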

7.2.3 Multimedia
Multimedia, as the name implies, combines different media within a single interface, namely,
graphics, text, video, sound, and animation, and links them together with various forms of
interactivity. Users can click links in an image or text that trigger another medium, such as an
animation or a video. From there they can return to where they were previously or jump to
another media source. The assumption is that a combination of media and interactivity can
provide better ways of presenting information than can a single media, for example, just text
or video alone. The added value of multimedia is that it can be easier for learning, better for
understanding, more engaging, and more pleasant (Scaife and Rogers, 1996).

Another distinctive feature of multimedia is its ability to facilitate rapid access to mul-
tiple representations of information. Many multimedia encyclopedias and digital libraries
have been designed based on this multiplicity principle, providing an assortment of audio and
visual materials on a given topic. For example, when looking to find information about the
heart, a typical multimedia-based encyclopedia will provide the following:

• One or two video clips of a real live heart pumping and possibly a heart transplant operation
• Audio recordings of the heart beating and perhaps an eminent physician talking about the
cause of heart disease

• Static diagrams and animations of the circulatory system, sometimes with narration
• Several columns of hypertext, describing the structure and function of the heart

Hands-on interactive simulations have also been incorporated as part of multimedia
learning environments. An early example was the Cardiac Tutor, developed to teach
students about cardiac resuscitation. It required students to save patients by selecting
the correct set of procedures in the correct order from various options displayed on the
computer screen (Eliot and Woolf, 1994). Other kinds of multimedia narratives and

games have also been developed to support discovery learning by encouraging children
to explore different parts of the display by noticing a hotspot or other kind of link. For
example, https://KidsDiscover.com/apps/ has many tablet apps that use a combination
of animations, photos, interactive 3D models, and audio to teach kids about science and
social studies topics. Using swiping and touching, kids can reveal, scroll through, select
audio narration, and watch video tours. Figure 7.12, for example, has a “slide” mecha-
nism as part of a tablet interface that enables the child to do a side-by-side comparison
of what Roman ruins look like now and in ancient Roman times.

Another example of a learning app with an interesting UI can be seen at https://www
.abcmouse.com/apps.

Multimedia has largely been developed for training, educational, and entertainment pur-
poses. But to what extent is the assumption that learning (such as reading and scientific
inquiry skills) and playing can be enhanced through interacting with engaging multimedia
interfaces true? What actually happens when users are given unlimited, easy access to mul-
tiple media and simulations? Do they systematically switch between the various media and
“read” all of the multiple representations on a particular subject, or are they more selective
in what they look at and listen to?

Figure 7.12 An example of a multimedia learning app designed for tablets
Source: KidsDiscover app “Roman Empire for iPad”


ACTIVITY 7.4
Watch this video of Don Norman appearing in his first multimedia CD-ROM book (1994),
where he pops up every now and again in boxes or at the side of the page to illustrate the
points being discussed on that page: http://vimeo.com/18687931.

How do you think students used this kind of interactive e-textbook?

Comment
Anyone who has interacted with educational multimedia knows just how tempting it is to
play the video clips and animations while skimming through accompanying text or static
diagrams. The former is dynamic, easy, and enjoyable to watch, while the latter is viewed as
static and difficult to read from the screen. In an evaluation of the original Voyager’s “First
Person: Donald Norman, Defending Human Attributes in the Age of the Machine,” students
consistently admitted to ignoring the text on the interface in search of clickable icons of the
author, which when selected would present an animated video of him explaining some aspect
of design (Rogers and Aldrich, 1996). Given the choice to explore multimedia material in
numerous ways, ironically, users tend to be highly selective as to what they actually pay atten-
tion to, adopting a channel-hopping mode of interaction. While enabling the users to select
the information they want to view or features to explore for themselves, there is the danger
that multimedia environments may in fact promote fragmented interactions where only part
of the media is ever viewed. In a review of research comparing reading from screens versus
paper, Lauren Singer and Patricia Alexander (2017) found that despite students saying they
preferred reading from screens, their actual performance was worse than when using paper-
based textbooks.

Hence, online multimedia material may be good for supporting certain kinds of activities,
such as browsing, but less optimal for others, for instance reading at length about a topic. One
way to encourage more systematic and extensive interactions (when it is considered important
for the activity at hand) is to require certain activities to be completed that entail the reading
of accompanying text, before the user is allowed to move on to the next level or task.

Research and Design Considerations
A core research question is how to encourage users to interact with all aspects of a multime-
dia app, especially given the tendency to select videos to watch rather than text to read. One
technique is to provide a diversity of hands-on interactivities and simulations that require the
user to complete a task, solve a problem, or explore different aspects of a topic that involves
reading some accompanying text. Specific examples include electronic notebooks that are
integrated as part of the interface, where users can type in their own material; multiple-choice
quizzes that provide feedback about how well they have done; interactive puzzles where they have
to select and position different pieces in the right combination; and simulation-type games where they have to follow a set of procedures to achieve some goal for a given scenario. Another approach is to employ dynalinking, where information depicted in one window explicitly changes in relation to what happens in another. This can help users keep track of multiple representations and see the relationship between them (Scaife and Rogers, 1996).

Specific guidelines are available that recommend how best to combine multiple media in relation to different kinds of task, for example, when to use audio with graphics, sound with animations, and so on, for different learning tasks. As a rule of thumb, audio is good for stimulating the imagination, movies for depicting action, text for conveying details, and diagrams for conveying ideas. From such generalizations, it is possible to devise a presentation strategy for online learning. This could be along the lines of the following:

1. Stimulate the imagination through playing an audio clip.
2. Present an idea in diagrammatic form.
3. Display further details about the concept through hypertext.
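
Dynalinking is essentially the observer pattern: one shared model publishes changes, and every linked representation subscribes to it. A minimal sketch in TypeScript, with hypothetical names:

    // When the shared model changes, all linked views update together.
    type Listener<T> = (value: T) => void;

    class LinkedModel<T> {
      private listeners: Listener<T>[] = [];
      constructor(private value: T) {}
      subscribe(fn: Listener<T>): void { this.listeners.push(fn); }
      set(value: T): void {
        this.value = value;
        this.listeners.forEach((fn) => fn(value)); // notify every linked view
      }
    }

    // Hypothetical usage: a heart rate shown in a diagram and a read-out.
    const heartRate = new LinkedModel<number>(60);
    heartRate.subscribe((bpm) => console.log(`diagram animates at ${bpm} bpm`));
    heartRate.subscribe((bpm) => console.log(`read-out shows ${bpm} bpm`));
    heartRate.set(90); // both representations change in step
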
7.2.4 Virtual Reality
Virtual reality (VR) has been around since the 1970s when researchers first began devel-
oping computer-generated graphical simulations to create “the illusion of participation
in a synthetic environment rather than external observation of such an environment”
(Gigante, 1993, p. 3). The goal was to create user experiences that feel virtually real when
interacting with an artificial environment. Images are displayed stereoscopically to the
users—most commonly through VR headsets—and objects within the field of vision can
be interacted with via an input device like a joystick.

The 3D graphics can be projected onto Cave Automatic Virtual Environment (CAVE)
floor and wall surfaces, desktops, 3D TV, headsets, or large shared displays, for instance,
IMAX screens. One of the main attractions of VR is that it can provide opportunities for new
kinds of immersive experiences, enabling users to interact with objects and navigate in 3D
space in ways not possible in the physical world or a 2D graphical interface. Besides looking
at and navigating through a 360-degree visual landscape, auditory and haptic feedback can
be added to make the experience feel even more like the real world. The resulting user experi-
ence can be highly engaging; it can feel as if one really is flying around a virtual world. Peo-
ple can become completely absorbed by the experience. The sense of presence can make the
virtual setting seem convincing. Presence, in this case, means “a state of consciousness, the
(psychological) sense of being in the virtual environment” (Slater and Wilbur, 1997, p. 605),
where someone behaves in a similar way to how they would if at an equivalent real event.

VR simulations of the world can be constructed to have a higher level of fidelity with
the objects they represent compared to other forms of graphical interfaces, for example, mul-
timedia. The illusion afforded by the technology can make virtual objects appear to be very
life-like and behave according to the laws of physics. For example, landing and take-off ter-
rains developed for flight simulators can appear to be very realistic. Moreover, it is assumed
that learning and training applications can be improved through having a greater fidelity to
the represented world.

Another distinguishing feature of VR is the different viewpoints it can offer. Players can
have a first-person perspective, where their view of the game or environment is through their
own eyes, or a third-person perspective, where they see the world through an avatar visually
represented on the screen. An example of a first-person perspective is that experienced in
first-person shooter games such as DOOM, where the player moves through the environ-
ment without seeing a representation of themselves. It requires the user to imagine what they
might look like and decide how best to move around. An example of a third-person perspec-
tive is that experienced in Tomb Raider, where the player sees the virtual world above and
behind the avatar of Lara Croft. The user controls Lara’s interactions with the environment
by controlling her movements, for example, making her jump, run, or crouch. Avatars can
be represented from behind or from the front, depending on how the user controls its move-
ments. First-person perspectives are typically used for flying/driving simulations and games,
for instance, car racing, where it is important to have direct and immediate control to steer
the virtual vehicle. Third-person perspectives are more commonly used in games, learning
environments, and simulations, where it is important to see a representation of self with
respect to the environment and others in it. In some virtual environments, it is possible to
switch between the two perspectives, enabling the user to experience different viewpoints on
the same game or training environment.

In the beginning, head-mounted displays were used to present VR experiences. However,
the visuals were often clunky, the headsets uncomfortable to wear, and the immersive experience
sometimes resulted in motion sickness and disorientation. Since then, VR technology
has come of age and improved greatly. There are now many off-the-shelf VR headsets (for
example Oculus Go, HTC Vive, and Samsung Gear VR) that are affordable and comfortable.
They also have more accurate head tracking that allows developers to create more compelling
games, movies, and virtual environments.

“Out of Home Entertainment” and VR arcades have also become popular worldwide
and provide a range of social VR experiences, targeted at the general public. For example,
Hyper-Reality has developed a number of spooky games, for 1–4 players, such as Japanese
Adventures, Escape the Lost Pyramid, and the Void. Each game lasts for about 40 minutes,
where players have to carry out a set of tasks, such as finding a lost friend in a realm. The
immersive entertainment is full of surprises at every turn. One moment a player might be on
solid ground and the next in complete darkness. The pleasure is often in not knowing what
is going to happen next and being able to recount the experiences afterward with friends
and family.

Another application area is how VR can enrich the experience of reporting and witnessing
current affairs and news, especially by evoking feelings of empathy and compassion for real-life
experiences (Aronson-Rath et al., 2016). For example, the BBC together with Aardman
Interactive and University College London researchers developed a VR experience called “We
Wait,” where they put the viewer in a place that few foreign reporters have been, namely, on
a boat with a group of refugees crossing the Mediterranean Sea (Steed et al., 2018). The goal
was to let news reporters and other participants experience how it felt to be there on the
boat with the refugees. They used a particular artistic polygon style rather than realism to
create the characters sitting on the boat (see Figure 7.13). The characters had expressive eyes
intended to convey human emotion in response to gaze interaction. The avatars were found
to generate an empathic response from participants.


VR is also starting to be used by airlines and travel companies to enrich someone’s plan-
ning experience of their travel destinations. For example, the airline KLM has developed a
platform called iFly VR (https://360.iflymagazine.com/) that provides an immersive experi-
ence intended to inspire people to discover more about the world. A potential danger of this
approach is that if the VR experience is too lifelike, it might make people feel that they have
'been there, done that' and hence don't need to visit the actual place. KLM's rationale, however,
is quite the opposite: make the virtual experience compelling enough, and people will want
to go there even more. Their first foray into this adventure follows the famous "Fearless Chef"
Kiran Jethwa into a jungle in Thailand to look for the world's most remarkable coffee beans.

MagicLeap has pushed the envelope even further into new realms of virtual reality,
combining cameras, sensors, and speakers in a headset that provides quite a different
experience—one where the user can create their own worlds using various virtual tools, for
example, painting a forest or building a castle—that then come alive in the actual physical
space in which they reside. In this sense, it is not strictly VR, as it allows the wearer to see
the virtual world and virtual objects they have created, or curated, blend with the physical
objects in their living room or other space in which they are located. It is as if by magic the
two are in the same world. In some ways, it is a form of augmented reality (AR), described
in section 7.2.16.

Figure 7.13 Snapshot of polygon graphics used to represent avatars for the “We Wait” VR experience
Source: Steed, Pan, Watson and Slater, https://www.frontiersin.org/articles/10.3389/frobt.2018.00112/full.
Licensed Under CC-BY 4.0


Watch this video of MagicLeap’s Create World where the virtual world meets the
physical world in magical ways: https://youtu.be/K5246156rcQ.

Research and Design Considerations
VR has been developed to support learning and training for numerous skills. Researchers have
designed apps to help people learn to drive a vehicle, fly a plane, and perform delicate surgical
operations—where it is very expensive and potentially dangerous to start learning with the
real thing. Others have investigated whether people can learn to find their way around a real
building/place before visiting it by first navigating a virtual representation of it (see Gabrielli et al., 2000).

An early example of VR was the Virtual Zoo project. Allison et al. (1997) found that
people were highly engaged and very much enjoyed the experience of adopting the role of a
gorilla, navigating the environment, and watching other gorillas respond to their movements
and presence.

Virtual environments (VE) have also been designed to help people practice social and
speaking skills and confront their social phobias (see Cobb et al., 2002, and Slater et al.,
1999). An underlying assumption is that the environment can be designed as a safe place
to help people gently overcome their fears (for example, spiders, talking in public, and so
forth) by confronting them through different levels of closeness and unpleasantness (such as
by seeing a small virtual spider move far away, seeing a medium one sitting nearby, and then
finally touching a large one). Studies have shown that people can readily suspend their disbe-
lief, imagining a virtual spider to be a real one or a virtual audience to be a real audience. For
example, Slater et al. (1999) found that people rated themselves as being less anxious after
speaking to a virtual audience that was programmed to respond to them in a positive fashion
than after speaking to virtual audiences programmed to respond to them negatively.

Core design considerations include the importance of having a virtual self-body as part of
a VR experience to enhance the feeling of presence; how to prevent users from experiencing
simulator sickness through experimenting with galvanic stimulation; determining the most
effective ways of enabling users to navigate through them, for instance, first person versus
third person; how to control their interactions and movements, for example, use of head and
body movements; how best to enable users to interact with information in VR, for example,
use of keypads, pointing, joystick buttons; and how to enable users to collaborate and com-
municate with others in the virtual environment.

A central concern is the level of realism to target. Is it necessary to design avatars and the environments that they inhabit to be life-like, using rich graphics, or can simpler and more abstract forms be used, but which nonetheless are equally capable of engendering a sense of presence? Do you need to provide a visual representation of the arm and hands for holding objects for a self-avatar, or is it enough to have continuous movement of the object? Research has shown that it is possible for objects to appear to be moving with invisible hands as if they were present. This has been coined "tomato presence," that is, where presence is maintained using a stand-in object in VR (for instance, a tomato). (See https://owlchemylabs.com/tomatopresence/.)

3D software toolkits are also available, making it much easier for developers and researchers to create virtual environments. The most popular is Unity. 3D worlds can be created using its APIs, toolkits, and physics engines to run on multiple platforms, for example, mobile, desktop, console, TV, VR, AR, and the Web.

Peter Rubin’s (2018) guide to VR published in Wired magazine provides a sum-
mary and speculation about its future: https://www.wired.com/story/wired-guide-to-virtual-reality/.


7.2.5 Website Design
Early websites were largely text-based, providing hyperlinks to different places or pages of
text. Much of the design effort was concerned with the information architecture, that is, how
best to structure information at the interface level to enable users to navigate and access it
easily and quickly. For example, Jakob Nielsen (2000) adapted his and Rolf Molich’s usabil-
ity guidelines (Nielsen and Molich, 1990) to make them applicable to website design, focus-
ing on simplicity, feedback, speed, legibility, and ease of use. He also stressed how critical
download time was to the success of a website. Simply, users who have to wait too long for
a page to appear are likely to move on somewhere else.

Since then, the goal of web design has been to develop sites that are not only usable but
also aesthetically pleasing. Getting the graphical design right, therefore, is critical. The use of
graphical elements (such as background images, color, bold text, and icons) can make a web-
site look distinctive, striking, and pleasurable for the user when they first view it and also to
make it readily recognizable on their return. However, there is the danger that designers can
get carried away with the appearance at the expense of usability, making it difficult for users
to find content and navigate through the site.

Steve Krug (2014) discusses this usability versus attractiveness dilemma in terms of the
difference between how designers create websites and how users actually view them. He
argues that many web designers create sites as if the user was going to pore over each page,
reading the finely crafted text word for word; looking at the use of images, color, icons, and
so forth; examining how the various items have been organized on the site; and then con-
templating their options before they finally select a link. Users, however, often behave quite
differently. They will glance at a new page, scan part of it, and click the first link that catches
their interest or looks like it might lead them to what they want.

Much of the content on a web page is not read. In Krug’s words, web designers are
“thinking great literature” (or at least “product brochure”), while the user’s reality is much
closer to a “billboard going by at 60 miles an hour” (Krug, 2014, p. 21). While somewhat of
a caricature of web designers and users, his depiction highlights the discrepancy between the
meticulous ways that designers create their websites and the rapid and less than systematic
approach that users take to view them. To help navigate their way through the many choices
that web developers have to make, Jason Beaird and James George (2014) have come up with
a number of guidelines intended to help web developers achieve a balance between using
color, layout and composition, texture, typography, and imagery. They also cover mobile and
responsive web design. Other website guidelines are mentioned in Chapter 16.

Web designers now have a number of languages and technologies available for building websites:
server-side languages, such as Ruby and Python, and client-side technologies, such as HTML5,
CSS, and JavaScript. Libraries, such as React, and open source toolkits, such as Bootstrap, enable developers
to get started quickly when prototyping their ideas for a website. WordPress also provides
users with an easy-to-use interface and hundreds of free templates to use as a basis when
creating their own website. In addition, built-in optimization and responsive, mobile-ready
themes are available. Customized web pages are available for smartphone browsers that pro-
vide scrolling lists of articles, games, tunes, and so on, rather than hyperlinked pages.

Another interface element that has become an integral part of any website is breadcrumb
navigation. Breadcrumbs are category labels that appear on a web page that enable users to
peruse other pages without losing track of where they have come from (see Figure 7.14). The
term comes from the way-finding technique that Hansel used in the Brothers Grimm fairy
tale Hansel and Gretel. The metaphor conjures up the idea of leaving a path to follow back.
Breadcrumbs are also used in search engine optimization, helping to match a user's search
terms with relevant web pages. Breadcrumbs enhance usability in a number of ways, including
helping users know where they are relative to the rest of the website, enabling one-click access
to higher site levels, and encouraging first-time visitors to continue browsing a website after
having viewed the landing page (Mifsud, 2011). Using them is therefore good practice for
other web applications besides websites.
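
A breadcrumb trail is straightforward to generate from a page's position in the site hierarchy. A minimal sketch in TypeScript; the category path and URL scheme are hypothetical:

    // Build a breadcrumb trail in which every crumb except the last links
    // back to a higher level of the site.
    function renderBreadcrumbs(path: string[]): string {
      return path
        .map((label, i) => {
          const href = '/' + path.slice(0, i + 1).join('/').toLowerCase();
          return i === path.length - 1 ? label : `<a href="${href}">${label}</a>`;
        })
        .join(' > ');
    }

    // A trail like the one in Figure 7.14 might be produced by:
    renderBreadcrumbs(['Smart Home', 'Smart Lighting', 'Smart Lights']);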

With the arrival of tablets and smartphones, web designers needed to rethink how to
design web browsers and websites for them, as they realized the touchscreen affords a differ-
ent interaction style than PC/laptops. The standard desktop interface was found to not work
as well on a tablet or smartphone. In particular, the typical fonts, buttons, and menu tabs
were too small and awkward to select when using a finger. Instead of double-clicking inter-
face elements, as users do with a mouse or trackpad, tablet and smartphone screens enable
finger tapping. The main methods of navigation are by swiping and pinching. A new style
of website emerged that mapped better to this kind of interaction style but also one that the

Figure 7.14 A breadcrumb trail on the BestBuy website showing three choices made by the user
to get to Smart Lights
Source: https://www.bestbuy.ca

https://www.bestbuy.ca


user could interact with easily when using a mouse and trackpad. Responsive websites were
developed that change their layout, graphic design, font, and appearance depending on the
screen size (smartphone, tablet, or PC) on which they are displayed.
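
Much responsive behavior is handled declaratively in CSS media queries, but layout changes can also be scripted. A minimal TypeScript sketch using the standard matchMedia API (the breakpoint and class name are assumptions):

    // Swap to a compact layout whenever the viewport is narrow, and react
    // to resizes and device rotation as they happen.
    const smallScreen = window.matchMedia('(max-width: 600px)');

    function applyLayout(e: MediaQueryList | MediaQueryListEvent): void {
      document.body.classList.toggle('compact-layout', e.matches);
    }

    applyLayout(smallScreen);                             // set the initial layout
    smallScreen.addEventListener('change', applyLayout);  // update on change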

If you look at the design of many websites, you will see that the front page presents a
banner at the top, a short promotional video about the company/product/service, arrows to
the left or right to indicate where to flick to move through pages, and further details appear-
ing beneath the home page that the user can scroll through. Navigation is largely done by
swiping pages horizontally or scrolling up and down.

Tips on designing websites for tablets versus mobile phones can be found here:

A Couple of Best Practices for Tablet-Friendly Design

BOX 7.3
In-Your-Face Web Ads

Web advertising has become pervasive and invasive. Advertisers realized how effective flash-
ing and animated ads were for promoting their products, taking inspiration from the animated
neon light advertisements used in city centers, such as London’s Piccadilly Circus. But since
banner ads emerged in the 1990s, advertisers have become even more cunning in their tactics.
In addition to designing even flashier banner ads, more intrusive kinds of web ads have begun
to appear on our screens. Short movies and garish cartoon animations, often with audio, now
pop up in floating windows that zoom into view or are tagged on at the front end of an online
newspaper or video clip. Moreover, this new breed of in-your-face, often personalized web ads
frequently requires the user either to wait until they end or to find a check box to close the
window down. Sites that provide free services, such as Facebook, YouTube, and Gmail, are
also populated with web ads. The problem for users is that advertisers pay significant revenues
to online companies to have their advertisements placed on their websites, entitling them to
say where, what, and how they should appear. One way users can avoid them is to set up ad
blockers when browsing the web.

Research and Design Considerations
There are numerous classic books on web design and usability (for example, Krug, 2014;
Cooper et al., 2014). In addition, there are many good online sites offering guidelines and
tips. For example, the BBC provides online guidance specifically for how to design responsive
websites, covering topics such as context, accessibility, and modular design; see
https://www.bbc.co.uk/gel/guidelines/how-to-design-for-the-web. Key design considerations for all
websites are captured well by three core questions proposed by Keith Instone (quoted in Veen,
2001): Where am I? What’s here? Where can I go?

ACTIVITY 7.5
Look at a fashion brand's website, such as Nike, and describe the kind of interface used. How does it contravene the design principles outlined by Jeffrey Veen? Does it matter? What type of user experience is it providing? What was your experience in engaging with it?

Comment
Fashion companies' sites, like Nike, are often designed to be more like a cinematic experience and use rich multimedia elements, including videos, sounds, music, animations, and interactivity. Branding is central. In this sense, they contravene what are considered core usability guidelines. Specifically, such a site has been designed to entice visitors to enter the virtual store and watch high-quality and innovative movies that show cool dudes wearing the company's products. Often, multimedia interactivities are embedded into the sites to help the viewer move to other parts of the site, for example, by clicking parts of an image or a video playing. Screen widgets are also provided, such as menus, skip over, and next buttons. It is easy to become immersed in the experience and forget that it is a commercial store. It is also easy to get lost and not to know—Where am I? What's here? Where can I go? But this is precisely what companies such as Nike want their visitors to do and to enjoy: the experience.


7.2.6 Mobile Devices
Mobile devices have become pervasive, with people increasingly using them in all aspects of
their everyday and working lives—including phones, fitness trackers, and watches. Custom-
ized mobile devices are also used by people in a diversity of work settings where they need
access to real-time data or information while walking around. For example, they are now
commonly used in restaurants to take orders, at car rental agencies to check in car returns, in
supermarkets for checking stock, and on the streets for multiplayer gaming.

Larger-sized tablets are also used in mobile settings. For example, many airlines provide
their flight attendants with one so that they can use their customized flight apps while air-
borne and at airports; sales and marketing professionals also use them to demonstrate their
products or to collect public opinions. Tablets and smartphones are also commonly used in
classrooms, where they can be stored in special "tabcabbies" provided by schools for safekeeping
and recharging.

Smartphones and smartwatches have an assortment of sensors embedded in them, such
as an accelerometer to detect movement, a thermometer to measure temperature, and gal-
vanic skin response sensors to measure changes in sweat level on one's skin. Some apps use
these sensors simply for fun. An example of an early app developed by magician Steve Sheraton
for a moment of pleasure is iBeer (see Figure 7.15). Part of its success was due to the ingen-
ious use of the accelerometer inside the phone. It detects the tilting of the iPhone and uses
this information to mimic a glass of beer being consumed. The graphics and sounds are also
very enticing; the color of the beer together with frothy bubbles and accompanying sound
effects gives the illusion of virtual beer being swished around a virtual glass. The beer can be
drained if the phone is tilted enough, followed by a belch sound when it has been finished.
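
The same sensor is exposed to web apps through the standard deviceorientation event, so the basic iBeer trick can be sketched in a few lines. The tilt threshold and the 'beer level' below are purely illustrative (and some platforms require the user's permission before motion data is provided):

    // Drain a virtual glass while the phone is tipped far enough forward.
    let level = 100; // percent of virtual beer remaining

    window.addEventListener('deviceorientation', (e: DeviceOrientationEvent) => {
      const tilt = e.beta ?? 0;      // front-to-back tilt, in degrees
      if (tilt > 45 && level > 0) {  // tipped far enough to 'pour'
        level -= 1;
        if (level === 0) console.log('burp!'); // the finishing belch
      }
    });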


Smartphones can also be used to download contextual information by scanning barcodes
in the physical world. While walking around a supermarket, consumers can instantly download
product information, including allergens such as nuts, gluten, and dairy, by scanning a product's
barcode with their phone. For example, the GoodGuide app enables shoppers to scan products
in a store by taking a photo of their barcode to see how they rate for healthiness and
impact on the environment. Other uses include scanning concert tickets and receiving
location-based notifications.

Another method that provides quick access to relevant information is the use of quick
response (QR) codes that store URLs and look like black-and-white checkered squares (see
Figure 7.16). They work by the user taking a picture of the code with their phone's camera,
which then takes them to a particular website. However, despite their universal appeal to
companies as a way of providing additional information or special offers, not many people
actually use them in practice. One of the reasons is that they can be slow, tricky, and
cumbersome to use in situ. People have to download a QR reader app first, open it, and then
hold it steadily over the QR code to take a photo, after which the linked web page can take
time to open.
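
Reading a QR code has become easier in some browsers, which expose a built-in BarcodeDetector API (availability varies, and the typings are declared manually here as an assumption):

    // Detect a QR code in a captured image and follow its embedded URL.
    declare const BarcodeDetector: any; // not yet in the standard TS lib

    async function readQRCode(image: ImageBitmap): Promise<void> {
      const detector = new BarcodeDetector({ formats: ['qr_code'] });
      const codes = await detector.detect(image);
      if (codes.length > 0) {
        window.location.href = codes[0].rawValue; // open the encoded web page
      }
    }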

Figure 7.15 The iBeer smartphone app
Source: Hottrix

Figure 7.16 QR code appearing on a magazine page


ACTIVITY 7.6
Smartwatches, such as those made by Google, Apple, and Samsung, provide a multitude of func-
tions including fitness tracking, streaming music, texts, email, and the latest tweets. They are
also context and location aware. For example, on detecting the user’s presence, promotional
offers may be pinged to them from nearby stores, tempting them in to buy. How do you feel
about this? Do you think it is the same or worse compared to the way advertisements appear
on a user’s smartphone? Is this kind of context-based advertising ethical?

Comment
Smartwatches are similar to smartphones in that they, too, get pinged with promotions and
ads for nearby restaurants and stores. However, the main difference is that when worn on a
wrist, smartwatches are ever-present; the user only needs to glance down at it to notice a new
notification, whereas they have to take their phones out of their pockets and purses to see
what new item has been pinged (although some people hold their smartphone permanently in
their hands). This means that their attention is always being given to the device, which could
make them susceptible to responding to notifications and spending more money. While some
people might like to get 10 percent off on coffee if they walk into the cafe that has just sent
them a digital voucher, for others such notifications may be seen as very annoying as they are
constantly bombarded with promotions. Worse still, it could tempt children and vulnerable
people who are wearing such a watch to spend money when perhaps they shouldn’t or to nag
their parents or caretakers to buy it for them. However, smartwatch companies are aware of
this potential problem, and they provide settings that the user can change in terms of the level
and type of notifications they want to receive.

Research and Design Considerations
Mobile interfaces typically have a small screen and limited control space. Designers have to
think carefully about what type of dedicated hardware controls to include, where to place
them on the device, and then how to map them to the software. Apps designed for mobile
interfaces need to take into account that the ability to navigate through content on a small
mobile display is constrained, whether using touch, pen, or keypad input. The use of vertical
and horizontal scrolling provides a rapid way of scanning through images, menus, and lists. A
number of mobile browsers have also been developed that allow users to view and navigate
the Internet, magazines, or other media in a more streamlined way. For example, Microsoft’s
Edge browser was one of the first mobile browsers that was designed to make it easier to find,
view, and manage content on the go. It provides a customized reading view that enables the
user to re-organize the content of a web page to make it easier for them to focus on what they
want to read. The trade-off, however, is that it makes it less obvious how to perform other
functions that are no longer visible on the screen.


Another key concern for mobile display design is the size of the area on the display that
the user touches to make something happen, such as a key, icon, button, or app. The space
needs to be big enough for “all fingers” to press accurately. If the space is too small, the
user may accidentally press the wrong key, which can be annoying. The average fingertip is
between one and two centimeters wide, so target areas should be at least 7 mm to 10 mm wide so
that they can be accurately tapped with a fingertip. Fitts’ law (see Chapter 16) is often used to
help with evaluating hit area. In their developer design guidelines, Apple also suggests provid-
ing ample touch targets for interactive elements, with a minimum tappable area of 44 pts. ×
44 pts. for all controls.
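
Fitts' law models the time to acquire a target as MT = a + b log2(D/W + 1), where D is the distance to the target and W its width, so small, distant targets are slow and error-prone to hit. A quick check of target sizes can even be scripted; the sketch below flags elements smaller than a 44-pixel square, a threshold loosely based on Apple's 44 pt guidance rather than any exact rule:

    // Warn about interactive elements whose on-screen hit area is too small.
    const MIN_TARGET = 44; // CSS pixels (an assumption, not a universal rule)

    document
      .querySelectorAll<HTMLElement>('button, a, [role="button"]')
      .forEach((el) => {
        const { width, height } = el.getBoundingClientRect();
        if (width < MIN_TARGET || height < MIN_TARGET) {
          console.warn(`Touch target only ${width}x${height}px:`, el);
        }
      });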

A number of other guidelines exist providing advice on how to design interfaces for
mobile devices (for instance, see Babich, 2018). An example is avoiding clutter by prioritizing
one primary action per screen.

7.2.7 Appliances
Appliances include machines for everyday use in the home (for example, washing machines, microwave ovens, refrigerators, toasters, bread makers, and smoothie makers). What they have in common is that most people using them will be trying to get something specific done in a short period of time, such as starting a wash, watching a program, buying a ticket, or making a drink. They are unlikely to be interested in spending time exploring the interface or looking through a manual to see how to use the appliance. Many of them now have LED displays that provide multiple functions and feedback about a process (such as temperature, minutes remaining, and so on). Some have begun to be connected to the Internet with companion devices, enabling them to be controlled by remote apps. An example is a coffee maker that can be controlled to come on at a certain time from an app running on a smartphone or controlled by voice.

Research and Design Considerations
Alan Cooper et al. (2014) suggest that appliance interfaces require the designer to view them
as transient interfaces, where the interaction is short. All too often, however, designers pro-
vide full-screen control panels or an unnecessary array of physical buttons that serve to frus-
trate and confuse the user where only a few, presented in a structured way, would be much
better. Here the two fundamental design principles of simplicity and visibility are paramount.
Status information, such as what the photocopier is doing, what the ticket machine is doing,
and how much longer the wash is going to take should be provided in a simple form and
at a prominent place on the interface. A key design question is: as soft displays increasingly
become part of an appliance interface, for example, LCD and touchscreens, what are the
trade-offs with replacing the traditional physical controls, such as dials, buttons, and knobs,
with these soft display controls?

7 . 2 I N T E R F A C E T y p E S 223

ACTIVITY 7.7
Look at the controls on your toaster (or the one in Figure 7.17 if you don’t have one nearby)
and describe what each does. Consider how these might be replaced with an LCD screen.
What would be gained and lost from changing the interface in this way?

Comment
Standard toasters have two main controls, the lever to press down to start the toasting and
a knob to set the amount of time for the toasting. Many come with a small eject button that
can be pressed if the toast starts to burn. Some also come with a range of settings for different
ways of toasting (such as one side, frozen, and so forth), selected by moving a dial or press-
ing buttons.

Designing the controls to appear on an LCD screen would enable more information and
options to be provided, for example, only toast one slice, keep the toast warm, or automati-
cally pop up when the toast is burning. It would also allow precise timing of the toasting in
minutes and seconds. However, it is likely to increase the complexity of what previously was
a set of logical and very simple actions. This has happened in the evolution of microwaves,
washing machines, and tea kettles that have digital interfaces. They also offer many more
options for warming food, washing clothes, or heating water to a particular temperature. The down-
side of increasing the number of choices, especially when the interface is not designed well to
support this, is that it can make for a more difficult user experience for mundane tasks.

Figure 7.17 A typical toaster with basic physical controls
Source: https://uk.russellhobbs.com/product/brushed-stainless-steel-toaster-2-slice


7.2.8 Voice User Interfaces
A voice user interface (VUI) involves a person talking with a spoken language app, such as
a search engine, a train timetable, a travel planner, or a phone service. It is commonly used
for inquiring about specific information (for instance, flight times or the weather) or issuing
a command to a machine (such as asking a smart TV to select an Action movie or asking a
smart speaker to play some upbeat music). Hence, VUIs use an interaction type of command
or conversation (see Chapter 3), where users speak and listen to an interface rather than click
on, touch, or point to it. Sometimes, the system is proactive and initiates the conversation,
with the user responding, for example, asking the user if
they would like to stop watching a movie or listen to the latest breaking news.

The first generation of speech systems earned a reputation for mishearing all too often
what a person said (see cartoon). However, they are now much more sophisticated and have
higher levels of recognition accuracy. Machine learning algorithms have been developed that
are continuing to improve their ability to recognize what someone is saying. For speech out-
put, actors are often used to record answers, messages, and prompts, which are much friend-
lier, more convincing, and more pleasant than the artificial-sounding synthesized speech
that was typically used in the early systems.

VUIs have become popular for a range of apps. Speech-to-text systems, such as Dragon,
enable people to dictate rather than have to type, whether it is entering data into a spread-
sheet, using a search engine, or writing a document. The words spoken appear on the screen.
For some people, this mode of interaction is more efficient, especially when they are on the
move. Dragon claims on its website that it is three times faster than typing and is 99
percent accurate. Speech technology is also used by people with visual impairments, includ-
ing speech recognition word processors, page scanners, web readers, and VUIs for operating
home control systems, including lights, TV, stereo, and other home appliances.
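
In the browser, basic dictation can be sketched with the Web Speech API, although it is prefixed and browser-dependent (treated here as an assumption rather than a portable interface):

    // Append each recognized phrase to the page as the user speaks.
    const Recognition =
      (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

    const recognizer = new Recognition();
    recognizer.continuous = true; // keep listening across pauses
    recognizer.lang = 'en-US';

    recognizer.onresult = (event: any) => {
      const phrase = event.results[event.results.length - 1][0].transcript;
      document.body.append(phrase); // the spoken words appear on the screen
    };

    recognizer.start();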

One of the most popular applications of speech technology is call routing, where com-
panies use an automated speech system to enable users to reach one of their services during a
phone call. Callers voice their needs in their own words, for example, “I’m having problems
with my Wi-Fi router,” and in response are automatically forwarded to the appropriate service
(Cohen et al., 2004). This is useful for companies, as it can reduce operating costs. It can also
increase revenue by reducing the number of lost calls. The callers may be happier, as their call
can be routed to an available agent (real or virtual) rather than being lost or sent to voicemail.
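
Underneath, a call router maps the caller's open-ended utterance onto one of a fixed set of services. Real systems use statistical language models, but the idea can be sketched with simple keyword matching (the services and keywords below are hypothetical):

    // Route a transcribed utterance to a service, falling back to a person.
    const routes: Record<string, string[]> = {
      'technical-support': ['wi-fi', 'router', 'internet', 'connection'],
      'billing': ['bill', 'payment', 'charge', 'refund'],
    };

    function routeCall(utterance: string): string {
      const words = utterance.toLowerCase();
      for (const [service, keywords] of Object.entries(routes)) {
        if (keywords.some((k) => words.includes(k))) return service;
      }
      return 'live-agent'; // better than losing the call or sending to voicemail
    }

    routeCall("I'm having problems with my Wi-Fi router"); // 'technical-support'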

Source: Reproduced with permission of King
Features Syndicate


In human conversations, people often interrupt each other, especially if they know what
they want, rather than waiting for someone to go through a series of options. For example, they
may stop the waitress at a restaurant in midflow when describing the specials if they know
what they want, rather than let her go through the entire list. Similarly, speech technology has
been designed with a feature called barge-in that allows callers to interrupt a system message
and provide their request or response before the message has finished playing. This can be
useful if the system has numerous options from which the caller may choose, and the caller
already knows what they want.

There are several ways that a VUI dialog can be structured. The most common is a
directed dialogue where the system is in control of the conversation, asking specific questions
and requiring specific responses, similar to filling in a form (Cohen et al., 2004):

System: Which city do you want to fly to?
Caller: London
System: Which airport: Gatwick, Heathrow, Luton, Stansted, or City?
Caller: Gatwick
System: What day do you want to depart?
Caller: Monday next week.
System: Is that Monday, May 5?
Caller: Yes
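
A directed dialogue like this is, in effect, form filling: the system works through a fixed list of slots, prompting for each in turn. A minimal sketch (hypothetical prompts and structure):

    // Ask each question in turn and store the caller's answer in its slot.
    interface Slot { prompt: string; answer?: string; }

    const slots: Slot[] = [
      { prompt: 'Which city do you want to fly to?' },
      { prompt: 'Which airport: Gatwick, Heathrow, Luton, Stansted, or City?' },
      { prompt: 'What day do you want to depart?' },
    ];

    // The next prompt is the first slot without an answer (undefined when done).
    const nextPrompt = (): string | undefined =>
      slots.find((s) => s.answer === undefined)?.prompt;

    function hearAnswer(answer: string): void {
      const slot = slots.find((s) => s.answer === undefined);
      if (slot) slot.answer = answer; // fill the current slot, then move on
    }

A barge-in feature would simply stop the current prompt's audio playback as soon as the caller starts speaking.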

Other systems are more flexible, allowing the user to take more initiative and specify
more information in one sentence (for example, “I’d like to go to Paris next Monday for two
weeks”). The problem with this approach is that there is more chance for error, since the
caller might assume that the system can follow all of their needs in one pass as a real travel
agent would (for example, “I’d like to go to Paris next Monday for two weeks, and would
like the cheapest possible flight, preferably leaving from Gatwick airport and definitely with
no stop-overs …”). The list is simply too long and would overwhelm the system’s parser.
Carefully guided prompts can be used to get callers back on track and help them speak
appropriately (for instance, “Sorry, I did not get all that. Did you say you wanted to fly next
Monday?”).
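
One common way of coping with such open-ended requests is to extract whatever slots can be recognized from the utterance and then use guided prompts to ask only for what is missing. A rough sketch, assuming a toy keyword grammar rather than a full natural-language parser:

import re

# A rough mixed-initiative sketch: pull out any recognizable slots, then
# reprompt only for the ones still missing. The keyword grammar is a toy.

CITIES = ["paris", "london", "rome"]
DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday"]

def extract_slots(utterance):
    text = utterance.lower()
    slots = {}
    city = next((c for c in CITIES if c in text), None)
    day = next((d for d in DAYS if d in text), None)
    if city:
        slots["city"] = city
    if day:
        slots["day"] = day
    m = re.search(r"for (\w+) weeks?", text)
    if m:
        slots["duration"] = m.group(1)
    return slots

slots = extract_slots("I'd like to go to Paris next Monday for two weeks")
for missing in {"city", "day", "duration"} - set(slots):
    print(f"Sorry, I did not get all of that. What {missing} did you want?")
print(slots)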

A number of speech-based phone apps enable people to interact with a service while mobile, when speaking is more convenient than text-based entry. For example, people can voice queries into their phone using Google Voice or Apple Siri rather than entering text manually.
Mobile translators allow people to communicate in real time with others who speak a dif-
ferent language by letting a software app on their phone do the talking (for example, Google
Translate). People speak in their own language using their phone while the software trans-
lates what each person is saying into the language of the other one. Potentially, this means
people from all over the world (there are more than 6,000 languages) can talk to one another
without having to learn another language.

Voice assistants, like Amazon’s Alexa and Google Home, can be instructed by users to
entertain in the home by telling jokes, playing music, keeping track of time, and enabling
users to play games. Alexa also offers a range of “skills,” which are voice-driven capabilities
intended to provide a more personalized experience. For example, “Open the Magic Door” is
an interactive story skill that allows users to choose their path in a story by selecting differ-
ent options through the narrative. Another one, “Kids court,” allows families to settle argu-
ments in an Alexa-run court while learning about the law. Many of the skills are designed
to support multiple users taking part at the same time, offering the potential for families to
play together. Social interaction is encouraged by the smart speaker that houses Alexa or
Home. Smart speakers sit in a common space for all to use (similar to a toaster or refrigera-
tor). In contrast, handheld devices, such as a smartphone or tablet, support only single use
and ownership.

Despite advances in speech recognition, conversational interaction is limited mainly to answering questions and responding to requests. It can be difficult for VUIs to recognize children's speech, which is not as articulate as that of adults. For example, Druga et al. (2017) found that young children (3–4 years old) experienced difficulty interacting with conversational and chat agents, resulting in them becoming frustrated. Also, voice assistants don't always recognize who is talking in a group, such as a family, and need to be called by name each time someone wants to interact with them. There is still a way to go before voice assistant interaction resembles human conversation.

Research and Design Considerations

Key research questions are what conversational mechanisms to use to structure the voice user interface and how human-like they should be. Some researchers focus on how to make it appear natural (that is, like human conversation), while others are concerned more with how to help people navigate efficiently through a menu system by enabling them to recover easily from errors (their own or the system's), to be able to escape and go back to the main menu (similar to the undo button of a GUI), and to guide those who are vague or ambiguous in their requests for information or services using prompts. The type of voice actor (male or female, neutral or with a dialect) and the form of pronunciation are also topics of research. Do people prefer to listen to, and are they more patient with, a female or a male voice? What about one that is jolly versus one that is serious?

Michael Cohen et al. (2004) discuss the pros and cons of using different techniques for structuring the dialogue and managing the flow of voice interactions, the different ways of expressing errors, and the use of conversational etiquette—all still relevant for today's VUIs. A number of commercial guidelines are available for voice interfaces. For example, Cathy Pearl (2016) has written a practical book that covers a number of VUI design principles and topics, including which speech recognition engine to use, how to measure the performance of VUIs, and how to design VUIs for different interfaces, for example, a mobile app, toy, or voice assistant.

7.2.9 Pen-Based Devices
Pen-based devices enable people to write, draw, select, and move objects on an interface using light pens or styluses that capitalize on the well-honed drawing and writing skills that are developed from childhood. They have been used to interact with tablets and large displays, instead of mouse, touch, or keyboard input, for selecting items and supporting freehand sketching. Digital ink, such as Anoto, uses a combination of an ordinary ink pen with a digital camera that digitally records everything written with the pen on special paper (see Figure 7.18). The pen
works by recognizing a special nonrepeating dot pattern that is printed on the paper. The non-
repeating nature of the pattern means that the pen is able to determine which page is being
written on and where on the page the pen is pointing. When writing on digital paper with a
digital pen, infrared light from the pen illuminates the dot pattern, which is then picked up by
a tiny sensor. The pen decodes the dot pattern as the pen moves across the paper and stores the
data temporarily in the pen. The digital pen can transfer data that has been stored in the pen
via Bluetooth or a USB port to a computer. Handwritten notes can also be converted and saved
as standard typeface text. This can be useful for applications that require people to fill in paper-
based forms and also for taking notes during meetings.

Another advantage of digital pens is that they allow users to annotate existing docu-
ments, such as spreadsheets, presentations, and diagrams quickly and easily in a similar
way to how they would do this when using paper-based versions. This is useful for a team that is working together and communicating from different locations. One
problem with using pen-based interactions on small screens, however, is that sometimes
it can be difficult to see options on the screen because a user’s hand can obscure part of it
when writing.

Figure 7.18 The Anoto pen being used to fill in a paper form and a schematic showing its internal
components
Source: www.grafichewanda.it/anoto.php?language=EN



7.2.10 Touchscreens
Single-touch screens, used in walk-up kiosks such as ticket machines or museum guides, ATMs, and cash registers (for instance, in restaurants), have been around for a while. They work by
detecting the presence and location of a person’s touch on the display; options are selected by
tapping on the screen. Multitouch surfaces, on the other hand, support a much wider range
of more dynamic fingertip actions, such as swiping, flicking, pinching, pushing, and tapping.
They do this by registering touches at multiple locations using a grid (see Figure 7.19). This
multitouch method enables devices, such as smartphones and tabletops, to recognize and
respond to more than one touch at the same time. This enables users to use multiple digits to
perform a variety of actions, such as zooming in and out of maps, moving photos, selecting
letters from a virtual keyboard when writing, and scrolling through lists. Two hands can also
be used together to stretch and move objects on a tabletop surface, similar to how both hands
are used to stretch an elastic band or scoop together a set of objects.
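
The pinch gesture illustrates how little is needed once the grid reports the touch locations: the zoom factor is simply the ratio between the current and the initial distance between the two fingers. A minimal sketch:

import math

# Pinch-to-zoom reduced to arithmetic: the scale factor is the ratio of the
# current finger separation to the separation when the gesture began.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_touches, current_touches):
    """start_touches/current_touches: two (x, y) points each."""
    return distance(*current_touches) / distance(*start_touches)

# Fingers start 100 px apart and move to 150 px apart: zoom in by 1.5x.
print(pinch_scale([(0, 0), (100, 0)], [(0, 0), (150, 0)]))  # 1.5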

BOX 7.4
Electronic Ink

Digital ink is not to be confused with the term electronic ink (or e-ink). Electronic ink is a display technology, used in e-readers such as the Kindle, that is designed to mimic the appearance of ordinary ink on paper. Rather than emitting light, the display reflects ambient light, like ordinary paper.

Figure 7.19 A multitouch interface
Source: www.sky-technology.eu/en/blog/article/item/multi-touch-technology-how-it-works.html



The flexibility of interacting with digital content afforded by finger gestures has resulted
in many ways of experiencing digital content. This includes reading, scanning, zooming, and
searching interactive content on tablets, as well as creating new digital content.

Research and Design Considerations

Touchscreens have become pervasive, increasingly becoming the main interface that many people use on a daily basis. However, they are different from GUIs, and a central design concern is what types of interaction techniques to use to best support different activities. For example, what is the optimal way to enable users to choose from menu options, find files, save documents, and so forth, when using a touch interface? These operations are well mapped to interaction styles available in a GUI, but it is not as obvious how to support them on a touch interface. Alternative conceptual models have been developed for the user to carry out these actions on the interface, such as the use of cards, carousels, and stacks (see Chapter 3). The use of these objects enables users to swipe and move through digital content quickly. However, it is also easy to swipe too far when using a carousel. Typing on a virtual keyboard with two thumbs or one fingertip is also not as fast or efficient as using both hands on a conventional keyboard, although many people have learned to be very adept at pecking at virtual keys on a smartphone. Predictive text can also be used to help people type faster.

Both hands may be used on multitouch tabletops to enable users to make digital objects larger and smaller or to rotate them. Dwelling touches (pressing and holding a finger down) can also be used to enable a user to perform dragging actions and to bring up pop-up menus. One or more fingers can also be used together with a dwell action to provide a wider range of gestures. However, these can be quite arbitrary, requiring users to learn them rather than being intuitive. Another limitation of touchscreens is that they do not provide tactile feedback in the same way that keys or mice do when pressed. To compensate, visual, audio, and haptic feedback can be used. See also the section on shareable interfaces for more background on multitouch design considerations.

7.2.11 Gesture-Based Systems
Gestures involve moving arms and hands to communicate (for instance, waving to say goodbye or raising an arm to speak in class) or to provide information to someone (for example, holding two hands apart to show the size of something). There has been much interest in how technology can be used to capture and recognize a user's gestures for input by tracking them using cameras and then analyzing them using machine learning algorithms.

David Rose (2018) created a video that depicts many sources of inspiration for where gesture is used in a variety of contexts, including those made by cricket umpires, live concert signers for the deaf, rappers, Charlie Chaplin, mime artists, and Italians. His team at IDEO developed a gesture system to recognize a small set of gestures and used these to control a Philips HUE light set and a Spotify station. They found that gestures need to be performed sequentially to be understood, in the way a sentence is composed of a noun and then a verb: object plus operation. For example, for "speaker, on," they used a gesture on one hand to designate the noun, and another on the other hand to designate the verb. So, to change the volume, the user needs to point to a speaker with their left hand while raising their right hand to signal turning the volume up.
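
Such a noun-then-verb grammar is straightforward to express in code. The sketch below assumes a recognizer that emits a gesture label for each hand; the gesture and command names are invented for illustration and are not IDEO's actual system:

# A sketch of the noun-then-verb gesture grammar: one hand selects the
# object (noun), the other selects the operation (verb). Labels are invented.

NOUNS = {"point_at_speaker": "speaker", "point_at_light": "light"}
VERBS = {"raise_hand": "volume_up", "lower_hand": "volume_down",
         "open_palm": "on", "closed_fist": "off"}

def compose_command(left_gesture, right_gesture):
    noun = NOUNS.get(left_gesture)
    verb = VERBS.get(right_gesture)
    if noun and verb:
        return f"{noun}:{verb}"
    return None  # incomplete sentence; wait for both hands

print(compose_command("point_at_speaker", "raise_hand"))  # speaker:volume_up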

One area where gesture interaction has been developed is in the operating room. Sur-
geons need to keep their hands sterile during operations but also need to be able to look at
X-rays and scans during an operation. However, after being scrubbed and gloved, they need
to avoid touching any keyboards, phones, and other nonsterile surfaces. A far from ideal
workaround is to pull their surgical gown over their hands and manipulate a mouse through
the gown. As an alternative, Kenton O’Hara et al. (2013) developed a touchless gesture-based
system, using Microsoft’s Kinect technology, which recognized a range of gestures that sur-
geons could use to interact with and manipulate MRI or CT images, including single-handed
gestures for moving forward or backward through images, and two-handed gestures for
zooming and panning (see Figure 7.20).

Figure 7.20 Touchless gesturing in the operating theater
Source: Used courtesy of Kenton O’Hara

Watch David Rose’s inspirations for gesture video at https://vimeo.com/224522900.

Research and Design Considerations

A key design concern for gestural input is how a computer system recognizes and delineates the user's gestures. In particular, how does it determine the start and end point of a hand or arm movement, and how does it know the difference between a deictic gesture (a deliberate pointing movement) and hand waving (an unconscious gesticulation) that is used to emphasize what is being said verbally?

In addition to being used as a form of input, gestures can be represented as output to show real-time avatar movement or someone's own arm movements. Smartphones, laptops, and some smart speakers (for example, Facebook's Portal) have cameras that can perceive three dimensions and record a depth for every pixel. This can be used to create a representation of someone in a scene, for example, how they are posing and moving, and also to respond to their gestures. One design question that this raises is how realistic the mirrored graphical representation of the user must be for it to be believable and for the user to connect their gestures with what they are seeing on the screen.

7.2.12 Haptic Interfaces
Haptic interfaces provide tactile feedback by applying vibration and forces to the person, using actuators that are embedded in their clothing or in a device that they are carrying, such as a smartphone or smartwatch. Gaming consoles have also employed vibration to enrich the experience. For example, car steering wheels that are used with driving simulators can vibrate in various ways to provide the feel of the road. As the driver makes a turn, the steering wheel can be programmed to feel like it is resisting—in the way that a real steering wheel does.

Vibrotactile feedback can also be used to simulate the sense of touch between remote people who want to communicate. Actuators embedded in clothing can be designed to re-create the sensation of a hug or a squeeze by buzzing various parts of the body. Another use of haptics is to provide real-time feedback to guide people when learning a musical instrument, such as the violin or drums. For example, the MusicJacket (van der Linden et al., 2011) was developed to help novice violin players learn how to hold their instrument correctly and develop good bowing action. Vibrotactile feedback was provided via the jacket to give nudges at key places on the arm and torso to inform the student when they were either holding their violin incorrectly or their bowing trajectory had deviated from a desired path (see Figure 7.21). A user study with novice players showed that they were able to react to the vibrotactile feedback and adjust their bowing or their posture in response.

Another form of feedback is called ultrahaptics, which creates the illusion of touch in midair. It does this by using ultrasound to make three-dimensional shapes and textures that can be felt but not seen by the user (www.ultrahaptics.com). This technique can be used to create the illusion of buttons and sliders that appear in midair. One potential use is in the automotive industry, to replace existing physical buttons, knobs, and touchscreens. Ultrahaptic buttons and knobs can be designed to appear next to the driver when needed, for example, when the system detects that the driver wants to turn down the volume or change the radio station.

Haptics are also being embedded into clothing, in the form of exoskeletons. Inspired by the Wallace and Gromit animated short film The Wrong Trousers, Jonathan Rossiter and his team (2018) developed a new kind of exoskeleton that can help people stand up and move around, using artificial muscles that consist of air bubbles activated by tiny electric motors (see Figure 7.22). These are stiffened or relaxed using graphene parts to make the trousers move. One application area is helping people who have walking difficulties and those who need to exercise but find it difficult to do so.

Figure 7.21 The MusicJacket with embedded actuators that nudge the player to move their arm up to be in the correct position
Source: Helen Sharp

Figure 7.22 Trousers with artificial muscles that use a new kind of bubble haptic feedback
Source: Used courtesy of The Right Trousers Project: Wearable Soft Robotics for Independent Living
Research and Design Considerations

Haptics are now commonly used in gaming consoles, smartphones, and controllers to alert or heighten a user experience. Haptic feedback is also being developed in clothing and other wearables as a way of simulating being touched, stroked, prodded, or buzzed. A promising application area is sensory-motor skills, such as in sports training and learning to play a musical instrument. For example, patterns of vibrations have been placed across snowboarders' bodies to indicate which moves to take while snowboarding. A study reported faster reaction times than when the same instructions were given verbally (Spelmezan et al., 2009). Other uses are posture trainers that buzz when a user slouches and fitness trackers that buzz when they detect that their users have not taken enough steps in the past hour.

A key design question is where best to place the actuators on the body, whether to use a single touch or a sequence of touches, when to activate them, and at what intensity and how often to use them to make the feeling of being touched convincing (e.g., Jones and Sarter, 2008). Providing continuous haptic feedback would simply be too annoying. People would also habituate too quickly to the feedback. Intermittent buzzes can be effective at key moments when a person needs to attend to something, without necessarily telling them what to do. For example, a study by Johnson et al. (2010) of a commercially available haptic device, intended to improve posture by giving people a vibrotactile buzz when they slouched, found that while the buzzing did not show them how to improve their posture, it did improve their body awareness.

Different kinds of buzzes can also be used to indicate different tactile experiences that map to events; for example, a smartphone could transmit a feeling of slow tapping to suggest water dropping, indicating that it is about to rain, and a sensation of heavy tapping to indicate that a thunderstorm is looming.
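
Event-to-buzz mappings of this kind can be described as simple vibration patterns, that is, lists of (intensity, duration) pulses. A minimal sketch, with pattern values invented for illustration:

import time

# Vibration patterns as (intensity 0-1, seconds) pulses. The values here are
# invented; a real device API would replace the default drive() function.

PATTERNS = {
    "light_rain": [(0.2, 0.1), (0.0, 0.4)] * 3,     # slow, gentle tapping
    "thunderstorm": [(0.9, 0.15), (0.0, 0.1)] * 5,  # heavy, rapid tapping
}

def play_pattern(name, drive=lambda level: print(f"vibrate {level:.1f}")):
    for intensity, duration in PATTERNS[name]:
        drive(intensity)
        time.sleep(duration)

play_pattern("light_rain")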

7.2.13 Multimodal Interfaces
Multimodal interfaces are intended to enrich user experiences by multiplying the way information is experienced and controlled at the interface through using different modalities, such as touch, sight, sound, and speech (Bouchet and Nigay, 2004). Interface techniques that have been combined for this purpose include speech and gesture, eye-gaze and gesture, haptic and audio output, and pen input and speech (Dumas et al., 2009). The assumption is that multimodal interfaces can support more flexible, efficient, and expressive means of human–computer interaction that are more akin to the multimodal experiences that humans encounter in the physical world (Oviatt, 2017). Different inputs and outputs may be used at the same time, for example, using voice commands and gestures simultaneously to move through a virtual environment, or alternately, using speech commands followed by gesturing. The most common combination of technologies used for multimodal interfaces is speech and vision processing (Deng and Huang, 2004). Multimodal interfaces can also be combined with multisensor input to enable other aspects of the human body to be tracked. For example, eye gaze, facial expressions, and lip movements can also be tracked to provide data about a user's attention or other behavior. This kind of sensing can provide input for customizing user interfaces and experiences to the perceived need, desire, or level of interest.

A person's body movement can also be tracked so that it can be represented back to them on a screen in the form of an avatar that appears to move just like them. For example, the Kinect was developed as a gesture and body movement gaming input system for the Xbox. Although now defunct in the gaming industry, it proved effective at detecting multimodal input in real time. It consisted of an RGB camera for facial and gesture recognition, a depth sensor (an infrared projector paired with a monochrome camera) for movement tracking, and downward-facing mics for voice recognition (see Figure 7.23). The Kinect looked for someone's body. On finding it, it locked onto it and measured the three-dimensional positioning of the key joints in their body. This information was converted into a graphical avatar of the user that could be programmed to move just like them. Many people readily saw themselves as the avatar and learned how to play games in this manner.

Figure 7.23 Microsoft's Xbox Kinect
Source: Stephen Brashear / Invision for Microsoft / AP Images
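
Driving an avatar from tracked joints amounts to re-expressing each joint position relative to a body-centered origin and rescaling it to the avatar's proportions. A minimal sketch of this retargeting step, assuming uniform scaling (a simplification of what the Kinect pipeline actually did):

# Mirror a tracked skeleton onto an avatar by re-expressing each joint
# relative to the torso and rescaling. Uniform scaling is a simplification.

def retarget(joints, avatar_scale):
    """joints: dict of name -> (x, y, z) in sensor coordinates."""
    ox, oy, oz = joints["torso"]  # body-centered origin
    return {
        name: ((x - ox) * avatar_scale,
               (y - oy) * avatar_scale,
               (z - oz) * avatar_scale)
        for name, (x, y, z) in joints.items()
    }

pose = {"torso": (0.1, 1.0, 2.0), "left_hand": (0.5, 1.4, 1.9)}
print(retarget(pose, avatar_scale=1.2))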

Research and Design Considerations

Multimodal systems rely on recognizing aspects of a user's behavior, including handwriting, speech, gestures, eye movements, or other body movements. In many ways, this is much harder to accomplish and calibrate than single-modality systems that are programmed to recognize one aspect of a user's behavior. The most researched modes of interaction are speech, gesture, and eye-gaze tracking. A key research question is what is actually gained from combining different inputs and outputs, and whether talking and gesturing as humans do with other humans is a natural way of interacting with a computer (see Chapter 4). Guidelines for multimodal design can be found in Reeves et al. (2004) and Oviatt et al. (2017).

7.2.14 Shareable Interfaces
Shareable interfaces are designed for more than one person to use. Unlike PCs, laptops, and mobile devices, which are aimed at single users, shareable interfaces typically provide multiple inputs and sometimes allow simultaneous input by collocated groups. These include large wall displays, for example SmartBoards (see Figure 7.24a), where people use their own pens or gestures, and interactive tabletops, where small groups can interact with information being displayed on the surface using their fingertips. Examples of interactive tabletops include Smart's SmartTable and Circle Twelve's DiamondTouch (Dietz and Leigh, 2001; see Figure 7.24b). The DiamondTouch tabletop is unique in that it can distinguish between different users touching the surface concurrently. An array of antennae is embedded in the touch surface, and each one transmits a unique signal. Each user has their own receiver embedded in a mat on which they're standing or a chair in which they're sitting. When a user touches the tabletop, very small signals are sent through the user's body to their receiver, which identifies which antenna has been touched and sends this to the computer. Multiple users can interact simultaneously with digital content using their fingertips.

Watch this video of Circle Twelve's demonstration of the DiamondTouch tabletop: http://youtu.be/S9QRdXlTndU.

An advantage of shareable interfaces is that they provide a large interactional space that can support flexible group working, enabling groups to create content together at the same time. Compared with a co-located group trying to work around a single-user PC or laptop, where typically one person takes control, making it more difficult for others to take part, multiple users can interact with a large display. Users can point to and touch the information being displayed, while simultaneously viewing the interactions and having the same shared point of reference (Rogers et al., 2009). There are now a number of tabletop apps that have been developed for museums and galleries, which enable visitors to learn about various aspects of the environment (see Clegg et al., 2019).
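
The user identification that makes DiamondTouch distinctive can be pictured as every touch event arriving tagged with the identity of the receiver that picked up the signal. The following sketch models what such an event stream might look like to an application; the event structure is illustrative, not the actual DiamondTouch API:

from collections import defaultdict
from dataclasses import dataclass

# Illustrative model of user-identified touch events, in the spirit of
# DiamondTouch: each touch carries the ID of the receiver (and thus the user).

@dataclass
class Touch:
    user_id: str   # receiver in the user's chair or mat
    x: int
    y: int

def touches_by_user(events):
    grouped = defaultdict(list)
    for t in events:
        grouped[t.user_id].append((t.x, t.y))
    return dict(grouped)

events = [Touch("alice", 10, 20), Touch("bob", 300, 40), Touch("alice", 15, 25)]
print(touches_by_user(events))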


Figure 7.24 (a) A SmartBoard in use during a meeting and (b) Mitsubishi’s interactive tabletop
interface
Source: (a) Used courtesy of SMART Technologies Inc. (b) Mitsubishi Electric Research Labs


Another type of shareable interface is software platforms that enable groups of people
to work together simultaneously even when geographically apart. Early examples included
shared editing tools developed in the 1980s (for example, ShRedit). Various commercial
products now exist that enable multiple remote people to work on the same document at
the same time (such as Google Docs and Microsoft Excel). Some enable up to 50 people to edit the same document at the same time, with more looking on. These software programs
provide various functions, such as synchronous editing, tracking changes, annotating, and
commenting. Another collaborative tool is the Balsamiq Wireframes editor, which provides a
range of shared functions, including collaborative editing, threaded comments with callouts,
and project history.

Research and Design Considerations
Early research on shareable interfaces focused largely on interactional issues, such as how
to support electronically based handwriting and drawing, and the selecting and moving of
objects around the display (Elrod et al., 1992). The PARCTAB system (Schilit et al., 1993)
investigated how information could be communicated between palm-sized, A4-sized, and
whiteboard-sized displays using shared software tools, such as Tivoli (Rønby-Pedersen et al.,
1993). Another concern was how to develop fluid and direct styles of interaction with large
displays, both wall-based and tabletop, involving freehand and pen-based gestures (see Shen
et al., 2003). Current research is concerned with how to support ecologies of devices so that
groups can share and create content across multiple devices, such as tabletops and wall dis-
plays (see Brudy et al., 2016).

A key research issue is whether shareable surfaces can facilitate new and enhanced forms
of collaborative interaction compared with what is possible when groups work together using
their own devices, like laptops and PCs (see Chapter 5, “Social Interaction”). One benefit is
easier sharing and more equitable participation. For example, tabletops have been designed to
support more effective joint browsing, sharing, and manipulation of images during decision-
making and design activities (Shen et al., 2002; Yuill and Rogers, 2012). Core design concerns
include whether size, orientation, and shape of the display have an effect on collaboration.
User studies have shown that horizontal surfaces compared with vertical ones support more
turn-taking and collaborative working in co-located groups (Rogers and Lindley, 2004), while
providing larger-sized tabletops does not necessarily improve group working but can encour-
age a greater division of labor (Ryall et al., 2004).

The need for both personal and shared spaces has been investigated to see how best
to enable users to move between working on their own and together as a group. Several
researchers have designed cross-device systems, where a variety of devices, such as tablets,
smartphones, and digital pens can be used in conjunction with a shareable surface. For
example, SurfaceConstellations was developed for linking mobile devices to create novel
cross-device workspace environments (Marquardt et al., 2018). Design guidelines and sum-
maries of empirical research on tabletops and multitouch devices can be found in Müller-
Tomfelde (2010).


7.2.15 Tangible Interfaces
Tangible interfaces use sensor-based interaction, where physical objects, such as bricks, balls,
and cubes, are coupled with digital representations (Ishii and Ullmer, 1997). When a person
manipulates the physical object(s), it is detected by a computer system via the sensing mecha-
nism embedded in the physical object, causing a digital effect to occur, such as a sound, ani-
mation, or vibration (Fishkin, 2004). The digital effects can take place in a number of media
and places, or they can be embedded in the physical object itself. For example, Oren Zucker-
man and Mitchel Resnick’s (2005) early Flow Blocks prototype depicted changing numbers
and lights that were embedded in the blocks, depending on how they were connected. The
flow blocks were designed to simulate real-life dynamic behavior and react when arranged
in certain sequences.
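
The physical-to-digital coupling at the heart of tangibles can be sketched as a dispatcher that maps sensed manipulations to digital effects. In the following Python sketch, the sensed events and effects are invented for illustration:

# A sketch of tangible coupling: sensed physical manipulations are mapped
# to digital effects. Event and effect names are invented.

EFFECTS = {
    ("block_a", "connected_to", "block_b"): "show_flow_animation",
    ("block_a", "shaken", None): "play_sound",
}

def on_sensor_event(obj, action, target=None):
    effect = EFFECTS.get((obj, action, target))
    if effect:
        print(f"{obj} {action} {target or ''}".strip(), "->", effect)

on_sensor_event("block_a", "connected_to", "block_b")
on_sensor_event("block_a", "shaken")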

Another type of tangible interface is where physical objects, for example, a puck, a piece of clay, or a model, are superimposed on a digital desktop. Moving one of the physical
pieces around the tabletop causes digital events to take place on the tabletop. One of the
earliest tangible interfaces, Urp, was built to facilitate urban planning; miniature physical
models of buildings could be moved around on the tabletop and used in combination with
tokens for wind and shadow-generating tools, causing digital shadows surrounding them to
change over time and visualizations of airflow to vary. Tangible interfaces differ from other
approaches, such as mobile, insofar as the representations are artifacts in their own right that
the user can directly act upon, lift up, rearrange, sort, and manipulate.

The technologies that have been used to create tangibles include RFID tags and sensors
embedded in physical objects and digital tabletops that sense the movements of objects and
subsequently provide visualizations surrounding the physical objects. Many tangible systems
have been built with the goal of encouraging learning, design activities, playfulness, and col-
laboration. These include planning tools for landscape and urban planning (see Hornecker,
2005; Underkoffler and Ishii, 1998). Another example is Tinkersheets, which combine tangi-
ble models of shelving with paper forms for exploring and solving warehouse logistics prob-
lems (Zufferey et al., 2009). The underlying simulation allows students to set parameters by
placing small magnets on the form.

Tangible computing has been described as having no single locus of control or interac-
tion (Dourish, 2001). Instead of just one input device, such as a mouse, there is a coordinated
interplay of different devices and objects. There is also no enforced sequencing of actions and
no modal interaction. Moreover, the design of the interface objects exploits their affordances
to guide the user in how to interact with them. A benefit of tangibility is that physical objects
and digital representations can be positioned, combined, and explored in creative ways, ena-
bling dynamic information to be presented in different ways. Physical objects can also be held
in both hands and combined and manipulated in ways not possible using other interfaces.
This allows for more than one person to explore the interface together and for objects to be
placed on top of each other, beside each other, and inside each other; the different configura-
tions encourage different ways of representing and exploring a problem space. In so doing,
people are able to see and understand situations differently, which can lead to greater insight,
learning, and problem-solving than with other kinds of interfaces (Marshall et al., 2003).

A number of toolkits have been developed to encourage children to learn coding,
electronics, and STEM subjects. These include littleBits (https://littlebits.com/), MicroBit
(https://microbit.org/), and MagicCubes (https://uclmagiccube.weebly.com/). The toolkits
provide children with opportunities to connect physical electronic components and sen-
sors to make digital events occur. For example, the MagicCubes can be programmed to
change color depending on the speed at which they are shaken; slow is blue and very fast is
multicolor. Research has shown that the tangible toolkits provide many opportunities for
discovery learning, exploration, and collaboration (Lechelt et al., 2018). The cubes have been found to encourage a diverse range of children, aged between 6 and 16, including those with cognitive disabilities, to learn through collaborating, frequently showing and telling each other and their instructors about their discoveries. These moments are facilitated by the cube's form factor, which makes it easy to show off to others, for example, by waving a cube in the air (see Figure 7.25).
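
The shake-to-color behavior described above boils down to thresholding a sensed shake speed, for example, the magnitude of an accelerometer reading. A sketch, with threshold values invented rather than taken from the actual MagicCubes firmware:

# Map shake speed (e.g., accelerometer magnitude) to an LED color, in the
# spirit of the MagicCubes. Threshold values are invented for illustration.

def shake_to_color(speed):
    if speed < 1.0:
        return "blue"        # slow shake
    elif speed < 3.0:
        return "green"
    elif speed < 5.0:
        return "red"
    return "multicolor"      # very fast shake

for s in (0.5, 2.0, 4.0, 6.0):
    print(s, "->", shake_to_color(s))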

Tangible toolkits have also been developed for the visually impaired. For example, Torino
(renamed by Microsoft to Code Jumper) was developed as a programming language for
teaching programming concepts to children aged 7–11, regardless of their level of vision (Morrison
et al., 2018). It consists of a set of beads that can be connected and manipulated to create
physical strings of code that play stories or music.

Figure 7.25 Learning to code with the MagicCubes; sharing, showing, and telling
Source: Elpida Makriyannis


BOX 7.5
VoxBox—A Tangible Questionnaire Machine

Traditional methods for gathering public opinions, such as surveys, involve approaching people in situ, but doing so can disrupt the positive experience they are having. VoxBox (see Figure 7.26) is a tangible system designed to gather opinions on a range of topics in situ at an event through playful and engaging interaction (Golsteijn et al., 2015). It is intended to encourage wider participation by grouping similar questions, encouraging completion, gathering answers to open and closed questions, and connecting answers and results. It was designed as a large physical system that provides a range of tangible input mechanisms through which people give their opinions, instead of using, for example, text messages or social media input. The various input mechanisms include sliders, buttons, knobs, and spinners, with which people are all familiar. In addition, the system has a transparent tube at the side in which a ball drops step by step as sets of questions are completed, acting as an incentive for completion and as a progress indicator. The results of the selections are aggregated and presented as simple digital visualizations on the other side (for example, 95 percent are engaged; 5 percent are bored). VoxBox has been used at a number of events, drawing in crowds who become completely absorbed in answering questions in this tangible format.

Figure 7.26 VoxBox—front and back of the tangible machine questionnaire
Source: Yvonne Rogers


Research and Design Considerations

Researchers have developed conceptual frameworks that identify the novel and specific features of a tangible interface (see Fishkin, 2004; Ullmer et al., 2005; Shaer and Hornecker, 2010). A key design concern is what kind of coupling to use between the physical action and the digital effect. This includes determining where the digital feedback is provided in relation to the physical artifact that has been manipulated. For example, should it appear on top of the object, beside it, or in some other place? The type and placement of the digital media will depend to a large extent on the purpose of using a tangible interface. If it is to support learning, then an explicit mapping between action and effect is critical. In contrast, if it is for entertainment purposes, for example, playing music or storytelling, then it may be better to design the mapping to be more implicit and unexpected. Another key design question is what kind of physical artifact to use to enable the user to carry out an activity in a natural way. Bricks, cubes, and other component sets are most commonly used because of their flexibility and simplicity, enabling people to hold them in both hands and to construct new structures that can easily be added to or changed. Sticky notes and cardboard tokens can also be used for placing material onto a surface that is transformed or attached to digital content (Klemmer et al., 2001; Rogers et al., 2006).

Another research question is what types of digital output tangible interfaces should be combined with. Overlaying physical objects with graphical feedback that changes in response to how the object is manipulated has been the main approach. In addition, audio and haptic feedback has also been used. Tangibles can also be designed to be an integral part of a multimodal interface.

7.2.16 Augmented Reality
Augmented reality (AR) became an overnight success with the arrival of Pokémon Go in
2016. The smartphone app became an instant hit worldwide. Using a player’s smartphone
camera and GPS signal, the AR game makes it seem as if virtual Pokémon characters are
appearing in the real world—popping up all over the place, such as on buildings, on streets,
and in parks. As players walk around a given place, they may be greeted with rustling bits
of grass that signal a Pokémon nearby. If they walk closer, a Pokémon may pop up on their
smartphone screen, as if by magic, and look as if they are actually in front of them. For exam-
ple, one might be spotted sitting on a branch of a tree or a garden fence.
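
Underneath, location-based AR of this kind rests on a simple geographic test: is the player within some radius of a spawn point? A sketch using the haversine great-circle distance, with made-up spawn data:

import math

# Decide which virtual creatures are near the player using the haversine
# great-circle distance. Spawn points and radius are made up.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

SPAWNS = [("pidgey", 51.5074, -0.1278), ("zubat", 51.5080, -0.1290)]

def nearby(player_lat, player_lon, radius_m=100):
    return [name for name, lat, lon in SPAWNS
            if haversine_m(player_lat, player_lon, lat, lon) <= radius_m]

print(nearby(51.5075, -0.1280))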

AR works by superimposing digital elements, like Pokémon characters, onto physical devices and objects. Closely related to AR is the concept of mixed reality, where views of the real world are combined with views of a virtual environment (Drascic and Milgram, 1996). Initially, augmented reality was mostly a subject of experimentation within medicine, where virtual objects, for example X-rays and scans, were overlaid on part of a patient's body to aid the physician's understanding of what was being examined or operated on.

AR was then used to aid controllers and operators in rapid decision-making. One exam-
ple is air traffic control, where controllers are provided with dynamic information about the
aircraft in their section that is overlaid on a video screen showing real planes landing, taking
off, and taxiing. The additional information enables the controllers to identify planes easily,

Research and Design Considerations
Researchers have developed conceptual frameworks that identify the novel and specific fea-
tures of a tangible interface (see Fishkin, 2004; Ullmar et al., 2005; Shaer and Hornecker,
2010). A key design concern is what kind of coupling to use between the physical action
and digital effect. This includes determining where the digital feedback is provided in rela-
tion to the physical artifact that has been manipulated. For example, should it appear on top
of the object, beside it, or in some other place? The type and placement of the digital media
will depend to a large extent on the purpose of using a tangible interface. If it is to support
learning, then an explicit mapping between action and effect is critical. In contrast, if it is for
entertainment purposes, for example, playing music or storytelling, then it may be better to
design them to be more implicit and unexpected. Another key design question is what kind
of physical artifact to use to enable the user to carry out an activity in a natural way. Bricks,
cubes, and other component sets are most commonly used because of their flexibility and
simplicity, enabling people to hold them in both hands and to construct new structures that
can be easily added to or changed. Sticky notes and cardboard tokens can also be used for
placing material onto a surface that is transformed or attached to digital content (Klemmer
et al. 2001; Rogers et al., 2006).

Another research question is with what types of digital outputs should tangible interfaces
be combined? Overlaying physical objects with graphical feedback that changes in response
to how the object is manipulated has been the main approach. In addition, audio and haptic
feedback has also been used. Tangibles can also be designed to be an integral part of a multi-
modal interface.

7 I N T E R F A C E S242

which were difficult to make out—something especially useful in poor weather conditions.
Similarly, head-up displays (HUDs) are used in military and civil planes to aid pilots when
landing during poor weather conditions. A HUD provides electronic directional markers on
a fold-down display that appears directly in the field of view of the pilot. A number of high-
end cars now provide AR windshield technology, where navigation directions can literally
look like they are painted on the road ahead of the driver (see Chapter 2, “The Process of
Interaction Design”).

Instructions for building or repairing complex equipment, such as photocopiers and car
engines, have also been designed to replace paper-based manuals, where drawings are super-
imposed upon the machinery itself, telling the mechanic what to do and where to do it.
There are also many AR apps available now for a range of contexts, from education to car
navigation, where digital content is overlaid on geographic locations and objects. To reveal
the digital information, users open the AR app on a smartphone or tablet and the content
appears superimposed on what is viewed through the screen.

Other AR apps have been developed to aid people walking in a city or town. Directions
(in the form of a pointing hand or arrow) and local information (for instance, the nearest
bakery) are overlaid on the image of the street ahead that appears on someone’s smart-
phone screen. These change as the person walks up the street. Virtual objects and infor-
mation are also being combined to make more complex augmented realities. Figure 7.27
shows a weather alert with animated virtual lightning effects alongside information about
a nearby café and the price of properties for sale or rent on a street. Holograms of people
and other objects are also being introduced into AR environments that can appear to move

Figure 7.27 Augmented reality overlay used on a car windshield
Source: https://wayray.com


and/or talk. For example, virtual tour guides are beginning to appear in museums, cities, and theme parks; they can appear to move, talk, or gesture to visitors who are using an AR app.

The availability of mapping platforms, such as those provided by Niantic and Google, together with Apple's ARKit, Spark AR Studio, and Google's ARCore, has made it easier for developers and students alike to create new kinds of AR games and apps. Another popular AR game that has emerged since Pokémon Go is Jurassic World Alive, where players walk around in the real world to find as many virtual dinosaurs as they can. It is similar to Pokémon Go but with different gaming mechanics. For example, players have to study the dinosaurs they come across by collecting their DNA and then re-creating them. Microsoft's HoloLens toolkit has also enabled new mixed-reality user experiences to be created, allowing users to create or interact with virtual elements in their surroundings.

Most AR apps use the rear-facing camera on a smartphone or tablet to overlay the virtual content onto the real world. Another approach is to use the front-facing camera (used for selfies) to superimpose digital content onto the user's face or body. The most popular app to have used this technique is Snapchat, which provides numerous filters with which people can experiment, plus the opportunity to create their own. Adding accessories such as ears, hair, moving lips, and headgear enables people to transform their physical appearance in all sorts of fun ways.

These kinds of virtual try-ons work by analyzing the user's facial features and building a 2D or 3D model in real time. When the user moves their head, the make-up or accessories appear to move with them, as if they were really on their face. Several AR mirrors now exist in retail stores that allow shoppers to try on sunglasses, jewelry, and make-up. The goal is to let them try on as many different products as they like to see how they look. Clearly, there are advantages to virtual try-ons: they can be more convenient, engaging, and easier than trying on the real thing. There are disadvantages, too, in that they give only an impression of what the products look like when worn. For example, the user cannot feel the weight of a virtual accessory on their head or the texture of virtual make-up on their face.
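
Anchoring a virtual accessory to the face typically reduces to deriving a position, scale, and roll angle from a pair of tracked landmarks, such as the eyes. A minimal two-dimensional sketch follows; a real try-on system would fit a full 3D face model instead:

import math

# Anchor a virtual accessory (say, a pair of glasses) to two tracked eye
# landmarks: position = midpoint, scale = eye distance, roll = eye angle.

def accessory_transform(left_eye, right_eye, nominal_eye_dist=63.0):
    (lx, ly), (rx, ry) = left_eye, right_eye
    center = ((lx + rx) / 2, (ly + ry) / 2)
    dist = math.hypot(rx - lx, ry - ly)
    scale = dist / nominal_eye_dist       # relative to an average face
    roll = math.degrees(math.atan2(ry - ly, rx - lx))
    return center, scale, roll

print(accessory_transform((100, 200), (163, 205)))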

The same technology can be used to enable people to step into the roles of historical, famous, film, or stage characters (for instance, David Bowie or Queen Victoria). For example, a virtual try-on app that was developed as part of a cultural experience was MagicFace (Javornik et al., 2017). The goal was to enable audiences to experience firsthand what it was like to try on the make-up of a character from an opera. The opera chosen was Philip Glass's Akhnaten, set in ancient Egypt (see Figure 7.28a). The virtual make-up designs developed were for a pharaoh and his wife. The app was developed by University College London researchers alongside the English National Opera and the AR company Holition. To provide a real-world context, the app was designed to run on
a tablet display that was disguised as a real mirror and placed in an actor’s dressing
room (see Figure 7.28b). On encountering the mirror in situ, visiting school children
were fascinated by the way the virtual make-up made them look like Akhnaten and his
wife, Nefertiti. The singers and make-up artists who were in the production also tried
it out and saw great potential for using the app to enhance their existing repertoire of
rehearsal and make-up tools.


Figure 7.28 (a) A principal singer trying on the virtual look of Akhnaten and (b) a framed AR mirror
in the ENO dressing room
Source: Used courtesy of Ana Javornik

Research and Design Considerations
A key research concern when designing augmented reality is what form the digital augmenta-
tion should take and when and where it should appear in the physical environment (Rogers
et al., 2005). The information (such as navigation cues) needs to stand out but not distract the
person from their ongoing activity in the physical world. It also needs to be simple and align with the real-world objects, taking into account that the user will be moving. Another concern
is how much digital content to overlay on the physical world and how to attract the user’s
attention to it. There is the danger that the physical world becomes overloaded with digital
ads and information “polluting” it to the extent that people will turn the AR app off.

One of the limitations of current AR technology is that sometimes the modeling can be
slightly off so that the overlaying of the digital information appears in the wrong place or is
out of sync with what is being overlaid. This may not be critical for fun applications, but it
may be disconcerting if eye shadow appears on someone’s ear. It may also break the magic
of the AR experience. Ambiguity and uncertainty may be exploited to good effect in mixed reality games, but they could be disastrous in a more serious context, such as a military or medical setting.


7.2.17 Wearables
Wearables are a broad category of devices that are worn on the body. These include smart-
watches, fitness trackers, fashion tech, and smart glasses. Since the early experimental days
of wearable computing, where Steve Mann (1997) donned head and eye cameras to enable
him to record what he saw while also accessing digital information on the move, there have
been many innovations and inventions, including Google Glass.

New flexible display technologies, e-textiles, and physical computing (for example,
Arduino) provide opportunities to design wearables that people will actually want to wear.
Jewelry, caps, glasses, shoes, and jackets have all been the subject of experimentation designed
to provide the user with a means of interacting with digital information while on the move
in the physical world. Early wearables focused on convenience, enabling people to carry out
a task (for example, selecting music) without having to take out and control a handheld
device. Examples included a ski jacket with integrated music player controls that enabled
the wearer to simply touch a button on their arm with their glove to change a music track.
More recent applications have focused on how to combine textiles, electronics, and haptic
technologies to promote new forms of communication. For example, CuteCircuit developed
the KineticDress, which was embedded with sensors that followed the body of the wearer
to capture their movements and interaction with others. These were then displayed through
electroluminescent embroidery that covered the external skirt section of the dress. Depend-
ing on the amount and speed of the wearer’s movement, it changed patterns, displaying the
wearer’s mood to the audience and creating a magic halo around her.

Exoskeleton clothing (see Section 7.2.12) is also an area where fashion meets technology in order to augment and assist people who have problems with walking, by literally walking for, or exercising, the person wearing them. In this way, it combines haptics with a wear-
able. Within the construction industry, exoskeleton suits have also been developed to provide
additional power to workers—a bit like Superman—where metal frameworks are fitted with
motorized muscles to multiply the wearer’s strength. It can make lifting objects feel lighter
and in doing so protect the worker from physical injuries.

DILEMMA
Google Glass: Seeing Too Much?

Google Glass was a wearable that went on sale in 2014 in various fashion styles (see
Figure 7.29). It was designed to look like a pair of glasses, but with one lens of the glass being
an interactive display with an embedded camera that could be controlled with speech input.
It allowed the wearer to take photos and videos on the move and look at digital content, such
as email, texts, and maps. The wearer could also search the web using voice commands, and
the results would appear on the screen. A number of applications were developed beyond
those for everyday use, including WatchMeTalk, which provided live captions to help the hearing-impaired in their day-to-day conversations, and Preview for Glass, which enabled the wearer to watch a movie trailer the moment they looked at a movie poster.

However, being in the company of someone wearing Google Glass was felt by many to be unnerving, as the wearer looked up and to the right to view what was on the glass screen rather than looking at you and into your eyes. One of the criticisms of wearers of Google Glass was that it made them appear to be staring into the distance. Others were worried that those wearing Google Glass were recording everything that was happening in front of them. As a reaction, a few bars and restaurants in the United States implemented a "no Glass" policy to prevent customers from recording other patrons.

The original Google Glass was retired after a couple of years. Since then, other types of smart glasses have come onto the market that sync a user's smartphone with the display and camera on the glasses via Bluetooth. These include the Vuzix Blade, which has an onboard camera and voice control connected to Amazon Echo devices, along with turn-by-turn navigation and location-based alerts, and Snap's Spectacles, which simply allow the wearer to share the photos and videos they take when wearing the glasses with their friends on Snapchat.

Watch the interesting video of London through Google Glass at http://youtu.be/Z3AIdnzZUsE and the Talking Shoe concept at http://youtu.be/VcaSwxbRkcE.

Figure 7.29 Google Glass
Source: Google Inc.



Research and Design Considerations

A core design concern specific to wearable interfaces is comfort. Users need to feel comfortable wearing clothing that is embedded with technology. It needs to be light, small, not get in the way, fashionable, and (with the exception of the displays) preferably hidden in the clothing. Another related issue is hygiene. Is it possible to wash or clean the clothing once worn? How easy is it to remove the electronic gadgetry and replace it? Where are the batteries going to be placed, and how long is their lifetime? A key usability concern is how the user controls the devices that are embedded in their clothing. Are touch, speech, or more conventional buttons and dials preferable?

A number of technologies can be developed and combined to create wearables, including LEDs, sensors, actuators, tangibles, and AR. There is abundant scope for thinking creatively about when and whether to make something wearable as opposed to mobile. In Chapter 1, "What Is Interaction Design?" we mentioned how assistive technology can be designed to be fashionable in order to overcome the stigma of having to wear a monitoring device (for instance, for glucose levels), a substitution device (for example, a prosthetic), or an amplifying device (for example, hearing aids).

7.2.18 Robots and Drones
Robots have been around for some time, most notably as characters in science-fiction mov-
ies, but they also play an important role as part of manufacturing assembly lines, as remote
investigators of hazardous locations (for example, nuclear power stations and bomb dis-
posal), and as search and rescue helpers in disasters (for instance in forest fires) or faraway
places (like Mars). Console interfaces have been developed to enable humans to control and
navigate robots in remote terrains, using a combination of joysticks and keyboard controls
together with cameras and sensor-based interactions (Baker et al., 2004). The focus has been
on designing interfaces that enable users to steer and move a remote robot effectively with
the aid of live video and dynamic maps.

Domestic robots that help with the cleaning and gardening have become popular. Robots
are also being developed to help the elderly and disabled with certain activities, such as
picking up objects and cooking meals. Pet robots, in the guise of human companions, have
been commercialized. Several research teams have taken the “cute and cuddly” approach to
designing robots, signaling to humans that the robots are more pet-like than human-like. For
example, Mitsubishi developed Mel the penguin (Sidner and Lee, 2005) whose role was to
host events, while the Japanese inventor Takanori Shibata developed Paro in 2004, a baby
harp seal that looks like a cute furry cartoon animal, and whose role was as a companion
(see Figure 7.30). Sensors were embedded in the pet robots, enabling them to detect certain
human behaviors and respond accordingly. For example, they can open, close, and move
their eyes, giggle, and raise their flippers. The robots encourage being cuddled or spoken to,
as if they were real pets or animals. The appeal of pet robots is thought to be partially due to
their therapeutic qualities, being able to reduce stress and loneliness among the elderly and
infirm (see Chapter 6, “Emotional Interaction,” for more on cuddly robot pets). Paro has
since been used to help patients with dementia to make them feel more at ease and comforted
(Griffiths, 2014). Specifically, it has been used to encourage social behavior among patients
who often anthropomorphize it. For example, they might say as a joke “it’s farted on me!”
which makes them and others around them laugh, leading to further laughter and joking.
This way of encouraging social interaction is thought to be therapeutic.

Drones are a form of unmanned aircraft that are controlled remotely. They were first used
by hobbyists and then by the military. Since then, they have become more affordable, acces-
sible, and easier to fly. As a result, they have begun to be used in a wider range of contexts.
These include entertainment, such as carrying drinks and food to people at festivals and par-
ties; agricultural applications, such as flying them over vineyards and fields to collect data that
is useful to farmers (see Figure 7.31); and helping to track poachers in wildlife parks in Africa
(Preece, 2016). Compared with other forms of data collection, they can fly low and stream photos to a ground station, where the images can be stitched together into maps and then used to determine the health of a crop or the best time to harvest it.

Figure 7.30 (a) Mel, the penguin robot, designed to host activities; (b) Japan’s Paro, an interactive
seal, designed as a companion, primarily for the elderly and sick children
Source: (a) Mitsubishi Electric Research Labs (b) Parorobots.com

Watch the video of Robot Pets of the Future at http://youtu.be/wBFws1lhuv0.

Watch the video of Rakuten delivering beer via drone to golfers on a golf course at
https://youtu.be/ZameOVS2Skw.



Figure 7.31 A drone being used to survey the state of a vineyard
Source: Drone inspecting vineyard / Shutterstock

Research and Design Considerations
An ethical concern is whether it is acceptable to create robots that exhibit behaviors that
humans will consider to be human- or animal-like. While this form of attribution also occurs
for agent interfaces (see Chapter 3), having a physical embodiment—as robots do—can make
people suspend their disbelief even more, viewing the robots as pets or humans.

This raises the moral question as to whether such anthropomorphism should be encour-
aged. Should robots be designed to be as human-like as possible, looking like us with human
features, such as eyes and a mouth, behaving like us, communicating like us, and emotionally
responding like us? Or, should they be designed to look like robots and behave like robots,
for instance, vacuum cleaner robots that serve a clearly defined purpose? Likewise, should
the interaction be designed to enable people to interact with the robot as if it were another
human being, for example, by talking to it, gesturing at it, holding its hand, and smiling at it?
Or, should the interaction be designed to be more like human–computer interaction, in other
words, by pressing buttons, knobs, and dials to issue commands?

For many people, the cute pet approach to robotic interfaces seems preferable to one that
seeks to design them to be more like fully fledged human beings. Humans know where they
stand with pets and are less likely to be unnerved by them and, paradoxically, are more likely
to suspend their disbelief in the companionship they provide.

Another ethical concern is whether it is acceptable to use unmanned drones to take a
series of images or videos of fields, towns, and private property without permission or people
knowing what is happening. They are banned from certain areas such as airports, where they
present a real danger. Another potential problem is the noise they make when flying. Having a
drone constantly buzzing past your house or delivering drinks to golf players or festival goers
nearby can be very annoying.


7.2.19 Brain–Computer Interfaces
Brain–computer interfaces (BCI) provide a communication pathway between a person’s brain
waves and an external device, such as a cursor on a screen or a tangible puck that moves via
airflow. The person is trained to concentrate on the task (for example, moving the cursor or
the puck). Several research projects have investigated how this technique can be used to assist
and augment human cognitive or sensory-motor functions. BCIs work by detecting changes in the neural functioning of the brain. Our brains are filled with neurons: individual nerve cells connected to one another by dendrites and axons. Every time
we think, move, feel, or remember something, these neurons become active. Small electric
signals rapidly move from neuron to neuron, which to a certain extent can be detected by
electrodes that are placed on a person’s scalp. The electrodes are embedded in specialized
headsets, hairnets, or caps.
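To make the idea of turning detected brain signals into input more concrete, here is a minimal Python sketch of one common approach: estimating the power of the alpha band (roughly 8–12 Hz, associated with relaxation) in an EEG trace and turning it into a single control value, much as a relaxation-controlled game like Brainball requires. The sampling rate, band limits, and simulated signal are illustrative assumptions, not details of any particular BCI product.

import numpy as np

FS = 256                   # assumed sampling rate in Hz (illustrative)
ALPHA_BAND = (8.0, 12.0)   # alpha rhythm is conventionally about 8-12 Hz

def band_power(signal, fs, band):
    # Estimate power in a frequency band from the FFT periodogram.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum()

# Simulate one second of a single EEG channel: a 10 Hz alpha rhythm
# buried in noise, standing in for what scalp electrodes would pick up.
t = np.arange(FS) / FS
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS)

alpha = band_power(eeg, FS, ALPHA_BAND)
total = band_power(eeg, FS, (1.0, 40.0))
relaxation = alpha / total   # crude 0-to-1 control value
print(f"relaxation index: {relaxation:.2f}")

A real system would add filtering, artifact rejection, and per-user calibration, but the basic pipeline (sense, extract a feature, map it to a command) is the same.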

Brain–computer interfaces have also been developed to control various games. For
example, Brainball is a game controlled by players’ brain waves, in which they compete to control a ball’s movement across a table by becoming more relaxed and focused. Other possibilities include controlling a robot and being able to fly a virtual
plane. Pioneering medical research, conducted by the BrainGate research group at Brown
University, has started using brain-computer interfaces to enable people who are paralyzed
to control robots (see Figure 7.32). For example, a robotic arm controlled by a tethered
BCI has enabled patients who are paralyzed to feed themselves (see video mentioned next).
A startup company, NextMind, is developing a noninvasive brain-sensing device
intended for the mass market to enable users to play games and control electronic and
mobile devices in real time using just their thoughts. It is researching how to combine
brain-sensing technology with innovative machine-learning algorithms that can translate
brain waves into digital commands.

Watch a video of a woman who is paralyzed moving a robot with her mind at
http://youtu.be/ogBX18maUiM.


7.2.20 Smart Interfaces
The motivation for many new technologies is to make them smart, whether it is a smartphone, smartwatch, smart building, smart home, or smart appliance (for example, smart lighting, smart speakers, or virtual assistants). The adjective is often used to suggest that the device has some intelligence and is connected to the Internet. More generally, smart devices are designed to interact with users and with other devices connected to a network; many are automated, not requiring users to interact with them directly (Silverio-Fernández et al., 2018).
The goal is to make them context-aware, that is, to understand what is happening around
them and execute appropriate actions. To achieve this, some have been programmed with
AI so that they can learn the context and a user’s behavior. Using this intelligence, they then
change settings or switch things on according to the user’s assumed preferences. An example
is the smart Nest thermostat that is designed to learn from a householder’s behavior. Rather
than make the interface invisible, the designers chose to turn it into an aesthetically pleasing
one that could be easily viewed (see Box 6.2).
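As a concrete illustration of what “learning the context and a user’s behavior” can mean at its simplest, the Python sketch below keeps an exponentially weighted average of the temperatures a householder sets at each hour of the day and uses that as the automatic setpoint. This is a toy mechanism for illustration only; it is not how the Nest thermostat is actually implemented.

class LearningThermostat:
    # Toy context-aware setpoint learner (illustrative, not Nest's algorithm).

    def __init__(self, default=20.0, learning_rate=0.3):
        self.default = default
        self.rate = learning_rate
        self.schedule = {}   # hour of day -> learned setpoint (degrees C)

    def record_manual_adjustment(self, hour, temperature):
        # Blend each manual override into the learned schedule so that
        # repeated behavior gradually becomes the automatic setting.
        old = self.schedule.get(hour, self.default)
        self.schedule[hour] = old + self.rate * (temperature - old)

    def setpoint(self, hour):
        return self.schedule.get(hour, self.default)

stat = LearningThermostat()
for _ in range(5):                 # the user turns the heat up at 7 a.m. all week
    stat.record_manual_adjustment(7, 22.0)
print(round(stat.setpoint(7), 1))  # drifts from the 20.0 default toward 22.0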

Smart buildings have been designed to be more energy efficient and cost effective. Architects are motivated to use state-of-the-art sensor technology to control building
systems, such as ventilation, lighting, security, and heating. Often, the inhabitants of such
buildings are considered to be the ones at fault for wasting energy, as they may leave the
lights and heating on overnight when not needed, or forget to lock a door or window.
One benefit of having automated systems take control of building services is to reduce these
kinds of human errors—a phrase often used by engineers is to take the human “out of the
loop.” While some smart buildings and homes have improved how they are managed and cut
costs, they can also be frustrating to the user, who sometimes would like to be able to open a
window to let fresh air in or raise a blind to let in natural lighting. Taking the human out of

Figure 7.32 A brain-computer interface being used by a woman who is paralyzed to select letters
on a screen (developed by BrainGate)
Source: Brown University


the loop means that these operations are no longer available. Windows are locked or sealed,
and heating is controlled centrally.

Instead of simply introducing ever more automation that takes the human out of the
loop further, another approach is to consider the needs of the inhabitants in conjunction
with introducing smart technology. For example, a new approach that focuses on inhabitants
is called human–building interaction (HBI). It is concerned with understanding and shap-
ing people’s experiences with, and within, built environments (Alavi et al., 2019). The focus
is on human values, needs, and priorities in addressing people’s interactions with “smart”
environments.

7.3 Natural User Interfaces and Beyond

As we have seen, there are many kinds of interfaces that can be used to design for user expe-
riences. The staple for many years was the GUI, then the mobile device interface, followed
by touch, and now wearables and smart interfaces. Without question, they have been able to
support all manner of user activities. What comes next? Will other kinds of interfaces that are
projected to be more natural become more mainstream?

A natural user interface (NUI) is designed to allow people to interact with a computer
in the same way that they interact with the physical world—using their voice, hands, and
bodies. Instead of using a keyboard, mouse, or touchpad (as is the case with GUIs), NUIs
enable users to speak to machines, stroke their surfaces, gesture at them in the air, dance on
mats that detect feet movements, smile at them to get a reaction, and so on. The naturalness
refers to the use of everyday skills humans have developed and learned, such as talking, writ-
ing, gesturing, walking, and picking up objects. In theory, they should be easier to learn and should map more readily onto how people interact with the world than a GUI does.

Instead of having to remember which function keys to press to open a file, a NUI means a
person only has to raise their arm or say “open.” But how natural are NUIs? Is it more natural
to say “open” than to flick a switch when you want to open a door? And is it more natural to
raise both arms to change a channel on the TV than to press a button on a remote device or
tell it what to do by speaking to it? Whether a NUI is natural depends on a number of fac-
tors, including how much learning is required, the complexity of the app or device’s interface,
and whether accuracy and speed are needed (Norman, 2010). Sometimes a gesture is worth a
thousand words. Other times, a word is worth a thousand gestures. It depends on how many
functions the system supports.

Consider the sensor-based faucets that were described in Chapter 1. The gesture-based
interface works mostly (with the exception of people wearing black clothing that cannot be
detected) because there are only two functions: (1) turning on the water by waving one’s
hands under the tap, and (2) turning off the water by removing them from the sink. Now
think about other functions that faucets usually provide, such as controlling water tem-
perature and flow. What kind of a gesture would be most appropriate for changing the
temperature and then the flow? Would one decide on the temperature first by raising one’s
left arm and the flow by raising one’s right arm? How would someone know when to stop
raising their arm to get the right temperature? Would they need to put a hand under the tap to check? But if they put their right hand under the tap, might that have the effect of decreasing the flow? And when does the system know that the desired temperature and flow have been reached? Would it require holding both arms suspended in midair for a few seconds to register that this was the desired state? Providing these choices is a difficult design problem, and it is probably why sensor-based faucets in public bathrooms all have their temperature and flow set to a default.
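The difficulty just described can be restated in design terms: every added function multiplies the gesture-to-state mappings that the user must discover and the system must disambiguate. The following Python sketch models a hypothetical gesture-controlled faucet as a small state machine to make this growth in complexity visible; all of the gesture names are invented for the example.

# States and invented gestures for a faucet that also supports
# temperature and flow control. Even three functions force the designer
# to decide which gesture means what and when an adjustment "commits."
TRANSITIONS = {
    ("off", "hands_under_tap"): "running",
    ("running", "hands_removed"): "off",
    ("running", "raise_left_arm"): "adjust_temperature",
    ("running", "raise_right_arm"): "adjust_flow",
    ("adjust_temperature", "hold_still_2s"): "running",   # commit temperature
    ("adjust_flow", "hold_still_2s"): "running",          # commit flow
}

def step(state, gesture):
    # Unrecognized gestures leave the faucet in its current state,
    # which is itself a design decision the user must somehow learn.
    return TRANSITIONS.get((state, gesture), state)

state = "off"
for g in ["hands_under_tap", "raise_left_arm", "hold_still_2s", "hands_removed"]:
    state = step(state, g)
    print(g, "->", state)

Each additional function roughly doubles the transition table, which is another way of seeing why factory defaults win in public bathrooms.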

Our overview of different interface types in this chapter has highlighted how gestural,
voice, and other kinds of NUIs have made controlling input and interacting with digital
content easier and more enjoyable, even though sometimes they can be less than perfect. For
example, using gestures and whole-body movements has proven to be highly enjoyable as a form of input for computer games and physical exercises. Furthermore, new kinds of gesture,
voice, and touch interfaces have made the web and online tools more accessible to those who
are visually impaired. For example, the iPhone’s VoiceOver control features have empowered
visually impaired individuals to easily send email, use the web, play music, and so
on, without having to buy an expensive customized phone or screen reader. Moreover, being
able to purchase a regular phone means not being singled out for special treatment. And
while some gestures may feel cumbersome for sighted people to learn and use, they may not
be so for blind or visually impaired people. The iPhone VoiceOver press and guess feature
that reads out what you tap on the screen (for example, “messages,” “calendar,” “mail: 5
new items”) can open up new ways of exploring an application while a three-finger tap can
become a natural way to turn the screen off.

An emerging class of human–computer interfaces comprises those that rely largely on subtle,
gradual, and continuous changes triggered by information obtained implicitly from the user
together with the use of AI algorithms that are coded to learn about the user’s behavior
and preferences. These are connected with lightweight, ambient, context-aware, affective,
and augmented cognition interfaces (Solovey et al., 2014). Using brain, body, behavioral, and
environmental sensors, it is now possible to capture subtle changes in people’s cognitive and emo-
tional states in real time. This opens up new doors in human–computer interaction. In par-
ticular, it allows for information to be used as both continuous and discrete input, potentially
enabling new outputs to match and be updated with what people might want and need at any
given time. Adding AI to the mix will also enable a new type of interface to emerge that goes
beyond simply being natural and smart—one that allows people to develop new superpow-
ers that will enable them to work synergistically with technology to solve ever-more complex
problems and undertake unimaginable feats.

7.4 Which Interface?

This chapter presented an overview of the diversity of interfaces that are now available or currently being researched. There are many opportunities to design for user experiences
that are a far cry from those originally developed using the command-based interfaces of
the 1980s. An obvious question this raises is: which one should you choose, and how do you design it? In many
contexts, the requirements for the user experience that have been identified will determine
what kind of interface might be appropriate and what features to include. For example, if
a healthcare app is being developed to enable patients to monitor their dietary intake, then a mobile device that has the ability to scan barcodes and/or take pictures of food items that
can be compared with a database would be a good interface to use, enabling mobility,
effective object recognition, and ease of use. If the goal is to design a work environment to
support collocated group decision-making activities, then combining shareable technolo-
gies and personal devices that enable people to move fluidly among them would be worth considering.

But how to decide which interface is preferable for a given task or activity? For example,
is multimedia better than tangible interfaces for learning? Is voice effective as a command-
based interface? Is a multimodal interface more effective than a single media interface? Are
wearable interfaces better than mobile interfaces for helping people find information in for-
eign cities? How does VR differ from AR, and which is the ultimate interface for playing
games? In what way are tangible environments more challenging and captivating than vir-
tual worlds? Will shareable interfaces, such as interactive furniture, be better at supporting
communication and collaboration compared with using networked desktop technologies?
And so forth. These questions are currently being researched. In practice, which interface
is most appropriate, most useful, most efficient, most engaging, most supportive, and so on
will depend on the interplay of a number of factors, including reliability, social acceptability, privacy, ethics, and location concerns.

In-Depth Activity
Choose a game that you or someone you know plays a lot on a smartphone (for example,
Candy Crush Saga, Fortnite, or Minecraft). Consider how the game could be played using
different interfaces other than the smartphone’s. Select three different interfaces (for instance,
tangible, wearable, and shareable) and describe how the game could be redesigned for each of
these, taking into account the user group being targeted. For example, the tangible game could
be designed for children, the wearable interface for young adults, and the shareable interface
for older people.
1. Go through the research and design considerations for each interface and consider whether they are relevant for the game setting and what considerations they raise.

2. Describe a hypothetical scenario of how the game would be played for each of the three interfaces.

3. Consider specific design issues that will need to be addressed. For example, for the shareable surface, would it be best to have a tabletop or a wall-based surface? How will the users interact with the game elements for each of the different interfaces: by using a pen, fingertips, voice, or another input device? How do you turn a single-player game into a multiple-player one? What rules would you need to add?

4. Compare the pros and cons of designing the game using the three different interfaces with respect to how it is played on the smartphone.



Summary
This chapter provided an overview of the diversity of interfaces that can be designed for user
experiences, identifying key design issues and research questions that need to be addressed. It
has highlighted the opportunities and challenges that lie ahead for designers and researchers
who are experimenting with and developing innovative interfaces. It also explained some of
the assumptions behind the benefits of different interfaces—some that are currently supported
and others that are still unsubstantiated. The chapter presented a number of interaction tech-
niques that are particularly suited (or not) for a given interface type. It also discussed the
dilemmas facing designers when using a particular kind of interface, for example, abstract
versus realism, menu selection versus free-form text input, and human-like versus non-human-like.
Finally, it presented pointers to specific design guidelines and exemplary systems that have
been designed using a given interface.

Key Points
• Many interfaces have emerged since the WIMP/GUI era, including voice, wearable, mobile, tangible, brain–computer, smart, robot, and drone interfaces.

• A range of design and research questions need to be considered when deciding which inter-
face to use and what features to include.

• Natural user interfaces may not be as natural as graphical user interfaces—it depends on
the task, user, and context.

• An important concern that underlies the design of any kind of interface is how information
is represented to the user (be it speech, multimedia, virtual reality, augmented reality), so
that they can make sense of it with respect to their ongoing activity, for example, playing a
game, shopping online, or interacting with a pet robot.

• Increasingly, new interfaces that are context-aware or that monitor people raise ethical issues concerning what data is being collected and what it is being used for.


Further Reading

Many practical books have been published on interface design. Some have been revised into second editions. Publishers like New Riders and O’Reilly frequently offer up-to-date books for a specific interface area (for example, web or voice). Some are updated on a regular basis, while others are published when a new area emerges. There are also a number of excellent online resources, sets of guidelines, and thoughtful blogs and articles.

DASGUPTA, R. (2019) Voice User Interface Design: Moving from GUI to Mixed Modal Interaction. Apress. This guide covers the challenges of moving from GUI design to mixed-modal interactions. It describes how our interactions with devices are rapidly changing, illustrating this through a number of case studies and design principles of VUI design.

ROWLAND, C., GOODMAN, E., CHARLIER, M., LIGHT, A. and LUI, A. (2015) Designing
Connected Products. O’Reilly. This collection of chapters covers the challenges of designing
connected products that go beyond the traditional scope of interaction design and software
development. It provides a road map and covers a range of aspects, including pairing devices,
new business models, and flow of data in products.

GOOGLE Material Design https://material.io/design/ This living online document visually illustrates essential interface design principles. It is beautifully laid out, and it is very informative to click through the interactive examples that it provides. It shows how to add some physical properties to the digital world to make it feel more intuitive to use across platforms.

KRISHNA, G. (2015) The Best Interfaces Are No Interfaces. New Riders. This polemical and
funny book challenges the reader to think beyond the screen when designing new interfaces.

KRUG, S. (2014) Don’t Make Me Think! (3rd edn). New Riders Press. The third edition of
this very accessible classic guide on web design presents up-to-date principles and examples
on web design with a focus on mobile usability. It is highly entertaining with lots of great
illustrations.

NORMAN, D. (2010) Natural user interfaces are not natural. interactions, May/June, 6–10. This thought-provoking essay by Don Norman argues that what is described as natural may not actually be natural; it is still very relevant today.



INTERVIEW with
Leah Buechley

Leah Buechley is an independent designer,
engineer, and educator. She has a PhD in
computer science and a degree in physics.
She began her studies as a dance major and
has also been deeply engaged in theater,
art, and design over the years. She was the
founder and director of the High-Low Tech group at the MIT Media Lab from 2009
to 2014. She has always blended the sci-
ences and the arts in her education and her
career, as witnessed by her current work,
which consists of computer science, indus-
trial design, interaction design, art, and
electrical engineering.

What is the focus of your work?
I’m most interested in changing the culture
of technology and engineering to make it
more diverse and inclusive. To achieve that
goal, I blend computation and electronics
with a range of different materials and em-
ploy techniques drawn from art, craft, and
design. This approach leads to technol-
ogies and learning experiences that appeal
to a diverse group of people.

Can you give me some examples of how
you mesh the digital with physical materi-
als?
My creative focus for the last several years
has been computational design—a process
in which objects are designed via an algo-
rithm and then constructed with a combi-
nation of fabrication and hand building.
I’m especially excited about computational
ceramics and have been developing a set of

tools and techniques that enable people to
integrate programming and hand building
with clay.

I’ve also been working on a project
called LilyPad Arduino (or LilyPad) for
over 10 years. LilyPad is a construction
kit that enables people to embed com-
puters and electronics into fabric. It’s a set
of sewable electronic pieces, including mi-
crocontrollers, sensors, and LEDs, that are
stitched together with conductive thread.
People can use the kit to make singing pil-
lows, glow-in-the-dark handbags, and in-
teractive ball gowns.

Another example is the work my for-
mer students and I have done in paper-
based computing. My former student Jie
Qi developed a kit called Chibitronics
circuit stickers that lets you build inter-
active paper-based projects. Based on her
years of research in my group at MIT, the
kit is a set of flexible peel-and-stick elec-
tronic stickers. You can connect ultra-thin
LEDs, microcontrollers, and sensors with
conductive ink, tape, or thread to quickly
make beautiful electronic sketches.

The LilyPad and Chibitronics kits are
now used by people around the world to
learn computing and electronics. It’s been
fascinating and exciting to see this research
have a tangible impact.

Why would anyone want to wear a com-
puter in their clothing?
Computers open up new creative possibil-
ities for designers. Computers are simply

(Continued)

I N T E R V I E W W I T h L E A h B U E C h L E y

7 I N T E R F A C E S258

a new tool, albeit an especially powerful
one, in a designer’s toolbox. They allow
clothing designers to make garments that
are dynamic and interactive. Clothing that
can, for example, change color in response
to pollution levels, sparkle when a loved
one calls you on the phone, or notify you
when your blood pressure increases.

How do you involve people in your research?
I engage with people in a few different
ways. First, I design hardware and soft-
ware tools to help people build new and
different kinds of technology. The LilyPad
is a good example of this kind of work. I
hone these designs by teaching workshops
to different groups of people. And once a
tool is stable, I work hard to disseminate it
to users in the real world. The LilyPad has
been commercially available since 2007,
and it has been fascinating and exciting to
see how a group of real-world designers—
who are predominantly female—is using it
to build things like smart sportswear, plush
video game controllers, soft robots, and in-
teractive embroideries.

I also strive to be as open as possi-
ble with my own design and engineering
explorations. I document and publish as
much information as I can about the mate-
rials, tools, and processes I use. I apply an
open source approach not only to the soft-
ware and hardware I create but, as much
as I can, to the entire creative process. I
develop and share tutorials, classroom and
workshop curricula, materials references,
and engineering techniques.

What excites you most about your work?
I am infatuated with materials. There is
nothing more inspiring than a sheet of

heavy paper, a length of wool felt, a slab of
clay, or a box of old motors. My thinking
about design and technology is largely
driven by explorations of materials and
their affordances. So, materials are always
delightful. For example, the shape and sur-
face pattern of the cup in Figure 7.33 were
computationally designed. A template of
the design was then laser cut and pressed
into a flat sheet or “slab” of clay. Finally,
the clay was folded into shape and then
fired and glazed using traditional ceramic
techniques. But the real-world adoption of
tools I’ve designed and the prospect this
presents for changing technology culture
is perhaps what’s most exciting. My most
dearly held goal is to expand and diversify
technology culture, and it’s tremendously
rewarding to see evidence that my work is
doing that.

Figure 7.33 An example of a computational cup
Source: Used courtesy of Leah Buechley

Chapter 8

Data Gathering

Objectives
The main goals of the chapter are to accomplish the following:

• Discuss how to plan and run a successful data gathering program.
• Enable you to plan and run an interview.
• Empower you to design a simple questionnaire.
• Enable you to plan and carry out an observation.

8.1 Introduction

Data is everywhere. Indeed, it is common to hear people say that we are drowning in data
because there is so much of it. So, what is data? Data can be numbers, words, measurements,
descriptions, comments, photos, sketches, films, videos, or almost anything that is useful for
understanding a particular design, user needs, and user behavior. Data can be quantitative or
qualitative. For example, the time it takes a user to find information on a web page and the
number of clicks to get to the information are forms of quantitative data. What the user says
about the web page is a form of qualitative data. But what does it mean to collect these and
other kinds of data? What techniques can be used, and how useful and reliable is the data
that is collected?

This chapter presents some techniques for data gathering that are commonly used in
interaction design activities. In particular, data gathering is a central part of discovering
requirements and evaluation. Within the requirements activity, data gathering is conducted to collect sufficient, accurate, and relevant data so that design can proceed. Within evalua-
tion, data gathering captures user reactions and their performance with a system or proto-
type. All of the techniques that we will discuss can be done with little to no programming or
technical skills. Recently, techniques for scraping large volumes of data from online activi-
ties, such as Twitter posts, have become available. These and other techniques for managing
huge amounts of data, and the implications of their use, are discussed in Chapter 10, “Data
at Scale.”

Three main techniques for gathering data are introduced in this chapter: interviews,
questionnaires, and observation. The next chapter discusses how to analyze and interpret
the data collected. Interviews involve an interviewer asking one or more interviewees a
set of questions, which may be highly structured or unstructured; interviews are usually
synchronous and are often face-to-face, but they don’t have to be. Increasingly, interviews
are conducted remotely using one of the many teleconferencing systems, such as Skype or
Zoom, or on the phone. Questionnaires are a series of questions designed to be answered
asynchronously, that is, without the presence of the investigator. These questionnaires may
be paper-based or available online. Observation may be direct or indirect. Direct obser-
vation involves spending time with individuals observing their activities as they happen.
Indirect observation involves making a record of the user’s activity as it happens, to be
studied at a later date. All three techniques may be used to collect qualitative or quanti-
tative data.

Although this is a small set of basic techniques, they are flexible and can be combined
and extended in many ways. Indeed, it is important not to focus on just one data gathering
technique, if possible, but to use them in combination so as to avoid biases that are inherent
in any one approach.

8.2 Five Key Issues

Five key issues require attention for any data gathering session to be successful: goal setting,
identifying participants, the relationship between the data collector and the data provider,
triangulation, and pilot studies.

8.2.1 Setting Goals
The main reason for gathering data is to glean information about users, their behavior,
or their reaction to technology. Examples include understanding how technology fits into
family life, identifying which of two icons representing “send message” is easier to use,
and finding out whether the planned redesign for a handheld meter reader is headed in the
right direction. There are many different reasons for gathering data, and before beginning,
it is important to set specific goals for the study. These goals will influence the nature of
data gathering sessions, the data gathering techniques to be used, and the analysis to be
performed (Robson and McCartan, 2016).

The goals may be expressed more or less formally, for instance, using some structured
or even mathematical format or using a simple description such as the ones in the previous
paragraph. Whatever the format, however, they should be clear and concise. In interaction
design, it is more common to express goals for data gathering informally.


8.2.2 Identifying Participants
The goals developed for the data gathering session will indicate the types of people from
whom data is to be gathered. Those people who fit this profile are called the population
or study population. In some cases, the people from whom to gather data may be clearly
identifiable—maybe because there is a small group of users and access to each one is easy.
However, it is more likely that the participants to be included in data gathering need to be
chosen, and this is called sampling. The situation where all members of the target population
are accessible is called saturation sampling, but this is quite rare. Assuming that only a por-
tion of the population will be involved in data gathering, then there are two options: prob-
ability sampling or nonprobability sampling. In the former case, the most commonly used
approaches are simple random sampling or stratified sampling; in the latter case, the most
common approaches are convenience sampling or volunteer panels.

Random sampling can be achieved by using a random number generator or by choosing
every nth person in a list. Stratified sampling relies on being able to divide the population into
groups (for example, classes in a secondary school) and then applying random sampling. Both
convenience sampling and volunteer panels rely less on choosing the participants and more on
the participants being prepared to take part. The term convenience sampling is used to describe
a situation where the sample includes those who were available rather than those specifically
selected. Another form of convenience sampling is snowball sampling, in which a current par-
ticipant finds another participant and that participant finds another, and so on. Much like a
snowball adds more snow as it gets bigger, the sample is built up as the study progresses.
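To make the sampling mechanics just described concrete, here is a short Python sketch showing simple random sampling, the every-nth-person variant, and stratified sampling; the population and group names are made up for the example.

import random

random.seed(42)   # fixed seed so the example is reproducible

population = [f"student_{i}" for i in range(1, 101)]   # hypothetical pool

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=10)

# Choosing every nth person in a list.
n = 10
every_nth = population[::n]

# Stratified sampling: divide the population into groups (for example,
# classes in a secondary school), then sample randomly within each group.
strata = {
    "class_A": population[:40],
    "class_B": population[40:70],
    "class_C": population[70:],
}
stratified_sample = {name: random.sample(group, k=4)
                     for name, group in strata.items()}

print(simple_sample[:3], every_nth[:3], stratified_sample["class_A"])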

The crucial difference between probability and nonprobability methods is that in the
former you can apply statistical tests and generalize to the whole population, while in the
latter such generalizations are not robust. Using statistics also requires a sufficient number of
participants. Vera Toepoel (2016) provides a more detailed treatment of sampling, particu-
larly in relation to survey data.

BOX 8.1
How Many Participants Are Needed?

A common question is, how many participants are needed for a study? In general, having
more participants is better because interpretations of statistical test results can be stated with
higher confidence. What this means is that any differences found among conditions are more
likely to be caused by a genuine effect rather than being due to chance.

More formally, there are many ways to determine how many participants are needed.
Four of these are saturation, cost and feasibility analysis, guidelines, and prospective power
analysis (Caine, 2016).
• Saturation relies on data being collected until no new relevant information emerges, and so it is not possible to know the number in advance of the saturation point being reached.

• Choosing the number of participants based on cost and feasibility constraints is a practical approach and is justifiable; this kind of pragmatic decision is common in industrial projects but rarely reported in academic research.

• Guidelines may come from experts or from “local standards,” for instance, from an accepted norm in the field.

• Prospective power analysis is a rigorous method used in statistics that relies on existing quantitative data about the topic; in interaction design, this data is often unavailable, making this approach infeasible, such as when a new technology is being developed.

Kelly Caine (2016) investigated the sample size (number of participants) for papers published at the international Computer-Human Interaction (CHI) conference in 2014. She found that several factors affected the sample size, including the method being used and whether the data was collected in person or remotely. In this set of papers, the sample size varied from 1 to 916,000, with the most common size being 12. A “local standard” for interaction design would therefore suggest 12 as a rule of thumb.
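For readers curious about what a prospective power analysis looks like in practice, the following sketch uses the statsmodels library to ask how many participants a two-condition comparison would need; the effect size of 0.5 is an illustrative assumption standing in for the existing quantitative data that the method requires.

# Prospective power analysis for comparing two independent groups,
# assuming the statsmodels package is installed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium effect (Cohen's d); illustrative only
    alpha=0.05,        # conventional significance level
    power=0.8,         # conventional target power
)
print(f"participants needed per condition: {n_per_group:.0f}")   # about 64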


8.2.3 Relationship with Participants
One significant aspect of any data gathering is the relationship between the person (people)
doing the gathering and the person (people) providing the data. Making sure that this rela-
tionship is clear and professional will help to clarify the nature of the study. How this is
achieved varies in different countries and different settings. In the United States and United
Kingdom, for example, it is achieved by asking participants to sign an informed consent
form, while in Scandinavia such a form is not required. The details of this form will vary, but
it usually asks the participants to confirm that the purpose of the data gathering and how the
data will be used have been explained to them and that they are willing to continue. It usually
explains that their data will be private and kept in a secure place. It also often includes a
statement that participants may withdraw at any time and that in this case none of their data
will be used in the study.

The informed consent form is intended to protect the interests of both the data gatherer
and the data provider. The gatherer wants to know that the data they collect can be used in
their analysis, presented to interested parties, and published in reports. The data provider
wants reassurance that the information they give will not be used for other purposes or in
any context that would be detrimental to them. For example, they want to be sure that per-
sonal contact information and other personal details are not made public. This is especially
true when people with disabilities or children are being interviewed. In the case of children,
using an informed consent form reassures parents that their children will not be asked threat-
ening, inappropriate, or embarrassing questions, or be asked to look at disturbing or violent
images. In these cases, parents are asked to sign the form. Figure 8.1 shows an example of a
typical informed consent form.

This kind of consent is also not generally required when gathering requirements data
for a commercial company where a contract usually exists between the data collector and
the data provider. An example is where a consultant is hired to gather data from company
staff during the course of discovering requirements for a new interactive system to support
timesheet entry. The employees of this company would be the users of the system, and the
consultant would therefore expect to have access to the employees to gather data about
the timesheet activity. In addition, the company would expect its employees to cooper-
ate in this exercise. In this case, there is already a contract in place that covers the data gathering activity, and therefore an informed consent form is less likely to be required. As
with most ethical issues, the important thing is to consider the situation and make a judg-
ment based on the specific circumstances. Increasingly, projects and organizations that
collect personal data from people need to demonstrate that it is protected from unauthorized

Crowdsourcing Design for Citizen Science Organizations

SHORT VERSION OF CONSENT FORM for participants at the University of Maryland – 18 YEARS AND OLDER

You are invited to participate in a research project being conducted by the researchers listed on the bottom of the page. In order for us to be allowed to use any data you wish to provide, we must have your consent.

In the simplest terms, we hope you will use the mobile phone, tabletop, and project website at the University of Maryland to

• Take pictures
• Share observations about the sights you see on campus
• Share ideas that you have to improve the design of the phone or tabletop application or website
• Comment on pictures, observations, and design ideas of others

The researchers and others using CampusNet will be able to look at your comments and pictures on the tabletop and/or website, and we may ask if you are willing to answer a few more questions (either on paper, by phone, or face-to-face) about your whole experience. You may stop participating at any time.

A long version of this consent form is available for your review and signature, or you may opt to sign this shorter one by checking off all the boxes that reflect your wishes and signing and dating the form below.

___ I agree that any photos I take using the CampusNet application may be uploaded to the tabletop at the University of Maryland and/or a website now under development.
___ I agree to allow any comments, observations, and profile information that I choose to share with others via the online application to be visible to others who use the application at the same time or after me.
___ I agree to be videotaped/audiotaped during my participation in this study.
___ I agree to complete a short questionnaire during or after my participation in this study.

NAME [Please print]
SIGNATURE
DATE

[Contact information of Senior Researcher responsible for the project]

Figure 8.1 Example of an informed consent form


access. For example, the European Union’s General Data Protection Regulation (GDPR)
came into force in May 2018. It applies to all EU organizations and offers the individual
unprecedented control over their personal data.
For more information about GDPR and data protection law in Europe and the United Kingdom, see https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/.

Incentives to take part in data gathering sessions may also be needed. For example, if
there is no clear advantage to the respondents, incentives may persuade them to take part; in
other circumstances, respondents may see it as part of their job or as a course requirement
to take part. For example, if sales executives are asked to complete a questionnaire
about a new mobile sales application, then they are likely to agree if the new device will
impact their day-to-day lives. In this case, the motivation for providing the required informa-
tion is clear. However, when collecting data to understand how appealing a new interactive
app is for school children, different incentives would be appropriate. Here, the advantage for
individuals to take part is not so obvious.

8.2.4 Triangulation
Triangulation is a term used to refer to the investigation of a phenomenon from (at least)
two different perspectives (Denzin, 2006; Jupp, 2006). Four types of triangulation have been
defined (Jupp, 2006).

• Triangulation of data means that data is drawn from different sources at different times, in
different places, or from different people (possibly by using a different sampling technique).

• Investigator triangulation means that different researchers (observers, interviewers, and so
on) have been involved in collecting and interpreting the data.

• Triangulation of theories means the use of different theoretical frameworks through which
to view the data or findings.

• Methodological triangulation means to employ different data gathering techniques.

The last of these is the most common form of triangulation—to validate the results of
some inquiry by pointing to similar results yielded through different perspectives. However,
validation through true triangulation is difficult to achieve. Different data gathering methods
result in different kinds of data, which may or may not be compatible. Using different theo-
retical frameworks may or may not result in complementary findings, but to achieve theo-
retical triangulation would require the theories to have similar philosophical underpinnings.
Using more than one data gathering technique, and more than one data analysis approach, is
good practice because it leads to insights from the different approaches even though it may
not be achieving true triangulation.

Triangulation has sometimes been used to make up for the limitations of another type of
data collection (Mackay and Fayard, 1997). This is a different rationale from the
original idea, which has more to do with the verification and reliability of data. Furthermore, a kind of triangulation is increasingly being used in crowdsourcing and other studies involv-
ing large amounts of data to check that the data collected from the original study is real and
reliable. This is known as checking for “ground truth.”

8.2.5 Pilot Studies
A pilot study is a small trial run of the main study. The aim is to make sure that the proposed
method is viable before embarking on the real study. For example, the equipment and instruc-
tions can be checked, the questions for an interview or in a questionnaire can be tested for
clarity, and an experimental procedure can be confirmed as viable. This can identify potential
problems in advance so that they can be corrected. Distributing 500 questionnaires and then
being told that two of the questions were very confusing wastes time, annoys participants,
and is an expensive error that could be avoided by doing a pilot study.

If it is difficult to find participants or access to them is limited, asking colleagues or peers
to participate can work as an alternative for a pilot study. Note that anyone involved in a
pilot study cannot be involved in the main study itself. Why? Because they will know more
about the study and this can distort the results.

For an example of methodological triangulation, see https://medium.com/design-voices/the-power-of-triangulation-in-design-research-64a0957d47d2.

For more information about ground truth and how ground truth databases are used to check data obtained in autonomous driving, see “The HCI Benchmark Suite: Stereo and Flow Ground Truth with Uncertainties for Urban Autonomous Driving” at https://ieeexplore.ieee.org/document/7789500/.

BOX 8.2
Data, Information, and Conclusions

There is an important difference between raw data, information, and conclusions. Data is
what you collect; this is then analyzed and interpreted, and conclusions are drawn. Information is gained from analyzing and interpreting the data, and conclusions represent the actions to
be taken based on the information. For example, consider a study to determine whether a
new screen layout for a local leisure center has improved the user’s experience when booking
a swimming lesson. In this case, the data collected might include a set of times to complete
the booking, user comments regarding the new screen layout, biometric readings of the user’s

(Continued)
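To echo the distinction the box draws, the short Python sketch below shows (with invented numbers) how raw booking-time data might be reduced to the kind of information described: a comparison of mean task times for long-term and newer users.

from statistics import mean

# Raw data: invented booking times in seconds, grouped by how long each
# participant has used the leisure center (the grouping is illustrative).
booking_times = {
    "over_5_years": [95, 110, 102, 98, 120],
    "under_2_years": [60, 55, 70, 58, 62],
}

# Analysis turns raw data into information: a comparison between groups.
for group, times in booking_times.items():
    print(f"{group}: mean booking time {mean(times):.0f}s")

# The conclusion, for instance "long-term users need extra help adapting
# to the new layout," is a judgment made on top of this information; the
# numbers do not state it by themselves.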



8.3 Data Recording

Capturing data is necessary so that the results of a data gathering session can be analyzed
and shared. Some forms of data gathering, such as questionnaires, diaries, interaction
logging, scraping, and collecting work artifacts, are self-documenting and no further
recording is necessary. For other techniques, however, there is a choice in recording
approaches. The most common of these are taking notes, taking photographs, and recording audio
or video. Often, several data recording approaches are used together. For example, an
interview may be voice recorded, and then to help the interviewer in later analysis, a
photograph of the interviewee may be taken to remind the interviewer about the context
of the discussion.

Which data recording approaches are used will depend on the goal of the study and
how the data will be used, the context, the time and resources available, and the sensitivity
of the situation; the choice of data recording approach will affect the level of detail collected
and how intrusive the data gathering will be. In most settings, audio recording, photographs, and
notes will be sufficient. In others, it is essential to collect video data so as to record in detail
the intricacies of the activity and its context. Three common data recording approaches are
discussed next.

8.3.1 Notes Plus Photographs
Taking notes (by hand or by typing) is the least technical and most flexible way of record-
ing data, even if it seems old-fashioned. Handwritten notes may be transcribed in whole or
in part, and while this may seem tedious, it is usually the first step in analysis, and it gives
the analyst a good overview of the quality and contents of the data collected. Tools exist for
supporting data collection and analysis, but the advantages of handwritten notes include
that using pen and paper can be less intrusive than typing and is more flexible, for example,
for drawing diagrams of work layouts. Furthermore, researchers often comment that writ-
ing notes helps them to focus on what is important and starts them thinking about what the
data is telling them. The disadvantages of notes include that it can be difficult to capture
the right highlights, and it can be tiring to write and listen or observe at the same time.
It is easy to lose concentration, biases creep in, handwriting can be difficult to decipher, and
the speed of writing is limited. Working with a colleague can reduce some of these problems
while also providing another perspective.



If appropriate, photograph(s) and short videos (captured via smartphones or other hand-
held devices) of artifacts, events, and the environment can supplement notes and hand-drawn
sketches, providing that permission has been given to collect data using these approaches.

8.3.2 Audio Plus Photographs
Audio recording is a useful alternative to note-taking and is less intrusive than video. Dur-
ing observation, it allows observers to focus on the activity rather than on trying to capture
every spoken word. In an interview, it allows the interviewer to pay more attention to the
interviewee rather than trying to take notes as well as listening. It isn’t always necessary to
transcribe all of the data collected—often only sections are needed, depending on the goals
of the study. Many studies do not need a great level of detail, and instead recordings are used
as a reminder and as a source of anecdotes for reports. It is surprising how evocative audio
recordings of people or places from the data session can be, and those memories provide
added context to the analysis. If audio recording is the main or only data collection tech-
nique, then the quality needs to be good; performing interviews remotely, for example using
Skype, can be compromised because of poor connections and acoustics. Audio recordings are
often supplemented with photographs.

8.3.3 Video
Smartphones can be used to collect short video clips of activity. They are easy to use and
less obtrusive than setting up sophisticated cameras. But there are occasions when a video is
needed for long periods of time or when holding a phone is unreliable, for example, recording
how designers collaborate in a workshop or how teens interact in a “makerspace,”
in which people can work on projects while sharing ideas, equipment, and knowledge. For
these kinds of sessions, more professional video equipment that clearly captures both visual
and audio data is more appropriate. Other ways of recording facial expressions together with
verbal comments are also being used, such as GoToMeeting, which can be operated both in-
person and remotely. Using such systems can create additional planning issues that have to
be addressed to minimize how intrusive the recording is, while at the same time making sure
that the data is of good quality (Denzin and Lincoln, 2011). When considering whether to use
a camera, Heath et al. (2010) suggest the following issues to consider:

• Deciding whether to fix the camera’s position or use a roving recorder. This decision
depends on the activity being recorded and the purpose to which the video data will be put,
for example, for illustrative purposes only or for detailed data analysis. In some cases, such
as pervasive games, a roving camera is the only way to capture the required action. For
some studies, the video on a smartphone may be adequate and require less effort to set up.

• Deciding where to point the camera in order to capture what is required. Heath and his
colleagues suggest carrying out fieldwork for a short time before starting to video record
in order to become familiar with the environment and be able to identify suitable recording
locations. Involving the participants themselves in deciding what and where to record also
helps to capture relevant action.

• Understanding the impact of the recording on participants. It is often assumed that video
recording will have an impact on participants and their behavior. However, it is worth taking
an empirical approach to this issue and examining the data itself to see whether there is any
evidence of people changing their behavior such as orienting themselves toward the camera.

ACTIVITY 8.1

Imagine that you are a consultant who is employed to help develop a new augmented reality garden planning tool to be used by amateur and professional garden designers. The goal is to find out how garden designers use an early prototype as they walk around their clients’ gardens sketching design ideas, taking notes, and asking the clients about what they like and how they and their families use the garden. What are the advantages and disadvantages of the three approaches (note-taking, audio recording with photographs, and video) for data recording in this environment?

Comment

Handwritten notes do not require specialized equipment. They are unobtrusive and flexible but difficult to do while walking around a garden. If it starts to rain, there is no equipment to get wet, but notes may get soggy and difficult to read (and write!). Garden planning is a highly visual, aesthetic activity, so supplementing notes with photographs would be appropriate.

Video captures more information, for example, continuous panoramas of the landscape, what the designers are seeing, sketches, comments, and so on, but it is more intrusive and will also be affected by the weather. Short video sequences recorded on a smartphone may be sufficient as the video is unlikely to be used for detailed analysis. Audio may be a good compromise, but synchronizing audio with activities such as looking at sketches and other artifacts later can be tricky and error prone.


8.4 Interviews

Interviews can be thought of as a “conversation with a purpose” (Kahn and Cannell, 1957). How
much like an ordinary conversation the interview will be depends on the type of interview. There
are four main types of interviews: open-ended or unstructured, structured, semi-structured, and
group interviews (Fontana and Frey, 2005). The first three types are named according to how
much control the interviewer imposes on the conversation by following a predetermined set of
questions. The fourth type, which is often called a focus group, involves a small group guided by
a facilitator. The facilitation may be quite informal or follow a structured format.

The most appropriate approach to interviewing depends on the purpose of the interview,
the questions to be addressed, and the interaction design activity. For example, if the goal is first
to gain impressions about users’ reactions to a new design concept, then an informal, open-
ended interview is often the best approach. But if the goal is to get feedback about a particular
design feature, such as the layout of a new web browser, then a structured interview or question-
naire is often better. This is because the goals and questions are more specific in the latter case.

8.4.1 Unstructured Interviews
Open-ended or unstructured interviews are at one end of a spectrum of how much control the
interviewer has over the interview process. They are exploratory and are similar to conversa-
tions around a particular topic; they often go into considerable depth. Questions posed by the interviewer are open, meaning that there is no particular expectation about the format or
content of answers. For example, the first question asked of all participants might be: “What
are the pros and cons of having a wearable?” Here, the interviewee is free to answer as fully
or as briefly as they want, and both the interviewer and interviewee can steer the interview.
For example, often the interviewer will say: “Can you tell me a bit more about . . .” This is
referred to as probing.

Despite being unstructured and open, the interviewer needs a plan of the main topics to
be covered so that they can make sure that all of the topics are discussed. Going into an inter-
view without an agenda should not be confused with being open to hearing new ideas (see
section 8.4.5, “Planning and Conducting an Interview”). One of the skills needed to conduct
an unstructured interview is getting the balance right between obtaining answers to relevant
questions and being prepared to follow unanticipated lines of inquiry.

A benefit of unstructured interviews is that they generate rich data that is often interre-
lated and complex, that is, data that provides a deep understanding of the topic. In addition,
interviewees may mention issues that the interviewer has not considered. A lot of unstruc-
tured data is generated, and the interviews will not be consistent across participants since
each interview takes on its own format. Unstructured interviews can be time-consuming
to analyze, but they can also produce rich insights. Themes can be identified across inter-
views using techniques from grounded theory and other analytic approaches, as discussed in
Chapter 9, “Data Analysis, Interpretation, and Presentation.”

8.4.2 Structured Interviews
In structured interviews, the interviewer asks predetermined questions similar to those in
a questionnaire (see section 8.5, “Questionnaires”), and the same questions are used with
each participant so that the study is standardized. The questions need to be short and clearly
worded, and they are typically closed questions, which means that they require an answer
from a predetermined set of alternatives. (This may include an “other” option, but ideally
this would not be chosen often.) Closed questions work well if the range of possible answers
is known or if participants don’t have much time. Structured interviews are useful only when
the goals are clearly understood and specific questions can be identified. Example questions
for a structured interview might be the following:

• “Which of the following websites do you visit most frequently: Amazon.com, Google.com,
or msn.com?”

• “How often do you visit this website: every day, once a week, once a month, less often than
once a month?”

• “Do you ever purchase anything online: Yes/No? If your answer is Yes, how often do you
purchase things online: every day, once a week, once a month, less frequently than once
a month?”

Questions in a structured interview are worded the same for each participant and are
asked in the same order.

8.4.3 Semi-structured Interviews
Semi-structured interviews combine features of structured and unstructured interviews and
use both closed and open questions. The interviewer has a basic script for guidance so that
the same topics are covered with each interviewee. The interviewer starts with preplanned


questions and then probes the interviewee to say more until no new relevant information is
forthcoming. Here’s an example:

Which music websites do you visit most frequently?
Answer: Mentions several but stresses that they prefer hottestmusic.com
Why?
Answer: Says that they like the site layout
Tell me more about the site layout.
Answer: Silence, followed by an answer describing the site’s layout
Anything else that you like about the site?
Answer: Describes the animations
Thanks. Are there any other reasons for visiting this site so often that you haven’t mentioned?

It is important not to pre-empt an answer by phrasing a question to suggest that a par-
ticular answer is expected. For example, “You seemed to like this use of color . . .” assumes
that this is the case and will probably encourage the interviewee to answer that this is true
so as not to offend the interviewer. Children are particularly prone to behave in this way (see Box 8.3, "Working with Different Kinds of Users"). The body language of the interviewer, for
example whether they are smiling, scowling, looking disapproving, and so forth, can have a
strong influence on whether the interviewee will agree with a question, and the interviewee
needs to have time to speak and not be rushed.

Probes are a useful device for getting more information, especially neutral probes such as "Do you want to tell me anything else?" Prompts, which remind interviewees of terms or names they have forgotten, also help to move the interview along. Semi-structured interviews are intended to be broadly replicable, so probing and prompting should move the interview along without introducing bias.

BOX 8.3
Working with Different Kinds of Users

Focusing on the needs of users and including users in the design process is a central theme of
this book. But users vary considerably based on their age, educational, life, and cultural experi-
ences, and physical and cognitive abilities. For example, children think and react to situations
differently than adults. Therefore, if children are to be included in data gathering sessions, then
child-friendly methods are needed to make them feel at ease so that they will communicate with
you. For very young children of pre-reading or early reading age, data gathering sessions need
to rely on images and chat rather than written instructions or questionnaires. Researchers who
work with children have developed sets of “smileys,” such as those shown in Figure 8.2, so that
children can select the one that most closely represents their feelings (see Read et al., 2002).

Figure 8.2 A smileyometer gauge for early readers, showing five smileys labeled Awful, Not very good, Good, Really good, and Brilliant
Source: Read et al. (2002)


Similarly, different approaches are needed when working with users from different cultures (Winschiers-Theophilus et al., 2012). In their work with local communities in Namibia, Heike Winschiers-Theophilus and Nicola Bidwell (2013) had to find ways of communicating with local participants, which included developing a variety of visual and other techniques to communicate ideas and collect data about the collective understanding and feelings inherent in the local cultures of the people with whom they worked.

Laurianne Sitbon and Shanjana Farhin (2017) report a study in which researchers interacting with people with intellectual disabilities involved caregivers who knew each participant well and could make the researchers' questions more concrete, and hence more understandable, for the participants. For example, when the interviewer assumed that a participant understood the concept of a phone app providing information about bus times, the caregiver made the question more concrete by relating the app to familiar people and circumstances and bringing in a personal example (for instance, "So you don't have to ring your mom to say 'Mom, I am lost'").

Another group of technology users is studied by the field of Animal-Computer Interaction (Mancini et al., 2017). Data gathering with animals poses additional and different challenges. For example, in their study of dogs' attention to TV screens, Ilyena Hirskyj-Douglas et al. (2017) used a combination of observation and tracking equipment to capture when a dog turns their head. But interpreting the data, or checking that the interpretation is accurate, requires animal behavior expertise.

The examples in Box 8.3 demonstrate that technology developers need to adapt their data collection techniques to suit the participants with whom they work. As the saying goes, "One size doesn't fit all."

8.4.4 Focus Groups
Interviews are often conducted with one interviewer and one interviewee, but it is also common to interview people in groups. One form of group interview that is sometimes used in interaction design activities is the focus group. Normally, three to ten people are involved, and the discussion is led by a trained facilitator. Participants are selected to provide a representative sample of the target population. For example, in the evaluation of a university website, administrators, faculty, and students may form three separate focus groups because they use the web for different purposes. In requirements activities, a focus group may be held in order to identify conflicts in expectations or terminology from different stakeholders.


The benefit of a focus group is that it allows diverse or sensitive issues to be raised that
might otherwise be missed, for example in the requirements activity to understand multiple
points within a collaborative process or to hear different user stories (Unger and Chandler,
2012). The method is more appropriate for investigating shared issues rather than individual
experiences. Focus groups enable people to put forward their own perspectives. A preset
agenda is developed to guide the discussion, but there is sufficient flexibility for the facili-
tator to follow unanticipated issues as they are raised. The facilitator guides and prompts
discussion, encourages quiet people to participate, and stops verbose ones from dominating
the discussion. The discussion is usually recorded for later analysis, and participants may be
invited to explain their comments more fully at a later date.

The format of focus groups can be adapted to fit within local cultural settings. For
example, a study with the Mbeere people of Kenya aimed to find out how water was being
used, any plans for future irrigation systems, and the possible role of technology in water
management (Warrick et al., 2016). The researcher met with the elders from the commu-
nity, and the focus group took the form of a traditional Kenyan “talking circle,” in which
the elders sit in a circle and each person gives their opinions in turn. The researcher, who
was from the Mbeere community, knew that it was impolite to interrupt or suggest that the
conversation needed to move along, because traditionally each person speaks for as long as
they want.

8.4.5 Planning and Conducting an Interview
Planning an interview involves developing the set of questions or topics to be covered, col-
lating any documentation to give to the interviewee (such as a consent form or project descrip-
tion), checking that recording equipment works, structuring the interview, and organizing a
suitable time and place.

Developing Interview Questions
Questions may be open-ended (or open) or closed-ended (or closed). Open questions are best
suited where the goal of the session is exploratory; closed questions are best suited where
the possible answers are known in advance. An unstructured interview will usually consist
mainly of open questions, while a structured interview will usually consist of closed ques-
tions. A semi-structured interview may use a combination of both types.

Focus groups can be useful, but only if used for the right kind of activities. For a discussion of when focus groups don't work, see the following links:
https://www.nomensa.com/blog/2016/are-focus-groups-useful-research-technique-ux
http://gerrymcgovern.com/why-focus-groups-dont-work/



The following guidelines help in developing interview questions (Robson and
McCartan, 2016):

• Long or compound questions can be difficult to remember or confusing, so split them
into two separate questions. For example, instead of “How do you like this smartphone
app compared with previous ones that you have used?” say, “How do you like this
smartphone app?” “Have you used other smartphone apps?” If so, “How did you like
them?” This is easier for the interviewee to respond to and easier for the interviewer
to record.

• Interviewees may not understand jargon or complex language and might be too embar-
rassed to admit it, so explain things to them in straightforward ways.

• Try to keep questions neutral, both when preparing the interview script and in conversa-
tion during the interview itself. For example, if you ask “Why do you like this style of
interaction?” this question assumes that the person does like it and will discourage some
interviewees from stating their real feelings.

DILEMMA
What They Say and What They Do

What users say isn’t always what they do. People sometimes give the answers that they think
show them in the best light, they may have forgotten what happened, or they may want to
please the interviewer by answering in the way they think will satisfy them. This may be
problematic when the interviewer and interviewee don’t know each other, especially if the
interview is being conducted remotely by Skype, Cisco Webex, or another digital conferenc-
ing system.

For example, Yvonne Rogers et al. (2010) conducted a study to investigate whether a
set of twinkly lights embedded in the floor of an office building could persuade people to
take the stairs rather than the lift (or elevator). In interviews, participants told the researchers that they did not change their behavior, but logged data showed that their behavior did,
in fact, change significantly. So, can interviewers believe all of the responses they get? Are the
respondents telling the truth, or are they simply giving the answers that they think the inter-
viewer wants to hear?

It isn’t possible to avoid this behavior, but an interviewer can be aware of it and reduce
such biases by choosing questions carefully, by getting a large number of participants, or by
using a combination of data gathering techniques.


ACTIVITY 8.2
Several devices are available for reading ebooks, watching movies, and browsing photographs
(see Figure 8.3). The design differs between makes and models, but they are all aimed at pro-
viding a comfortable user experience. An increasing number of people also read books and
watch movies on their smartphones, and they may purchase phones with larger screens for
this purpose.

The developers of a new device for reading books online want to find out how appeal-
ing it will be to young people aged 16–18, so they have decided to conduct some interviews.
1. What is the goal of this data gathering session?
2. Suggest ways of recording the interview data.

Figure 8.3 Devices for reading ebooks, watching movies, and browsing photographs: (a) Sony's eReader, (b) Amazon's Kindle, (c) Apple's iPad, and (d) Apple's iPhone
Source: (a) Sony Europe Limited, (b) Martyn Landi / PA Archive / PA Images, (c) Mark Lennihan / AP Images, and (d) Helen Sharp

3. Suggest a set of questions for use in an unstructured interview that seeks to understand the appeal of reading books online to young people in the 16–18 year old age group.

4. Based on the results of the unstructured interviews, the developers of the new device have found that an important acceptance factor is whether the device can be handled easily. Write a set of semi-structured interview questions to evaluate this aspect based on an initial prototype, and run a pilot interview with two of your peers. Ask them to comment on your questions and refine them based on their comments.

Comment
1. The goal is to understand what makes devices for reading books online appealing to people aged 16–18.

2. Audio recording will be less cumbersome and distracting than taking notes, and all important points will be captured. Video recording is not needed in this initial interview as it isn't necessary to capture any detailed interactions. However, it would be useful to take photographs of any devices referred to by the interviewee.

3. Possible questions include the following: Why do you read books online? Do you ever read print-based books? If so, what makes you choose to read online versus a print-based format? Do you find reading a book online comfortable? In what way(s) does reading online versus reading from print affect your ability to become engrossed in the story you are reading?

4. Semi-structured interview questions may be open or closed-ended. Some closed-ended questions that you might ask include the following:
• Have you used any kind of device for reading books online before?
• Would you like to read a book online using this device?
• In your opinion, is the device easy to handle?

Some open-ended questions, with follow-on probes, include the following:
• What do you like most about the device? Why?
• What do you like least about the device? Why?
• Please give me an example of where the device was uncomfortable or difficult to use.

It is helpful when collecting answers to closed-ended questions to list possible responses together with boxes that can be checked. Here's one way to convert some of the questions from Activity 8.2:

1. Have you used a device for reading books online before? (Explore previous knowledge.)
Interviewer checks box: □ Yes □ No □ Don't remember/know

2. Would you like to read a book using a device designed for reading online? (Explore initial reaction; then explore the response.)
Interviewer checks box: □ Yes □ No □ Don't know

3. Why?
If the response is "Yes" or "No," the interviewer asks, "Which of the following statements represents your feelings best?"

For "Yes," interviewer checks one of these boxes:
⬜ I don't like carrying heavy books.
⬜ This is fun/cool.
⬜ My friend told me they are great.
⬜ It's the way of the future.
⬜ Another reason (interviewer notes the reason).

For "No," interviewer checks one of these boxes:
⬜ I don't like using gadgets if I can avoid it.
⬜ I can't read the screen clearly.
⬜ I prefer the feel of paper.
⬜ Another reason (interviewer notes the reason).

4. In your opinion, is the device for reading online easy to handle or cumbersome?
Interviewer checks one of these boxes:

⬜ Easy to handle
⬜ Cumbersome
⬜ Neither

Running the Interview
Before starting, make sure that the goals of the interview have been explained to the inter-
viewee and that they are willing to proceed. Finding out about the interviewee and their
environment before the interview will make it easier to put them at ease, especially if it is an
unfamiliar setting.

During the interview, it is better to listen more than to talk, to respond with sympathy
but without bias, and to appear to enjoy the interview. The following is a common sequence
for an interview (Robson and McCartan, 2016):

1. An introduction in which the interviewer introduces themselves and explains why they
are doing the interview, reassures interviewees regarding any ethical issues, and asks
if they mind being recorded, if appropriate. This should be exactly the same for each
interviewee.

2. A warm-up session where easy, nonthreatening questions come first. These may include
questions about demographic information, such as “What area of the country do you
live in?”

3. A main session in which the questions are presented in a logical sequence, with the more
probing ones at the end. In a semi-structured interview, the order of questions may vary
between participants, depending on the course of the conversation, how much probing is
done, and what seems more natural.

4. A cooling-off period consisting of a few easy questions (to defuse any tension that may
have arisen).

5. A closing session in which the interviewer thanks the interviewee and switches off the
recorder or puts their notebook away, signaling that the interview has ended.


8.4.6 Other Forms of Interview
Conducting face-to-face interviews and focus groups can be impractical, but the prevalence
of Skype, Cisco WebEx, Zoom, and other digital conferencing systems, email, and phone-
based interactions (voice or chat), sometimes with screen-sharing software, make remote
interviewing a good alternative. These are carried out in a similar fashion to face-to-face
sessions, but poor connections and acoustics can cause different challenges, and participants
may be tempted to multitask rather than focus on the session at hand. Advantages of remote
focus groups and interviews, especially when done through audio-only channels, include the
following:

• The participants are in their own environment and are more relaxed.
• Participants don’t have to travel.
• Participants don’t need to worry about what they wear.
• For interviews involving sensitive issues, interviewees can remain anonymous.

In addition, participants can leave the conversation whenever they want to by just cut-
ting the connection, which adds to their sense of security. From the interviewer’s perspective,
a wider set of participants can be reached easily, but a potential disadvantage is that the
facilitator does not have a good view of the interviewees’ body language.

Retrospective interviews, that is, interviews that reflect on an activity or a data gathering
session in the recent past, may be conducted with participants to check that the interviewer
has correctly understood what was happening. This is a common practice in observational
studies where it is sometimes referred to as member checking.

8.4.7 Enriching the Interview Experience
Face-to-face interviews often take place in a neutral location away from the interviewee’s
normal environment. This creates an artificial context, and it can be difficult for interviewees
to give full answers to the questions posed. To help combat this, interviews can be enriched by using props, such as personas, prototypes, or work artifacts that the interviewee or interviewer brings along, or descriptions of common tasks (examples of these kinds of props are
scenarios and prototypes, which are covered in Chapter 11, “Discovering Requirements,”
and Chapter 12, “Design, Prototyping, and Construction”). These props can be used to pro-
vide context for the interviewees and help to ground the data in a real setting. Figure 8.4 illustrates the use of personas in a focus group setting.

Figure 8.4 Enriching a focus group with personas displayed on the wall for all participants to see

For more information and some interesting thoughts on remote usability testing, see "The Hidden Benefits of Remote Research" at http://www.uxbooth.com/articles/hidden-benefits-remote-research/.


As another example, Clara Mancini et al. (2009) used a combination of questionnaire
prompts and deferred contextual interviews when investigating mobile privacy. A simple
multiple-choice questionnaire was sent electronically to the participants’ smartphones, and
they answered the questions using these devices. Interviews about the recorded events were
conducted later, based on the questionnaire answers given at the time of the event.

8.5 Questionnaires

Questionnaires are a well-established technique for collecting demographic data and users’
opinions. They are similar to interviews in that they can have closed or open-ended questions,
but once a questionnaire is produced, it can be distributed to a large number of participants
without requiring additional data gathering resources. Thus, more data can be collected than
would normally be possible in an interview study. Furthermore, participants who are located
in remote locations or those who cannot attend an interview at a particular time can be
involved more easily. Often a message is sent electronically to potential participants directing
them to an online questionnaire.

Effort and skill are needed to ensure that questions are clearly worded and the data col-
lected can be analyzed efficiently. Well-designed questionnaires are good for getting answers
to specific questions from a large group of people. Questionnaires can be used on their own or in conjunction with other methods to clarify or deepen understanding. For example, infor-
mation obtained through interviews with a small selection of interviewees might be corrobo-
rated by sending a questionnaire to a wider group to confirm the conclusions.

Questionnaire questions and structured interview questions are similar, so which technique
is used when? Essentially, the difference lies in the motivation of the respondent to answer the
questions. If their motivation is high enough to complete a questionnaire without anyone else
present, then a questionnaire will be appropriate. On the other hand, if the respondents need
some persuasion to answer the questions, a structured interview format would be better. For
example, structured interviews are easier and quicker to conduct if people will not stop to com-
plete a questionnaire, such as at a train station or while walking to their next meeting.

It can be harder to develop good questionnaire questions compared with structured
interview questions because the interviewer is not available to explain them or to clarify any
ambiguities. Because of this, it is important that questions are specific; when possible, ask
closed-ended questions and offer a range of answers, including a “no opinion” or “none of
these” option. Finally, use negative questions carefully, as they can be confusing and may lead
to false information. Some questionnaire designers, however, use a mixture of negative and
positive questions deliberately because it helps to check the users’ intentions.

8.5.1 Questionnaire Structure
Many questionnaires start by asking for basic demographic information (gender, age, place
of birth) and details of relevant experience (the number of hours a day spent searching on the
Internet, the level of expertise within the domain under study, and so on). This background
information is useful for putting the questionnaire responses into context. For example, if two
responses conflict, these different perspectives may be because of their level of experience—a
group of people who are using a social networking site for the first time are likely to express
different opinions than another group with five years’ experience of using such sites. However,
only contextual information that is relevant to the study goal needs to be collected. For exam-
ple, it is unlikely that a person’s height will provide relevant context to their responses about
Internet use, but it might be relevant for a study concerning wearables.

Specific questions that contribute to the data-gathering goal usually follow these demo-
graphic questions. If the questionnaire is long, the questions may be subdivided into related
topics to make it easier and more logical to complete.

The following is a checklist of general advice for designing a questionnaire:

• Think about the ordering of questions. The impact of a question can be influenced by
question order.

• Consider whether different versions of the questionnaire are needed for different populations.
• Provide clear instructions on how to complete the questionnaire, for example, whether
answers can be saved and completed later. Aim for both careful wording and good typography.

• Think about the length of the questionnaire, and avoid questions that don’t address the
study goals.

• If the questionnaire has to be long, consider allowing respondents to opt out at different stages.
It is usually better to get answers to some sections than no answers at all because of dropout.

• Think about questionnaire layout and pacing; for instance, strike a balance between using
white space, or individual web pages, and the need to keep the questionnaire as compact
as possible.


8.5.2 Question and Response Format
Different formats of question and response can be chosen. For example, with a closed-ended
question, it may be appropriate to indicate only one response, or it may be appropriate to
indicate several. Sometimes, it is better to ask users to locate their answer within a range.
Selecting the most appropriate question and response format makes it easier for respondents
to answer clearly. Some commonly used formats are described next.

Check Boxes and Ranges
The range of answers to demographic questions is predictable. Nationality, for example, has
a finite number of alternatives, and asking respondents to choose a response from a prede-
fined list makes sense for collecting this information. A similar approach can be adopted if
details of age are needed. But since some people do not like to give their exact age, many
questionnaires ask respondents to specify their age as a range. A common design error arises
when the ranges overlap. For example, specifying two ranges as 15–20 and 20–25 will cause
confusion; that is, which box do people who are 20 years old check? Making the ranges
15–19 and 20–24 avoids this problem.

A frequently asked question about ranges is whether the interval must be equal in all
cases. The answer is no—it depends on what you want to know. For example, people who
might use a website about life insurance are likely to be employed individuals who are 21–65
years old. The question could, therefore, have just three ranges: under 21, 21–65, and over 65.
In contrast, to see how the population’s political views vary across generations might require
10-year cohort groups for people over 21, in which case the following ranges would be
appropriate: under 21, 21–30, 31–40, and so forth.

Rating Scales
There are a number of different types of rating scales, each with its own purpose (see Oppen-
heim, 2000). Two commonly used scales are the Likert and semantic differential scales. Their
purpose is to elicit a range of responses to a question that can be compared across respondents.
They are good for getting people to make judgments, such as how easy, how usable, and the like.

Likert scales rely on identifying a set of statements representing a range of possible opin-
ions, while semantic differential scales rely on choosing pairs of words that represent the
range of possible opinions. Likert scales are more commonly used because identifying suitable
statements that respondents will understand consistently is easier than identifying semantic
pairs that respondents interpret as intended.

Likert Scales
Likert scales are used for measuring opinions, attitudes, and beliefs, and consequently they
are widely used for evaluating user satisfaction with products. For example, users’ opinions
about the use of color in a website could be evaluated with a Likert scale using a range of
numbers, as in question 1 here, or with words as in question 2:

1. The use of color is excellent (where 1 represents strongly agree and 5 represents strongly
disagree):

1 2 3 4 5

□ □ □ □ □


2. The use of color is excellent:

Strongly agree Agree OK Disagree Strongly disagree

□ □ □ □ □

In both cases, respondents would be asked to choose the right box, number, or phrase.
Designing a Likert scale involves the following steps:

1. Gather a pool of short statements about the subject to be investigated. Examples are “This
control panel is clear” and “The procedure for checking credit rating is too complex.”
A brainstorming session with peers is a good way to identify key aspects to be investigated.

2. Decide on the scale. There are three main issues to be addressed here: How many points
does the scale need? Should the scale be discrete or continuous? How can the scale be rep-
resented? See Box 8.4 What Scales to Use: Three, Five, Seven, or More? for more on this.

3. Select items for the final questionnaire, and reword as necessary to make them clear.

In the first example above, the scale is arranged with 1 as the highest choice on the left
and 5 as the lowest choice on the right. The logic for this is that first is the best place to be
in a race and fifth would be the worst place. While there is no absolute right or wrong way of ordering the numbers, other researchers prefer to arrange the scales the other way around, with 1 as the lowest on the left and 5 as the highest on the right. They argue that, intuitively, the highest number suggests the best choice and the lowest number suggests the worst choice.
Another reason for going from lowest to highest is that when the results are reported, it is
more intuitive for readers to see high numbers representing the best choices. The important
thing is to be consistent.

Semantic Differential Scales
Semantic differential scales explore a range of bipolar attitudes about a particular item, each
of which is represented as a pair of adjectives. The participant is asked to choose a point
between the two extremes to indicate agreement with the poles, as shown in Figure 8.5. The
score for the investigation is found by summing the scores for each bipolar pair. Scores are
then computed across groups of participants. Notice that in this example the poles are mixed
so that good and bad features are distributed on the right and the left. In this example, there
are seven positions on the scale.

Attractive □ □ □ □ □ □ □ Ugly
Clear □ □ □ □ □ □ □ Confusing
Dull □ □ □ □ □ □ □ Colorful
Exciting □ □ □ □ □ □ □ Boring
Annoying □ □ □ □ □ □ □ Pleasing
Helpful □ □ □ □ □ □ □ Unhelpful
Poor □ □ □ □ □ □ □ Well designed

Figure 8.5 An example of a semantic differential scale


BOX 8.4
What Scales to Use: Three, Five, Seven, or More?

Issues to address when designing Likert and semantic differential scales include the following:
how many points are needed on the scale, how should they be presented, and in what form?

Many questionnaires use seven- or five-point scales, and there are also three-point scales.
Some even use nine-point scales. Arguments for the number of points go both ways. Advocates
of long scales argue that they help to show discrimination. Rating features on an interface is
more difficult for most people than, say, selecting among different flavors of ice cream, and
when the task is difficult, there is evidence to show that people “hedge their bets.” Rather
than selecting the poles of the scales if there is no right or wrong, respondents tend to select
values nearer the center. The counterargument is that people cannot be expected to discern
accurately among points on a large scale, so any scale of more than five points is unnecessarily
difficult to use.

Another aspect to consider is whether to give the scale an even or odd number of points.
An odd number provides a clear central point, while an even number forces participants to
decide and prevents them from sitting on the fence.

We suggest the following guidelines:

How many points on the scale?
Use a small number, three, for example, when the possibilities are very limited, as in Yes/No
type answers.

□ □ □
Yes Don’t know No

Use a medium-sized range, five, for example, when making judgments that involve like/
dislike or agree/disagree statements.

Strongly agree Agree OK Disagree Strongly disagree
□ □ □ □ □

Use a longer range, seven or nine, for example, when asking respondents to make subtle
judgments, such as when asking about a user experience dimension such as “level of appeal”
of a character in a video game.

Very appealing □ □ □ OK □ □ □ Repulsive

Discrete or continuous?
Use boxes for discrete choices and scales for finer judgments.

What order?
Decide which way to order the scale, and be consistent.


8.5.3 Administering Questionnaires
Two important issues when using questionnaires are reaching a representative sample of par-
ticipants and ensuring a reasonable response rate. For large surveys, potential respondents
need to be selected using a sampling technique. However, interaction designers commonly use
a small number of participants, often fewer than 20 users. Completion rates of 100 percent are
often achieved with these small samples, but with larger or more remote populations, ensuring
that surveys are returned is a well-known problem. A 40 percent return is generally acceptable for many surveys, but much lower rates are common. Depending on your audience, you might want to consider offering incentives (see section 8.2.3, "Relationship with Participants").

ACTIVITY 8.3
Spot four poorly designed features in the excerpt from a questionnaire in Figure 8.6.

Figure 8.6 A questionnaire with poorly designed features. The excerpt asks respondents to state their exact age in years (question 2); how many hours a day they spend searching online, with overlapping ranges ending at 5 hours (question 3); which of the following they do online: purchase goods, send e-mail, visit chatrooms, use bulletin boards, find information, read the news (question 4); and how useful the Internet is to them, with only a small space for the answer (question 5).

Comment
Some of the features that could be improved upon include the following:

• Question 2 requests an exact age. Many people prefer not to give this information and would rather position themselves within a range.

• In question 3, the number of hours spent searching is indicated with overlapping scales, that is, 1–3 and 3–5. How would someone answer if they spend 3 hours a day searching online?

• For question 4, the questionnaire doesn't say how many boxes to check.

• The space left for people to answer open-ended question 5 is too small, which will annoy some people and deter them from giving their opinions.

Many online survey tools prevent users from making some of these design errors. It is important, however, to be aware of such things because paper is still sometimes used.

While questionnaires are often online, paper questionnaires may be more convenient in some
situations, for example, if participants do not have Internet access or if it is expensive to use. Occa-
sionally, short questionnaires are sent within the body of an email, but more often the advantages
of the data being compiled automatically and either partly or fully analyzed make online ques-
tionnaires attractive. Online questionnaires are interactive and can include check boxes, radio
buttons, pull-down and pop-up menus, help screens, graphics, or videos (see Figure 8.7). They can
also provide immediate data validation (for example, checking that an entry is a number between 1 and 20) and can automatically skip questions that are irrelevant to some respondents, such as questions aimed only at teenagers. Other advantages of online questionnaires include faster response rates and automatic transfer of responses into a database for analysis (Toepoel, 2016).

The main problem with online questionnaires is the difficulty of obtaining a random
sample of respondents; online questionnaires usually rely on convenience sampling, and
hence their results cannot be generalized. In some countries, online questions, often delivered
via smartphones, are frequently used in conjunction with television to elicit viewers’ opinions
of programs and political events.

Figure 8.7 An excerpt from a web-based questionnaire showing check boxes, radio buttons, and
pull-down menus


Deploying an online questionnaire involves the following steps (Toepoel, 2016, Chapter 10):

1. Plan the survey timeline. If there is a deadline, work backward from the deadline and plan
what needs to be done on a weekly basis.

2. Design the questionnaire offline. Using plain text is useful as this can then be copied more
easily into the online survey tool.

3. Program the online survey. How long this will take depends on the complexity of the
design, for example, how many navigational paths it contains or if it has a lot of interac-
tive features.

4. Test the survey, both to make sure that it behaves as envisioned and to check the ques-
tions themselves. This includes getting feedback from content experts, survey experts, and
potential respondents. This last group forms the basis of a pilot study.

5. Recruit respondents. As mentioned earlier, participants may have different reasons for
taking part in the survey, but especially when respondents need to be encouraged, make
the invitations intriguing, simple, friendly, respectful, trustworthy, motivating, interesting,
informative, and short.

There are many online questionnaire templates available that provide a range of options,
including different question types (for example open-ended, multiple choice), rating scales
(such as Likert, semantic differential), and answer types (for example, radio buttons, check
boxes, drop-down menus).

The following activity asks you to make use of one of these templates. Apart from being
able to administer an online questionnaire widely, these templates also enable the question-
naire to be segmented. For example, airline satisfaction questionnaires often have different
sections for check-in, baggage handling, airport lounge, inflight movies, inflight food service,
and so forth. If you didn’t use an airport lounge or check your baggage, you can skip those
sections. This avoids respondents getting frustrated by having to go through questions that
are not relevant to them. It is also a useful technique for long questionnaires, as it ensures
that if a respondent opts out for lack of time or gets tired of answering the questions, the
data that has been provided already is available to be analyzed.

ACTIVITY 8.4
Go to questionpro.com, surveymonkey.com, or a similar survey site and design your own
questionnaire using the set of widgets that is available for a free trial period.

Create an online questionnaire for the set of questions that you developed for Activity 8.2.
For each question, produce two different designs; for example, use radio buttons and drop-
down menus for one question, and provide a 10-point semantic differential scale and a 5-point
scale for another question.

What differences (if any) do you think the two designs will have on a respondent’s behav-
ior? Ask a number of people to answer one or the other of your questions and see whether the
answers differ for the two designs.

Comment
Respondents may have used the response types in different ways. For example, they may select the end options more often from a drop-down menu than from a list of options that are chosen via radio buttons. Alternatively, you may find no difference and that people's opinions are not affected by the widget style used. Some differences, of course, may be due to the variation between individual responses rather than being caused by features in the questionnaire design. To tease the effects apart, you would need to ask a large number of participants (for instance, in the range 50–100) to respond to the questions for each design.


BOX 8.5
Do people answer online questionnaires differently than paper and
pencil? If so, why?

There has been much research examining how people respond to surveys when using a com-
puter compared with paper and pencil methods. Some studies suggest that people are more
revealing and consistent in their responses when using a computer to report their habits and behaviors, such as eating, drinking, and amount of exercise (see Luce et al., 2003). Students
have also been found to rate their instructors less favorably when online (Chang, 2004).

In a Danish study in which 3,600 people were invited to participate, the researchers concluded
that although response rates for web-based invitations were lower, they were more cost-effective
(by a factor of 10) and had only slightly lower numbers of missing values than questionnaires sent
via paper (Ebert et al., 2018). Similarly, in a study by Diaz de Rada and Dominguez-Alvarez (2014), which analyzed the quality of the information collected from a survey given to citizens of Andalusia in Spain, several advantages of using online versus paper-based questionnaires were
identified. These included a low number of unanswered questions, more detailed answers to
open-ended questions, and longer answers to questions in the online questionnaires than in the
paper questionnaires. In the five open-ended questions, respondents wrote 63 characters more on
average on the online questionnaires than on the paper questionnaires. For the questions in which
participants had to select from a drop-down menu, there was a better response rate than when
the selection was presented on paper with blank spaces.

One factor that can influence how people answer questions is the way the information
is structured, such as the use of headers, the ordering, and the placement of questions. Online
questionnaires provide more options for presenting information, including the use of drop-
down menus, radio buttons, and jump-to options, which may influence how people read
and navigate a questionnaire. But do these issues affect respondents’ answers? Smyth et al.
(2005) have found that providing forced choice formats results in more options being selected.
Another example is provided by Funcke et al. (2011), who found that continuous sliders ena-
bled researchers to collect more accurate data because they support continuous rather than
discrete scales. They also encouraged higher response rates. What can be concluded from these
investigations is that the details of questionnaire design can impact how respondents react.



8.6 Observation

Observation is useful at any stage during product development. Early in design, observation
helps designers understand the users’ context, tasks, and goals. Observation conducted later
in development, for example, in evaluation, may be used to investigate how well a prototype
supports these tasks and goals.

Users may be observed directly by the investigator as they perform their activities or indi-
rectly through records of the activity that are studied afterward (Bernard, 2017). Observation
may also take place in the field or in a controlled environment. In the former case, individuals
are observed as they go about their day-to-day tasks in the natural setting. In the latter case,
individuals are observed performing specified tasks within a controlled environment such as
a usability laboratory.

ACTIVITY 8.5
To appreciate the different merits of observation in the field and observation in a controlled
environment, read the following scenarios and answer the questions that appear after.

Scenario 1 A usability consultant joins a group of tourists who have been given a wear-
able navigation device that fits onto a wrist strap to test on a visit to Stockholm. After sight-
seeing for the day, they use the device to find a list of restaurants within 2 kilometers of their
current position. Several are listed, and they find the phone numbers of a few, call them to ask
about their menus, select one, make a booking, and head off to the restaurant. The usability
consultant observes some difficulty operating the device, especially on the move. Discussion
with the group supports the evaluator’s impression that there are problems with the interface,
but on balance the device is useful, and the group is pleased to get a table at a good restau-
rant nearby.

Scenario 2 A usability consultant observes how participants perform a preplanned task
using the wearable navigation device in a usability laboratory. The task requires the partici-
pants to find the phone number of a restaurant called Matisse. It takes them several minutes
to do this, and they appear to have problems. The video recording and interaction log suggest
that the interface is quirky and the audio interaction is of poor quality. This is supported by
participants’ answers on a user satisfaction questionnaire.
1. What are the advantages and disadvantages of these two types of observation?
2. When might each type of observation be useful?

Comment
1. The advantages of the field study are that the observer saw how the device could be used in a real situation to solve a real problem. They experienced the delight expressed with the overall concept and the frustration with the interface. By watching how the group used the device on the move, they gained an understanding of what the participants liked and what was lacking. The disadvantage is that the observer was an insider in the group, so how objective could they be? The data is qualitative, and while anecdotes can be very persuasive, how useful are they? Maybe the observer was having such a good time that their judgment was clouded, and they missed hearing negative comments and didn't notice some of the participants' annoyance. Another study could be done to find out more, but it is not possible to replicate the exact conditions of this study. The advantages of the lab study are that it is easier to replicate, so several users could perform the same task; specific usability problems can be identified; users' performance can be compared; and averages for such measures as the time it took to do a specific task and the number of errors can be calculated. The observer could also be more objective as an outsider. The disadvantage is that the study is artificial and says nothing about how the device would be used in the real environment.

2. Both types of study have merits. Which is better depends on the goals of the study. The lab study is useful for examining details of the interaction style to make sure that usability problems with the interface and button design are diagnosed and corrected. The field study reveals how the navigation device is used in a real-world context and how it integrates with or changes users' behavior. Without the field study, it is possible that developers might not have discovered the enthusiasm for the device, because the reward for doing laboratory tasks is not as compelling as a good meal! In fact, according to Kjeldskov and Skov (2014), there is no definitive answer to which kind of study is preferable for mobile devices. They suggest that the real question is when and how to engage with longitudinal field studies.

8.6.1 Direct Observation in the Field
It can be difficult for people to explain what they do or to describe accurately how they achieve a task, so it is unlikely that an interaction designer will get a full and true story using interviews or questionnaires. Observation in the field can help fill in details about how users behave and use technology, and nuances that are not elicited from other forms of investigation may be observed. Understanding the context provides important information about why activities happen the way that they do. However, observation in the field can be complicated and harder to do well than at first appreciated. Observation can also result in a lot of data, some of which may be tedious to analyze and not very relevant.

All data gathering should have a clearly stated goal, but it is particularly important to have a focus for an observation session because there is always so much going on. On the other hand, it is also important to be prepared to change the plan if circumstances change. For example, the plan may be to spend one day observing an individual performing a task, but if an unexpected meeting that is relevant to the observation goal crops up, it makes sense to attend the meeting instead. In observation, there is a careful balance between being guided by goals and being open to modifying, shaping, or refocusing the study as more is learned about the situation. Being able to keep this balance is a skill that develops with experience.


Structuring Frameworks for Observation in the Field
During an observation, events can be complex and rapidly changing. There is a lot for observ-
ers to think about, so many experts have a framework to structure and focus their observa-
tion. The framework can be quite simple. For example, this is a practitioner’s framework for
use in evaluation studies that focuses on just three easy-to-remember items:

The person: Who is using the technology at any particular time?
The place: Where are they using it?
The thing: What are they doing with it?

Even a simple framework such as this one based on who, where, and what can be surpris-
ingly effective to help observers keep their goals and questions in sight. Experienced observ-
ers may prefer a more detailed framework, such as the following (Robson and McCartan, 2016, p. 328), which encourages them to pay greater attention to the context of the activity:

Space: What is the physical space like, and how is it laid out?
Actors: What are the names and relevant details of the people involved?
Activities: What are the actors doing, and why?
Objects: What physical objects are present, such as furniture?
Acts: What are specific individual actions?
Events: Is what you observe part of a special event?
Time: What is the sequence of events?
Goals: What are the actors trying to accomplish?
Feelings: What is the mood of the group and of individuals?

This framework was devised for any type of observation, so when used in the context of
interaction design, it might need to be modified slightly. For example, if the focus is going to
be on how some technology is used, the framework could be modified to ask the following:

Objects: What physical objects, in addition to the technology being studied, are present, and
do they impact on the technology use?

Both of these frameworks are relatively general and could be used in many different
types of study, or as a basis for developing a new framework for a specific study.

ACTIVITY 8.6
1. Find a small group of people who are using any kind of technology, for example, smart-

phones, household appliances, or video game systems, and try to answer the question,
“What are these people doing?” Watch for three to five minutes, and write down what you
observe. When finished, note down how it felt to be doing this and any reactions in the
group of people observed.

2. If you were to observe the group again, what would you do differently?
3. Observe this group again for about 10 minutes using the detailed framework given above.

Comment
1. What problems did this exercise highlight? Was it hard to watch everything and remember what happened? How did the people being watched feel? Did they know they were being watched? Perhaps some of them objected and walked away. If you didn't tell them that they were being watched, should you have?

2. The initial goal of the observation, that is, to find out what the people are doing, was vague, and chances are that it was quite a frustrating experience not knowing what was significant and what could be ignored. The questions used to guide observation need to be more focused. For example, you might ask the following: What are the people doing with the technology? Is everyone in the group using it? Are they looking pleased, frustrated, serious, happy? Does the technology appear to be central to the users' goals?

3. Ideally, you will have felt more confident this second time, partly because it is the second time doing some observation and partly because the framework provided a structure for what to look at.

Degree of Participation
Depending on the type of study, the degree of participation within the study environment varies across a spectrum, which can be characterized as insider at one end and outsider at the other. Where a particular study falls along this spectrum depends on its goal and on the practical and ethical issues that constrain and shape it.

An observer who adopts an approach right at the outsider end of the spectrum is called a passive observer, and they will not take any part in the study environment at all. It is difficult to be a truly passive observer in the field, simply because it's not possible to avoid interacting with the activities; passive observation is more appropriate in lab studies.

An observer who adopts an approach at the insider end of this spectrum is called a participant observer. This means that they attempt, at various levels depending on the type of study, to become a member of the group being studied. This can be a difficult role to play since being an observer requires a certain level of detachment, while being a participant requires involvement. As a participant observer, it is important to keep the two roles clear and separate so that observation notes remain objective while participation is maintained. It may not be possible to take a full participant observer approach for other reasons. For example, the observer may not be skilled enough in the task at hand, the organization or group may not be prepared for an outsider to take part in their activities, or the timescale may not provide sufficient opportunity to become familiar enough with the task to participate fully. Similarly, if observing activity in a private place such as the home, full participation would be difficult even if, as suggested by some researchers (for example, Bell et al., 2005), you have spent time getting to know the family before starting the study. Chandrika Cycil et al. (2013) overcame this issue in their study of in-car conversations between parents and children by traveling with the families for an initial week and then asking family members to video relevant episodes of activity. In this way, they gained an understanding of the context and family dynamics and then collected more detailed data to study the activity in depth.


Planning and Conducting an Observation in the Field
The frameworks introduced in the previous section are useful for providing focus and also for
organizing the observation and data gathering activity. Choosing a framework is important,
but there are other decisions that need to be made, including the level of participation to
adopt, how to make a record of the data, how to gain acceptance in the group being studied,
how to handle sensitive issues such as cultural differences or access to private spaces, and how
to ensure that the study uses different perspectives (people, activities, job roles, and so forth).

One way to achieve this last point is to work as a team. This can have several benefits.

• Each person can agree to focus on different people or different parts of the context, thereby
covering more ground.

• Observation and reflection can be interweaved more easily when there is more than
one observer.

• More reliable data is likely to be generated because observations can be compared.
• Results will reflect different perspectives.

Once in the throes of an observation, there are other issues to consider. For example, it will be easier to relate to some people than to others, and it is tempting to pay more attention to them; however, attention needs to be paid to everyone in the group. Observation is a fluid activity, and the study will need to be refocused as it progresses in response to what is learned. Having observed for a while, interesting phenomena that seem relevant will start to emerge. Gradually, ideas will sharpen into questions that guide further observation.

Observing is an intense and tiring activity, so it is important to check notes and records and to review observations and experiences at the end of each day. If this is not done, valuable information may be lost as the next day’s events override the previous day’s findings. Writing a diary or private blog is one way of achieving this. Any documents or other artifacts that are collected or copied (such as minutes of a meeting or discussion items) can be annotated to describe how they are used during the observed activity. Where an observation lasts several days or weeks, time can be taken out of each day to go through notes and other records.

As notes are reviewed, separate personal opinion from observation and mark issues for
further investigation. It is also a good idea to check observations and interpretations with an
informant or members of the participant group for accuracy.

DILEMMA
When to Stop Observing?

Knowing when to stop doing any type of data gathering can be difficult for novices, but it is
particularly tricky in observational studies because there is no obvious ending. Schedules often
dictate when your study ends. Otherwise, stop when nothing new is emerging. Two indica-
tions of having done enough are when similar patterns of behavior are being seen and when
all of the main stakeholder groups have been observed and a good understanding of their
perspectives has been achieved.


Ethnography
Ethnography has traditionally been used in the social sciences to uncover the organization
of societies and their activities. Since the early 1990s, it has gained credibility in interaction
design, and particularly in the design of collaborative systems; see Box 8.6, “Ethnography
in Requirements” and Crabtree (2003). A large part of most ethnographic studies is direct
observation, but interviews, questionnaires, and studying artifacts used in the activities also
feature in many ethnographic studies. A distinguishing feature of ethnographic studies com-
pared with other data gathering is that a situation is observed without imposing any a priori
structure or framework upon it, and everything is viewed as “strange.” In this way, the aim is
to capture and articulate the participants’ perspective of the situation under study.

BOX 8.6
Ethnography in Requirements

The MERboard is a tool scientists and engineers use to display, capture, annotate, and share
information in support of the operation of two Mars Exploration Rovers (MERs) on the sur-
face of Mars. The MER (see Figure 8.8) acts like a human geological explorer by collecting
and analyzing samples and then transmitting the results to the scientists on Earth. The scien-
tists and engineers collaboratively analyze the data received, decide what to study next, create
plans of action, and send commands to the robots on the surface of Mars.

The requirements for MERboard were identified partly through ethnographic field-
work, observations, and analysis (Trimble et al., 2002). The team of scientists and engi-
neers ran a series of field tests that simulated the process of receiving data, analyzing it,
creating plans, and transmitting them to the MERs. The main problems they identified
stemmed from the scientists’ limitations in displaying, sharing, and storing information
(see Figure 8.9a).

Figure 8.8 Mars Exploration Rover
Source: NASA Jet Propulsion Laboratory (NASA-JPL)

These observations led to the development of MERboard (see Figure 8.9b), which contains four core applications: a whiteboard for brainstorming and sketching, a browser for displaying information from the web, the capability to display personal information and information across several screens, and a file storage space linked specifically to MERboard.

Figure 8.9 (a) The situation before MERboard; (b) a scientist using MERboard to present information
Source: Trimble et al. (2002)

Ethnography has become popular within interaction design because it allows designers to obtain a detailed and nuanced understanding of people’s behavior and the use of technology that cannot be obtained by other methods of data gathering (Lazar et al., 2017). While there has been much discussion of how big data can address many design issues, big data is likely to be most powerful when combined with ethnography to explain how and why people do what they do (Churchill, 2018).

The observer in an ethnographic study adopts a participant observer (insider) role as much as possible (Fetterman, 2010). While participant observation is a hallmark of ethnographic studies, it is also used within other methodological frameworks, such as action research (Hayes, 2011), where one of the goals is to improve the current situation.

Ethnographic data is based on what is available, what is “ordinary,” and what people do and say and how they work. The data collected therefore takes many forms: documents, notes taken by the observer(s), pictures, and room layout sketches. Notes may include snippets of conversations and descriptions of rooms, meetings, what someone did, or how people reacted to a situation. Data gathering is opportunistic, and observers make the most of opportunities as they present themselves. Often, interesting phenomena do not reveal themselves immediately but only later, so it is important to gather as much as possible within the framework of observation. Initially, spend time getting to know people in the participant group and bonding with them. Participants need to understand why the observers are there, what they hope to achieve, and how long they plan to be there. Going to lunch with them, buying coffee,
and bringing small gifts, for example, cookies, can greatly help this socialization process.
Moreover, key information may be revealed during one of these informal gatherings.

It is important to show interest in the stories, gripes, and explanations that are provided and
to be prepared to step back if a participant’s phone rings or someone else enters the workspace.
A good tactic is to explain to one of the participants during a quiet moment what you think is
happening and then let them correct any misunderstandings. However, asking too many ques-
tions, taking pictures of everything, showing off your knowledge, and getting in their way can be
very off-putting. Putting up cameras on tripods on the first day may not be a good idea. Listening
and watching while sitting on the sidelines and occasionally asking questions is a better approach.

The following is an illustrative list of materials that might be recorded and collected dur-
ing an ethnographic study (adapted from Crabtree, 2003, p. 53):

• Activity or job descriptions
• Rules and procedures (and so on) that govern particular activities
• Descriptions of activities observed
• Recordings of the talk taking place between parties involved in observed activities
• Informal interviews with participants explaining the detail of observed activities
• Diagrams of the physical layout, including the position of artifacts
• Photographs of artifacts (documents, diagrams, forms, computers, and so on) used in the
course of observed activities

• Videos of artifacts as used in the course of observed activities
• Descriptions of artifacts used in the course of observed activities
• Workflow diagrams showing the sequential order of tasks involved in observed activities
• Process maps showing connections between activities

Traditionally, ethnographic studies in this field aim to understand what people do and how
they organize action and interaction within a particular context of interest to designers. However,
recently there has been a trend toward studies that draw more on ethnography’s anthropological roots and the study of culture. This trend has been brought about by the perceived need for different approaches now that computers and other digital technologies, especially mobile devices, are embedded in everyday activity and not just in the workplace, as they were in the 1990s.

BOX 8.7
Doing Ethnography Online

As collaboration and social activity online have increased, ethnographers have adapted their
approach to study social media and the various forms of computer-mediated communication
(Rotman et al., 2013; Bauwens and Genoud, 2014). This practice has various names, the most
common of which are online ethnography (Rotman et al., 2012), virtual ethnography (Hine,
2000), and netnography (Kozinets, 2010). Where a community or activity has both an online and
offline presence, it is common to incorporate both online and offline techniques within the data
gathering program. However, where the community or activities of interest exist almost exclu-
sively online, then mostly online techniques are used and virtual ethnography becomes central.

Why is it necessary to distinguish between online and face-to-face ethnography? It is important because interaction online is different from interaction in person. For example, communication in person is richer (through gesture, facial expression, tone of voice, and so on) than online communication, and anonymity is more easily achieved when communicating online. In addition, virtual worlds have a persistence, due to regular archiving, that does not typically occur in face-to-face situations. These differences change the character of the communication, which in turn affects how ethnographers introduce themselves to the community, how they act within the community, and how they report their findings. For these reasons, some researchers who work primarily online also try to meet some of the participants face-to-face, particularly when working on sensitive topics (Lingel, 2012).

Special tools may be developed to support ethnographic data collection. Mobilab is an online collaborative platform that was developed for citizens living in Switzerland to report and discuss their daily mobility during an eight-week period using their mobile phones, tablets, and computers (Bauwens and Genoud, 2014). Mobilab enabled the researchers to engage more easily in discussion with participants on a variety of topics, including trucks parking on a bikeway.

For observational studies in large social spaces, such as digital libraries or Facebook, there are different ethical issues to consider. For example, it is unrealistic to ask everyone using a digital library to sign a form agreeing to be involved in the study, yet participants do need to understand the observer’s role and the purpose of the study. The presentation of results needs to be modified too. Quotes from participants in the community, even if anonymized in the report, can easily be attributed by a simple search of the community archive or the IP address of the sender, so care is needed to protect their privacy.

8.6.2 Direct Observation in Controlled Environments
Observing users in a controlled environment may occur within a purpose-built usability lab, but portable labs that can be set up in any room are quite common. Portable labs can mean that more participants take part because they don’t have to travel away from their normal environment. Observation in a controlled environment inevitably takes on a more formal character than observation in the field, and the user may feel more apprehensive. As with interviews, it is a good idea to prepare a script to guide how participants will be greeted, told about the goals of the study and its duration, and informed of their rights. Use of a script ensures that each participant is treated in the same way, which brings more credibility to the results obtained from the study.

The same basic data recording techniques are used for direct observation in the laboratory and in field studies (that is, capturing photographs, taking notes, collecting video, and so on), but the way in which these techniques are used is different. In the lab, the emphasis is on the details of what individuals do, while in the field the context is important, and the focus is on how people interact with each other, the technology, and their environment.

The arrangement of equipment with respect to the participant is important in a controlled study because details of the person’s activity need to be captured. For example, one camera might record facial expressions, another might focus on mouse and keyboard activity, and another might record a broad view of the participant and capture body language. The stream of data from the cameras can be fed into a video editing and analysis suite where it is coordinated, time-stamped, annotated, and partially edited.


The Think-Aloud Technique
One of the problems with observation is that the observer doesn’t know what users are think-
ing and can only guess from what they see. Observation in the field should not be intrusive,
as this will disturb the context the study is trying to capture. This limits the questions being
asked of the participant. However, in a controlled environment, the observer can afford to
be a little more intrusive. The think-aloud technique is a useful way of understanding what
is going on in a person’s head.

Imagine observing someone who has been asked to evaluate the interface of the web
search engine Lycos.com. The user, who does not have much experience of web searches,
is told to look for a phone for a 10-year-old child. They are told to type www.lycos.com
and then proceed however they think best. They type the URL and get a screen similar to
the one in Figure 8.10.

Next, they type child’s phone in the search box. They get a screen similar to the one
shown in Figure 8.11. They are silent. What is going on? What are they thinking? One
way around the problem of knowing what they are doing is to collect a think-aloud pro-
tocol, a technique developed by Anders Ericsson and Herbert Simon (1985) for examin-
ing people’s problem-solving strategies. The technique requires people to say out loud
everything that they are thinking and trying to do so that their thought processes are
externalized.

So, let’s imagine an action replay of the situation just described, as follows, but this time
the user has been instructed to think aloud:

“I’m typing in www.lycos.com, as you told me.”

“Now I am typing child’s phone and then clicking the search button.”

“It’s taking a few seconds to respond.”

“Oh! Now I have a choice of other websites to go to. Hmm, I wonder which one I should select. Well, it’s for a young child so I want a ‘child-safe phone.’ This one mentions safe phones.”

“Gosh, there’s a lot more models to select from than I expected! Hmm, some of these are for older children. I wonder what I do next to find one for a 10-year-old. I guess I should scroll through them and identify those that might be appropriate.”

Figure 8.10 Home page of Lycos search engine
Source: https://www.lycos.com

Now you know more about what the user is trying to achieve, but they are silent again.
They are looking at the screen, but what are they thinking now? What are they looking at?

The occurrence of these silences is one of the biggest problems with the think-aloud
technique.

Figure 8.11 The screen that appears in response to searching for “child’s phone”
Source: https://www.lycos.com

ACTIVITY 8.7
Try a think-aloud exercise yourself. Go to a website, such as Amazon or eBay, and look for
something to buy. Think aloud as you search and notice how you feel and behave.

Afterward, reflect on the experience. Was it difficult to keep speaking all the way through
the task? Did you feel awkward? Did you stop talking when you got stuck?

Comment
Feeling self-conscious and awkward doing this is a common response, and some people say
they feel really embarrassed. Many people forget to speak out loud and find it difficult to do so
when the task becomes difficult. In fact, you probably stopped speaking when the task became
demanding, and that is exactly the time when an observer is most eager to hear what’s happening.

If a user is silent during a think-aloud protocol, the observer could interrupt and remind them to think out loud, but that would be intrusive. Another solution is to have two people work together so that they talk to each other. Working with another person (called constructive interaction; Miyake, 1986) is often more natural and revealing because participants talk in order to help each other along. This technique has proved particularly successful with children, and it also avoids possible cultural influences on concurrent verbalization (Clemmensen et al., 2008).

8.6.3 Indirect Observation: Tracking Users’ Activities
Sometimes direct observation is not possible because it is too intrusive or observers cannot
be present over the duration of the study, and so activities are tracked indirectly. Diaries and
interaction logs are two techniques for doing this.

Diaries
Participants are asked to write a diary of their activities on a regular basis, including things like
what they did, when they did it, what they found hard or easy, and what their reactions were
to the situation. For example, Sohn et al. (2008) asked 20 participants to record their mobile
information needs through text messages and then to use these messages as prompts to help
them answer six questions on a website at the end of each day. From the data collected, they
identified 16 categories of mobile information needs, the most frequent of which was “trivia.”

Diaries are useful when participants are scattered and unreachable in person, when the activity is private (for example, in the home), or when it relates to feelings such as emotions or motivation. For example, Jang et al. (2016) used diaries with interviews to collect
data about users’ experiences with smart TVs in the home as compared to within a con-
trolled lab setting. The study in the home was conducted over several weeks during which
participants were asked to keep a diary of their experiences and feelings. Surveys were also
collected. This mixed-methods study informed the user experience design of future systems.

Diaries have several advantages: they do not take up much researcher time to collect
data; they do not require special equipment or expertise; and they are suitable for long-term
studies. In addition, templates, like those used in open-ended online questionnaires, can be
created online to standardize the data entry format so that the data can be entered directly
into a database for analysis. However, diary studies rely on participants being reliable and
remembering to complete them at the assigned time and as instructed, so incentives may be
needed, and the process has to be straightforward.

Determining how long to run a diary study can be tricky. If the study goes on for too
long, participants may lose interest and need incentives to continue. In contrast, if the study
is too short, important data may be missed. For example, in a study of children’s experiences
of a game, Elisa Mekler et al. (2014) used diaries to collect data after each gaming session
in a series. After the first few sessions, all of the children in the study showed loss of motiva-
tion for the game. However, by the end of the study, those who completed the game were
more motivated than those who did not complete the game. Had the data been collected only
once, the researchers may not have observed the impact of game completion on the children’s
motivation.


Another problem is that participants’ memories of events may be exaggerated or details forgotten; for example, they may remember events as better or worse than they really were, or as taking more or less time than they actually took. One way of mitigating this problem
is to collect other data in diaries (such as photographs including selfies, audio and video clips,
and so on). Scott Carter and Jennifer Mankoff (2005) considered whether capturing events
through pictures, audio, or artifacts related to the event affects the results of the diary study.
They found that images resulted in more specific recall than other media, but audio was useful
for capturing events when taking a photo was too awkward. Tangible artifacts, such as those
shown in Figure 8.12, also encouraged discussion about wider beliefs and attitudes.

Figure 8.12 Some tangible objects collected by participants involved in a study about a jazz festival
Source: Carter and Mankoff (2005). Reproduced with permission of ACM Publications

The experience sampling method (ESM) is similar to a diary in that it relies on partici-
pants recording information about their everyday activities. However, it differs from more
traditional diary studies because participants are prompted at random times via email, text
message, or similar means to answer specific questions about their context, feelings, and
actions (Hektner et al., 2006). These prompts have the benefit of encouraging immediate
data capture. Niels van Berkel et al. (2017) provide a comprehensive survey of ESM and its
evolution, tools, and uses across a wide range of studies.
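To make the prompting mechanism concrete, here is a minimal Python sketch of how an ESM tool might schedule prompts at random times during waking hours. The function name, the 9 a.m. to 9 p.m. window, and the question text are illustrative assumptions rather than features of any of the tools surveyed.

```python
import random
from datetime import datetime, timedelta

def schedule_esm_prompts(n_prompts, start_hour=9, end_hour=21):
    """Pick n_prompts random times in today's waking hours, as an
    ESM tool might do before sending texts or notifications."""
    day_start = datetime.now().replace(hour=start_hour, minute=0,
                                       second=0, microsecond=0)
    window_minutes = (end_hour - start_hour) * 60
    offsets = sorted(random.sample(range(window_minutes), n_prompts))
    return [day_start + timedelta(minutes=m) for m in offsets]

for t in schedule_esm_prompts(5):
    print(t.strftime("%H:%M"), "-> prompt: What are you doing right now, and how do you feel?")
```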

Interaction Logs, Web Analytics, and Data Scraping
Interaction logging uses software to record users’ activity in a log that can be examined later.
A variety of actions may be recorded, such as key presses, mouse or other device movements, time spent searching a web page, time spent looking at help systems, and task flow through software modules. A key advantage of logging activity is that it is unobtrusive, provided that system performance is not affected, but logging raises ethical concerns if it is done without participants’ knowledge. Another advantage is that large volumes of data can be logged automatically. Visualization tools are therefore helpful for exploring and analyzing this data quantitatively and qualitatively. Algorithmic and statistical methods may also be used.
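As an illustration of how lightweight interaction logging can be, the following Python sketch appends time-stamped event records to a file. The file name, event types, and record format are hypothetical choices made for this example, not a standard.

```python
import json
import time

LOG_PATH = "interaction_log.jsonl"  # hypothetical log file, one JSON record per line

def log_event(event_type, detail):
    """Append a time-stamped record of one user action to the log."""
    record = {"t": time.time(), "event": event_type, "detail": detail}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Events an app might record during a session:
log_event("key_press", {"key": "Enter"})
log_event("page_view", {"url": "/help/search", "dwell_seconds": 42})
log_event("task_step", {"task": "share_photo", "step": 3})
```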

Examining the trail of activity that people leave behind when they are active on websites,
Twitter, or Facebook is also a form of indirect observation. You can see an example of this by
looking at a Twitter feed to which you have access, for example, that of a friend, president,
prime minister, or some other leader. These trails allow examination of discussion threads on
a particular topic, such as climate change, or reactions to comments made by a public figure
or to a topic that is trending today. If there are just a few posts, then it is easy to see what
is going on, but often the most interesting posts are those that generate a lot of comments.
Examining thousands, tens of thousands, and even millions of posts requires automated tech-
niques. Web analytics and data scraping are discussed further in Chapter 10.
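At a small scale, the filtering that such automated techniques perform can be as simple as the following Python sketch, which tallies the posts that mention a topic; the sample posts and the keyword are invented for illustration.

```python
# Hypothetical post texts; in practice these would come from a platform
# API or a data scraping tool (see Chapter 10).
posts = [
    "Climate change is accelerating faster than predicted",
    "New phone launched today",
    "Great turnout at the climate change march",
]

mentions = [p for p in posts if "climate change" in p.lower()]
print(f"{len(mentions)} of {len(posts)} posts mention climate change")
```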

8.7 Choosing and Combining Techniques

Combining data gathering techniques into a single data gathering program is common practice,
for example, when collecting case study data (see Box 8.8). The benefit of using a combination
of methods is to provide multiple perspectives. Choosing which data gathering techniques to
use depends on a variety of factors related to the study goals. There is no right technique or
combination of techniques, but some will undoubtedly be more appropriate than others. The
decision about which to use will need to be made after taking all of the factors into account.

Table 8.1 provides an overview to help choose a set of techniques for a specific project.
It lists the kind of information obtained (such as answers to specific questions) and the
type of data (for example, mostly qualitative or mostly quantitative). It also includes some
advantages and disadvantages for each technique. Note that different modalities can be used
for some of these techniques. For example, interviews and focus groups can be conducted
face-to-face, by phone, or through teleconferencing, so when considering advantages and
disadvantages of the techniques, this should also be taken into account.

In addition, technique choice is influenced by practical issues.

• The focus of the study. What kind of data will support the focus and goal of the study? This
will be influenced by the interaction design activity and the level of maturity of the design.

• The participants involved. Characteristics of the target user group including their location
and availability.

• The nature of the technique. Does the technique require specialist equipment or training,
and do the investigators have the appropriate knowledge and experience?

• Available resources. Expertise, tool support, time, and money.


| Technique | Good for | Kind of data | Advantages | Disadvantages |
|---|---|---|---|---|
| Interviews | Exploring issues | Some quantitative but mostly qualitative | Interviewer can guide interviewee if necessary. Encourages contact between developers and users. | Artificial environment may intimidate interviewee. It also removes them from the environment where work is typically done. |
| Focus groups | Collecting multiple viewpoints | Some quantitative but mostly qualitative | Highlights areas of consensus and conflict. Encourages contact between developers and users. | Possibility of dominant characters. |
| Questionnaires | Answering specific questions | Quantitative and qualitative | Can reach many people with low resource requirements. | The design is key. Response rates may be low. Unless carefully designed, the responses may not provide suitable data. |
| Direct observation in the field | Understanding the context of user activity | Mostly qualitative | Observing gives insights that other techniques don’t provide. | Very time-consuming. Huge amounts of data are produced. |
| Direct observation in a controlled environment | Capturing the detail of what individuals do | Quantitative and qualitative | Can focus on the details of a task without interruption. | Results may have limited use in the normal environment because the conditions were artificial. |
| Indirect observation | Observing users without disturbing their activity; data captured automatically | Quantitative (logging) and qualitative (diary) | User doesn’t get distracted by the data gathering; automatic recording means that it can extend over long periods of time. | A large amount of quantitative data needs tool support to analyze (logging); participants’ memories may exaggerate (diary). |

Table 8.1 Overview of data gathering techniques and their use


ACTIVITY 8.9
For each of the following products, consider what kinds of data gathering would be appropriate and how to use the different techniques introduced earlier. Assume that product development is just starting and that there is sufficient time and resources to use any of the techniques.

1. A new software app to support a small organic produce shop. There is a system running already with which the users are reasonably happy, but it is looking dated and needs upgrading.

2. An innovative device for diabetes sufferers to help them record and monitor their blood sugar levels.

3. An ecommerce website that sells fashion clothing for young people.

Comment
1. As this is a small shop, there are likely to be few stakeholders. Some period of observation would be important to understand the context of the new and the old systems. Interviewing the staff rather than giving them questionnaires is likely to be appropriate because there aren’t very many of them, and this will yield richer data and give the developers a chance to meet the users. Organic produce is regulated by a variety of laws, so looking at this documentation will help you understand any legal constraints that have to be taken into account. This suggests a series of interviews with the main users to understand the positive and negative features of the existing system, a short observation session to understand the context of the system, and a study of documentation surrounding the regulations.

2. In this case, the user group is quite large and spread out geographically, so talking to all of them is not feasible. However, interviewing a representative sample of potential users, possibly at a local diabetic clinic, is feasible. Observing current practices to monitor blood sugar levels will help you understand what is required. An additional group of stakeholders would be those who use or have used the other products on the market. These stakeholders can be questioned about their experience with their existing devices so that the new device can be an improvement. A questionnaire sent to a wider group in order to confirm the findings from the interviews would be appropriate, as might a focus group where possible.

3. Again, the user group is quite large and spread out geographically. In fact, the user group may not be very well defined. Interviews backed up by questionnaires and focus groups would be appropriate. In this case, identifying similar or competing sites and evaluating them will help provide information for an improved product.
BOX 8.8
Collecting Case Study Data

Case studies often use a combination of methods, for example, direct and indirect observations and interviews. Although people frequently use the term case study colloquially to refer to a study that they are using as a case example, there is also a case study methodology that collects field study data over days, months, or even years. There is a body of literature that provides advice on how to do good case studies. Robert Yin (2013), for example, identifies these data collection sources: documentation, archival records, interviews, direct observations, participant observation, and physical artifacts. Case studies are good for integrating multiple perspectives, for example, studying new technology in the wild, and for giving meaning to first impressions. The data collection process tends to be intensive, concurrent, interactive, and iterative.

In a study of how local communities organize and adapt technology for managing their local rivers and streams, approaching it as a case study allowed a detailed contextual analysis of events and relationships that occurred over multiple groups of volunteers during a two-year period (Preece et al., 2019). From this study, the researchers learned about the volunteers’ needs for highly flexible software to support the diverse groups of participants working on a wide range of water-related topics.

In-Depth Activity
The aim of this in-depth activity is to practice data gathering. Assume that you have been employed to improve the user experience of an interactive product such as a smartphone app, a digital media player, a Blu-ray player, computer software, or some other type of technology. This existing product may be redesigned, or a completely new product may be created. To do the assignment, find a group of people or a single individual prepared to be the user group. These could be your family, friends, peers, or people in a local community group.

For this assignment:

(a) Clarify the basic goal of improving the product by considering what this means in your circumstances.

(b) Watch the group (or person) casually to get an understanding of any issues that might create challenges for this activity and any information to help refine the study goals.

(c) Explain how you would use each of the three data gathering techniques (interview, questionnaire, and observation) in your data gathering program. Explain how your plan takes account of triangulation.

(d) Consider your relationship with the user group and decide whether an informed consent form is required. (Figure 8.1 will help you to design one if needed.)

(e) Plan your data gathering program in detail.
• Decide what kind of interview to run and design a set of interview questions. Decide how to record the data, then acquire and test any equipment needed and run a pilot study.
• Decide whether to include a questionnaire in your data gathering program, and design appropriate questions for it. Run a pilot study to check the questionnaire.
• Decide whether to use direct or indirect observation and where on the outsider/insider spectrum the observers should be. Decide how to record the data, then acquire and test any equipment needed and run a pilot study.

(f) Carry out the study, but limit its scope. For example, interview only two or three people, or plan only two half-hour observation periods.

(g) Reflect on this experience and suggest what you would do differently next time.

Keep the data gathered, as this will form the basis of the in-depth activity in Chapter 9.


Summary
This chapter has focused on three main data gathering methods that are commonly used in interaction design: interviews, questionnaires, and observation. It has described in detail the planning and execution of each. In addition, five key issues of data gathering were presented, and how to record the data gathered was discussed.

Key Points
• All data gathering sessions should have clear goals.
• Depending on the study context, an informed consent form and other permissions may be needed to run the study.
• Running a pilot study helps to test the feasibility of a planned data gathering session and associated instruments such as questions.
• Triangulation involves investigating a phenomenon from different perspectives.
• Data may be recorded using handwritten notes, audio or video recording, a camera, or any combination of these.
• There are three styles of interviews: structured, semi-structured, and unstructured.
• Questionnaires may be paper-based, via email, or online.
• Questions for an interview or questionnaire can be open-ended or closed-ended. Closed-ended questions require the interviewee to select from a limited range of options. Open-ended questions accept a free-range response.
• Observation may be direct or indirect.
• In direct observation, the observer may adopt different levels of participation, ranging from insider (participant observer) to outsider (passive observer).
• Choosing appropriate data gathering techniques depends on the focus of the study, the participants involved, the nature of the technique, and the resources available.

Further Reading

FETTERMAN, D. M. (2010). Ethnography: Step by Step (3rd ed.). Applied Social Research Methods Series, Vol. 17. Sage. This book introduces the theory and practice of ethnography, and it is an excellent guide for beginners. It covers both data gathering and data analysis in the ethnographic tradition.

FULTON SURI, J. (2005). Thoughtless Acts? Chronicle Books. This intriguing little book invites you to consider how people react to their environment. It is a good introduction to the art of observation.

HEATH, C., HINDMARSH, J. and LUFF, P. (2010). Video in Qualitative Research: Analyzing Social Interaction in Everyday Life. Sage. This accessible book provides practical advice and guidance about how to set up and perform data gathering using video recording. It also covers data analysis, presenting findings, and the potential implications of video research, based on the authors’ own experience.

OLSON, J. S. and KELLOGG, W. A. (eds) (2014). Ways of Knowing in HCI. Springer. This edited collection contains useful chapters on a wide variety of data collection and analysis techniques. Topics that are particularly relevant to this chapter include ethnography, experimental design, log data collection and analysis, and ethics in research.

ROBSON, C. and McCARTAN, K. (2016). Real World Research (4th ed.). John Wiley & Sons. This book provides comprehensive coverage of data gathering and analysis techniques and how to use them. Earlier and related books by Robson also address topics discussed in this chapter.

TOEPOEL, V. (2016). Doing Surveys Online. Sage. This book is a hands-on guide for preparing and conducting a wide range of surveys, including surveys for mobile devices, opt-in surveys, panels, and polls. It also discusses details about sampling that can be applied to other data gathering techniques.

Chapter 9

DATA ANALYSIS, INTERPRETATION, AND PRESENTATION

9.1 Introduction

9.2 Quantitative and Qualitative

9.3 Basic Quantitative Analysis

9.4 Basic Qualitative Analysis

9.5 What Kind of Analytic Framework to Use

9.6 Tools to Support Data Analysis

9.7 Interpreting and Presenting the Findings

Objectives
The main goals of this chapter are to accomplish the following:

• Discuss the difference between qualitative and quantitative data and analysis.
• Enable you to analyze data gathered from questionnaires.
• Enable you to analyze data gathered from interviews.
• Enable you to analyze data gathered from observation studies.
• Make you aware of software packages that are available to help your analysis.
• Identify some of the common pitfalls in data analysis, interpretation, and presentation.
• Enable you to interpret and present your findings in a meaningful and appropriate
manner.

9.1 Introduction

The kind of analysis that can be performed on a set of data will be influenced by the goals
identified at the outset and the data gathered. Broadly speaking, a qualitative analysis approach,
a quantitative analysis approach, or a combination of qualitative and quantitative approaches
may be taken. The last of these is very common, as it provides a more comprehensive account
of the behavior being observed or the performance being measured.


Most analysis, whether quantitative or qualitative, begins with initial reactions to or observations from the data. This may involve identifying patterns or calculating simple numerical values such as ratios, averages, or percentages. For all data, but especially when dealing with large volumes of data (that is, Big Data), it is useful to look over the data to check for anomalies that are likely to be erroneous, such as people recorded as being 999 years old. This process is known as data cleansing, and digital tools are often available to help with it. This initial analysis is followed by more detailed work using structured frameworks or theories to support the investigation.
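As a minimal sketch of data cleansing, the following Python code (using the pandas library) flags implausible ages such as the 999-year-old example before the main analysis begins; the column names and the 0–120 plausibility range are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical survey responses, including one implausible age.
df = pd.DataFrame({"participant": ["A", "B", "C", "D"],
                   "age": [34, 999, 27, 41]})

# Flag records outside a plausible range rather than silently deleting
# them, so each anomaly can be checked against the raw data first.
suspect = df[(df["age"] < 0) | (df["age"] > 120)]
print(suspect)

clean = df.drop(suspect.index)
```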

Interpretation of the findings often proceeds in parallel with analysis, but there are differ-
ent ways to interpret results, and it is important to make sure that the data supports any con-
clusions. A common mistake is for the investigator’s existing beliefs or biases to influence the
interpretation of results. Imagine that an initial analysis of the data has revealed a pattern of
responses to customer care questionnaires that indicates that inquiries from customers routed
through the Sydney office of an organization take longer to process than those routed through
the Moscow office. This result can be interpreted in many different ways. For example, the
customer care operatives in Sydney are less efficient, they provide more detailed responses,
the technology supporting the inquiry process in Sydney needs to be updated, customers
reaching the Sydney office demand a higher level of service, and so on. Which one is correct?
To determine whether any of these potential interpretations is accurate, it would be appropri-
ate to look at other data such as customer inquiry details and maybe to interview staff.

Another common mistake is to make claims that go beyond what the data can support.
This is a matter of interpretation and of presentation. Using words such as many or often
or all when reporting conclusions needs to be carefully considered. An investigator needs to
remain as impartial and objective as possible if the conclusions are to be trusted. Showing
that the conclusions are supported by the results is an important skill to develop.

Finally, finding the best way to present findings is an equally skilled task, and it depends on the goals as well as on the audience for whom the study was performed. For example, a formal notation may be used to report the results of the requirements activity, while a summary of problems found, supported by video clips of users experiencing those problems, may be better for a presentation to the team of developers.

This chapter introduces a variety of methods, and it describes in more detail how to
approach data analysis and presentation using some of the common approaches taken in
interaction design.

9.2 Quantitative and Qualitative

Quantitative data is in the form of numbers, or data that can easily be translated into
numbers. Examples are the number of years’ experience the interviewees have, the number
of projects a department handles at a time, or the number of minutes it takes to perform a
task. Qualitative data is in the form of words and images, and it includes descriptions, quotes
from interviewees, vignettes of activity, and photos. It is possible to express qualitative data
in numerical form, but it is not always meaningful to do so (see Box 9.1).

It is sometimes assumed that certain forms of data gathering can only result in quantitative
data and that others can only result in qualitative data. However, this is a fallacy. All forms
of data gathering discussed in the previous chapter may result in qualitative and quantitative


data. For example, on a questionnaire, questions about the participant’s age or number of
software apps they use in a day will result in quantitative data, while any comments will
result in qualitative data. In an observation, quantitative data that may be recorded includes
the number of people involved in a project or how many hours someone spends sorting out
a problem, while notes about feelings of frustration, or the nature of interactions between
team members, are qualitative data.

Quantitative analysis uses numerical methods to ascertain the magnitude, amount, or
size of something; for example, the attributes, behavior, or strength of opinion of the partici-
pants. For example, in describing a population, a quantitative analysis might conclude that
the average person is 5 feet 11 inches tall, weighs 180 pounds, and is 45 years old. Qualita-
tive analysis focuses on the nature of something and can be represented by themes, patterns,
and stories. For example, in describing the same population, a qualitative analysis might
conclude that the average person is tall, thin, and middle-aged.

BOX 9.1
Use and Abuse of Numbers

Numbers are infinitely malleable and can make a convincing argument, but it is important
to justify the manipulation of quantitative data and what the implications will be. Before
adding a set of numbers together, finding an average, calculating a percentage, or performing
any other kind of numerical translation, consider whether the operation is meaningful in the
specific context.

Qualitative data can also be turned into a set of numbers. Translating non-numerical data
into a numerical or ordered scale is appropriate at times, and this is a common approach in
interaction design. However, this kind of translation also needs to be justified to ensure that it
is meaningful in the given context. For example, assume you have collected a set of interviews
from sales representatives about their use of a new mobile app for reporting sales queries.
One way of turning this data into a numerical form would be to count the number of words
uttered by each interviewee. Conclusions might then be drawn about how strongly the sales
representatives feel about the app; for example, the more they had to say about the product,
the stronger they felt about it. But do you think this is a wise way to analyze the data? Does
it help to answer the study questions?

Other, less obvious, abuses include translating small population sizes into percentages.
For example, saying that 50 percent of users take longer than 30 minutes to place an order
through an e-commerce website carries a different meaning than saying that two out of four
users had the same problem. It is better not to use percentages unless the number of data
points is at least 10, and even then it is appropriate to use both percentages and raw numbers
to make sure that the claim is not misunderstood.
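One way to follow this advice in practice is a small reporting helper like the Python sketch below, which always gives the raw numbers and adds a percentage only when the sample reaches the rule-of-thumb threshold of 10 data points mentioned above.

```python
def report_share(count, total, minimum=10):
    """Report a proportion as raw numbers, adding a percentage only
    when the sample is large enough for it to be meaningful."""
    text = f"{count} out of {total}"
    if total >= minimum:
        text += f" ({count / total:.0%})"
    return text

print(report_share(2, 4))     # "2 out of 4" -- no percentage for a tiny sample
print(report_share(50, 100))  # "50 out of 100 (50%)"
```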

It is possible to perform legitimate statistical calculations on a set of data and still present
misleading results by not making the context clear or by choosing the particular calculation
that gives the most favorable result (Huff, 1991). In addition, choosing and applying the best
statistical test requires careful thinking (Cairns, 2019), as using an inappropriate test can
unintentionally misrepresent the data.


9.2.1 First Steps in Analyzing Data
Having collected the data, some initial processing is normally required before data analy-
sis can begin in earnest. For example, audio data may be transcribed by hand or by
using an automated tool, such as Dragon; quantitative data, such as time taken or errors
made, is usually entered into a spreadsheet, like Excel. Initial analysis steps for data typi-
cally collected through interviews, questionnaires, and observation are summarized in
Table 9.1.

Interviews
Interviewer notes need to be written up and expanded as soon as possible after the interview
has taken place so that the interviewer’s memory is clear and fresh. An audio or video record-
ing may be used to help in this process, or it may be transcribed for more detailed analysis.

| | Usual raw data | Example qualitative data | Example quantitative data | Initial processing steps |
|---|---|---|---|---|
| Interviews | Audio recordings. Interviewer notes. Video recordings. | Responses to open-ended questions. Video pictures. Respondent’s opinions. | Age, job role, years of experience. Responses to close-ended questions. | Transcription of recordings. Expansion of notes. Entry of answers to close-ended questions into a spreadsheet. |
| Questionnaires | Written responses. Online database. | Responses to open-ended questions. Responses in “further comments” fields. Respondent’s opinions. | Age, job role, years of experience. Responses to close-ended questions. | Clean up data. Filter into different data sets. |
| Observation | Observer’s notes. Photographs. Audio and video recordings. Data logs. Think-aloud recordings. Diaries. | Records of behavior. Description of a task as it is undertaken. Copies of informal procedures. | Demographics of participants. Time spent on a task. The number of people involved in an activity. How many different types of activity are undertaken. | Expansion of notes. Transcription of recordings. Synchronization between data recordings. |

Table 9.1 Data gathered and typical initial processing steps for interviews, questionnaires, and observation


Transcription takes significant effort, as people talk more quickly than most people can type
(or write), and the recording is not always clear. It is worth considering whether to transcribe
the whole interview or just sections of it that are relevant. Deciding what is relevant, however,
can be difficult. Revisiting the goals of the study to see which passages address the research
questions can guide this process.

Close-ended questions are usually treated as quantitative data and analyzed using basic quantitative analysis (see Section 9.3, “Basic Quantitative Analysis”). For example, a question that asks for the respondent’s age range can easily be analyzed to find the percentage of respondents in each range. More complicated statistical techniques are needed to identify relationships between responses that can be generalized, such as whether there is an interaction between the condition being tested and a demographic. For example, do people of different ages use Facebook for different lengths of time when first logging on in the morning or at night before they go to bed? Open-ended questions typically result in qualitative data that might be searched for categories or patterns of response.

Questionnaires
Increasingly, questionnaire responses are provided using online surveys, and the data is auto-
matically stored in a database. The data can be filtered according to respondent subpopula-
tions (for instance, everyone under 16) or according to a particular question (for example,
to understand respondents’ reactions to one kind of robot personality rather than another).
This allows analyses to be conducted on subsets of the data and hence to draw specific con-
clusions for more targeted goals. To conduct this kind of analysis requires sufficient data
from a large enough sample of participants.
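Once the responses are in tabular form, this kind of filtering is a one-line operation. The following Python sketch (using pandas) pulls out the two example subpopulations mentioned above; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical questionnaire export: one row per respondent.
responses = pd.DataFrame({
    "age": [14, 22, 15, 31, 12],
    "robot_personality": ["friendly", "formal", "friendly", "formal", "friendly"],
    "rating": [5, 3, 4, 2, 5],
})

under_16 = responses[responses["age"] < 16]                          # a subpopulation
friendly = responses[responses["robot_personality"] == "friendly"]   # one condition

print(under_16["rating"].mean(), friendly["rating"].mean())
```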

Observation
Observation can result in a wide variety of data including notes, photographs, data logs,
think-aloud recordings (often called protocols), video, and audio recordings. Taken together,
these different types of data can provide a rich picture of the observed activity. The difficult
part is working out how to combine the different sources to create a coherent narrative
of what has been recorded; analytic frameworks, discussed in Section 9.5, can help with
this. Initial data processing includes writing up and expanding notes and transcribing ele-
ments of the audio and video recordings and the think-aloud protocols. For observation in
a controlled environment, initial processing might also include synchronizing different data
recordings.

Transcriptions and the observer’s notes are most likely to be analyzed using qualitative
approaches, while photographs provide contextual information. Data logs and some ele-
ments of the observer’s notes would probably be analyzed quantitatively.

9.3 Basic Quantitative Analysis

Explaining statistical analysis requires a whole book on its own (for example, see Cairns,
2019). Here, we introduce two basic quantitative analysis techniques that can be used effec-
tively in interaction design: averages and percentages. Percentages are useful for standard-
izing the data, particularly to compare two or more large sets of responses.


Averages and percentages are fairly well-known numerical measures. However, there
are three different types of average, and using the wrong one can lead to the misinter-
pretation of the results. These three are: mean, median, and mode. Mean refers to the
commonly understood interpretation of average; that is, add together all the figures and
divide by the number of figures with which you started. Median and mode averages are
less well-known but are very useful. The median is the middle value of the data when the
numbers are ranked. The mode is the most commonly occurring number. For example, in
a set of data (2, 3, 4, 6, 6, 7, 7, 7, 8), the median is 6 and the mode is 7, while the mean
is 50/9 = 5.56. In this case, the difference between the different averages is not that great.
However, consider the set (2, 2, 2, 2, 450). Now the median is 2, the mode is 2, and the
mean is 458/5 = 91.6!
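The two example data sets can be checked directly with Python’s standard statistics module, as this short sketch shows.

```python
import statistics

data = [2, 3, 4, 6, 6, 7, 7, 7, 8]
print(statistics.mean(data))     # 5.55... (50 / 9)
print(statistics.median(data))   # 6
print(statistics.mode(data))     # 7

skewed = [2, 2, 2, 2, 450]
print(statistics.mean(skewed))   # 91.6 -- dragged up by one extreme value
print(statistics.median(skewed)) # 2 -- a better summary of the typical value here
```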

Use of simple averages can provide useful overview information, but they need to be used with caution. Evangelos Karapanos et al. (2009) go further, suggesting that averaging treats diversity among participants as error, and propose the use of a multidimensional scaling approach instead.

Before any analysis can take place, the data needs to be collated into analyzable data sets.
Quantitative data can usually be translated into rows and columns, where one row equals
one record, such as respondent or interviewee. If these are entered into a spreadsheet such
as Excel, this makes simple manipulations and data set filtering easier. Before entering data
in this way, it is important to decide how to represent the different possible answers. For
example, “don’t know” represents a different response from no answer at all, and they need
to be distinguished, for instance, with separate columns in the spreadsheet. Also, if dealing
with options from a close-ended question, such as job role, there are two different possible
approaches that affect the analysis. One approach is to have a column headed “Job role” and
to enter the job role as it is given by the respondent or interviewee. The alternative approach
is to have a column for each possible answer. The latter approach lends itself more easily to
automatic summaries. Note, however, that this option will be open only if the original ques-
tion was designed to collect the appropriate data (see Box 9.2).
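The difference between the two approaches can be seen in the following Python sketch (using pandas): the single “job role as given” column is expanded into one column per possible answer, which makes automatic summaries straightforward. The example roles are invented, and None is used to keep “no answer” distinct from an explicit “don’t know.”

```python
import pandas as pd

# Approach 1: one column holding the job role exactly as given.
# None marks no answer, kept distinct from an explicit "don't know".
df = pd.DataFrame({"respondent": ["A", "B", "C", "D"],
                   "job_role": ["designer", "developer", "don't know", None]})

# Approach 2: one column per possible answer, produced automatically.
coded = pd.get_dummies(df["job_role"])
print(coded.sum())  # number of respondents per answer option
```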



BOX 9.2
How Question Design Affects Data Analysis

Different question designs affect the kinds of analyses that can be performed and the kinds
of conclusions that can be drawn. To illustrate this, assume that some interviews have
been conducted to evaluate a new app that lets you try on virtual clothes and see yourself
in real time as a 3D holograph. This is an extension of the Memory Mirror described at
http://memorymirror.com.

Assume that one of the questions asked is: “How do you feel about this new app?”
Responses to this will be varied and may include that it is cool, impressive, realistic, clunky,
technically complex, and so on. There are many possibilities, and the responses would need
to be treated qualitatively. This means that analysis of the data must consider each individual
response. If there are only 10 or so responses, then this may not be too bad, but if there
are many more, it becomes harder to process the information and harder to summarize
the findings. This is typical of open-ended questions; that is, answers are not likely to be
homogeneous and so they will need to be treated individually. In contrast, answers to a close-
ended question, which gives respondents a fixed set of alternatives from which to choose,
can be treated quantitatively. So, for example, instead of asking “How do you feel about the
virtual try-on holograph?” assume that you have asked “In your experience, are virtual try-on
holographs realistic, clunky, or distorted?” This clearly reduces the number of options and the
responses would be recorded as “realistic,” “clunky,” or “distorted.”

When entered in a spreadsheet, or a simple table, initial analysis of this data might look
like the following:

| Respondent | Realistic | Clunky | Distorted |
|---|---|---|---|
| A | 1 | | |
| B | 1 | | |
| C | | 1 | |
| . . . | | | |
| Z | | | 1 |
| Total | 14 | 5 | 7 |

Based on this, we can then say that 14 out of 26 (54 percent) of the respondents think
virtual try-on holographs are realistic, 5 out of 26 (19 percent) think they are clunky, and 7
out of 26 (27 percent) think they are distorted. Note also that in the table, respondents’ names
are replaced by letters so that they are identifiable but anonymous to any onlookers. This
strategy is important for protecting participants’ privacy.

Another alternative that might be used in a questionnaire is to phrase the question in
terms of a Likert scale, such as the following one. This again alters the kind of data and hence
the kind of conclusions that can be drawn.

Virtual try-on holographs are realistic:

strongly agree    agree    neither    disagree    strongly disagree
     □              □         □           □              □

The data could then be analyzed using a simple spreadsheet or table:

| Respondent | Strongly agree | Agree | Neither | Disagree | Strongly disagree |
|---|---|---|---|---|---|
| A | 1 | | | | |
| B | | 1 | | | |
| C | | | 1 | | |
| . . . | | | | | |
| Z | | | | | 1 |
| Total | 5 | 7 | 10 | 1 | 3 |

In this case, the kind of data being collected has changed. Based on this second set, nothing can be said about whether respondents think the virtual try-on holographs are clunky or distorted, as that question has not been asked. We can only say that, for example, 4 out of 26 (15 percent) disagreed with the statement that virtual try-on holographs are realistic, and of those, 3 (11.5 percent) strongly disagreed.

For simple collation and analysis, spreadsheet software such as Excel or Google Sheets is often used, as it is commonly available, well understood, and offers a variety of numerical manipulations and graphical representations. Basic analysis might involve finding averages and identifying outliers, that is, values that are significantly different from the majority and hence not common. Producing a graphical representation provides an overall view of the data and any patterns it contains. Other tools are available for performing specific statistical tests, such as online t-tests and A/B testing tools. Data visualization tools can create more sophisticated representations of the data, such as heatmaps.

For example, consider the set of data shown in Table 9.2, which was collected during an evaluation of a new photo sharing app. This data shows the users’ experience of social media and the number of errors made while trying to complete a controlled task with the new app. It was captured automatically and recorded in a spreadsheet; then the totals and averages were calculated. The graphs in Figure 9.1 were generated using the spreadsheet package. They show an overall view of the data set. In particular, it is easy to see that there are no significant outliers in the error rate data.

Adding one more user to Table 9.2 with an error rate of 9 and plotting the new data as a scatter graph (see Figure 9.2) illustrates how graphs can help to identify outliers. Outliers are usually removed from the main data set because they distort the general patterns. However, outliers may also be interesting cases to investigate further in case there are special circumstances surrounding those users and their session.

9 . 3 B A S I c Q u A N T I TAT I v E A N A LY S I S 315

These initial investigations also help to identify other areas for further investigation. For
example, is there something special about users with error rate 0 or something distinctive
about the performance of those who use the social media only once a month?

User      Number of errors made
1         4
2         2
3         1
4         0
5         2
6         3
7         2
8         0
9         3
10        2
11        1
12        2
13        4
14        2
15        1
16        1
17        0
18        0
Total     30
Mean      1.67 (to 2 decimal places)

Social media use across the 18 users: more than once a day, 4 users; once a day, 7; once a week, 2; two or three times a week, 3; once a month, 2.

Table 9.2 Data gathered during a study of a photo sharing app



Figure 9.1 Graphical representations of the data in Table 9.2 (a) The distribution of errors made
(take note of the scale used in these graphs, as seemingly large differences may be much smaller in
reality). (b) The spread of social media experience within the participant group.



Figure 9.2 Using a scatter diagram helps to identify outliers in your data quite quickly
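To make these checks concrete, here is a minimal sketch in Python (standard library only) that computes the mean of the Table 9.2 error counts and flags values more than two standard deviations from the mean, one common rule of thumb for spotting potential outliers; the second pass adds the extra user with an error rate of 9:

import statistics

# Error counts from Table 9.2, in user order.
errors = [4, 2, 1, 0, 2, 3, 2, 0, 3, 2, 1, 2, 4, 2, 1, 1, 0, 0]

for data in (errors, errors + [9]):   # second pass adds the extra user
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    # Flag values more than two standard deviations from the mean.
    outliers = [e for e in data if abs(e - mean) > 2 * sd]
    print(f"mean = {mean:.2f}, potential outliers: {outliers}")

Run as written, the first pass reports a mean of 1.67 and no outliers, while the second flags the value 9, mirroring what the scatter graph in Figure 9.2 shows visually.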

ACTIVITY 9.1
The data in the following table represents the time taken for a group of users to select and buy an item from an online shopping website.
Using a spreadsheet application to which you have access, generate a bar graph and a scatter diagram to provide an overall view of the data. From this representation, make two initial observations about the data that might form the basis of further investigation.

User                      A   B   C   D   E   F   G   H   I   J   K   L   M   N   O   P   Q   R   S
Time to complete (mins)   15  10  12  10  14  13  11  18  14  17  20  15  18  24  12  16  18  20  26

Comment
The bar graph and scatter diagram are shown here.

[Bar graph: Time to complete task; x-axis: User (A to S); y-axis: Time in minutes]

[Scatter diagram: Time to complete task; x-axis: User; y-axis: Time in minutes]

From these two diagrams, there are two areas for further investigation. First, the values for user N (24) and user S (26) are higher than the others and could be looked at in more detail. In addition, there appears to be a trend that the users at the beginning of the testing time (particularly users B, C, D, E, F, and G) performed faster than those toward the end of the testing time. This is not a clear-cut situation, as O also performed well, and I, L, and P were almost as fast, but there may be something about this later testing time that has affected the results, and it is worth investigating further.
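The same two views can also be produced with a short script instead of a spreadsheet package. The following is a minimal sketch using Python's matplotlib library; the variable names are ours, and the data is the activity's table:

import matplotlib.pyplot as plt

users = list("ABCDEFGHIJKLMNOPQRS")
times = [15, 10, 12, 10, 14, 13, 11, 18, 14, 17, 20, 15, 18, 24, 12, 16, 18, 20, 26]

fig, (bar_ax, scatter_ax) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: one bar per user, in testing order.
bar_ax.bar(users, times)
bar_ax.set(title="Time to complete task", xlabel="User", ylabel="Time in minutes")

# Scatter diagram: position in the testing sequence against time taken,
# which makes stragglers such as N and S easy to spot.
scatter_ax.scatter(range(1, len(times) + 1), times)
scatter_ax.set(title="Time to complete task", xlabel="User", ylabel="Time in minutes")

plt.tight_layout()
plt.show()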

It is fairly straightforward to compare two sets of results, for instance from the evaluation of two interactive products, using these kinds of graphical representations of the data. Semantic differential data can also be analyzed in this way and used to identify trends, provided that the format of the question is appropriate. For example, the following question was asked in a questionnaire to evaluate two different smartphone designs:

For each pair of adjectives, place a cross at the point between them that reflects the extent to which you believe the adjectives describe the smartphone design. Please place only one cross between the marks on each line.

Annoying          |   |   |   |   |   |   Pleasing
Easy to use       |   |   |   |   |   |   Difficult to use
Value-for-money   |   |   |   |   |   |   Expensive
Attractive        |   |   |   |   |   |   Unattractive
Secure            |   |   |   |   |   |   Not secure
Helpful           |   |   |   |   |   |   Unhelpful
Hi-tech           |   |   |   |   |   |   Lo-tech
Robust            |   |   |   |   |   |   Fragile
Inefficient       |   |   |   |   |   |   Efficient
Modern            |   |   |   |   |   |   Dated



Table 9.3 and Table 9.4 show the tabulated results from 100 respondents. Note that the responses have been translated into five categories, numbered from 1 to 5, based on where the respondent marked the line between each pair of adjectives. Respondents may have intentionally put a cross closer to one side of a box than the other, but it is acceptable to lose this nuance, provided that the original data is kept and any further analysis can refer back to it.

The graph in Figure 9.3 shows how the two smartphone designs varied according to the
respondents’ perceptions of how modern the design is. This graphical notation shows clearly
how the two designs compare.

                  1    2    3    4    5
Annoying          35   20   18   15   12   Pleasing
Easy to use       20   28   21   13   18   Difficult to use
Value-for-money   15   30   22   27   6    Expensive
Attractive        37   22   32   6    3    Unattractive
Secure            52   29   12   4    3    Not secure
Helpful           33   21   32   12   2    Unhelpful
Hi-tech           12   24   36   12   16   Lo-tech
Robust            44   13   15   16   12   Fragile
Inefficient       28   23   25   12   12   Efficient
Modern            35   27   20   11   7    Dated

Table 9.3 Phone 1

                  1    2    3    4    5
Annoying          24   23   23   15   15   Pleasing
Easy to use       37   29   15   10   9    Difficult to use
Value-for-money   26   32   17   13   12   Expensive
Attractive        38   21   29   8    4    Unattractive
Secure            43   22   19   12   4    Not secure
Helpful           51   19   16   12   2    Unhelpful
Hi-tech           28   12   30   18   12   Lo-tech
Robust            46   23   10   11   10   Fragile
Inefficient       10   6    37   29   18   Efficient
Modern            3    10   45   27   15   Dated

Table 9.4 Phone 2

Figure 9.3 A graphical comparison of two smartphone designs according to whether they are perceived as modern or dated
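A quick way to compare the two designs numerically is to compute a weighted mean rating from the category counts. The following is a minimal sketch in Python, assuming category 1 is the end of the scale nearest "Modern" and category 5 the end nearest "Dated"; the counts are taken from the "Modern" rows of Table 9.3 and Table 9.4:

phones = {
    "Phone 1": [35, 27, 20, 11, 7],   # respondents per category 1..5
    "Phone 2": [3, 10, 45, 27, 15],
}

for name, counts in phones.items():
    n = sum(counts)
    # Weighted mean: each category number weighted by how many
    # respondents chose it.
    mean = sum(cat * count for cat, count in enumerate(counts, start=1)) / n
    print(f"{name}: mean rating {mean:.2f} (1 = Modern, 5 = Dated)")

On this reading, Phone 1 (mean 2.28) is perceived as more modern than Phone 2 (mean 3.41), which is consistent with the pattern shown in Figure 9.3.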

Data logs that capture users’ interactions automatically, such as with a website or smart-
phone, can also be analyzed and represented graphically, thus helping to identify patterns
in behavior. Also, more sophisticated manipulations and graphical images can be used to
highlight patterns in collected data.

9.4 Basic Qualitative Analysis

Three basic approaches to qualitative analysis are discussed in this section: identifying
themes, categorizing data, and analyzing critical incidents. Critical incident analysis is a way
to isolate subsets of data for more detailed analysis, perhaps by identifying themes or apply-
ing categories. These three basic approaches are not mutually exclusive and are often used in combination; for example, when analyzing video material, critical incidents may first be identified and then a thematic analysis undertaken. Video analysis is discussed further in Box 9.3.

As with quantitative analysis, the first step in qualitative analysis is to gain an overall impression of the data and to start looking for interesting features, topics, repeated observations, or things that stand out. Some of these will have emerged during data gathering, and this may already have suggested the kinds of patterns to look for, but it is important to confirm and re-confirm findings to make sure that initial impressions don't bias the analysis. For example, you might notice from the logged data of people visiting TripAdvisor.com that they often look first for reviews of hotels rated "terrible." Or, you might notice that many respondents say how frustrating it is to have to answer so many security questions when logging onto an online banking service. During this first pass, it is not necessary to capture all of the findings but instead to highlight common features and record any surprises that arise (Blandford, 2017).

For observations, the guiding framework used in data gathering will give some structure to the data. For example, the practitioner's framework for observation introduced in Chapter 8, "Data Gathering," will have resulted in a focus on who, where, and what, while using the more detailed framework will result in patterns relating to physical objects, people's goals, sequences of events, and so on.

Qualitative data can be analyzed inductively, that is, extracting concepts from the data,
or deductively, in other words using existing theoretical or conceptual ideas to categorize
data elements (Robson and McCartan, 2016). Which approach is used depends on the data
obtained and the goal of the study, but the underlying principle is to classify elements of the
data in order to gain insights toward the study’s goal. Identifying themes (thematic analysis)
takes an inductive approach, while categorizing data takes a deductive approach. In practice,
analysis is often performed iteratively, and it is common for themes identified inductively then to be applied deductively to new data, and for an initial, pre-existing categorization scheme to be enhanced inductively when applied to a new situation or new data. One of the
most challenging aspects of identifying themes or new categories is determining meaningful
codes that are orthogonal (that is, codes which do not overlap). Another is deciding on the
appropriate granularity for them, for example at the word, phrase, sentence, or paragraph
level. This is also dependent on the goal of the study and the data being analyzed.

Whether an inductive or deductive approach is used, an objective is to produce a reli-
able analysis, that is, one that can be replicated by someone else if they were to use the same
type of approach. One way to achieve this is to train another person to do the coding. When
training is complete, both researchers analyze a sample of the same data. If there is a large
discrepancy between the two analyses, either training was inadequate or the categorization
is not working and needs to be refined. When a high level of reliability is reached between
the two researchers, it can be quantified by calculating the inter-rater reliability. This is the
percentage of agreement between the analyses of the two researchers, defined as the number
of items of agreement, for example the number of categories or themes arising from the data
that have been identified consistently by both researchers, expressed as a percentage of the
total number of items examined. An alternative measure where two researchers have been
used is Cohen's kappa (κ), which considers the possibility that agreement has occurred due to chance (Cohen, 1960).
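Both measures are straightforward to compute once the two researchers' codes are paired up item by item. Here is a minimal sketch in Python; the codes themselves are hypothetical labels for ten data items, used only for illustration:

from collections import Counter

coder1 = ["praise", "confusion", "praise", "request", "praise",
          "confusion", "request", "praise", "confusion", "praise"]
coder2 = ["praise", "confusion", "request", "request", "praise",
          "praise", "request", "praise", "confusion", "praise"]

n = len(coder1)
p_o = sum(a == b for a, b in zip(coder1, coder2)) / n   # observed agreement

# Chance agreement: for each code, the product of the two coders'
# marginal proportions, summed over all codes.
c1, c2 = Counter(coder1), Counter(coder2)
p_e = sum((c1[code] / n) * (c2[code] / n) for code in c1.keys() | c2.keys())

kappa = (p_o - p_e) / (1 - p_e)   # Cohen's kappa corrects for chance
print(f"observed agreement = {p_o:.0%}, Cohen's kappa = {kappa:.2f}")

With these stand-in codes, the observed agreement is 80 percent but kappa is 0.68, illustrating how the chance-corrected measure is more conservative than raw percentage agreement.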

Using more sophisticated analytical frameworks to structure the analysis of qualitative
data can lead to additional insights that go beyond the results of these basic techniques.
Section 9.5 introduces frameworks that are commonly used in interaction design.

BOX 9.3
Analyzing Video Material

A good way to start a video analysis is to watch what has been recorded all the way through
while writing a high-level narrative of what happens, noting down where in the video there
are any potentially interesting events. How to decide which is an interesting event will depend
on what is being observed. For example, in a study of the interruptions that occur in an open
plan office, an event might be each time that a person takes a break from an ongoing activity,
for instance, when a phone rings, someone walks into their cubicle, or email arrives. If it is a
study of how pairs of students use a collaborative learning tool, then activities such as turn-
taking, sharing of input devices, speaking over one another, and fighting over shared objects
would be appropriate to record.

Chronological and video times are used to index events. These may not be the same, since recordings can run at different speeds from real time and video can be edited. Labels for certain routine events are also used, for instance lunchtime, coffee break, staff meeting, and doctor's rounds. Spreadsheets are used to record the classification and description of events, together with annotations and notes of how the events began, how they unfolded, and how they ended.

Video can be augmented with captured screens or logged data of people's interactions with a computer display, and sometimes transcription is required. There are various logging and screen capture tools available for this purpose, which enable interactions to be played back as a movie, showing screen objects being opened, moved, selected, and so on. These can then be played in parallel with the video to provide different perspectives on the talk, physical interactions, and the system's responses that occur. Having a combination of data streams can enable more detailed and fine-grained patterns of behavior to be interpreted (Heath et al., 2010).

9.4.1 Identifying Themes
Thematic analysis is considered an umbrella term to cover a variety of different approaches to examining qualitative data. It is a widely used analytical technique that aims to identify, analyze, and report patterns in the data (Braun and Clarke, 2006). More formally, a theme is something important about the data in relation to the study goal. A theme represents a pattern of some kind, perhaps a particular topic or feature found in the data set, which is considered to be important, relevant, and even unexpected with respect to the goals driving the study. Themes that are identified may relate to a variety of aspects: behavior, a user group, events, places or situations where those events happen, and so on. Each of these kinds of themes may be relevant to the study goals. For example, descriptions of typical users may be an outcome of data analysis that focuses on participant characteristics. Although thematic analysis is described in this section on qualitative analysis, themes and patterns may also emerge from quantitative data.

After an initial pass through the data, the next step is to look more systematically for themes across participants' transcripts, seeking further evidence both to confirm and disconfirm initial impressions in all of the data. This more systematic analysis focuses on checking for consistency; in other words, do the themes occur across all participants, or is it only one or two people who mention something? Another focus is on finding further themes that may not have been noticed the first time. Sometimes, the refined themes resulting from this systematic analysis form the primary set of findings for the analysis, and sometimes they are just the starting point.

The study's goal provides an orienting focus for the identification and formulation of themes in the first and subsequent passes through the data. For example, consider a survey to evaluate whether the information displayed on a train travel website is appropriate and sufficient. Several of the respondents suggest that the station stops in between the origin and destination stations should be displayed. This is relevant to the study's goal and would be reported as a main theme. In another part of the survey, under further comments, you might notice that several respondents say the company's logo is distracting. Although this too is a theme in the data, it is not directly relevant to the study's goals and may be reported only as a minor theme.
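Checking consistency of this kind can be supported by a simple count of how many participants each theme occurs in. Here is a minimal sketch in Python; the participant identifiers and theme labels (drawn loosely from the train travel example above, plus a hypothetical "ticket prices" theme) are illustrative only:

from collections import Counter

themes_per_participant = {
    "P1": {"station stops", "distracting logo"},
    "P2": {"station stops"},
    "P3": {"station stops", "ticket prices"},
    "P4": {"distracting logo", "station stops"},
}

# Count, for each theme, how many participants mention it at least once.
counts = Counter(theme
                 for themes in themes_per_participant.values()
                 for theme in themes)
for theme, count in counts.most_common():
    print(f"{theme}: {count} of {len(themes_per_participant)} participants")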


Once a number of themes have been identified, it is usual to step back from the set
of themes to look at the bigger picture. Is an overall narrative starting to emerge, or are the
themes quite disparate? Do some seem to fit together with others? If so, is there an over-
arching theme? Can you start to formulate a meta-narrative, that is, an overall picture of
the data? In doing this, some of the original themes may not seem as relevant and can be
removed. Are there some themes that contradict each other? Why might this be the case? This stepping back can be done individually, but more often it is done in a group, using brainstorming techniques with sticky notes.

A common technique for exploring data, identifying themes, and looking for an overall
narrative is to create an affinity diagram. The approach seeks to organize individual ideas
and insights into a hierarchy showing common structures and themes. Notes are grouped
together when they are similar in some fashion. The groups are not predefined, but rather
they emerge from the data. This process was originally introduced into the software quality
community from Japan, where it is regarded as one of the seven quality processes. The affin-
ity diagram is built gradually. One note is put up first, and then the team searches for other
notes that are related in some way.

Affinity diagrams are used in Contextual Design (Beyer and Holtzblatt, 1998; Holtz-
blatt, 2001), but they have also been adopted widely in interaction design (Lucero, 2015).
For example, Madeline Smith et al. (2018) conducted interviews to design a web app for
co-watching videos across a distance, and they used affinity diagramming to identify require-
ments from interviewee transcripts (see Figure 9.4). Despite the prevalence of digital collaboration tools, the popularity of physical affinity diagramming using sticky notes drawn by hand has persisted for many years (Harboe and Huang, 2015).

To read more about the use of affinity diagrams in interaction design, see the following page: https://uxdict.io/design-thinking-methods-affinity-diagrams-357bd8671ad4

Figure 9.4 Section of an affinity diagram built during the design of a web application
Source: Smith (2018). Used courtesy of Madeline Smith


9.4.2 Categorizing Data
Inductive analysis is appropriate when the study is exploratory, and it is important to let the
themes emerge from the data itself. Sometimes, the analysis frame (the set of categories used)
is chosen beforehand, based on the study goal. In that case, analysis proceeds deductively. For
example, in a study of novice interaction designer behavior in Botswana, Nicole Lotz et al. (2014) used a set of predetermined categories based on Schön's (1983) design and reflection
cycle: naming, framing, moving, and reflecting. This allowed the researchers to identify detailed
patterns in the designers’ behavior, which provided implications for education and support.

To illustrate categorization, we present an example derived from a set of studies look-
ing at the use of different navigation aids in an online educational setting (Ursula Armi-
tage, 2004). These studies involved observing users working through some online educational
material (about evaluation methods), using the think-aloud technique. The think-aloud
protocol was recorded and then transcribed before being analyzed from various perspec-
tives, one of which was to identify usability problems that the participants were having with
the online environment known as Nestor Navigator (Zeiliger et al., 1997). An excerpt from the
transcription is shown in Figure 9.5.

I’m thinking that it’s just a lot of information to absorb from the screen. I just I don’t concentrate
very well when I’m looking at the screen. I have a very clear idea of what I’ve read so far . . .
but it’s because of the headings I know OK this is another kind of evaluation now and before it
was about evaluation which wasn’t anyone can test and here it’s about experts so it’s like it’s
nice that I’m clicking every now and then coz it just sort of organizes the thoughts. But it would
still be nice to see it on a piece of paper because it’s a lot of text to read.

Am I supposed to, just one question, am supposed to say something about what I’m reading
and what I think about it the conditions as well or how I feel reading it from the screen, what is
the best thing really?

Observer: What you think about the information that you are reading on the screen . . . you
don’t need to give me comments . . . if you think this bit fits together.

There’s so much reference to all those previously said like I’m like I’ve already forgotten the
name of the other evaluation so it said unlike the other evaluation this one like, there really is
not much contrast with the other it just says what it is may be . . . so I think I think of . . .

Maybe it would be nice to have other evaluations listed to see other evaluations you know
here, to have the names of other evaluations other evaluations just to, because now when
I click previous I have to click it several times so it would be nice to have this navigation,
extra links.

Figure 9.5 Excerpt from a transcript of a think-aloud protocol when using an online educational
environment. Note the prompt from the observer about halfway through.
Source: Armitage (2004). Used courtesy of Ursula Armitage



This excerpt was analyzed using a categor