Posted: February 26th, 2023

Kindly have a look at my request.


Module 10: Critical Thinking Assignment

Aligning IT Strategies to Business Strategies (120 points)

Delta Corporation has been very impressed with the progress it has made with its new product line and the new marketing approach that you recommended and initiated. As a result, it is now considering expanding this approach to its other product lines, giving it the ability to market and sell all of its products online. However, it has not considered the implications for its IT department in developing this type of plan. As the Social Media Marketing Consultant, you are familiar with integrating business and IT strategic planning, so the company has called upon you to provide advice. You need to:

1. Explain what the company needs to consider if it is going to move to a more online approach for its sales and marketing activities. In particular, describe the consequences for its IT department and why it needs to be involved in the planning phase.

2. Given the likely increase in costs associated with this move, outline the potential benefits and possible pitfalls of outsourcing IT maintenance and development.

3. Outline some strategic technology trends that the company may wish to monitor and consider for the future.

4. Finally, research a company in Saudi Arabia and discuss whether it was able to align its overall strategy and its IT strategy. If it was successful, discuss how it managed this alignment and what benefits it gained from doing so. If it was not successful, explain how and why you feel that occurred.

Your well-written report should be 4-5 pages in length, not including the title and reference pages. To make it easier to read and therefore grade, make sure you clearly delineate each section of your answer so it can be matched with the relevant question. Use APA style guidelines, citing at least five references as appropriate. Review the grading rubric to see how you will be graded for this assignment.

Use essay form: an introduction containing a thesis, a body, and a conclusion.

DO NOT FORGET TO USE HEADINGS

Information Technology
for Management

On-Demand Strategies for Performance,
Growth and Sustainability

Eleventh Edition

EFRAIM TURBAN

CAROL POLLARD
Appalachian State University

GREGORY WOOD
Canisius College

VP AND EDITORIAL DIRECTOR Mike McDonald
EXECUTIVE EDITOR Lise Johnson
EDITORIAL ASSISTANT Ethan Lipson
EDITORIAL MANAGER Judy Howarth
CONTENT MANAGEMENT DIRECTOR Lisa Wojcik
CONTENT MANAGER Nichole Urban
SENIOR CONTENT SPECIALIST Nicole Repasky
PRODUCTION EDITOR Loganathan Kandan
PHOTO RESEARCHER Billy Ray
COVER PHOTO CREDIT © Ditty_about_summer/Shutterstock

This book was set in 9.5/12.5 pt Source Sans Pro by SPi Global and printed and bound by Strategic
Content Imaging.

Founded in 1807, John Wiley & Sons, Inc. has been a valued source of knowledge and understanding
for more than 200 years, helping people around the world meet their needs and fulfill their aspira-
tions. Our company is built on a foundation of principles that include responsibility to the communi-
ties we serve and where we live and work. In 2008, we launched a Corporate Citizenship Initiative, a
global effort to address the environmental, social, economic, and ethical challenges we face in our
business. Among the issues we are addressing are carbon impact, paper specifications and procure-
ment, ethical conduct within our business and among our vendors, and community and charitable
support. For more information, please visit our website: www.wiley.com/go/citizenship.

Copyright © 2018, 2015, 2013, 2011, 2010 John Wiley & Sons, Inc. All rights reserved. No part of this
publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any
means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as per-
mitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy
fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923 (Web site: www.
copyright.com). Requests to the Publisher for permission should be addressed to the Permissions
Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, (201) 748-6011, fax
(201) 748-6008, or online at: www.wiley.com/go/permissions.

Evaluation copies are provided to qualified academics and professionals for review purposes only,
for use in their courses during the next academic year. These copies are licensed and may not be sold
or transferred to a third party. Upon completion of the review period, please return the evaluation
copy to Wiley. Return instructions and a free of charge return shipping label are available at: www.
wiley.com/go/returnlabel. If you have chosen to adopt this textbook for use in your course, please
accept this book as your complimentary desk copy. Outside of the United States, please contact your
local sales representative.

ISBN: 978-1-118-89079-0 (PBK)
ISBN: 978-1-119-39783-0 (EVALC)

Library of Congress Cataloging in Publication Data:

Names: Turban, Efraim, author. | Pollard, Carol (Carol E.), author. | Wood,
Gregory R., author.
Title: Information technology for management : on-demand strategies for
performance, growth and sustainability / Efraim Turban, Carol Pollard,
Gregory R. Wood.
Description: 11th edition. | Hoboken, NJ : John Wiley & Sons, 2018. |
Includes bibliographical references and index. |
Identifiers: LCCN 2017037711 (print) | LCCN 2017046158 (ebook) | ISBN
9781118890868 (epub) | ISBN 9781119172390 (pdf) | ISBN 9781118890790 (pbk.)
Subjects: LCSH: Management information systems.
Classification: LCC T58.6 (ebook) | LCC T58.6 .T765 2017 (print) | DDC
658.4/038011—dc23
LC record available at https://lccn.loc.gov/2017037711

The inside back cover will contain printing identification and country of origin if omitted from this
page. In addition, if the ISBN on the back cover differs from the ISBN on this page, the one on the
back cover is correct.


Brief Contents

PREFACE xiii
ACKNOWLEDGMENTS xviii

PART 1 Reshaping Enterprises and Consumers
in the On-Demand Economy

1 Disruptive IT Impacts Companies,
Competition, and Careers 1

2 Information Systems, IT Architecture, Data
Governance, and Cloud Computing 25

3 Data Management, Data Analytics,
and Business Intelligence 65

4 Networks, Collaborative Technology,
and the Internet of Things 101

5 Cybersecurity and Risk Management
Technology 127

PART 2 Winning, Engaging, and Retaining
Consumers for Growth

6 Search, Semantic, and Recommendation
Technology 165

7 Web 2.0 and Social Technology 199

8 Retail, E-commerce, and Mobile Commerce
Technology 240

PART 3 Optimizing Performance, Processes,
and Productivity

9 Functional Business Systems 269

10 Enterprise Systems 300

11 Data Visualization and Geographic
Information Systems 331

PART 4 Managing Business Relationships,
Projects, and Ethical Responsibilities

12 IT Strategy, Sourcing, and Strategic
Technology Trends 354

13 Systems Development and Project
Management 385

14 IT Ethics, Privacy, and Sustainability 417

GLOSSARY 443
ORGANIZATION INDEX 448
NAME INDEX 450
SUBJECT INDEX 451

Contents

PREFACE xiii
ACKNOWLEDGMENTS xviii

PART 1 Reshaping Enterprises
and Consumers in the On-Demand
Economy

1 Disruptive IT Impacts Companies,
Competition, and Careers 1

Case 1.1 Opening Case: Uber and Airbnb Revolutionize
Business Models in the On-Demand Economy 3

1.1 Doing Business in the On-Demand Economy 4
Growth of the On-Demand Economy 5
Digital Business Models 6
IT’s Role in the On-Demand Economy 7
IT Business Objectives 8

1.2 Business Process Improvement and Competitive
Advantage 8
What Is a Business Process? 9
Improving Business Processes 9
Don’t Automate, Obliterate! 10
Gaining a Competitive Advantage 11
Software Support for BPM 13

1.3 IT Innovation and Disruption 13
Social–Mobile–Analytics–Cloud (SMAC) Model 13
Technology Mega Trends 14
Lessons Learned from Companies Using Disruptive
Technologies 16

1.4 IT and You 17
On-Demand Workers 17
IT Adds Value to Your Performance and Career 19
Becoming an Informed IT User 21

Case 1.2 Business Case: The Internet of Things Comes
to the NFL 23

Case 1.3 Video Case: Knowing More and Doing More 24

2 Information Systems,
IT Architecture, Data Governance,
and Cloud Computing 25

Case 2.1 Opening Case: Detoxing Location-Based
Advertising Data at MEDIATA 27

2.1 IS Concepts and Classification 28

Components of an IS 29
Data, Information, Knowledge, and Wisdom 30
Types of ISs 31
Transaction Processing System (TPS) 32
Management Information System (MIS) 33
Decision Support System (DSS) 34
Executive Information System (EIS) 35
ISs Exist within Corporate Culture 36

2.2 IT Infrastructure, IT Architecture, and Enterprise
Architecture 37
EA Helps to Maintain Sustainability 38
Developing an Enterprise Architecture (EA) 41

2.3 Information Management and Data
Governance 42
Information Management Harnesses
Scattered Data 43
Reasons for Information Deficiencies 43
Factors Driving the Shift from Silos to Sharing
and Collaboration 45
Business Benefits of Information Management 45
Data Governance: Maintaining Data Quality
and Cost Control 46

2.4 Data Centers and Cloud Computing 48
Data Centers 48
Integrating Data to Combat Data Chaos 50
Cloud Computing 52
Selecting a Cloud Vendor 52
Cloud Infrastructure 54
Issues in Moving Workloads from the Enterprise
to the Cloud 54

2.5 Cloud Services and Virtualization 55
Anything as a Service (XAAS) Models 55
Going Cloud 58
Virtualization and Virtual Machines 58

Case 2.2 Business Case: Data Chaos Creates Risk 62
Case 2.3 Video Case: Cloud Computing at Coca-Cola Is

Changing Everything 63

3 Data Management, Data Analytics,
and Business Intelligence 65

Case 3.1 Opening Case: Coca-Cola Strategically Manages
Data to Retain Customers and Reduce Costs 66

3.1 Data Management and Database Technologies 69
Database Management Systems and SQL 69
DBMS and Data Warehousing Vendors
Respond to Latest Data Demands 72


3.2 Centralized and Distributed Database
Architectures 73
Garbage In, Garbage Out 75
Data Ownership and Organizational Politics 76
Data Life Cycle and Data Principles 77
Master Data and Master Data Management 78

3.3 Data Warehouses 79
Procedures to Prepare EDW Data for Analytics 80
Building a Data Warehouse 80
Real-Time Support from an Active Data
Warehouse 81

3.4 Big Data Analytics and Data Discovery 83
Human Expertise and Judgment are Needed 85
Data and Text Mining 88
Creating Business Value 88
Text Analytics Procedure 90
Analytics Vendor Rankings 90

3.5 Business Intelligence and
Electronic Records Management 91
Business Benefits of BI 92
Common Challenges: Data Selection
and Quality 92
Aligning BI Strategy with Business Strategy 92
BI Architecture and Analytics 93
Electronic Records Management 94
Legal Duty to Retain Business Records 94
ERM Best Practices 94
ERM Benefits 95
ERM for Disaster Recovery,
Business Continuity, and Compliance 95

Case 3.2 Business Case: Big Data Analytics is the “Secret
Sauce” for Revitalizing McDonald’s 98

Case 3.3 Video Case: Verizon Improves Its
Customer Experience with Data Driven
Decision-Making 99

4 Networks, Collaborative
Technology, and the Internet
of Things 101

Case 4.1 Opening Case: Sony Builds an IPv6 Network
to Fortify Competitive Edge 102

4.1 Network Fundamentals 104
Network Types 104
Intranets, Extranets, and Virtual Private
Networks 105
Network Terminology 105
Functions Supported by Business Networks 106
Quality of Service 107

4.2 Internet Protocols (IP), APIs, and Network
Capabilities 109

Comparing 3G, 4G, 4G LTE, and 5G Network
Standards 110
Circuit versus Packet Switching 111
Application Program Interfaces and Operating
Systems 111

4.3 Mobile Networks and Near-Field
Communication 113
Increase in Mobile Network Traffic and Users 114
Higher Demand for High-Capacity Mobile
Networks 115
Mobile Infrastructure 115
Two Components of Wireless Infrastructure 116
Business Use of Near-Field Communication 117
Choosing Mobile Network Solutions 118

4.4 Collaborative Technologies and the Internet
of Things 119
Virtual Collaboration 120
Group Work and Decision Processes 120
The Internet of Things (IoT) 121
IoT Sensors, Smart Meters, and the Smart Grid 121

Case 4.2 Business Case: Google Maps API for
Business 125

Case 4.3 Video Case: Small Island Telecom Company
Goes Global 126

5 Cybersecurity and Risk
Management Technology 127

Case 5.1 Opening Case: Yahoo Wins the Gold and Silver
Medal for the Worst Hacks in History! 129

5.1 The Face and Future of Cyberthreats 130
Intentional Threats 132
Unintentional Threats 132
Hacking 133
Cyber Social Engineering and Other Related
Web-Based Threats 134
Denial-of-Service 137
Insider and Privilege Misuse 137
Physical Theft or Loss 138
Miscellaneous Errors 138
New Attack Vectors 138

5.2 Cyberattack Targets and Consequences 139
“High-Profile” and “Under-the-Radar” Attacks 139
Critical Infrastructure Attacks 140
Theft of Intellectual Property 141
Identity Theft 142
Bring Your Own Device 142
Social Media Attacks 144

5.3 Cyber Risk Management 146
IT Defenses 146
Business Continuity Planning 149
Government Regulations 149


5.4 Defending Against Fraud 150
Occupational Fraud Prevention
and Detection 151
General Controls 152
Internal Controls 153
Cyber Defense Strategies 153
Auditing Information Systems 155

5.5 Frameworks, Standards, and Models 155
Risk Management and IT Governance
Frameworks 155
Industry Standards 157
IT Security Defense-In-Depth Model 157

Case 5.2 Business Case: Lax Security at LinkedIn
Exposed 161

Case 5.3 Video Case: Botnets, Malware Security, and
Capturing Cybercriminals 163

PART 2 Winning, Engaging, and
Retaining Consumers for Growth

6 Search, Semantic, and
Recommendation Technology 165

Case 6.1 Opening Case: Mint.com Uses Search
Technology to Rank Above Established
Competitors 166

6.1 Using Search Technology for Business
Success 168
How Search Engines Work 168
Web Directories 168
How Crawler Search Engines Work 169
Why Search Is Important for Business 172

6.2 Organic Search and Search Engine
Optimization 178
Strategies for Search Engine Optimization 178
Content and Inbound Marketing 180
Black Hat versus White Hat SEO: Ethical Issues
in Search Engine Optimization 181

6.3 Pay-Per-Click and Paid Search Strategies 182
Creating a PPC Advertising Campaign 182
Metrics for Paid Search Advertising 184

6.4 A Search for Meaning—Semantic Technology 184
What Is the Semantic Web? 185
The Language(s) of Web 3.0 185
Semantic Web and Semantic Search 186
Semantic Web for Business 187

6.5 Recommendation Engines 188
Recommendation Filters 189

Case 6.2 Business Case: Deciding What to Watch—Video
Recommendations at Netflix 195

Case 6.3 Video Case: Power Searching with
Google 196

7 Web 2.0 and Social
Technology 199

Case 7.1 Opening Case: Social Customer Service Takes
Off at KLM 200

7.1 Web 2.0—The Social Web 201
The Constantly Changing Web 201
Invention of the World Wide Web 202
A Platform for Services and Social Interaction 202
Emergence of Social Applications, Networks,
and Services 203
Why Managers Should Understand Web
Technology 205
Communicating on the Web 206
Social Media Applications and Services 207
Social Media Is More than Facebook, YouTube, and
Twitter 207
With Web 2.0, Markets are Conversations 209

7.2 Social Networking Services and Communities 210
The Power of the Crowd 212
Crowdfunding 212
Social Networking Services 213
Facebook Dominates Social Networking 214
Google Takes on Facebook with G+ 216
Be in the Now with Snapchat 217
And Now for Something Different: Second Life 218
Private Social Networks 219
Future of Social Networking Systems 220

7.3 Engaging Consumers with Blogs and
Microblogs 220
What Is the Purpose of a Blog? 220
Blogging and Public Relations 222
Reading and Subscribing to Blogs 222
Blogging Platforms 222
Microblogs 223
Twitter 223
Tumblr Blogs 225

7.4 Mashups, Social Metrics, and
Monitoring Tools 226
What Makes a Mashup Social 226
RSS Technology 227
Social Monitoring Services 227

7.5 Enterprise 2.0: Workplace Collaboration and
Knowledge Sharing 229
Tools for Meetings and Discussions 230
Social Tools for Information Retrieval and
Knowledge Sharing 230
Social Bookmarking Tools 231
Content Creation and Sharing 232

Case 7.2 Business Case: Facebook Helps Songkick Rock
the Ticket Sales Industry 236

Case 7.3 Business Case: AT&T’s “It Can Wait” Campaign
against Distracted Driving 237


8 Retail, E-commerce, and Mobile
Commerce Technology 240

Case 8.1 Opening Case: Macy’s Races Ahead with Mobile
Retail Strategies 241

8.1 Retailing Technology 243
Keeping Up with Consumer Demands and
Behavior 243
The Omni-Channel Retailing Concept 244

8.2 Business-to-Consumer (B2C) E-commerce 246
Online Banking 246
International and Multiple-Currency
Banking 246
Online Recruiting 246
Issues in Online Retailing 250
Online Business and Marketing Planning 250

8.3 Business-to-Business (B2B) E-commerce and
E-procurement 251
Sell-Side Marketplaces 251
E-Sourcing 252
E-Procurement 252
Electronic Data Interchange (EDI) Systems 253
Public and Private Exchanges 253

8.4 Mobile Commerce 253
Information: Competitive Advantage in Mobile
Commerce 255
Mobile Entertainment 258
Hotel Services and Travel Go Wireless 259
Mobile Social Networking 259

8.5 Mobile Transactions and Financial Services 260
Mobile Payment Systems 260
Mobile Banking and Financial Services 262
Short Codes 263
Security Issues 263

Case 8.2 Business Case: Chegg’s Mobile Strategy 266
Case 8.3 Video Case: Searching with Pictures

Using MVS 267

PART 3 Optimizing Performance,
Processes, and Productivity

9 Functional Business Systems 269

Case 9.1 Opening Case: Ducati Redesigns Its
Operations 271

9.1 Business Management Systems and Functional
Business Systems 272
Business Management Systems (BMSs) 273
Management Levels 273
Business Functions vs. Cross-Functional Business
Processes 274
Transaction Processing Systems 275

9.2 Production and Operations Management
Systems 277
Transportation Management Systems 278
Logistics Management 278
Inventory Control Systems 279
Computer-Integrated Manufacturing and
Manufacturing Execution Systems 281

9.3 Sales and Marketing Systems 282
Data-Driven Marketing 284
Sales and Distribution Channels 284
Social Media Customer Service 284
Marketing Management 285

9.4 Accounting, Finance, and Regulatory Systems 286
Financial Disclosure: Reporting and
Compliance 286
Fraud Prevention and Detection 289
Auditing Information Systems 291
Financial Planning and Budgeting 291

9.5 Human Resource Systems, Compliance, and
Ethics 293
HR Information Systems 293
Management and Employee Development 295
HR Planning, Control, and Management 295

Case 9.2 Business Case: HSBC Combats Fraud in Split-
second Decisions 297

Case 9.3 Video Case: United Rentals Optimizes Its
Workforce with Human Capital Management 298

10 Enterprise Systems 300

Case 10.1 Opening Case: 3D Printing Drives the “Always-
On” Supply Chain 301

10.1 Enterprise Systems 303
Implementation Challenges of Enterprise
Systems 305
Investing in Enterprise Systems 305
Implementation of Best Practices 306
Enterprise Systems Insights 307

10.2 Enterprise Resource Planning (ERP) 307
Brief History of ERP 308
Technology Perspective 308
Achieving ERP Success 311

10.3 Supply Chain Management Systems 313
Managing the Flow of Materials, Data,
and Money 315
Order Fulfillment and Logistics 315
Steps in the Order Fulfillment Process 315
Innovations Driving Supply Chain Strategic
Priorities 316

10.4 Customer Relationship Management Systems 319
How are CRM Apps Different from ERP? Why are they
Different? 319
CRM Technology Perspective 320


Customer Acquisition and Retention 320
CRM for a Competitive Edge 320
Common CRM Mistakes: How to Avoid
Them 321
Justifying CRM 322

10.5 Enterprise Social Platforms 323
Growth of Enterprise Social Investments
and Markets 323
Sharepoint 324
Oracle’s Social Network 326
Jive 326
Chatter 326

Case 10.2 Business Case: Lowe’s Fresh Approach to
Supply Chain Management 328

Case 10.3 Video Case: Procter & Gamble: Creating
Conversations in the Cloud with 4.8 Billion
Consumers 329

11 Data Visualization and Geographic
Information Systems 331

Case 11.1 Opening Case: Safeway and PepsiCo
Collaborate to Reduce Stock Outages using Data
Visualization 332

11.1 Data Visualization and Learning 334
Learning, Exploration, and Discovery with
Visualization 336
Data Discovery Market Separates from the
BI Market 336
How Is Data Visualization Used in Business? 340
Data Visualization Tools 341

11.2 Enterprise Data Mashups 342
Mashup Architecture 343
Why Do Business Users Need Data Mashup
Technology? 344
Enterprise Mashup Technology 344

11.3 Digital Dashboards 345
Dashboards are Real Time 347
How Operational and Strategic
Dashboards Work 348
Benefits of Digital Dashboards 348

11.4 Geographic Information Systems and
Geospatial Data 349
Geocoding 350
GIS Is Not Your Grandfather’s Map 350
Infrastructure and Location-Aware Collection
of Geospatial Data 350
Applying GIS in Business 351

Case 11.2 Visualization Case: Are You Ready for
Football? 353

Case 11.3 Video Case: The Beauty of Data
Visualization—Data Detective 353

PART 4 Managing Business
Relationships, Projects, and Ethical
Responsibilities

12 IT Strategy, Sourcing, and Strategic
Technology Trends 354

Case 12.1 Opening Case: Intel Reaps Rewards from
Sustainable IT Strategy 355

12.1 IT Strategic Planning 357
Value Drivers 358
IT Strategic Plan Objectives 358
IT and Business Disconnects 359
Corporate and IT Governance 359
Reactive Approach to IT Investments Will Fail 359
IT Strategic Planning Process 359

12.2 Aligning IT with Business Objectives 362
Achieving and Sustaining a Competitive
Advantage 364

12.3 IT Sourcing Strategies 367
Sourcing and Cloud Services 368
Factors Driving Outsourcing 369
Outsourcing Risks and Hidden Costs 370
Offshoring 370
Outsourcing Life Cycle 371
Managing IT Vendor Relationships 373
Contracts: Get Everything in Writing 373

12.4 Balanced Scorecard 374
The Balanced Scorecard 374
Using the Balance Scorecard 375
Applying the BSC 377

12.5 Strategic Technology Trends 378
Strategic Technology Scanning 380
Finding Strategic Technologies 380

Case 12.2 Business Case: Cisco IT Improves Strategic
Vendor Management 382

Case 12.3 Data Analysis: Third-Party versus Company-
Owned Offshoring 383

13 Systems Development and Project
Management 385

Case 13.1 Opening Case: Denver International Airport
Learns from Mistakes Made in Failed Baggage-
Handling System Project 386

13.1 System Development Life Cycle 388
Stages of the SDLC 388

13.2 Systems Development Methodologies 391
Waterfall Model 391
Object-Oriented Analysis and Design 392
Agile Methodology 392


The DevOps Approach to Systems
Development 394

13.3 Project Management Fundamentals 395
What Is a Project? 396
Choosing Projects 396
The Triple Constraint 397
The Project Management Framework 397

13.4 Initiating, Planning, and Executing Projects 399
Project Initiation 400
Project Planning 400
Project Execution 403

13.5 Monitoring/Controlling and Closing
Projects 404
Project Monitoring and Controlling 404
Project Closing or Post Mortem 407
Why Projects Fail 408
IT Project Management Mistakes 410

Case 13.2 Business Case: Steve Jobs’ Shared Vision
Project Management Style 412

Case 13.3 Demo Case: Mavenlink Project Management
and Planning Software 413

14 IT Ethics, Privacy, and
Sustainability 417

Case 14.1 Opening Case: Lessons Learned: How Google
Glass Raised Risk and Privacy Challenges 418

14.1 IT Ethics 420
Ethical versus Unethical Behavior 420
Competing Responsibilities 423

14.2 Privacy and Civil Rights 424
Privacy and the New Privacy
Paradox 424
Social Media Recruiting 425
Legal Note: Civil Rights 426
Competing Legal Concerns 427
Financial Organizations Must Comply with Social
Media Guidelines 428

14.3 Technology Addictions and Focus
Management 430
Digital Distractions and Loss of Focus 430
Focus Management 430

14.4 ICT and Sustainable Development 432
Global Temperature Rising Too Much
Too Fast 432
IT and Global Warming 433
Technology to Transform Business and
Society 436
Next Wave of Disruption Will Be More
Disruptive 438

Case 14.2 Business Case: Android Auto and
CarPlay Keep Drivers Safe, Legal, and
Productive 439

Case 14.3 Video Case: IT Ethics in the
Workplace 440

GLOSSARY 443
ORGANIZATION INDEX 448
NAME INDEX 450
SUBJECT INDEX 451

Preface

Information Technology for Management discusses a variety of
business strategies and explains how they rely on data, digital
technology, and mobile devices to support them in the on-
demand economy. Our goal is to provide students from any
business discipline with a strong foundation for understanding
the critical role that digital technology plays in enhancing
business sustainability, profitability, and growth, and for
excelling in their careers. Enabling technologies discussed in this textbook
include the following:

• Performance Combining the latest capabilities in big data
analytics, reporting, collaboration, search, and digital com-
munication helps enterprises be more agile and cuts costs to
optimize business performance and profitability.

• Growth Strategic technologies enable businesses to create
new core competencies, expand their markets, and move
into new markets to experience exponential growth in the
on-demand economy.

• Sustainability Cloud services are fundamental to sus-
taining business profitability and growth in today’s on-
demand economy. They play a critical role in managing
projects and sourcing agreements, respecting personal pri-
vacy, encouraging social responsibility, and attracting and
engaging customers across multimedia channels to promote
sustainable business performance and growth.

In this 11th edition, students learn, explore, and understand
the importance of IT’s role in supporting the three essential
components of business performance improvement: technology,
business processes, and people.

What’s New in the
11th Edition?
In the 11th edition of IT for Management, we present and dis-
cuss concepts in a comprehensive yet easy-to-understand for-
mat by actively engaging students through a wide selection of
case studies, interactive figures, video animations, tech notes,
concept check questions, online and interactive exercises, and
critical thinking questions. We have enhanced the 11th edition
in the following ways:

New Author Dr. Carol Pollard, Professor of Computer Infor-
mation Systems at the Walker College of Business and former
Executive Director of the Center for Applied Research in Emerg-
ing Technologies (CARET) at Appalachian State University in
North Carolina, has taken the helm for the 11th edition. Carol

has applied her innovative teaching and learning techniques to
create a stronger pedagogical focus and more engaging format
for the text.
Diverse Audience IT for Management is directed toward
undergraduate, introductory MBA courses, and Executive Educa-
tion courses in Management Information Systems and General
Business programs. Concepts are explained in a straightforward
way, and interactive elements, tools, and techniques provide
tangible resources that appeal to all levels of students.
Strong Pedagogical Approach To encourage improved learn-
ing outcomes, we employed a blended learning approach, in
which different types of delivery and learning methods, enabled
and supported by technology, are blended with traditional
learning methods. For example, case study and theoretical
content are presented visually, textually, and/or interactively
to enable different groups of students to use different learning
strategies in different combinations to fit their individual learn-
ing style and enhance their learning. Throughout the book,
content has been reorganized to improve development of the
topics and improve understanding and readability. A large
number of images that did not enhance understanding have
been removed and replaced with informative and interactive
figures and tables that better convey critical concepts.
Leading-Edge Content Prior to and during the writing pro-
cess, we consulted with a number of vendors, IT professionals,
and managers who are hands-on users of leading technologies,
to learn about their IT/business successes, challenges, experi-
ences, and recommendations. To integrate the feedback of
these business and IT professionals, new or updated chapter
opening and closing cases have been added to many of the
chapters along with the addition of relevant, leading-edge
content in the body of the chapters.
New Technologies and Expanded Topics New to this edition
are the IT framework, business process reengineering, geocoding,
systems development methodologies, including Waterfall,
object-oriented analysis, Agile and DevOps, advances
in Search Technology, the growth of Mobile Commerce and
Mobile Payment Systems, the Always-On Supply Chain, and
the Project Management framework. In addition, with more
purchases and transactions starting online and attention being
a scarce resource, students learn how search, semantic, and
recommendation technologies function to improve revenue.
Table P-1 provides a detailed list of new and expanded topics.
Useful Tools and Techniques New to this edition is a feature
we call the “IT Toolbox.” This involves the provision of a set of
useful tools or techniques relevant to chapter content. Collec-
tively, these tools and techniques equip readers with a suite of
IT tools that will be useful in their university classes, workplace,
and personal life.


Chapter New and Expanded IT and Business Topics Innovative Enterprises
1. Disruptive IT Impacts

Companies, Competition,
and Careers

• IT’s role in the on-demand economy
• Business process improvement
• Business process re-engineering
• SMAC model
• Nature of on-demand work
• Becoming an informed IT user
• Technology mega trends

• Uber
• Airbnb
• FitBit
• NFL
• Teradata

2. Information Systems, IT Archi-
tecture, Data Governance, and
Cloud Computing

• IS concepts and framework
• Information, knowledge, wisdom model
• Software-defined data center

• Mediata
• National Climatic Data center
• U.S. National Security Agency
• Apple
• Uber
• WhatsApp
• Slack
• Vanderbilt University Medical Center
• Coca-Cola

TABLE P-1 Overview of New and Expanded Topics and Innovative Enterprises Discussed in the Chapters

Engaging Students
to Assure Learning
The 11th edition of Information Technology for Management
engages students with up-to-date coverage of the most impor-
tant IT trends today. Over the years, this IT textbook has dis-
tinguished itself with an emphasis on illustrating the use of
cutting-edge business technologies for supporting and achiev-
ing managerial goals and objectives. The 11th edition contin-
ues this tradition with more interactive activities and analyses.

Real-World Case Studies Each chapter contains numerous
real-world examples illustrating how businesses use IT to increase
productivity, improve efficiency, enhance communication and
collaboration, and gain a competitive edge. Faculty will appreciate
a variety of options for reinforcing student learning that include
three different types of Case Studies (opening case, video case,
and business case), along with interactive figures and whiteboard
animations that provide a multimedia overview of each chapter.
Interactive Figures and Whiteboard Animations The unique
presentation of interactive figures and whiteboard anima-
tions facilitates reflection on the textual content of the book
and provides a clear path to understanding key concepts. The
whiteboard animations fit particularly well with the “flipping
the classroom” model and complement additional functional-
ity and assets offered throughout the 11th edition. The interac-
tive figures actively engage the students in their own learning
to effectively reinforce concepts.
Learning Aids Each chapter contains various learning aids,
which include the following:

• Learning Objectives are listed at the beginning of each
chapter to help students focus their efforts and alert
them to the important concepts that will be discussed.

• IT at Work boxes spotlight real-world cases and innova-
tive uses of IT.

• Definitions of Key Terms appear in the margins
throughout the book.

• Tech Note boxes explore topics such as “Key
Performance Indicators” and “Six Basic Systems
Development Guidelines.”

• Career Insight boxes highlight different jobs in the IT
for management field.

End-of-Chapter Activities At the end of each chapter,
features designed to assure student learning include the
following:

• Critical Thinking Questions are designed to facilitate
student discussion.

• Online and Interactive Exercises encourage students
to explore additional topics.

• Analyze and Decide questions help students apply IT
concepts to business decisions.

• Concept Questions test students’ comprehension of
each learning objective at the end of each chapter to
ensure that the students are clear on the concepts.
Students are provided with immediate feedback on
their performance.

Details of New and Enhanced
Features of the 11th Edition
The textbook consists of 14 chapters organized into four mod-
ules. All chapters have new or updated sections, as shown in
Table P-1.


Chapter New and Expanded IT and Business Topics Innovative Enterprises
3. Data Management, Data

Analytics, and Business
Intelligence

• Dirty data costs and consequences
• Data life cycle
• Genomics and big data
• Aligning business intelligence with business strategy

• Coca-Cola
• Capital One
• Travelocity
• First Wind
• Argo Corporation
• Walmart
• Infinity Insurance
• DoD and Homeland Security
• CarMax
• McDonald’s
• Verizon

4. Networks, Collaborative
Technology, and the
Internet of Things

• IPv6 protocol
• Types of networks
• Network terminology
• Quality of service
• Net neutrality
• Mobile networks and near-field communication
• Internet of Things

• Sony
• AT&T
• Time-Warner
• Amazon
• Warner Music
• Procter & Gamble
• Walmart
• Ford
• Asda
• Unilever
• Caterpillar
• Santander
• Google
• Isle of Man

5. Cybersecurity and Risk
Management Technology

• Data breaches
• Major sources of cyberthreats
• Classes of hackers
• Spear phishing
• Crimeware categories
• Denial of service
• KPMG data loss barometer
• Enterprise risk management framework

• Yahoo
• Global Payments, Inc.
• Government of China
• Google
• U.S. Chamber of Commerce
• Brookings Institution
• LinkedIn
• Damballa

6. Search, Semantic, and Recom-
mendation Technology

• Social search technologies
• Personal assistant and voice search
• Mobile search and mobile SEO
• On-page and off-page SEO factors
• Updates to Google’s ranking algorithm
• Semantic search technologies

• Mint.com
• Google
• Microsoft
• Yahoo
• Netflix
• Apple
• Amazon
• Diigo
• World Wide Web Consortium (W3C)

TABLE P-1 Overview of New and Expanded Topics and Innovative Enterprises Discussed in the Chapters (continued)



Chapter New and Expanded IT and Business Topics Innovative Enterprises
7. Web 2.0 and Social Technology • Snapchat, the #2 social platform

• Social bookmarking
• Social customer service moves from optional

to essential
• Role of APIs in development of new Web applications

and functionality
• The dominance of Facebook and the demise

of Google+
• Emerging virtual-world technology

• KLM Royal Dutch Airlines
• Facebook, Inc.
• Myntra
• Snap, Inc.
• Kickstarter.com
• GoFundMe.com
• Oculus VR
• High Fidelity
• Twitter
• Social Mention
• Diigo
• Clipix
• Dropbox

8. Retail, E-commerce, and
Mobile Commerce Technology

• Direct and marketplace B2B ecommerce
• In-store retail technology
• Omni-channel retailing
• Growth of mobile commerce
• Growth of the mobile gaming market
• Mobile payment methods
• Mobile visual search

• Macy's Department Stores
• Amazon.com
• Ally Bank
• LinkedIn.com
• Alibaba.com
• Dell, Inc.
• The Walt Disney Company
• PayPal, Inc.
• Chegg.com

9. Functional Business Systems • Business management systems
• Cross-functional coordination and integration

of systems
• Systems that support supply-chain management
• Social customer service
• eXtensible Business Reporting Language (XBRL)

• Ducati Motor Holding S.p.A.
• Office Depot
• Schurman Fine Papers
• BAE Systems
• Adweek
• Salesforce.com
• LinkedIn
• HSBC Bank
• United Rentals

10. Enterprise Systems • 3D printing impact on supply chain
• Selecting an ERP vendor
• Factors for ERP success
• Order fulfillment
• Always-on supply chain
• Enterprise social platforms

• Organovo
• Ferrari
• GE
• Siemens
• Organic Valley Family of Farms
• Boers & Co.
• Peters Ice Cream
• ScanSource
• Avanade
• Dillards
• FoxMeyer Drugs
• Joint Munitions Command
• Flower.com
• Red Robin
• Lowe’s
• Procter & Gamble

TABLE P-1 Overview of New and Expanded Topics and Innovative Enterprises Discussed in the Chapters (continued)


Supplemental Materials
An extensive package of instructional materials is available
to support this 11th edition. These materials are accessible
from the book companion website at www.wiley.com/college/
turban.

• Instructor’s Manual The Instructor’s Manual presents
objectives from the text with additional information to make
them more appropriate and useful for the instructor. The
manual also includes practical applications of concepts,
case-study elaboration, answers to end-of-chapter ques-
tions, questions for review, questions for discussion, and
Internet exercises.

• Test Bank The test bank contains over 1,000 ques-
tions and problems (about 75 per chapter) consisting of
multiple-choice, short answer, fill-ins, and critical thinking/
essay questions.

• PowerPoint Presentation A series of slides designed
around the content of the text incorporates key points from
the text and illustrations where appropriate.

• Chapter Summary Whiteboard Animations A series of
video animations that summarize the content of each chapter
in an entertaining way to engage the students in grasping the
subject matter.

Chapter New and Expanded IT and Business Topics Innovative Enterprises
11. Data Visualization and

Geographic Information
Systems

• Increasing reliance on data discovery
• Data visualization tools
• Enterprise data mashups
• Geocoding

• Safeway
• PepsiCo
• IBM
• ADP Corp.
• Department of Veterans Affairs
• General Motors

12. IT Strategy, Sourcing, and
Strategic Technology Trends

• Business–IT alignment
• IT strategic planning
• Porter’s competitive forces model
• Porter’s value chain model
• Five-phase outsourcing life cycle
• IT sourcing strategies
• Strategic technology trends
• Technology scanning

• Intel
• Nestle Nespresso
• LinkedIn
• ESSA Academy
• Cisco
• Citigroup

13. Systems Development and
Project Management

• SDLC stages
• Systems development methodologies
• DevOps
• Project management framework
• PM core and support knowledge areas
• Responsibility matrix

• Denver International Airport
• U.S. Census Bureau
• Apple
• Mavenlink

14. IT Ethics, Privacy, and
Sustainability

• Ethical vs. unethical behavior
• Privacy paradox
• Climate change
• Technology addiction
• “People-first” approach to technology
• Disruptive technologies

• Google
• Target
• Facebook
• SnapChat
• NASA
• Apple

TABLE P-1 Overview of New and Expanded Topics and Innovative Enterprises Discussed in the Chapters (continued)


Acknowledgments
No book is produced through the sole efforts of its authors, and
this book is no exception. Many people contributed to its crea-
tion, both directly and indirectly, and we wish to acknowledge
their contributions.

Special thanks go to the team at John Wiley, particularly
Darren Lalonde, Emma Townsend-Merino, Ethan Lipson, and
Loganathan Kandan for their ongoing and encouraging edito-
rial expertise and leadership. Their guidance, patience, humor,
and support during the development and production of this
most recent version of the textbook made the process much
easier. We couldn’t have done it without you!

Our sincere thanks also go to the following reviewers of the
11th edition. Their feedback, insights, and suggestions were
invaluable in ensuring the accuracy and readability of the book:

Joni Adkins, Northwest Missouri State University
Ahmad Al-Omari, Dakota State University
Rigoberto Chinchilla, Eastern Illinois University
Michael Donahue, Towson University
Samuel Elko, Seton Hill University
Robert Goble, Dallas Baptist University
Eileen Griffin, Canisius College
Binshan Lin, Louisiana State University in Shreveport
Thomas MacMullen, Eastern Illinois University
James Moore, Canisius College
Beverly S. Motich, Messiah College
Barin Nag, Towson University

Luis A. Otero, Inter-American University of Puerto Rico, Metropolitan Campus
John Pearson, Southern Illinois University
Daniel Riding, Florida Institute of Technology
Josie Schneider, Columbia Southern University
Derek Sedlack, South University
Eric Weinstein, The University of La Verne
Patricia White, Columbia Southern University
Gene A. Wright, University of Wisconsin–Milwaukee

Many thanks also go to our dedicated graphic designers,
Kevin Hawley and Nathan Sherrill, without whose help we
would not have been able to create the innovative Whiteboard
Animations, and to Senior Photo Editor, Billy Ray, whose exten-
sive and expert research into the images used in the textbook
greatly enhanced the overall “look” of this 11th edition.

Extra special thanks go to our families, friends, and col-
leagues for the enormous encouragement, support, and under-
standing they provided as we dedicated time and effort to
creating this new edition.

Finally, we dedicate the 11th edition of Information
Technology for Management to the Memory of Dr. Linda
Volonino, the driving force behind editions 7 through 10 of IT
for Management. Thank you Linda, for all your hard work in
providing the foundation for this latest edition of the textbook.

CAROL POLLARD
GREGORY WOOD


CHAPTER 1

Disruptive IT Impacts Companies,
Competition, and Careers

LEARNING OBJECTIVES

1.1 Describe how the on-demand economy is changing the way
that business is conducted.

1.2 Explain the role of IT in business process improvement.
Understand the concepts of business process reengineering
and competitive advantage.

1.3 Describe innovating technologies and explain how they are
disrupting enterprises.

1.4 Understand the value of being an “informed user” of IT and
the ways in which IT can add value to your career path and
performance in the on-demand economy.

CHAPTER OUTLINE

Case 1.1 Opening Case: Uber, Airbnb, and the
On-Demand Economy

1.1 Doing Business in the On-Demand Economy

1.2 Business Process Improvement and
Competitive Advantage

1.3 IT Innovation and Disruption

1.4 IT and You

Case 1.2 Business Case: The Internet of Things
Comes to the NFL

Case 1.3 Video Case: What Is the Value of Knowing
More and Doing More?

Introduction
The more digital technology advances, the more it is almost instantly integrated into our daily
lives. Many managers and entrepreneurs recognize the need to integrate digital technology
into their products and services. For example, it has been estimated that 78% of business
leaders expect their organizations to be a digital business by 2020. Outdated and complex
application architectures with a mix of interfaces can delay or prevent the release of new
products and services, and maintaining these obsolete systems absorbs large portions of the
information technology (IT) budget.

Companies such as Uber, Airbnb, Shyp, TaskRabbit, and other participants in the on-
demand economy are leveraging IT to create exciting new business models and revolu-
tionize the way workers, businesses, and customers interact and compete. Peter Hinssen, a
well-known business author, university lecturer, and digital consultant, described the change
in digital technology as follows:

Technology used to be nice. It used to be about making things a little bit better, a little
bit more efficient. But, technology stopped being nice: it’s disruptive. It’s changing our
business models, our consumer markets, our organizations. (MacIver, 2015)

As businesses continue to join the on-demand economy, IT professionals must constantly
scan for innovative new technologies to provide business value and help shape the future of
the business. For example, smart devices, mobile apps, sensors, and technology platforms—
along with increased customer demand for digital interactions and on-demand services—have
moved commerce in fresh new directions. We’ve all heard the phrase “there’s an app for that”
and that kind of consumer thinking is what drives the on-demand economy.

Business leaders today need to know what steps to take to get the most out of mobile,
social, cloud, big data, analytics, visualization technologies, and the Internet of Things (IoT) to
move their business forward and enable new on-demand business models. Faced with oppor-
tunities and challenges, managers need to know how to leverage IT earlier and more efficiently
than their competitors.

A goal of this book is to empower you to improve your use and management of IT at
work by raising your understanding of IT terminology, practices, and tools and developing
your IT skills to transform you into an informed IT user. Throughout this book, you will learn
how digital technology is transforming business and society in the on-demand economy as
the IT function takes on key strategic and operational roles that determine an enterprise’s
success or failure. You will also be provided with an in-depth look at IT trends that have
immediate and future capacity to influence products, services, competition, and business
relationships. Along the way, we’ll describe many different ways in which IT is being used
and can be used in business and provide you with some of the terminology, techniques,
and tools that enable organizations to leverage IT to improve growth, performance, and
sustainability.

In this opening chapter, you will learn about the powerful impacts of digital technology
on people, business, government, entertainment, and society that are occurring in today’s on-
demand economy. You will also discover how leading companies are deploying digital tech-
nology and changing their business models, business processes, customer experiences, and
ways of working. We will present examples of innovative products, services, and distribution
channels to help you understand the digital revolution that is currently shaping the future of
business, the economy and society and changing management careers. And, we’ll explain why
IT is important to you and how becoming an “informed user” of IT will add significant value to
your career and overall quality of life.


Case 1.1 Opening Case
[Figure: The On-Demand Business Framework. The on-demand economy sits at the intersection
of four elements: core on-demand services (logistics management, offline services moving
online, vendor management, and an interface layer); consumer technology (ubiquitous
connectivity, mobile adoption, and app marketplaces); complementary resources (payment
systems, cloud services, CRM platforms, and the 1099 community); and consumer behavior
(convenience, efficiency, simplicity, and instant gratification). Photo: NICOLAS
MAETERLINCK/Stringer/Getty Images.]

Uber and Airbnb Revolutionize Business Models
in the On-Demand Economy
If you’ve used Uber or Airbnb, then you have participated in the
on-demand economy where speed, convenience, and simplicity
are key factors in consumer behavior and purchasing decisions.
Michael Boland, author of What’s Driving the Local On-Demand Econ-
omy, explains that as consumers, “We’re being conditioned to expect
everything on-demand as the mobile device increasingly becomes the
remote control for the physical world” (Boland,  2015). For example,
the majority of consumers who tap an Uber app to get a ride would
not consider dialing an 800  number for a taxi. With all transactions
performed by apps and automated processes, the entire process from
hailing to paying for a ride is slick, quick, and easy, without cash or
credit cards.

Tech Platforms Enabled On-Demand Services to Take Off
Decades of technological innovation have given us smartphone apps,
mobile payment platforms, GPS and map technology, and social
authentication. These technologies are being used to build the infra-
structure needed for on-demand services. This infrastructure—also
referred to as a technology platform or technology stack—supports
the exchange and coordination of staggering amounts of data. The
term technology stack reflects the fact that the platform is made up of
multiple layers (stacks) of hardware, software, network connectivity,
and data analytics capabilities.
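
To make the idea of a layered technology stack more concrete, here is a minimal illustrative sketch in Python. The layer names follow the paragraph above; the example technologies listed for each layer are assumptions added purely for illustration.

```python
# Minimal sketch of a technology stack for an on-demand service.
# Layer names follow the text; the example technologies are illustrative assumptions.
TECH_STACK = [
    ("hardware", ["smartphones", "GPS chips", "data-center servers"]),
    ("software", ["mobile apps", "matching and dispatch services"]),
    ("network connectivity", ["4G/5G", "Wi-Fi", "cloud APIs"]),
    ("data analytics", ["demand forecasting", "dynamic pricing"]),
]

# Print the stack from the bottom layer up.
for level, (layer, examples) in enumerate(TECH_STACK, start=1):
    print(f"Layer {level}: {layer} -> {', '.join(examples)}")
```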

In many consumer markets today, companies that do not have
iPhone or Android apps or technology platforms that support the
exchange of goods and services—no matter how useful their website—
may find themselves losing their competitive edge.

On-Demand Economy Requires a New Business Model
Uber and Airbnb are popular examples of companies that developed
on-demand business models to transform slow-to-innovate indus-
tries. A simple definition of business model is the way a company
generates revenue and makes a profit. On-demand business mod-
els provide real-time fulfillment of goods and services, which have
attracted millions of users worldwide. This model fits best when
speed and convenience matter the most. The ground transporta-
tion, grocery, and restaurant industries are examples of hyper-growth

categories in the on-demand world. Forward-thinking companies are
reshaping these industries.

Uber Business Model
Uber disrupted the taxi industry with a workforce that is essentially
any person with a smartphone and a car. Location-aware smartphone
apps bring drivers and passengers together, while in-app accounts
make the cashless payment process effortless. By simply opening the
Uber app and pressing the middle button for several seconds (a long
press), customers can order a ride to their current location, selecting
the kind of car they want. Payment is automatically charged to the
credit card on file with receipts via email.

The Uber concept developed in response to taxi scarcities. It
started on a snowy Paris night in 2008 when the two founders could
not get a cab. They wanted a dead-simple app that could get them
a car with a tap. On June 1, 2015, the entrepreneurs celebrated
Uber's fifth anniversary and announced that the company had grown
into a transportation network covering 311 cities in 58 countries in
North and South America, Europe, Africa, Asia Pacific, and the
Middle East.

Uber has invested in new and developing technologies and part-
nerships. The company partnered with Carnegie Mellon University to
build robotic cars and new mapping software. In March 2015, Uber pur-
chased deCarta, a 40-person mapping start-up to reduce its depend-
ence on Google maps.

Airbnb Business Model
Another disruption to a traditional industry occurred when Airbnb
blindsided the hotel industry. Airbnb allows anyone with a spare
apartment or room—even if only for a day—to run their own bed and
breakfast by giving them a technology platform to market themselves
to a global market. By 2016, the Airbnb site had over 1.5 million list-
ings in 190 countries and 34,000 cities. Over 40 million guests have
used Airbnb worldwide. For comparison, Hilton, InterContinental, and
Marriott, the largest hotel chains in the world, have less than 1 million
rooms each.

Uber and Airbnb do not own inventory. Instead, they scale up
(expand) by improving their ability to acquire and match customers
and service providers.

4 CHAPTER 1 Disruptive IT Impacts Companies, Competition, and Careers

1.1 Doing Business in the
On-Demand Economy
The on-demand economy is revolutionizing commercial activities in businesses around the
world. The businesses in this new economy are fueled by years of technology innovation and a
radical change in consumer behavior. As companies become more highly digitized, it becomes
more and more apparent that what companies can do depends on what their IT and data man-
agement systems can do. For over a decade, powerful new digital approaches to doing business
have emerged. And there is sufficient proof to expect even more rapid and dramatic changes
due to IT breakthroughs and advances.

In market segment after market segment, mobile communications and technology stacks
make it financially feasible for companies to bring together consumers and providers of prod-
ucts and services. These capabilities have created the on-demand economy. As Ev Williams,
cofounder of Twitter says,

The internet makes human desires more easily attainable. In other words, it offers
convenience. Convenience on the internet is basically achieved by two things: speed,
and cognitive ease. If you study what the really big things on the internet are, you
realize they are masters at making things fast and not making people think.

On-demand economy is the
economic activity created by
technology companies that fulfill
consumer demand through
the immediate provisioning of
products and services.

Business Success in Terms of Company Growth
and Valuation
The ride-hailing app Uber and the housing rental app Airbnb are two
of the most valuable start-ups, as displayed in Figure 1.1. Valuation
of a company at its early stages is based heavily on its growth potential
and future value. In contrast, the valuation of an established company
is based on its present value, which is calculated using traditional
financial ratios and techniques related to revenues or other assets.
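
As a rough illustration of what valuing an established company on its present value can involve, the sketch below discounts a stream of projected cash flows back to today. The cash-flow figures and the 10% discount rate are made-up assumptions, not data from the case.

```python
# Illustrative discounted-cash-flow sketch (assumed numbers, not from the text).
def present_value(cash_flows, discount_rate):
    """Sum each year's projected cash flow, discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

projected = [120, 135, 150, 160, 170]            # hypothetical annual cash flows, $ millions
print(round(present_value(projected, 0.10), 1))  # ~548.2
```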

Uber’s massive market value—estimated at $60 billion—is
greater than that of 80% of all Standard & Poor's (S&P) 500 companies, many
of which have been around for 25, 50, or 100 years. Investors valued
Airbnb at $24 billion—higher than the value of the hotel giant Marriott

International. These companies would never have been able to grow
in the old way as a traditional organization, with their own inventory of
products, services, and workforce and traditional forms of technology.

Questions
1. In what ways are the Uber and Airbnb business similar or

different?

2. How did Uber achieve its new business model?

3. To what extent do you think changing their business models con-
tributed to the success of Uber and Airbnb?

Sources: Compiled from Primack (2015), Storbaek (2015), Winkler and MacMillan
(2015), Jaconi (2014), Uber.com (2017), Airbnb.com (2017).

FIGURE 1.1 On-demand business models of Airbnb and Uber have been extremely
successful. (Airbnb, short for Air Bed and Breakfast, started in 2008 and is the leading
disrupter in the hotel and vacation rental market; by 2016 it was valued at about $25 billion,
exceeding the value of Marriott International. Uber started in 2009, when founder Garrett
Camp wanted to tackle the taxi shortage problem in San Francisco; it epitomizes disruption,
changed the way customers think about grabbing a ride, and by 2016 had a higher valuation
than the companies that make the cars its drivers use: GM, Honda, and Ford.)


The proliferation of smartphone-connected consumers, simple and secure purchase flows, and
location-based services are a few of the market conditions and technological innovations that
are propelling the explosion of on-demand services.

Just as the rapid growth of online-only Amazon and eBay transformed retail, the even faster
growth of app-driven companies, like Uber, Airbnb, and Grubhub, has disrupted the taxi, hotel,
and restaurant markets. As you read in the opening case, in six short years, Uber changed the
taxi industry as it rose from start-up to the world’s most valuable private technology company,
and Airbnb tackled the fiercely competitive hotel market and attracted more than 60 million
customers to become the third most valuable venture-capital-backed company in the world.
Another example is Grubhub, which became No. 1 in online food ordering, controlling over 20%
of that $9 billion market. What today’s successful technology businesses have in common are
platform-based business models. Platforms consist of hardware, software, and networks that
provide the connectivity for diverse transactions, such as ordering, tracking, user authen-
tication, and payments. These business models are designed to serve today’s on-demand
economy, which is all about time (on-demand), convenience (tap an app), and personalized
service (my way). For example, millennials want the ease of online payment over cash and
insist on efficiency for all aspects of their lives, including shopping, delivery, and travel.

Key strategic and tactical questions that determine an organization's profitability and
management performance are shown in Figure 1.2. Answering each question requires an
understanding of the capabilities of IT, from the mundane to the complex, which ones to
implement, and how to manage them.

Growth of the On-Demand Economy
Whether it is ease of scheduled deliveries or the corresponding time savings, the growth of
the on-demand economy is a product of its alignment with consumers’ growing appetite
for greater convenience, speed, and simplicity. A recent survey reported that 86.5 million
Americans have used the services of at least one on-demand start-up company (Chriss, 2016).

The growth of the on-demand economy demonstrates the high level of interest consumers
have in on-demand services from dog walking to laundry services, short-term home rentals,
massages, and truck hauling. Although just applying a mobile app to an existing service will not
ensure a company's success, IT is a vital and integral part of all businesses that are part of
the on-demand economy.

FIGURE 1.2 Key strategic and tactical questions. The figure relates an organization's strategic direction (industry, markets, and customers), its business model, and its business processes, procedures, and technology to questions such as: What do we do? What is our direction? What markets and customers should we be targeting, and how do we prepare for them? How do we do it? How do we generate revenues and profits to sustain ourselves and build our brand? How well do we do it? How can we be more efficient?


Low Cost of Entry One of the reasons that the on-demand economy has taken off is
that it is easier than ever to become an on-demand business. Companies like Dispatch, a
software-as-a-service company, allow entrepreneurs to move into the on-demand world quickly
and inexpensively. For example, Aatlantic Fitness, a fitness equipment repair service company,
moved into the on-demand economy using Dispatch, and Handyman Connection, a 20-year-
old home repair service company, is using Dispatch’s platform to compete with Handy, an on-
demand service for house cleaning that has raised $60 million in venture capital.

Digital Business Models
The on-demand economy is driving the transformation of traditional business models to digital
business models to serve customers what they want and where they want it.

Business models are the ways enterprises generate revenue or sustain themselves. Digital
business models define how businesses make money via digital technology. Companies that
adopt digital business models are better positioned to take advantage of business opportu-
nities and survive, according to the Accenture Technology Vision 2013 report (Accenture, 2013).
Figure  1.3 contains examples of new technologies that destroyed old business models and
created new ones.

The ways in which market leaders are transitioning to digital business models include the
following:

• NBA talent scouts rely on sports analytics and advanced scouting systems NBA talent
scouts used to crunch players’ stats, watch live player performances, and review hours of
tapes to create player profiles. Now software that tracks players’ performance has changed
how basketball and soccer players are evaluated. For example, STATS’ SportVU technology
is revolutionizing the way sports contests are viewed, understood, played, and enjoyed.
SportVU uses six palm-sized digital cameras that track the movement of every player
on the court, record ball movement 25 times per second, and convert movements into
statistics. SportVU produces real-time and highly complex statistics to complement the tra-
ditional play-by-play. Predictive sport analytics can provide a 360-degree view of a player’s
performance and help teams make trading decisions. Sports analytics bring about small
competitive advantages that can shift games and even playoff series.

• Dashboards keep casino floor staff informed of player demand Competition in the gaming industry is fierce, particularly during bad economic conditions. The use of manual spreadsheets and gut-feeling decisions did not lead to optimal results. Casino operators facing pressure to increase their bottom line have invested in analytic tools, such as Tangam's Yield Management solution (TYM). TYM is used to increase the yield (profitability) of blackjack, craps, and other table games. The analysis and insights from real-time apps are used to improve the gaming experience and comfort of players.

FIGURE 1.3 Digital business models refer to how companies engage their customers digitally to create value via websites, social channels, and mobile devices. The figure gives examples of technologies that destroyed old business models and created new ones: location-aware technologies track items through production and delivery to reduce wasted time and inefficiency in supply chains and other business-to-business (B2B) transactions; Twitter dominates the reporting of news and events as they are still happening; Facebook became the most powerful sharing network in the world; and smartphones, tablets, other touch devices, and their apps reshaped how organizations interact with customers and how customers want businesses to interact with them.

Today, a top concern of well-established corporations, global financial institutions, born-on-the-
Web retailers, and government agencies is how to design their digital business models in order to

• Deliver an incredible customer experience
• Turn a profit
• Increase market share
• Engage their employees

In the digital (online) space, the customer experience must measure up to the very best the
Web has to offer. Stakes are high for those who get it right—or get it wrong. Forrester research
repeatedly confirms there is a strong relationship between the quality of a firm’s customer
experience and loyalty, which, in turn, increases revenue (Schmidt-Subramanian et al., 2013).

IT’s Role in the On-Demand Economy
According to the 2016 survey conducted by the Society of Information Management (SIM), 1,213
IT leaders (including 490 chief information officers (CIOs)) from 801 companies reported com-
panies that are more highly digitized and tightly connected are putting a greater emphasis on
the strategic use of IT to enhance growth and improve performance. As a result, IT priorities and
spending are changing (Kappelman et al., 2017).

A review of the top 10 IT management priorities reported in the survey results is shown
in Table 1.1. Along with business-IT alignment and security, Table 1.1 clearly demonstrates a
need for companies to focus on strategic and organizational priorities such as innovation, IT
and business agility, speed of IT delivery, and business productivity and efficiency.

TABLE 1.1 10 Top IT Management Priorities

IT Management Issues
1 Technology Alignment with the Business

2 Security, Cybersecurity & Privacy

3 Innovation

4 IT Agility & Flexibility

5 Business Agility & Flexibility

6 Business Cost Reduction & Controls

7 IT Cost Reduction & Controls

8 Speed of IT Delivery & IT Time to Market

9 Business Strategic Planning

10 Business Productivity & Efficiency

Adapted from Kappelman et al. (2017).

To address these issues, IT leaders said they need to focus on relationships, meet more frequently with top management, and spend significant amounts of time with functional leaders, customers, and suppliers. Companies also need to emphasize finding, keeping, and developing IT talent and improving IT to improve business performance. These findings point to one clear message: IT in the on-demand economy is about meeting customer needs.


IT Business Objectives
Now, more than ever, IT must be responsive to the needs of consumers who are demanding a
radical overhaul of business processes in companies across diverse industry sectors. Intuitive
interfaces, around-the-clock availability, real-time fulfillment, personalized treatment, global
consistency, and zero errors—this is the world to which customers have become increasingly
accustomed. And, it's not just about providing a superior user or customer experience—when
companies get it right, they can also offer more competitive prices because of lower costs and
better operational controls, and they open themselves up to less risk.

According to Chirantan Basu of Chron (Basu, 2017), to stay abreast of the ever- changing
business landscape and customer needs, IT today must concentrate on the following six
business objectives:

1. Product development From innovations in microprocessors to efficient drug-delivery
systems, IT helps businesses respond quickly to changing customer demands.

2. Stakeholder integration Companies use their investor relations websites to
communicate with shareholders, research analysts, and others in the market.

3. Process improvement An ERP system replaces dozens of legacy systems for finance,
human resources, and other functional areas, to increase efficiency and cost-effectiveness
of internal business processes.

4. Cost efficiencies IT allows companies to reduce transaction and implementation costs,
such as costs of duplication and postage of email versus snail mail.

5. Competitive advantage Companies can use agile development, prototyping, and other
systems methodologies to bring a product to market cost-effectively and quickly.

6. Globalization Companies can outsource most of their noncore functions, such as HR
and finance, to offshore companies and use ICT to stay in contact with their global
employees, customers, and suppliers 24/7.

Every technology innovation triggers opportunities and threats to business models and strat-
egies. With rare exceptions, every business model depends on a mix of IT, knowledge of its
potential, the requirements for success, and, equally important, its limitations.

Questions

1. What precipitated the on-demand economy?

2. How is IT contributing to the success of the on-demand economy?

3. List the six IT business objectives.

4. What are the key strategic and tactical questions that determine an organization’s profitability and
management performance?

5. What is a business model?

6. What is a digital business model?

7. Give two examples of how companies are transitioning to digital business models.

8. What factors are driving the move to digital business models?

1.2 Business Process Improvement
and Competitive Advantage
Given that a company’s success depends on the efficiency of its business processes, even small
improvements in key processes can have significant payoff. All functions and departments in
the enterprise have tasks they need to complete to produce outputs, or deliverables, in order
to meet their objectives.


Before you can begin to improve something, you have to understand what it is you are
improving. We’ll start by defining a business process, looking at its characteristics, and then
exploring ways in which a business process can be improved either incrementally or radically
through Business Process Reengineering.

What Is a Business Process?
Business processes are series of steps by which organizations coordinate and organize tasks
to get work done. In the simplest terms, a process consists of activities that convert inputs into
outputs by doing work.

Examples of common business processes are as follows:

• Accounting Invoicing; reconciling accounts; auditing
• Finance Credit card or loan approval; estimating credit risk and financing terms
• Human resources (HR) Recruiting and hiring; assessing compliance with regulations; evaluating job performance
• IT or information systems Generating and distributing reports and data visualizations; data analytics; data archiving
• Marketing Sales; product promotion; design and implementation of sales campaigns; qualifying a lead
• Production and operations Shipping; receiving; quality control; inventory management
• Cross-functional business processes Involving two or more functions, for example, order fulfillment and product development

Three Components of a Business Process Business processes have the three
basic components shown in Figure 1.4. They involve inputs, activities, and deliverables.
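To make the three components concrete, here is a minimal, hypothetical sketch in Python; the applicant fields, the risk rule, and the 0.3 threshold are invented for illustration and are not drawn from the text.

```python
# Hypothetical sketch: a business process as inputs -> activities -> deliverable.
from dataclasses import dataclass

@dataclass
class CreditApplication:          # input: data about the applicant
    applicant: str
    annual_income: float
    requested_limit: float

def assess_risk(app: CreditApplication) -> float:
    """Activity: transform input data into a rough risk score (0 = low, 1 = high)."""
    ratio = app.requested_limit / max(app.annual_income, 1.0)
    return min(ratio, 1.0)

def decide(app: CreditApplication) -> str:
    """Activity producing the deliverable: an approve/deny decision."""
    return "approve" if assess_risk(app) < 0.3 else "deny"

print(decide(CreditApplication("A. Customer", 60000, 12000)))  # deliverable: "approve"
```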

Processes can be formal or informal. Formal processes are documented and have well-
established steps. Order taking and credit approval processes are examples. Routine formal
processes are referred to as standard operating procedures (SOPs). An SOP is a well-defined
and documented way of doing something. An effective SOP documents who will perform the
tasks; what materials to use; and where, how, and when the tasks are to be performed. SOPs
are needed for the handling of food, hazardous materials, or situations involving safety, secu-
rity, or compliance. In contrast, informal processes are typically undocumented, have inputs
that may not yet been identified, and are knowledge-intensive. Although enterprises would
prefer to formalize their informal processes in order to better understand, share, and optimize
them, in many situations process knowledge remains in people’s heads.

Processes range from slow, rigid to fast-moving, adaptive. Rigid processes can be struc-
tured to be resistant to change, such as those that enforce security or compliance regulations.
Adaptive processes are designed to respond to change or emerging conditions, particularly in
marketing and IT.

Deliverables are the outputs or tangible things that are produced by a business process. Common deliverables are products, services, actions, plans, or decisions, such as to approve or deny a credit application. Deliverables are produced in order to achieve specific objectives.

FIGURE 1.4 Three components of a business process: inputs (raw materials, data, knowledge, and expertise), activities (work that transforms inputs and acts on data and knowledge), and deliverables (products, services, plans, or actions).

Improving Business Processes
Designing an effective process can be complex because you need a deep understanding of the
inputs and outputs (also known as deliverables), how things can go wrong, and how to prevent
things from going wrong. For example, Dell had implemented a new process to reduce the time
that tech support spent handling customer service calls. In an effort to minimize the length of
the call, tech support’s quality dropped so much that customers had to call multiple times to
solve their problems. The new process had backfired—increasing the time to resolve computer
problems and aggravating Dell customers.

The importance of efficient business processes and continuous process improvement
cannot be overemphasized. Why? Because 100% of an enterprise’s performance is the result
of its processes. Maximizing the use of inputs in order to carry out similar activities better than
one’s competitors is a critical success factor (CSF). Poorly designed, flawed, or outdated
business processes waste resources, increase costs, cause delays, and aggravate customers.
For example, when customers’ orders are not filled on time or correctly, customer loyalty
suffers, returns increase, and reshipping increases costs. The blame may not be employee
incompetence, but a flawed order fulfillment process.

Don’t Automate, Obliterate!
In today’s on-demand economy, incrementally improving a business process isn’t always
sufficient to create the type of change required. Instead, radical changes need to occur to meet
higher customer expectations. To do this, companies have to go beyond simply automating
an existing process. They must reinvent the entire business process, including reducing the
number of steps required, eliminating documents, developing automated decision-making,
and dealing with regulatory and fraud issues. Operating models, skills, organizational struc-
tures, and roles need to be redesigned to match the reinvented processes. Data models should
be adjusted and rebuilt to enable better decision-making, performance tracking, and cus-
tomer insights.

Leading organizations have come to recognize that it can take a long time to see the benefits
of traditional large-scale projects that migrate all current processes to digital, and that sometimes
those projects fail to deliver. Instead, successful companies are reinventing processes, challenging every-
thing related to an existing process and rebuilding it using cutting-edge digital technology. For
example, rather than creating technology tools to help back-office employees type customer
complaints into their systems, leading organizations create self-serve options for customers to
type in their own complaints.

Business Process Reengineering (BPR) The process by which these types of radical
process change can be achieved is referred to as business process reengineering (BPR),
whose slogan is "Don't automate, obliterate!" (Hammer and Champy, 2006). Consisting of eight
stages, shown in Figure 1.5, BPR proposes that simply applying IT to a manual or outdated pro-
cess does not always optimize it. Instead, processes need to be examined to determine whether
they are still necessary. After unnecessary processes are identified and eliminated, the remain-
ing ones are redesigned (or reengineered) in order to automate or streamline them. Next, the
new process is implemented and put into operation and its performance is evaluated. Finally,
the process is reassessed over time to continually improve it.

The goal of BPR is to eliminate unnecessary, non-value-added processes, and simplify
and automate the remaining processes to significantly reduce cycle time, labor, and costs. For
example, reengineering the credit approval process cuts time from several days or hours to
minutes or less. Simplifying processes naturally reduces the time needed to complete the pro-
cess, which also cuts down on errors.

After eliminating waste, technology can enhance processes by (1) automating existing
manual processes; (2) expanding the data flows to reach more functions in order to make it pos-
sible for sequential activities to occur in parallel; and (3) creating innovative business processes
that, in turn, create new business models. For instance, consumers can scan an image of a
product and land on an e-commerce site, such as Amazon.com, selling that product. This pro-
cess flips the traditional selling process by making it customer-centric.

You will read more about optimizing business processes and the role of business process
management (BPM) in the alignment of IT and business strategy in Chapter 13.


Gaining a Competitive Advantage
Understanding trends that affect the new ways business is being done and getting in front of
those trends by adding, deleting, and changing existing business processes gives
organizations an important competitive advantage over their competitors. Helping a
company gain, maintain, and sustain a competitive advantage in the market is a very impor-
tant function of IT. In business, as in sports or politics, companies want to win—customers,
market share, and position in the industry. Basically, this requires gaining an edge over com-
petitors by being first to take advantage of market opportunities, providing better customer
experiences, offering unique products or services, or convincing customers why your business
is a more attractive alternative than your competitors.

Influential industry leaders cite “new competition” as their largest business challenge.
Once an enterprise has developed a competitive edge, it can only be sustained by continually
pursuing new and better ways to compete. Maintaining a competitive advantage requires
forecasting trends and industry changes and figuring out what the company needs to do to stay
ahead of the game. It demands continuously tracking competitors and their future plans and
promptly taking corrective actions. Competitiveness depends heavily on IT agility and
responsiveness. The benefit of IT agility is that it enables organizations to take advantage of
opportunities faster or more effectively.

Closely related to IT agility is flexibility. For example, mobile networks are flexible—able
to be set up, moved, or removed easily, without dealing with cables and other physical require-
ments of wired networks. Mass migration to mobile devices from PCs has expanded the scope
of IT beyond traditional organizational boundaries—making location practically irrelevant.

IT agility, flexibility, and mobility are tightly interrelated and fully dependent on an organi-
zation’s IT infrastructure and architecture, which are discussed in Chapter 2.

Competitive advantage is an edge that enables a company to outperform its average competitor.

Agility means being able to respond quickly.

Responsiveness means that IT capacity can be easily scaled up or down as needed, which essentially requires cloud computing.

Flexibility means having the ability to quickly integrate new business functions or to easily reconfigure software or applications.

FIGURE 1.5 Eight phases of business process reengineering: (1) develop vision and objectives, (2) understand existing processes, (3) identify process for redesign, (4) identify change levers, (5) implement new process, (6) make new process operational, (7) evaluate new process, and (8) perform continuous improvement.

With mobile devices, applications, platforms, and social media becoming inseparable
parts of work life and corporate collaboration, and with more employees working from home,
the result is the rapid consumerization of IT. IT consumerization is the migration of consumer
technology into enterprise IT environments. This shift has occurred because personally owned
IT is as capable and cost–effective as its enterprise equivalents. IT at Work 1.1 demonstrates
how FitBit has maintained a competitive advantage with its fitness tracker.

As evidenced by the mergers of Grubhub/Seamless in food delivery and Handybook/Exec
in home services, consolidation will accelerate as competition grows. This trend is underscored
in the following examples:

• Collaboration of complementary, noncompetitive businesses will become commonplace
as a means to collectively educate consumers about the benefits of on-demand ser-
vices, increase awareness, and provide added value to core users. These range from
cross- promotions similar to that of Instacart/Washio to close partnerships such as that of
WunWun/Gett.

• Legacy providers in hospitality, transportation, and other Fortune 500s will “partner with”
or “acquire” more innovative on-demand companies. Ken Chenault, Chairman and CEO
of American Express, conceded in their annual report, “Our industry is being redefined by
many forces, including the continued revolution in online and mobile technologies, which
is transforming commerce and society.” Given the emerging influence of on-demand ser-
vices, Amex launched a partnership with Uber earlier this year. American Express was able
to get a foot in the door by allowing customers to earn 2× points for their spend on Uber with
an American Express credit card.

• As on-demand businesses solve for the current technological and logistical challenges,
design will increasingly become one of the most meaningful forms of competitive advantage.
Creating a memorable, frictionless user interface is the next battleground for addressing con-
sumers’ insatiable appetite for greater simplicity and convenience. Scott Belsky points out,
“A new cohort of design-driven companies are adding a layer of convenience between us and
the underlying services and utilities that improve our lives. This could change everything.”

Sources: Compiled from Ashcroft (2015), Nusca (2015), Grand View Research
(2016), and Fitbit.com (2016).

IT at Work 1.1

FitBit: Smart, Connected Device Transforms
Competition and Promotes Sustainability
In the first year of its existence, FitBit sold 100,000 devices. At the
time, there were countless weight loss and exercise programs, plans,
and gimmicks. But smart, connected wearable activity trackers
were virtually nonexistent. Five years later, FitBit managed to take the title of biggest-selling
manufacturer of wearable tech when it sold a whopping 21 million devices in one year. It still
holds that title today.

Vision: Simple Approach Plus Smart Device
FitBit was launched in San Francisco, California, by Eric Friedman
and James Park. These entrepreneurs took a basic approach to
personal health and fitness—eating right and keeping active. Their
vision was to develop a smart device that would motivate users to
be more active, eat a well-rounded diet, and ultimately become
healthier.

Throughout the day, FitBit logs data about the wearer’s activi-
ties, including the number of steps taken, distance travelled, calories
burned, and what needs to be done to reach a personal daily goal, for
example, walking 2 miles. FitBit’s internal memory can store at least a
week of activity data.

One of FitBit’s competitive strengths is the app that is accessi-
ble from a smartphone. Users can sync FitBit devices and view their
online profile, activity levels, and sleep patterns on dashboards that
display on more than 150 mobile devices, including iOS, Android, and
Windows Phone products. This compatibility maximizes the number

of friends and family in each user’s network to share performance
stats. It also motivates and increases user retention.

First Class Fitness
A smart wearable product that fits effortlessly into users' lifestyles
launched an industry and made FitBit a market leader. In the sec-
ond quarter of 2015 (2Q15), FitBit shipped 4.4 million units, up
159% from the same quarter a year ago (2Q14) and held 24.3%
global market share. Second in line was Apple with 3.6 million
units shipped in 2Q15 and 19.9% global market share. Thanks to
the technology that enabled FitBit and the company’s growing
reputation, Friedman and Park are likely to be in business for a
long time.

IT at Work Questions
1. How did FitBit manage to take the title of biggest-selling manufacturer of wearable technology and sustain it?

2. What could other companies that produce fitness trackers do to challenge FitBit in the marketplace?

3. What other features do you think consumers would like FitBit to incorporate into its fitness tracker to further improve it? How would consumers and FitBit benefit from these improvements?


Software Support for BPM
The purpose of business process management (BPM) is to help enterprises become more agile
and effective by enabling them to better understand, manage, and adapt their business pro-
cesses. Vendors, consulting firms, and technology companies offer BPM expertise, services, software suites, and tools.

BPM software is used to map processes performed either by computers or manually—and
to design new ones. The software includes built-in templates showing workflows and rules
for various functions, such as rules for credit approval. These templates and rules provide
consistency and high-quality outcomes. For example, Oracle’s WebLogic Server Process Edition
includes server software and process integration tools for automating complex business
processes, such as handling an insurance claim.
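As a rough illustration of the workflow-and-rules logic that BPM templates encode (sketched by hand in Python, not with any vendor's actual tool), the hypothetical snippet below routes an insurance claim through an ordered list of steps, with a simple rule deciding whether it is auto-approved or escalated.

```python
# Hypothetical sketch of a rule-driven workflow, loosely modeled on
# "handling an insurance claim"; step names and thresholds are invented.
AUTO_APPROVE_LIMIT = 1000.00  # rule: small claims skip manual review

def validate(claim: dict) -> dict:
    claim["valid"] = claim["amount"] > 0 and bool(claim["policy_id"])
    return claim

def route(claim: dict) -> dict:
    if not claim["valid"]:
        claim["status"] = "rejected"
    elif claim["amount"] <= AUTO_APPROVE_LIMIT:
        claim["status"] = "auto-approved"
    else:
        claim["status"] = "escalated to adjuster"
    return claim

workflow = [validate, route]  # the "template": an ordered list of process steps

claim = {"policy_id": "P-123", "amount": 450.00}
for step in workflow:
    claim = step(claim)
print(claim["status"])  # -> auto-approved
```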

But, BPM initiatives can be extremely challenging, and in order to be successful, BPM
requires buy-in from a broad cross section of the business, the right technology selection, and
highly effective change management processes.

Questions

1. What is a business process? Give three examples.

2. What is the difference between business deliverables and objectives?

3. List and give examples of the three components of a business process.

4. Explain the differences between formal and informal processes.

5. What is an SOP?

6. What is the purpose of BPM?

1.3 IT Innovation and Disruption
Digital technology creates new markets, businesses, products, and careers. As digital changes the
way consumers and retailers buy and sell products, companies must adapt and innovate to ensure
their product offerings, platforms, technologies, and search options cater to these changing needs.

Social–Mobile–Analytics–Cloud (SMAC) Model
We are in the era of social–mobile–analytics–cloud (SMAC) computing that is reshaping
business strategies and day-to-day operations (Figure 1.6).

Business process
management consists of
methods, tools, and technology to
support and continuously improve
business processes.

FIGURE 1.6 SMAC reshapes business strategies and day-to-day operations. The figure highlights supporting statistics: an estimated 15 billion devices are connected to the Internet, forecasted to hit 50 billion by 2020 as more devices connect via mobile networks; the current 4.2 billion IoT devices are projected to increase to 24 billion in 2020, representing 73% of the total Internet-connected base; 79% of online adults and 68% of all Americans use Facebook, and mobile use generates 30% of Facebook's ad revenue; U.S. mobile commerce sales top $104.05 billion; and Facebook dominates all other social platforms in audience reach.


The cloud consists of huge data centers accessible via the Internet and forms the core by
providing 24/7 access to storage, applications, and services. Handhelds and wearables, such
as FitBit, Pebble, and the Apple Watch, and their users form the edge. Social channels con-
nect the core and edge. The SMAC integration creates the technical and services infrastructure
needed for digital business. This infrastructure makes it possible to meet the expectations of
employees, customers, and business partners given that almost everyone is connected (social),
everywhere they go (mobile), gets the information they need (analytics), and has 24/7 access to
products and services (cloud).

Here are three examples of SMAC’s influence:

1. Powerful social influences impact advertising and marketing Connections and feed-
back via social networks have changed the balance of influence. Consumers are more likely
to trust tweets from ordinary people than recommendations made by celebrity endorse-
ments. And, negative sentiments posted or tweeted can damage brands.

2. Consumer devices go digital and offer new services The Nike+ FuelBand wristband
helps customers track their exercise activities and calories burned. The device links to a
mobile app that lets users post their progress on Facebook.

3. eBay’s move to cloud technology improves sellers’ and buyers’ experiences The
world’s largest online marketplace, eBay, moved its IT infrastructure to the cloud. With
cloud computing, eBay is able to introduce new types of landing pages and customer
experiences without the delay associated with having to buy additional computing
resources.

The balance of power has shifted as business is increasingly driven by individuals for whom
mobiles are an extension of their body and mind. They expect to use location-aware services,
apps, alerts, social networks, and the latest digital capabilities at work and outside work. To a
growing extent, customer loyalty and revenue growth depend on a business’s ability to offer
unique customer experiences that wow customers more than competitors can.

Technology Mega Trends
For 21st-century enterprises, connectivity, big data and analytics, and digitization are tech-
nology mega trends that cannot be ignored. Business breakthroughs and innovation would be
impossible without them. They also mark the difference between outdated 20th-century
business models and practices and those of today’s on-demand economy.

The most influential IT mega trends driving digital transformation of companies in the on-
demand economy are discussed next.

Connectivity Companies need to connect with consumers and business partners across
multiple channels and devices using digital platforms that consist of hardware, software
(mobile apps), networks (social media), embedded sensors, and cloud computing.

For example, rather than run applications or programs from software stored on a com-
puter or server owned by the company, cloud computing allows companies to access the
same kinds of applications through the Internet. Major business cloud computing providers
include Amazon Web Services (AWS), Cisco Powered, Dell Cloud Solutions, Google Cloud, IBM
Cloud Solutions, and Teradata Cloud. One of the many benefits of cloud is that it provides the
flexibility to acquire or expand connectivity and computing power as needed for operations,
business transactions, and communication.

Expanded connectivity supports smart products, which have the ability to sense, process,
report, and take corrective action, ranging from smart clothing, watches, and phones to smart
buildings and smart cities. The resulting Internet of Things (IoT) is becoming a driving force in
the on-demand economy.

Connectivity pushes other subtrends, like big data, to create market opportunities for
new products and services, such as social sentiment analysis, open innovation, new insurance
business models, and micro-personalized marketing and medicines. Big data is one of the many
disruptive technologies that are impacting people, processes, and profits.

Mega trends are forces that
shape or create the future
of business, the economy,
and society.


Big Data and Data Analytics There is no question that the increasing volume of
data can be valuable, but only if they are processed and available when and where they are
needed. The problem is that the amount, variety, structure, and speed of data being generated
or collected by enterprises differ significantly from traditional data. Big data is the name given
to this high-volume, mostly text data. Big data stream in from multiple channels and sources,
including the following:

• Mobile devices and machine-to-machine sensors embedded in everything from airport
runways to casino chips (Later in this chapter, you will read more about the IoT.)

• Social content from texts, tweets, posts, blogs
• Clickstream data from the Web and Internet searches
• Video data and photos from retail and user-generated content
• Financial, medical, research, customer, and business-to-business transactions.

Big data are 80% to 90% unstructured. Unstructured data do not have a predictable
format like a credit card application form. Huge volumes of unstructured data flooding into an
enterprise are too much for traditional technology to process and analyze quickly. Big data tend
to be more time-sensitive than traditional (or small) data. Data collected from social, mobile, and
other channels are analyzed to gain insights and make smart decisions that drive up the bottom
line. Machine-generated data from sensors and social media texts are main sources of big data.

Big data has been one of the most disruptive forces businesses have seen in a long time.
But when an enterprise harnesses its data and is able to act on analytic insights, it can turn the
challenges into opportunities.

Digitization Across industries, companies are attempting to transform their disconnected
or disjointed approaches to customers, products, services, and operating models to an
always-on, real-time, and information-rich marketplace. Some leaders are redesigning their
capabilities and operating models to take full advantage of digital technologies to keep step
with the “connected” consumer and attract talent. Others are creating qualitatively new
business models—and tremendous value—around disruptive digital opportunities. In doing so,
these companies secure not only continued relevance but also superior returns.

Digitization often requires that old wisdom be combined with new skills, for example, by
training a merchandising manager to program a pricing algorithm and creating new roles, such
as user-experience designer. The benefits of digitizing processes, through business process
reengineering, are huge. By digitizing information-intensive processes, costs can be cut by up
to 90% and turnaround times improved by several orders of magnitude.
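As a toy version of the "pricing algorithm" skill mentioned above (entirely hypothetical; the thresholds and adjustments are invented), the Python sketch below nudges a base price based on inventory level and a competitor's price.

```python
# Hypothetical rule-based pricing sketch; thresholds and weights are invented.
def suggest_price(base_price: float, units_in_stock: int, competitor_price: float) -> float:
    price = base_price
    if units_in_stock > 500:          # overstocked: discount to move inventory
        price *= 0.90
    elif units_in_stock < 50:         # scarce: small premium
        price *= 1.05
    # never price more than 2% above the competitor
    price = min(price, competitor_price * 1.02)
    return round(price, 2)

print(suggest_price(base_price=40.00, units_in_stock=620, competitor_price=37.50))  # 36.0
```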

Examples span multiple industries. For example, one bank digitized its mortgage appli-
cation and decision process, cutting the cost per new mortgage by 70% and slashing time to
preliminary approval from several days to just one minute. A telecommunications company
created a self-serve, prepaid service where customers could order and activate phones without
back-office involvement. A shoe retailer built a system to manage its in-store inventory that
enabled it to know immediately whether a shoe and size was in stock—saving time for cus-
tomers and sales staff. An insurance company built a digital process to automatically adjudi-
cate a large share of its simple claims.

In addition, replacing paper and manual processes with software allows businesses to
automatically collect data that can be mined to better understand process performance, cost
drivers, and causes of risk. Real-time reports and dashboards on digital-process performance
enable managers to address problems before they get out of control. For example, quality
issues in a company’s supply chain can be identified and remedied more rapidly by monitoring
customer buying behavior and feedback in digital channels.

Digitization is the process of transforming any kind of activity or information into a digital format that can be collected, stored, searched, and analyzed electronically and efficiently.

Machine-to-Machine Technology Sensors can be embedded in most products.
Objects that connect themselves to the Internet include cars, heart monitors, stoplights, and
appliances. Sensors are designed to detect and react, such as Ford's rain-sensing front wipers that
use an advanced optical sensor to detect the intensity of rain or snowfall and adjust wiper speed
accordingly. Machine-to-machine (M2M) technology enables sensor-embedded products to
share reliable real-time data via radio signals. M2M and the Internet of Things (IoT) are widely
used to automate business processes in industries ranging from transportation to health care. By
adding sensors to trucks, turbines, roadways, utility meters, heart monitors, vending machines,
and other equipment they sell, companies can track and manage their products remotely.
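A minimal, hypothetical sketch of the M2M idea in Python: a sensor-equipped vending machine (invented for illustration) reports a reading as a message, and a monitoring function decides remotely whether a restock or service visit is needed.

```python
# Hypothetical M2M-style telemetry: device -> reading -> remote decision.
import json

reading = json.dumps({"device_id": "vend-042", "temp_c": 9.5, "stock_pct": 12})

def handle_reading(message: str) -> list:
    data = json.loads(message)
    actions = []
    if data["stock_pct"] < 20:
        actions.append(f"schedule restock for {data['device_id']}")
    if data["temp_c"] > 8.0:        # cooler running warm: flag for service
        actions.append(f"dispatch technician to {data['device_id']}")
    return actions

print(handle_reading(reading))
```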

When devices or products are embedded with sensors, companies can track their move-
ments or monitor interactions with them. Business models can be adjusted to take advantage
of what is learned from this behavioral data. For example, an insurance company offers to install
location sensors in customers’ cars. By doing so, the company develops the ability to price the
drivers’ policies on how a car is driven and where it travels. Pricing is customized to match the
actual risks of operating a vehicle rather than based on general proxies—driver’s age, gender, or
location of residence. Table 1.2 lists a number of opportunities for improvement through the
application of embedded physical things.

Internet of Things (IoT) refers to a set of capabilities enabled when physical things are connected to the Internet via sensors.

TABLE 1.2 Improvement Opportunities from Embedded Sensors

Oil and gas
Application: Exploration and development rely on extensive sensor networks placed in the earth's crust. Sensors can produce accurate readings of the location, structure, and dimensions of potential fields.
Payoff: Lower development costs and improved oil flows.

Health care
Application: Sensors and data links can monitor patients' behavior and symptoms in real time and at low cost, allowing physicians to more precisely diagnose disease and prescribe treatment regimens.
Payoff: Reduced hospitalization and treatment costs by $1 billion per year in the United States.

Retail
Application: Sensors can capture shoppers' profile data stored in their membership cards to help close purchases.
Payoff: Additional information and discounts at point of sale.

Farming
Application: Ground sensors can take into account crop and field conditions and adjust the amount of fertilizer that is spread on areas that need more nutrients.
Payoff: Reduction in time and cost.

Advertising
Application: Billboards can scan people passing by, assessing how they fit consumer profiles, and instantly change displayed messages based on those assessments.
Payoff: Better targeted marketing campaigns; flexibility; increased revenues.

Automotive
Application: Systems can detect imminent collisions and take evasive action, such as automatic braking systems.
Payoff: Potential accident reduction savings of more than $100 billion annually.

Lessons Learned from Companies Using Disruptive Technologies
Companies that have adapted to change by exploiting digital technology and software
are outperforming their peers. According to a survey conducted by CA Associates, companies
that have turned the way they use technology from being a cost center and operational
function into being a genuine competitive differentiator are reaping the benefits. Many reported
doubling their revenue growth, experiencing higher profits by a factor of 2.5, and increasing
new business-based revenue by a factor of 1.5 (Vaughn-Brown, 2014). The five factors to which
companies attribute these benefits can be summed up in the following Lessons Learned:

1. Exploit the power of software Become "app-centric" and extend core business functions to include software development.

2. Develop, deliver, disrupt—quickly! Embrace agile development techniques and broadly implement DevOps.

3. Boost speed and efficiency with application programming interfaces (APIs) Take a managed approach to use APIs for building full-function Web applications (particularly mobile apps) and for integrating back-office systems (see the sketch after this list).

4. Leverage third-party innovation Take a more managed approach to use APIs for integrating third-party services into applications and enable external developer access to systems and data.

5. Maximize returns with smarter IT investments Get smarter at assessing and prioritizing IT investments to maximize return on investment and put portfolio management in place to prioritize and track IT programs.
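To make the API lessons above slightly more concrete, here is a small, hypothetical Python sketch of calling a REST-style API; the endpoint URL and response fields are placeholders, not any real service or vendor API.

```python
# Hypothetical REST API call; the URL and response fields are placeholders.
import json
import urllib.request

def fetch_order_status(order_id: str) -> str:
    url = f"https://api.example.com/v1/orders/{order_id}"   # placeholder endpoint
    with urllib.request.urlopen(url, timeout=5) as response:
        payload = json.load(response)                        # e.g., {"id": "...", "status": "shipped"}
    return payload["status"]

# A mobile app or back-office system would call this instead of querying a database directly:
# print(fetch_order_status("A1001"))
```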

Business opportunities and challenges presented by today’s technology innovations are on an
unprecedented scale. Cloud services, big data, mobility, digitization, and the IoT are likely to
disrupt many industries and shake up competitive positions.

Innovation is necessary for any company that wants to remain relevant, retain customers,
and increase profits. Increased competition, expanded global markets, and empowered
customers define today’s on-demand business environment.

Questions

1. What are the benefits of cloud computing?

2. What is M2M technology? Give an example of a business process that could be automated with M2M.

3. Describe the relationships in the SMAC model.

4. What impacts is the SMAC model having on business?

5. Why have mobile devices given consumers more power in the marketplace?

6. Explain why connectivity is important in today’s on-demand economy.

7. In what ways is IT disrupting business?

1.4 IT and You
Today, IT and information systems touch nearly all aspects of our lives. IT is a part of our social
life, our work, and every business process, and it is no longer the sole responsibility of the IT
department. Just think about how much of your day you spend interacting with technology—your
iPad, PC, and smartphone. It has been reported that the average American checks his/her
phone 46 times every day! That’s an increase of 35% over the 33 looks per day reported in a
similar study just one year earlier. Aggregated across the 185 million American smartphone
users, that’s 8 billion “looks” per day (Eadicicco, 2015).

IT impacts the way you work, the way you learn, the way you communicate and socialize
and the way you entertain yourself. Today, success in any field, be it health care, marketing,
finance, accounting, law, education, sports, entertainment, etc. requires much more than a cur-
sory knowledge of IT. IT is and will remain the foundation of the global economy and is espe-
cially important in the on-demand economy.

On-Demand Workers
A recent survey of the on-demand economy in the United States and of online talent
recruiters (Chriss, 2016) reported an increase in people working in the on-demand economy
who are enjoying a new way of working. Other facts and stats from the survey reveal the U.S.
on-demand workforce as a community of 45 million workers, the majority of whom are younger,
optimistic, and urban-based (Table 1.3).

The survey also revealed that fewer and fewer people are looking for traditional
employment. For example, 91% like the control over where, when, and how they work that the
on-demand jobs offer them. The motivation for most is not to replace a traditional job, but to
earn supplemental income (Table 1.4).

Cloud service is any computing
resource that is provided over the
Internet on demand.


The data also show a strong entrepreneurial drive behind people choosing on-demand work.
Just over one-third of respondents owned a full- or part-time business, and approximately one
quarter reported they are working in the on-demand economy to build a business. This entre-
preneurial spirit is reflected in the ways that on-demand workers are compensated. While the
40-hour workweek is still alive and well, sources of income have changed. Instead of one pay-
check, on-demand workers typically receive their income from three different sources:

1. On-demand work
2. Contracting and consulting
3. Running a business

Changes in Work Status While the on-demand economy provides positive opportunities,
it can also offer limited benefits and inferior infrastructure. Take, for example, the “contractor”
model that companies like Uber use. Initially, Uber set the standard for on-demand business
by labeling its drivers “independent contractors” and essentially claiming that all of its 160,000
drivers were self-employed. This pushed many of the costs of doing business onto the independent
contractors’ shoulders and deprived them of baseline labor protections such as worker’s
compensation, social security contributions, minimum wage, and discrimination protections.

This business model also allowed companies using the Uber model to sidestep federal,
state, and county taxes and insurance premiums and undercut competitors that used a
traditional W-2 hiring model. However, not all on-demand businesses use the Uber model.

Some companies treat their workers as employees from the start, while others have
switched to the W-2 model and both approaches are reaping benefits. Shyp CEO Kevin Gibbon
posted on LinkedIn that the move to employee status was “an investment in a longer-time
relationship with our couriers, which we believe will ultimately create the best experience
for our customers.” After moving to the W-2 model, Shyp had only 1 out of 245 employees
quit and customer complaints decreased at the package delivery company. And Instacart, a
food shopping and delivery service, offered its shoppers the option to convert to part-time
employees so they could offer training to ensure a consistent customer experience and greater
customer satisfaction (National Employment Law Project, 2016).

Regardless of their work status, it would appear that overall on-demand workers are
highly satisfied with their work environment, perhaps because it fits a unique need. Intuit’s on-
demand economy survey reported the following:

• 70% of on-demand workers are satisfied with their work.
• 81% plan to continue working with the same provider over the next year.
• 63% are happier to be working in the on-demand economy.

Overall, on-demand workers are forward-looking, eager to embrace new opportunities, and
want to take charge of their careers.

TABLE 1.3 Profile of U.S. On-Demand Workers

Financial situation had improved over the past year: 23 million
Expected their financial situation to improve over the coming year: 28.8 million
Under 35 years of age: 23 million
Live in urban areas: 18.45 million

TABLE 1.4 Motivation to Work in the On-Demand Economy

Earn supplemental income: 63%
Create and control their own schedule: 46%
Turned to on-demand work because they couldn't find another traditional job: 11%


IT Adds Value to Your Performance and Career
Whether you join the ranks of the on-demand workers, or choose to stay in a traditional job, IT
can greatly enhance your performance at work and the ways you move through your career path.

Staying current in emerging technologies affecting markets is essential to the careers of
knowledge workers, entrepreneurs, managers, and business leaders—not just IT professionals and CIOs.

In the current marketplace, organizations are finding it particularly difficult to find
qualified IT talent, as illustrated in IT at Work 1.2.

IT at Work 1.2

Scott Zulpo Is Facing Stiff Competition
He’s adding a senior project manager, a network analyst, and
a help desk worker to his 55-member IT staff at BCU, a Vernon
Hills, Illinois-based credit union where he is vice president of IT.
And, Zulpo will need to add even more people to keep up with an
increasing demand for tech-driven innovations.

“The challenge is twofold—first finding talent, and then deter-
mining if that talent has the skills, experience and personality to
thrive in the position,” says Zulpo, who’s mindful that “the cost and
impact of not hiring an ‘A’ player is huge.”

Zulpo has his work cut out for him. He's hiring at a time
when very few IT professionals are out of work. Consequently,
competition for tech talent is fierce. The unemployment rate
for tech workers is about 2%, according to reports on recent
data from the U.S. Bureau of Labor Statistics (Bureau of Labor
Statistics, 2016).

And, Zulpo isn’t the only one who’s having a difficult time
finding good IT talent. Many of his fellow IT leaders are seeking
the same skills. Computerworld’s Forecast 2017 survey of 196 IT
professionals found that both project management and technical

support were among the top 10 most sought-after skills among
companies that plan to recruit in the new year.

“The IT labor market is still very hot. The candidate is very
much in the driver’s seat,” says Jason Hayman, market research
manager for IT staffing firm TEKsystems.

Hayman cites a government report that estimates that 500,000
to 1 million IT jobs go unfilled every year, but notes that some ana-
lysts say the figure is closer to 2 million. He says there’s a classic
supply-and-demand scenario working here, with demand for talent
far exceeding supply.

The takeaway is that there are not enough IT workers!
Compiled from: Bureau of Labor Statistics (2016), Pratt (2016),

and Computerworld (2017).

IT at Work Questions
1. What are two reasons why Zulpo had trouble finding qualified IT talent?
2. What types of positions was Zulpo trying to fill?
3. What methods would you recommend to Zulpo to help him in his efforts to recruit new IT personnel?

IT as a Career Fueled by corporate growth, systems expansion, the need for competitive or
unique services to increase business, and security initiatives, companies are increasing their
IT hires. Companies need new tech hires who have a background in both technology and
business and who can articulate IT’s value in meeting business goals. In particular, companies
are seeking IT employees with skills in programming, application development, technical
support, security, cloud, business intelligence, Web development, database administration,
and project management.

According to the U.S. Department of Labor (2016), IT job growth is estimated at 12%
from 2014 to 2024, faster than the average for all other occupations. This means about 488,500
new jobs. The median annual wage for computer and IT occupations was $81,430 in May 2015,
which was considerably higher than the median annual wage of $36,200 for all other occupa-
tions. Here are some common IT jobs and their activities:

IT managers Play a vital role in the implementation and administration of digital tech-
nology. They plan, coordinate, and direct research on the computer-related activities of
firms. In consultation with other managers, they help determine the goals of an organiza-
tion and then implement technology to meet those goals.
Chief technology officers (CTOs) Evaluate the newest and most innovative technolo-
gies and determine how they can be applied for competitive advantage. CTOs develop
technical standards, deploy technology, and supervise workers who deal with the daily IT
issues of the firm. When innovative and useful new ITs are launched, the CTO determines
implementation strategies, performs cost–benefit or SWOT analysis, and reports those
strategies to top management, including the CIO.


IT project managers Develop requirements, budgets, and schedules for their firm’s
IT projects. They coordinate such projects from development through implementation,
working with their organization’s IT workers, as well as clients, vendors, and consultants.
These managers are increasingly involved in projects that upgrade the information secu-
rity of an organization.

Data scientists Manage and analyze massive sets of data for purposes such as target mar-
keting, trend analysis, and the creation of individually tailored products and services. Enter-
prises that want to take advantage of big data use real-time data from tweets, sensors, and
their big data sources to gain insights into their customers' interests and preferences, to create
new products and services, and to respond to changes in usage patterns as they occur. Big data
analytics has increased the demand for data scientists, as described in Career Insight 1.1.

Career Insight 1.1

Data Scientists Analyze Business Data
for Actionable Business Intelligence
Online searches for data scientist are outpacing the number of job
postings by more than 20% and the large business consulting firm,
Price-Waterhouse-Cooper, recently announced they would be add-
ing more than 1,000 data scientists during the next 2 years.

Big data, analytics tools, powerful networks, and greater
processing power have contributed to growth of the field of data
science. According to Glassdoor data (glassdoor.com,  2017), the
median annual salary for data scientists in the United States is
$113,436 and experienced data scientists who manage teams of 5
to 10 people are earning more than $250,000 per annum.

But, it’s not just about the money—data scientists enjoy what
they do. The job is interesting, spanning many different aspects of
the organization and in some cases involves analyzing community
outreach programs supported by organizations.

What Does a Data Scientist Do?
Enterprises need people who are capable of analyzing and finding
insights in data captured from a range of sources, including customer
transactions, click streams, sensors, social media, log files, and GPS
plots. Their mission is to unlock valuable and predictive insights that
will influence business decisions and spur a competitive advantage.
According to Gregg Gordon, VP of the Big Data practice group at Kro-
nos, provider of workforce management solutions in the cloud,

It’s not sitting in a room all day – we take our work
and apply it to customer problems. We’re working and
interacting with customers on a daily basis talking
about real problems, then attempting to replicate,
model and solve them.

An interesting example of what a data scientist can achieve
can be found by studying Jonathan Goldman, the person who
transformed LinkedIn. At the time Goldman joined, LinkedIn had
less than 8 million members. Goldman noticed that existing mem-
bers were inviting their friends and colleagues to join, but they were
not making connections with other members at the rate executives
had expected. A LinkedIn manager said, “It was like arriving at a
conference reception and realizing you don’t know anyone. So you
just stand in the corner sipping your drink—and you probably leave

early.” Goldman began analyzing the data from user profiles and
looked for patterns to predict whose networks a given profile would
land in. While most LinkedIn managers saw no value in Goldman’s
work, Reid Hoffman, LinkedIn’s cofounder and CEO at the time,
understood the power of analytics because of his experiences at
PayPal. With Hoffman’s approval, Goldman applied data analytics to
test what would happen if a member were presented with names of
other members they had not yet connected with, but seemed likely
to know. He displayed the three best new matches for each member
based on his or her LinkedIn profile. Within days, the click-through
rate on those matches skyrocketed and things really took off. Thanks
to this one feature, LinkedIn’s growth increased dramatically.
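The case does not describe Goldman’s actual algorithm, but the idea behind a “people you may know” feature can be illustrated with a simple triadic-closure heuristic: recommend the members who share the most mutual connections. The sketch below is only an illustration; the member names, network data, and function name are invented, not LinkedIn’s.

# Toy member-connection graph (hypothetical data).
network = {
    "amina": {"bo", "carla", "dev"},
    "bo":    {"amina", "carla", "eli"},
    "carla": {"amina", "bo", "fay"},
    "dev":   {"amina", "eli"},
    "eli":   {"bo", "dev"},
    "fay":   {"carla"},
}

def suggest_connections(member, network, top_n=3):
    """Rank non-connections by the number of mutual connections (triadic closure)."""
    already = network[member] | {member}
    scores = {}
    for friend in network[member]:
        for candidate in network[friend]:
            if candidate not in already:
                scores[candidate] = scores.get(candidate, 0) + 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(suggest_connections("amina", network))   # [('eli', 2), ('fay', 1)]

Displaying the top three such suggestions next to a profile is, in spirit, what the feature described above did; production systems weigh many more signals than mutual connections alone.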

Artist or Scientist?
The most successful—and sought-after—data scientists possess a
combination of analytical skills, technical prowess and business
acumen needed to effectively analyze massive data sets while
thinking critically and shifting assumptions on the fly, ultimately
transforming raw intelligence into concise and actionable insights.

The LinkedIn example shows that good data scientists do much
more than simply try to solve obvious business problems. Creative
and critical thinking are part of their job—that is, part analyst and
part artist. They dig through incoming data with the goal of discov-
ering previously hidden insights that could lead to a competitive
advantage or detect a business crisis in enough time to prevent it.
Data scientists often need to evaluate and select those opportunities
and threats that would be of greatest value to the enterprise or brand.

Questions
1. What types of IT careers have the most potential in the current hiring market?

2. What factors does Zulpo take into consideration when he’s
evaluating job applicants?

3. Why is IT a major enabler of business performance
and success?

4. Explain why it is beneficial to be an informed user of IT.

5. Do you think IT job prospects are strong? Explain.

Sources: Darrow (2015), Marr (2016), U.S. Department of Labor (2016), and
Bureau of Labor Statistics (2016).


IT Job Prospects In 2017, only 2% of all IT workers were unemployed. Workers with spe-
cialized technical knowledge and strong communication and business skills, as well as those
with an MBA with a concentration in an IT area, will have the best prospects. Job openings will
be the result of employment growth and the need to replace workers who transfer to other
occupations or leave the labor force (Bureau of Labor Statistics, 2016).

Dow Chemical set up its own social network to help managers identify the talent they need
to carry out projects across its diverse business units and functions. To expand its talent pool,
Dow extended the network to include former employees and retirees.

Other companies are using networks to tap external talent pools. These networks include
online labor markets such as Amazon Mechanical Turk and contest services such as InnoCen-
tive that help solve business problems.

• Amazon Mechanical Turk is a marketplace for work that requires human intelligence. Its
Web service enables companies to access a diverse, on-demand workforce.

• InnoCentive is an “open innovation” company that takes R&D problems in a broad range of
areas such as engineering, computer science, and business and frames them as “challenge
problems” for anyone to solve. It gives cash awards for the best solutions to solvers who
meet the challenge criteria.

Becoming an Informed IT User
Knowing how best to use IT, and how and when to interact with IT personnel (and they with
you), will help you perform better at home and at work and enable you to become an informed
user of technology.

The department or functional area that handles the collection, processing, storing,
analysis and distribution of information using a computer-based tool can be referred to
by many names—some companies refer to it as information technology (IT), while others
refer to it as information systems (IS), management information systems (MIS), IT support,
IT services or computer information systems (CIS). Whatever the name, its purpose is the
same—to support a company’s information needs by developing, operating, securing, and
maintaining one or more information systems.

To become an informed IT user, you will learn how the six components of an information
system—hardware, software, procedures, people, networks, and data—interact to provide you
with the information that you need, when you need it, and in the format you need. These com-
ponents will be discussed in detail in Chapter 2.

By reading this book, you will become an informed user and gain more value from
IT to improve your performance and widen your career opportunities. For example, you
will learn to

• Understand how using IT can improve organizational performance
• Understand how and why IT can benefit organizational growth
• Understand how business can use IT to enhance the customer experience
• Understand how companies use IT to analyze business data and gain important insights
• Be able to offer input into the development and use of IT
• Be able to recommend and select IT applications at work
• Know how to find emerging technologies to make radical improvements in business

processes
• Understand how IT can facilitate teamwork and improve individual productivity
• Appreciate the importance of ethical behavior when using IT and explain the associated

risks and responsibilities
• Foster your entrepreneurial tendencies to start your own on-demand business.

Informed user is a person
knowledgeable about information
systems and IT.


Assuring Your Learning

Discuss: Critical Thinking Questions

1. Why are businesses experiencing a digital transformation?

2. More data are collected in a day now than existed in the world
10 years ago. What factors have contributed to this volume of data?

3. Assume you had no smartphone, other mobile device, or mobile
apps to use for 24 hours. How would that mobile blackout disrupt your
ability to function?

4. Name three highly disruptive digital technologies. Give an example
of one disruption for each technology.

5. Why are enterprises adopting cloud computing?

6. What is the value of M2M technology? Give two examples.

7. Starbucks monitors tweets and other sources of big data. How
might the company increase revenue from big data analytics?

8. Select three companies in different industries, such as banking,
retail store, supermarket, airlines, or package delivery, that you
do business with. What digital technologies does each company

use to engage you, keep you informed, or create a unique customer
experience? How effective is each use of digital technology in keeping
you a loyal customer?

9. Describe two examples of the influence of SMAC on the financial
industry.

10. What is the potential impact of the IoT on the health-care industry?

11. Why does reducing the cycle time of a business process also help
to reduce errors?

12. Research firm Gartner defines competitive advantage as a
difference between a company and its competitors that matters to
customers. Describe one use of M2M technology that could provide a
manufacturer with a competitive advantage.

13. What IT careers are forecasted to be in high demand? Explain why.

14. Why or how would understanding the latest IT trends influence
your career?

Key Terms
agility 11
barriers to entry 23
big data 15
business model 3
business process 9
business process management (BPM) 13
business process reengineering (BPR) 10
business-to-business 15
chief technology officers (CTOs) 19
cloud computing 14
cloud services 17
competitive advantage 11
critical success factor (CSF) 10

cross-functional business process 9
customer experience 7
cycle time 22
dashboards 12
data analytics 3
data science 20
deliverables 9
digital business model 7
digitization 15
flexibility 11
informed user 21
Internet of Things (IoT) 16
IT consumerization 11

IT project managers 20
machine-to-machine (M2M) technology 16
mega trends 14
objectives 8
on-demand economy 4
productivity 7
responsiveness 11
social, mobile, analytics and cloud (SMAC) 13
standard operating procedures (SOPs) 9
SWOT analysis 19
unstructured data 15
wearable technology 12

Explore: Online and Interactive Exercises

1. Research the growing importance of the IoT. Find two forecasts of its
growth. What do they forecast?

2. Go to “9 Successful Digital Disruption Examples” on the IT Business
Edge website. Close the pop-up to view the slideshow and read the
descriptions of each of the ways in which technology is disrupting our
lives. Answer the following questions:

a. Which of the following disruptions resonated best with you and
your lifestyle? Explain.

b. Which of the disruptions was most surprising to you? Why?

c. Rank the disruptions in order of importance to you.
Write a short report explaining your rankings.

Analyze & Decide: Apply IT Concepts to Business Decisions

1. A transportation company is considering investing in a truck tire
with embedded sensors. Outline the benefits of this investment.
Would this investment create a long-term competitive advantage for
the transportation company?

2. Visit the website of UPS (ups.com), Federal Express (fedex.com),
and one other logistics and delivery company.

a. At each site, describe what information is available to customers
before and after they send a package.

b. Compare the three customer experiences. Which one do you
prefer? Why?

3. Visit Dell.com and Apple.com to simulate buying a laptop
computer. Compare and contrast the selection process, degree of
customization, and other buying features. What are the barriers to
entry into this market, based on what you learned from this exercise?

Case 1.2
Business Case: The Internet of Things Comes
to the NFL
People love sports statistics and the more the better. Responding to
this customer demand, the NFL increased the quality and quantity of
statistics available to coaches and fans with radio frequency identifica-
tion (RFID) chips.

Player RFID Project
When the 2015 National Football League season opened with its first game in New
England, each player was equipped with a set of RFID sensors. Each
sensor, about the size of a quarter, is embedded in the players’ shoulder
pads and emits a unique radio frequency. Every stadium used by the
NFL is equipped with 20 receivers that pick up the RFID signals and pin-
point every player on the field. The system also records speed, distance traveled, and
acceleration in real time, as well as the direction the player is facing.
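The case says the receivers pinpoint each player and derive speed and distance in real time. As a rough sketch of that derivation, consecutive timestamped XY fixes can simply be differenced; the sample coordinates and sampling interval below are assumptions for illustration, not NFL or Zebra specifications.

import math

# Hypothetical timestamped samples for one player: (seconds, x meters, y meters).
samples = [(0.00, 10.0, 5.0), (0.04, 10.3, 5.1), (0.08, 10.7, 5.3), (0.12, 11.2, 5.6)]

def track_metrics(samples):
    """Compute total distance traveled and per-interval speeds from XY position fixes."""
    total_distance = 0.0
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = math.hypot(x1 - x0, y1 - y0)     # straight-line distance for this interval
        total_distance += step
        speeds.append(step / (t1 - t0))         # meters per second
    return total_distance, speeds

distance, speeds = track_metrics(samples)
print(f"distance {distance:.2f} m, top speed {max(speeds):.1f} m/s")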

The NFL plans to use the data it collects to power its Xbox One
and Windows NFL apps, allowing fans to call up stats for each player
tied into the highlight clips posted on the app. The data will also be
fed to broadcasters, leveraged for in-stadium displays, and provided
to coaching staff and players.

“We’ve always had these traditional NFL stats,” says Matt
Swensson, senior director of Emerging Products and Technology at
the NFL. “The league has been very interested in trying to broaden that
and bring new statistics to the fans. Along the way, there’s been more
realization about how the data can be leveraged to make workflow
more efficient around the game.”

Zebra Technologies Software Vendor
The NFL’s technology partner in its IoT push was Zebra Technologies of
Lincolnshire, Illinois.

Zebra was well known for manufacturing and selling marking,
tracking, and printing technologies such as thermal barcode label and
receipt printers, RFID smart label printer/encoders, and card and kiosk

printers. As it moved into IoT and M2M applications, Zebra launched
its MotionWorks Sports Solution, which powers the NFL IoT initiative.
Zebra was able to develop RFID tags that blink up to 85 times per sec-
ond to track the motion of athletes at subsecond intervals. Then it had to find a cus-
tomer for the product—so it turned to the biggest fish in the pond—the
NFL. Zebra trialed the tags by equipping more than 2,000 players, 18
NFL stadiums and officials, markers, and pylons. Over the course of the
season, more than 1.7 billion sets of XY player coordinates were meas-
ured, transmitted, and stored during the games. Every stadium was
connected to a command station in San Jose, California, that controls
when the data are collected, where they are sent, and stores them in
the cloud.

The Need for the Right People
An important lesson that Zebra learned is that generic data scien-
tists weren’t sufficient to gain insight into the data. Zebra needed
football experts. “When you look at analytics in football, you really
need people. We had to go out and hire football people. The analyt-
ics from manufacturing weren’t the same as the analytics from foot-
ball. We could see correlations in the data that seemed important and
then found out they weren’t. We had to bring in people that had the
football expertise who could say ‘Look, this is why it matters’,” said
Jill Stelfox, Zebra Technologies Vice President and General Manager,
Location Solutions.

The latest development in this IoT initiative is its integration with
NFL’s fantasy football offerings.

Questions
1. Why did NFL equip its players with RFID tags?

2. What factors contributed to the success of the IoT initiative
at the NFL?

3. What other types of IoT applications can you think of that could
be used in sports stadiums?



Case 1.3
Video Case: Knowing More and Doing More

Teradata is a leading provider of big data and data analytics solutions.
In a video, Teradata explains that when you know the right thing to
do, you can do more of what truly matters for your business and your
customers. Visit Teradata’s website, search for and view the video
entitled “Manufacturing: What Would You Do If You Knew?”™ (the
video runs for 1:26 minutes).

Questions
1. What did you learn from the video?

2. What is the value of knowing more?

References

Accenture. Accenture Technology Vision 2013.

Ashcroft, S. “Fitbit Sold 4.5 Million Trackers Last Quarter and Smashed Financial Estimates.” Wareable.com, August 6, 2015.

Basu, C. “The Six Important Business Objectives of Information Technology.” Chron, 2017. http://smallbusiness.chron.com/six-important-business-objectives-information-technology-25220.html

Boland, M. “What’s Driving the Local on Demand Economy?” BIA/Kelsey blog, May 5, 2015.

Bureau of Labor Statistics. Occupational Outlook Handbook. U.S. Department of Labor, 2016–2017.

Chandler, N. “How FitBit Works.” HowStuffWorks.com, May 2, 2012.

Chriss, A. “How the On-Demand Economy Is Reshaping the 40 Hour Work Week.” 2016.

Computerworld. “2017 Tech Forecast: IT Sharpens Its Focus.” Computerworld, 2017.

Darrow, B. “Data Science Is Still White Hot, but Nothing Lasts Forever.” Fortune, 2015.

Eadicicco, L. “Americans Check Their Phones 8 Billion Times a Day.” Time, December 15, 2015.

glassdoor.com. “Data Scientist Salaries.” April 7, 2017.

Grand View Research. “Wearable Technology Market Analysis By Product (Wrist-Wear, Eye-Wear, Foot-Wear, Neck-Wear, Body-Wear), by Application (Fitness & Wellness, Healthcare, Infotainment, Defense, Enterprise and Industrial) and Segment Forecasts to 2022.” Grand View Research, 2016.

Hammer, M. and J. Champy. Re-engineering the Corporation: A Manifesto for Business Revolution. Updated and revised edition. Harper Business Essentials, 2006.

Jaconi, M. “The ‘On-Demand Economy’ Is Revolutionizing Consumer Behavior—Here’s How.” Business Insider, July 13, 2014.

Kappelman, L., E. McLean, V. Johnson, R. Torres, Q. Nguyen, C. Maurer, and M. Snyder. “The 2016 SIM IT Key Issues and Trends Study.” MIS Quarterly Executive, March 2017.

Libert, B., Y. Wind, and M. B. Fenley. “What Airbnb, Uber, and Alibaba Have in Common.” Harvard Business Review, November 20, 2014.

MacIver, K. “Digital Business in an Era of Disruptive Innovation.” I-CIO.com, November 2015.

Marr, B. “Is Being a Data Scientist Really the Best Job in America?” Forbes, February 25, 2016.

National Employment Law Project. “Employers in the On-Demand Economy: Why Treating Workers as Employees Is Good for Business.” 2016. http://www.nelp.org/content/uploads/Fact-Sheet-Employers-in-the-On-Demand-Economy

Nusca, A. “The Numbers Are in: Apple Is No. 2 in Wearables.” Fortune, August 27, 2015.

Pratt, M. “10 Hottest Tech Skills for 2017.” Computerworld, December 7, 2016.

Primack, D. “GrubHub Makes Major Move in Restaurant Delivery Wars.” Fortune, February 5, 2015.

Schmidt-Subramanian, M., H. Manning, J. Knott, and Murphy. “The Business Impact of Customer Experience, 2013.” Forrester Research, June 10, 2013.

Storbaek, D. “The 5-Step Uber Playbook That Will Disrupt the On-Demand Economy.” TechCrunch.com, October 15, 2015.

U.S. Department of Labor. “Computer and Information Technology Occupations.” 2016. https://www.bls.gov/ooh/computer-and-information-technology/home.htm

Vaughn-Brown, J. “The Digital Transformation Journey: Key Technology Considerations.” CA Technologies, 2014.

Winkler, R. and D. Macmillan. “The Secret Math of Airbnb’s $24 Billion Valuation.” The Wall Street Journal, June 17, 2015.


CHAPTER 2

Information Systems,
IT Architecture, Data Governance,
and Cloud Computing

LEARNING OBJECTIVES

2.1 Name the six components of an information system and
match the various types of information systems to the type of
support needed by business operations and decision-makers.

2.2 Describe an IT infrastructure, an IT architecture, and an
enterprisewide architecture (EA) and compare and contrast
their roles in guiding IT growth and sustaining long-term
performance.

2.3 Explain the business benefits of information management and
understand the importance of data governance and master
data management in providing trusted data that is available
when and where needed to support sustainability.

2.4 Understand the concepts of data centers and cloud computing
and understand how they add value in an organization.

2.5 Describe the different types of cloud services and the various
forms of virtualization and understand how they add value in
an organization.

CHAPTER OUTLINE

Case 2.1 Opening Case: Detoxing Location-Based
Advertising Data at MEDIATA

2.1 IS Concepts and Classifications

2.2 IT Infrastructure, IT Architecture, and
Enterprise Architecture

2.3 Information Management and Data
Governance

2.4 Data Centers and Cloud Computing

2.5 Cloud Services and Virtualization

Case 2.2 Business Case: Data Chaos Creates Risk

Case 2.3 Video Case: Cloud Computing at
Coca-Cola Is Changing Everything


Introduction
One of the most popular business strategies for achieving success is the development of a
competitive advantage. Competitive advantage exists when a company has resources
and capabilities superior to those of its competitors that allow it to achieve either a lower cost
structure or a differentiated product. For long-term business success, companies strive to develop
sustainable competitive advantages, or competitive advantages that cannot be easily copied
by the competition (Porter, 1998). To stay ahead, corporate leaders must constantly seek new
ways to grow their business in the face of rapid technology changes, increasingly empowered
consumers and employees, and ongoing changes in government regulation. Effective ways
to thrive over the long term are to launch new business models and strategies or devise new
ways to outperform competitors. Because these new business models, strategies, and per-
formance capabilities will frequently be the result of advances in technology, the company’s
ability to leverage technological innovation over time will depend on its approach to enter-
prise IT architecture, information management, and data governance. The enterprisewide IT
architecture, or simply the enterprise architecture (EA), guides the evolution, expansion, and
integration of information systems (ISs), digital technology, and business processes. This guid-
ance enables companies to more effectively leverage their IT capability to achieve maximum
competitive advantage and growth over the long term. Information management guides the
acquisition, custodianship, and distribution of corporate data and involves the management
of data systems, technology, processes, and corporate strategy. Data governance, or informa-
tion governance, controls enterprise data through formal policies and procedures. One goal of
data governance is to provide employees and business partners with high-quality data they
can trust and access on demand.

Bad decisions can result from the analysis of inaccurate data, which is widely referred to
as dirty data, and lead to increased costs, decreased revenue, and legal, reputational, and
performance-related consequences. For example, if data are collected and analyzed based on
inaccurate information because advertising was conducted in the wrong location for the
wrong audience, marketing campaigns can become highly skewed and ineffective. Com-
panies must then begin costly repairs to their datasets to correct the problems caused by
dirty data, which lowers customer satisfaction and wastes a firm’s resources.
One example of an organization taking strides to clean the dirty data collected through inac-
curate marketing is the data management platform, MEDIATA, which runs bidding systems
and ad location services for firms looking to run ads on websites (see Table 2.1). Let’s see
how they did this.

Dirty data are data of such poor
quality that they cannot be trusted
or relied upon for decisions.


Case 2.1 Opening Case

Photo: road sign reading “DIRTY DATA AHEAD.” (Courtesy of Billy Ray)

Detoxing Location-Based Advertising Data
at MEDIATA

Company Overview
MEDIATA uses its audience and media delivery platform to deliver
thousands of successful online advertising campaigns across Australia,
Hong Kong, and New Zealand. Known as a “programmatic solution
specialist,” the MEDIATA platform is truly cutting-edge. It runs bidding
systems and ad location services for companies that are looking to run
ads on websites and provides its clients with high-impact, fully man-
aged, 100% transparent advertising campaigns that produce results.
MEDIATA is committed to shaking up the online advertising industry
and is evolving into a fast-growing international business. MEDIATA
clients include Qantas, LG, Virgin Money, Konica Minolta, Optus, Carls-
berg, Honda, ACCOR Hotels, Air New Zealand, Heinz, Woolworths, Citi-
bank, and JP Morgan.

The Problem
MEDIATA uses IP address data to locate customers and measure ad effectiveness.
Unfortunately, as much as 80% of ad inventories come with an incor-
rect location, and MEDIATA realized that this “dirty data” was adversely
affecting their business. Location-based advertising provides organi-
zations and companies alike with massive benefits. Target customers
can be reached easily and effectively through marketing campaigns
tailored specifically for them. For example, utility companies and
internet service providers usually have certain areas or regions that
they service. Using location-based targeting (see Figure  2.1), these
companies can target television, newspaper, and online display ads to
attract new customers. Another benefit includes the reduced waste of
running marketing campaigns in unprofitable areas. Firms can choose
precisely where their advertisements are displayed without wasting
resources on customer segments that will not respond because of
location or preference discrepancies.

Advanced data analytics in location-based advertising also allows
companies like MEDIATA to reach customers where and when they are
in decision-making mode using programmatic bidding algorithms and
ad inventories. Browser-based ads use these algorithms to predict
which customer segments will click on certain ads at certain times of
the day. Automated bidding then ensues, with the ad spot on the page
going to the highest bidder (Cailean,  2016). However, the data must
be accurate to be useful and MEDIATA realized that their data could be
much better than it was. Given the importance of this technology to
advertisers and digital advertising agencies, there are overwhelming
issues to overcome.

The issues stem from outdated methods of locating Internet users
through IP addresses. These old systems do not pinpoint where exactly
traffic is coming from, rather they give advertising agencies broad geo-
graphic regions to work with, and the ads go to random coordinates
within the regions. Since the value of these activities comes from having
accurate targeting, the inaccuracies of the antiquated systems severely
impact profitability. As targeting regions shrink, information becomes
more valuable and accurate, but even small inaccuracies dilute the value
of demographic information applied to an audience.
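The case does not spell out the bidding logic, but the basic mechanics can be sketched as a geo-filtered auction: only campaigns targeting the viewer’s resolved region are eligible, and the highest bid wins the slot. The campaign names, regions, and bids below are invented for illustration.

# Hypothetical campaigns with target regions and maximum bids.
campaigns = [
    {"advertiser": "UtilityCo", "regions": {"Sydney"}, "max_bid": 4.20},
    {"advertiser": "NZ-ISP", "regions": {"Auckland"}, "max_bid": 3.80},
    {"advertiser": "RetailHK", "regions": {"Sydney", "Hong Kong"}, "max_bid": 5.10},
]

def run_auction(viewer_region, campaigns):
    """Keep only campaigns targeting the viewer's resolved location; highest bid wins the slot."""
    eligible = [c for c in campaigns if viewer_region in c["regions"]]
    return max(eligible, key=lambda c: c["max_bid"]) if eligible else None

winner = run_auction("Sydney", campaigns)
print(winner["advertiser"] if winner else "no fill")   # RetailHK

If the IP-derived region is wrong (dirty data), the filter admits the wrong campaigns, which is exactly the profitability problem described above.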

The Solution
In 2016, MEDIATA established a data governance program in which it
partnered with Skyhook, a U.S. global location software company to

TABLE 2.1 Opening Case Overview

Company: MEDIATA was launched as Valued Interactive Media (VIM) in 2009 and rebranded as MEDIATA in 2013

Industry: Communications; advertising

Product Lines: A wide range of programmatic solutions and products that provide practical solutions for digital marketing campaigns, delivering successful online advertising campaigns to organizations across Australia, Hong Kong, and New Zealand

Digital Technology: Information management and data governance to increase the trust and accessibility of data and facilitate the company’s vision

Business Vision: Shake up the online advertising industry; improve transparency and foster greater cooperation between partners

Website: www.mediataplatform.com


FIGURE 2.1 Location-based advertising.

improve the effectiveness of MEDIATA’s user profile data by more precisely locating IP addresses,
resolving MEDIATA’s challenges related to dirty data. Skyhook’s Context Accelerator Hyperlocal IP
uses big data analytics to provide over 1 billion IP addresses to advertising platforms, and it
cleaned MEDIATA’s dirty data to pinpoint customers within 100 meters, thus increasing ad
effectiveness for its clients.

Now, every time a device like a cell phone or laptop requests a location, the on-device
software scans for Wi-Fi, GPS, or cell tower data. Combining all of these data points allows
Skyhook to provide extremely accurate coordinates and pass this information along to MEDIATA
to use.

While this approach still is not perfect, it allows MEDIATA’s advertisements to become closer
than ever to their target customers. A nine-month study conducted after implementing Skyhook
showed that MEDIATA saw a 20% increase in marketing campaign effectiveness. Creating and
employing this data governance system allowed MEDIATA to clean its datasets and create new,
effective methods to reach target audiences.

Questions

1. What business challenges did MEDIATA face because of its dirty data?

2. What is the function of location-based advertising?

3. Why is it important to maintain accurate location data?

4. How did Skyhook and data governance enable MEDIATA to achieve its vision?

5. What benefits did MEDIATA achieve as a result of implementing data governance?

Sources: Compiled from Cailean (2016), Schneider (2014), and Schneider (2015).

2.1 IS Concepts and Classification

Before we begin to explore the value of information systems (ISs) to an organization, it’s use-
ful to understand what an IS is, what it does, and what types of ISs are typically found at differ-
ent levels of an organization.

In addition to supporting decision-making, coordination, and control in an organization,
ISs also help managers and workers analyze problems, visualize complex sets of data, and cre-
ate new products. ISs collect data (input) and manipulate it (process), then generate and
distribute reports (output); based on these data, specific IT services, such as processing customer
orders and generating payroll, are delivered to the organization. Finally, the IS saves (stores) the
data for future use. In addition to the four IPOS functions, an information system needs feedback
from its users and other stakeholders to help improve future systems, as demonstrated in
Figure 2.2.

The following example demonstrates how the components of IPOS work together: To
access a website, Amanda opens an Internet browser using the keyboard and enters a Web
address into the browser (input). The system then uses that information to find the correct web-
site (processing) and the content of the desired site is displayed in the Web browser (output).
Next, Amanda bookmarks the desired website in the Web browser for future use (storage).
The system then records the time that it took to produce the output to compare actual versus
expected performance (feedback).

An information system (IS) is
a combination of information
technology and people’s activities
using the technology to support
business processes, operations,
management, and decision-
making at different levels of the
organization.

IPOS is the cycle of inputting,
processing, outputting, and
storing information in an
information system.
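To make the IPOS cycle concrete, the following minimal sketch walks one pass of input, processing, output, and storage and returns a simple timing figure as feedback. The order data, function name, and timing metric are ours, invented for illustration; they do not come from the text.

import time

def ipos_cycle(raw_orders):
    """Walk one pass of input -> processing -> output -> storage, returning feedback."""
    start = time.perf_counter()

    data = [o.strip() for o in raw_orders if o.strip()]          # INPUT: capture and clean the data
    totals = {}
    for order in data:                                           # PROCESSING: aggregate by product
        product, qty = order.split(",")
        totals[product] = totals.get(product, 0) + int(qty)

    report = [f"{product}: {qty} units" for product, qty in sorted(totals.items())]
    print("\n".join(report))                                     # OUTPUT: distribute the report

    storage = {"report": report, "raw": data}                    # STORAGE: keep results for reuse

    elapsed = time.perf_counter() - start                        # FEEDBACK: a performance metric
    return storage, elapsed

store, seconds = ipos_cycle(["widget,3", "gadget,2", "widget,1", ""])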


Components of an IS
A computerized IS consists of six interacting components. Regardless of type and where and by
whom they are used within an organization, the components of an IS must be carefully man-
aged to provide maximum benefit to an organization (see Figure 2.3).

FIGURE 2.2 IPOS cycle. Input (data, information, knowledge, instructions) flows to
processing (programs, equipment), which produces output (reports, graphics, calculations);
storage (hard drive, server, USB) holds the data, and feedback (error reports, performance
metrics) returns to the cycle.

FIGURE 2.3 Components of an IS: hardware, software, people, procedures, and network,
with data at the center.

1. Hardware Any physical device used in a computerized IS. Examples include central pro-
cessing unit (CPU), sound card, video card, network card, hard drive, display, keyboard,
motherboard, processor, power supply, modem, mouse, and printer.

2. Software A set of machine-readable instructions (code) that makes up a computer
application that directs a computer’s processor to perform specific operations. Computer
software is nontangible, contrasted with system hardware, which is the physical compo-
nent of an IS. Examples include Internet browser, operating system (OS), Microsoft Office,
Skype, and so on.

3. People Any person involved in using an IS. Examples include programmers, operators,
help desk staff, and end users.

4. Procedures Documentation containing directions on how to use the other components
of an IS. Examples include operational manual and user manual.

5. Network A combination of lines, wires, and physical devices connected to each other
to create a telecommunications network. In computer networks, networked computing

30 CHAPTER 2 Information Systems, IT Architecture, Data Governance, and Cloud Computing

devices exchange data with each other using a data link. The connections between nodes
are established using either cable media or wireless media. Networks can be internal
or external. If they are available only internally within an organization, they are called
“intranets.” If they are available externally, they are called “extranets.” The best-known
example of a computer network is the Internet.

6. Data Raw or unorganized facts and figures (such as invoices, orders, payments, customer
details, product numbers, product prices) that describe conditions, ideas, or objects.

Data, Information, Knowledge, and Wisdom
As you can see in Figure 2.3, data is the central component of any information system. Without
data, an IS would have no purpose and companies would be unable to conduct business. Gener-
ally speaking, ISs process data into meaningful information that produces corporate knowl-
edge and ultimately creates wisdom that fuels corporate strategy.

Data are the raw material from which information is produced; the quality, reliability, and
integrity of the data must be maintained for the information to be useful. Data are the raw facts
and figures that are not organized in any way. Examples are the number of hours an employee
worked in a certain week or the number of new Ford vehicles sold from the first quarter (Q1) of
2015 through the second quarter (Q2) of 2017 (Figure 2.4).

Information is an organization’s most important asset, second only to people. Information
provides the “who,” “what,” “where,” and “when” of data in a given context. For example,

Data describe products,
customers, events, activities, and
transactions that are recorded,
classified, and stored.

Information is data that have
been processed, organized, or
put into context so that they have
meaning and value to the person
receiving them.

Knowledge adds understanding,
experience, accumulated learning,
and expertise as they apply to a
current problem or activity, to
information.

FIGURE 2.4 Examples of data, information, knowledge, and wisdom. Data (raw figures):
the number of new vehicles sold, for example 17, 25, 54, 12, 68, 19, 39, 42, 72. Information
(who, what, where, when): a chart of new vehicle sales by quarter from Q1 2015 through
Q2 2017. Knowledge (how): use the information to determine the reasons for the consistent
downward trend in sales from June 2016 to June 2017. Wisdom (why): creatively assess the
knowledge to develop innovative policies and procedures that reverse the downward trend
in sales.


summarizing the quarterly sales of new Ford vehicles from Q1 2015 through Q2 2017 provides
information that shows sales have steadily decreased from Q2 2016.

Knowledge is used to answer the question “how.” In our example, it would involve deter-
mining how the trend can be reversed, for example, customer satisfaction can be improved,
new features can be added, and pricing can be adjusted.

Wisdom is more abstract than data and information (that can be harnessed) and
knowledge (that can be shared). Wisdom adds value and increases effectiveness. It answers the
“why” in a given situation. In the Ford example, wisdom would be corporate strategists evalu-
ating the various reasons for the sales drop, creatively analyzing the situation as a whole, and
developing innovative policies and procedures to reverse the recent downward trend in new
vehicle sales.
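A small sketch with invented quarterly figures shows how raw data become information and how a simple trend check feeds the "how" question that knowledge work answers; the "why" of wisdom remains a human judgment. The figures and variable names below are assumptions for illustration only.

# Hypothetical new-vehicle sales by quarter (raw data).
sales = {"Q3-2016": 61, "Q4-2016": 55, "Q1-2017": 49, "Q2-2017": 42}

# INFORMATION: put the raw figures into context (what was sold, and when).
quarters = list(sales)
summary = [f"{quarter}: {sales[quarter]} vehicles sold" for quarter in quarters]

# Starting point for KNOWLEDGE: detect how sales are moving so managers can ask how to respond.
changes = [sales[b] - sales[a] for a, b in zip(quarters, quarters[1:])]
trend = "downward" if all(change < 0 for change in changes) else "mixed"

print("\n".join(summary))
print(f"Sales trend across the period: {trend}")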

ISs collect or input and process data to create and distribute reports or other outputs based
on information gleaned from the raw data to support decision-making and business processes
that, in turn, produce corporate knowledge that can be stored for future use. Figure 2.5 shows
the input-processing-output-storage (IPOS) cycle.

Wisdom is a collection of
values, ethics, moral codes, and
prior experiences that form an
evaluated understanding or
common-sense judgment.

FIGURE 2.5 Input-processing-output-storage model. Input: data collected, captured,
scanned, or snapped from transactions. Processing: working with the information by
changing, calculating, and manipulating it. Output: showing results on screen, in hard copy,
as digital copies, or in archives. Storage: temporary memory (RAM), hard disks, flash memory,
the cloud. People (users, clients, customers, operators, technicians, governments, companies)
communicate with the system by sending results, collecting data, and providing feedback.

Types of ISs
An IS may be as simple as a single computer and a printer used by one person, or as complex
as several thousand computers of various types (tablets, desktops, laptops, mainframes) with
hundreds of printers, scanners, and other devices connected through an elaborate network
used by thousands of geographically dispersed employees. Functional ISs that support busi-
ness analysts and other departmental employees range from simple to complex, depending on
the type of employees supported. The following examples show the support that IT provides to
major functional areas.

1. Marketing Utilizing IBM software, Bolsa de Comercio de Santiago, a large stock exchange
in Chile, is able to process its ever-increasing, high-volume trading in microseconds. The
Chilean stock exchange system can do the detective work of analyzing current and past
transactions and market information, learning, and adapting to market trends and con-
necting its traders to business information in real time. Immediate throughput in combina-
tion with analytics allows traders to make more accurate decisions.

2. Sales According to the New England Journal of Medicine, one in five patients suffers
from preventable readmissions, which cost taxpayers over $17 billion a year. In the past,
hospitals have been penalized for high readmission rates with cuts to the payments they
receive from the government (Zuckerman et al.,  2016). Using effective management
information systems (MISs), the health-care industry can leverage unstructured informa-
tion in ways not possible before, according to Matt McClelland, manager of information


governance for Blue Cross Blue Shield of North Carolina. “With proper support, informa-
tion governance can bridge the gaps among the need to address regulation and litiga-
tion risk, the need to generate increased sales and revenue, and the need to cut costs
and become more efficient. When done right, information governance positively impacts
every facet of the business,” McClelland said in the Information Governance Initiative
(Jarousse, 2016).

Figure  2.6 illustrates the classification of the different types of ISs used in organiza-
tions, the typical level of workers who use them and the types of input/output (I/O) pro-
duced by each of the ISs. At the operational level of the organization, line workers use
transaction processing systems (TPSs) to capture raw data and pass it along (output) to
middle managers. The raw data is then input into office automation (OA) and MISs by middle
managers to produce information for use by senior managers. Next, information is input into
decision support systems (DSSs) for processing into explicit knowledge that will be used
by senior managers to direct current corporate strategy. Finally, corporate executives input
the explicit knowledge provided by the DSSs into executive information systems (EISs)
and apply their experience, expertise, and skills to create wisdom that will lead to new cor-
porate strategies.

FIGURE 2.6 Hierarchy of ISs, input/output, and user levels: line workers use transaction
processing systems (TPS) to produce data; middle managers use management information
systems (MIS) to produce information; senior managers use decision support systems (DSS)
to produce knowledge; and executives use executive information systems (EIS) to apply wisdom.

Transaction Processing System (TPS)
A TPS is designed to process specific types of data input from ongoing transactions. TPSs can
be manual, as when data are typed into a form on a screen, or automated by using scanners or
sensors to capture barcodes or other data (Figure 2.7). TPSs are usually operated directly by
frontline workers and provide the key data required to support the management of operations.

Organizational data are processed by a TPS, for example, sales orders, reservations, stock
control, and payments by payroll, accounting, financial, marketing, purchasing, inventory con-
trol, and other functional departments. The data are usually obtained through the automated or
semiautomated tracking of low-level activities and basic transactions. Transactions are either:

• internal transactions that originate or occur within the organization, for example, payroll,
purchases, budget transfers, and payments (in accounting terms, they are referred to as
accounts payable); or

• external transactions that originate from outside the organization, for example, from cus-
tomers, suppliers, regulators, distributors, and financing institutions.

TPSs are essential systems. Transactions that are not captured can result in lost sales, dis-
satisfied customers, unrecorded payments, and many other types of data errors with financial


impacts. For example, if the accounting department issued a check to pay an invoice (bill)
and it was cashed by the recipient, but information about that transaction was not captured,
then two things happen. First, the amount of cash listed on the company’s financial state-
ments is incorrect because no deduction was made for the amount of the check. Second, the
accounts payable (A/P) system will continue to show the invoice as unpaid, so the accounting
department might pay it a second time. Likewise, if services are provided, but the transactions
are not recorded, the company will not bill for them and thus lose service revenue.

Batch versus Online Real-Time Processing Data captured by a TPS are pro-
cessed and stored in a database; they then become available for use by other systems.
Processing of transactions is done in one of two modes:

1. Batch processing A TPS in batch processing mode collects all transactions for a day,
shift, or other time period, and then processes the data and updates the data stores. Pay-
roll processing done weekly or bi-weekly is an example of batch mode.

2. Online transaction processing (OLTP) or real-time processing The TPS processes each
transaction as it occurs, which is what is meant by the term real-time processing. In order
for OLTP to occur, the input device or website must be directly linked via a network to the
TPS. Airlines need to process flight reservations in real time to verify that seats are available.

Batch processing costs less than real-time processing. A disadvantage is that data are out of
date between batch runs because they are not updated immediately, in real time.
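A minimal sketch of the two modes, using invented account transactions: the online path posts each transaction the moment it occurs, while the batch path queues transactions and posts them in one end-of-period run. The account names, amounts, and function names are assumptions for illustration.

balances = {"acct-1": 100.0, "acct-2": 250.0}

def post(txn, balances):
    """Apply a single transaction to the account balances."""
    balances[txn["account"]] += txn["amount"]

# Online (real-time) processing: post each transaction as it occurs.
def process_online(txn, balances):
    post(txn, balances)

# Batch processing: queue transactions during the period and post them in one run.
batch_queue = []

def queue_for_batch(txn):
    batch_queue.append(txn)

def run_batch(balances):
    while batch_queue:
        post(batch_queue.pop(0), balances)

process_online({"account": "acct-1", "amount": -20.0}, balances)   # balance updates immediately
queue_for_batch({"account": "acct-2", "amount": 75.0})             # balance is stale until the batch runs
run_batch(balances)                                                # end-of-day update
print(balances)   # {'acct-1': 80.0, 'acct-2': 325.0}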

Processing Impacts Data Quality As data are collected or captured, they are vali-
dated to detect and correct obvious errors and omissions. For example, when a customer sets
up an account with a financial services firm or retailer, the TPS validates that the address, city,
and postal code provided are consistent with one another and also that they match the credit
card holder’s address, city, and postal code. If the form is not complete or errors are detected,
the customer is required to make the corrections before the data are processed any further.
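The kind of consistency check described above can be sketched as follows; the field names and the tiny city/postal-code lookup are placeholders rather than a real address-verification service.

# Placeholder lookup of valid city/postal-code pairs (a real TPS would call a verification service).
POSTAL_DIRECTORY = {("Boone", "28607"), ("Buffalo", "14201"), ("Riyadh", "11564")}

def validate_account_form(form, card_billing):
    """Collect validation errors instead of silently accepting incomplete or inconsistent data."""
    errors = []
    for field in ("name", "address", "city", "postal_code"):
        if not form.get(field):
            errors.append(f"missing {field}")
    if (form.get("city"), form.get("postal_code")) not in POSTAL_DIRECTORY:
        errors.append("city and postal code do not match")
    if form.get("postal_code") != card_billing.get("postal_code"):
        errors.append("postal code does not match the cardholder's billing record")
    return errors

form = {"name": "A. Khan", "address": "12 Elm St", "city": "Boone", "postal_code": "28607"}
print(validate_account_form(form, {"postal_code": "28607"}))   # [] -> accepted for further processing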

Data errors detected later may be time-consuming to correct or cause other problems. You
can better understand the difficulty of detecting and correcting errors by considering identity
theft. Victims of identity theft face enormous challenges and frustration trying to correct data
about them.

Management Information System (MIS)
An MIS is built on the data provided by TPS. MISs are management-level systems that are used
by middle managers to help ensure the smooth running of an organization in the short to
medium term. The highly structured information provided by these systems allows managers

FIGURE 2.7 Scanners automate the input of data into a
transaction processing system (TPS). (Photo © Jan_Neville/iStockphoto)


to evaluate an organization’s performance by comparing current with previous outputs. Func-
tional areas or departments―accounting, finance, production/operations, marketing and
sales, human resources, and engineering and design―are supported by ISs designed for their
particular reporting needs. General-purpose reporting systems are referred to as management
information systems (MISs). Their objective is to provide reports to managers for tracking
operations, monitoring, and control.

Typically, a functional system provides reports about such topics as operational efficiency,
effectiveness, and productivity by extracting information from databases and processing it
according to the needs of the user. Types of reports include the following:

• Periodic These reports are created or run according to a pre-set schedule. Examples are
daily, weekly, and quarterly. Reports are easily distributed via e-mail, blogs, internal web-
sites (called intranets), or other electronic media. Periodic reports are also easily ignored if
workers do not find them worth the time to review.

• Exception Exception reports are generated only when something is outside the norm,
either higher or lower than expected. Sales in hardware stores prior to a hurricane may be
much higher than the norm. Or sales of fresh produce may drop during a food contamina-
tion crisis. Exception reports are more likely to be read because workers know that some
unusual event or deviation has occurred.

• Ad hoc, or on demand Ad hoc reports are unplanned reports. They are generated to a
mobile device or computer on demand as needed. They are generated on request to learn
more about a situation, problem, or opportunity.

Reports typically include interactive data visualizations, such as column and pie charts, as
shown in Figure 2.8.

FIGURE 2.8 Sample report produced by an MIS. (Photo © Damir Karan/iStockphoto)
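To make the exception report described above concrete, the following sketch returns only the items that fall outside an expected range; the store names, figures, and thresholds are invented for illustration.

# Hypothetical daily sales by store and the expected range for each.
daily_sales = {"Store 01": 18200, "Store 02": 9400, "Store 03": 31500, "Store 04": 15100}
expected_range = (12000, 25000)   # anything outside this band is an exception

def exception_report(sales, low, high):
    """Return only the items that deviate from the norm, as an MIS exception report would."""
    return {store: amount for store, amount in sales.items() if amount < low or amount > high}

for store, amount in exception_report(daily_sales, *expected_range).items():
    print(f"EXCEPTION {store}: {amount:,} (expected {expected_range[0]:,}-{expected_range[1]:,})")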

Decision Support System (DSS)
A DSS is a knowledge-based system used by senior managers to facilitate the creation of knowl-
edge and allow its integration into the organization. More specifically, a DSS is an interactive
application that supports decision-making by manipulating and building upon the information
from an MIS and/or a TPS to generate insights and new information.

Configurations of a DSS range from relatively simple applications that support a single
user to complex enterprisewide systems. A DSS can support the analysis and solution of a
specific problem, evaluate a strategic opportunity, or support ongoing operations. These sys-
tems support unstructured and semistructured decisions, such as make-or-buy-or-outsource
decisions, or what products to develop and introduce into existing markets.

Degree of Structure of Decisions Decisions range from structured to unstruc-
tured. Structured decisions are those that have a well-defined method for solving and the


data necessary to reach a sound decision. An example of a structured decision is determining
whether an applicant qualifies for an auto loan, or whether to extend credit to a new customer―
and the terms of those financing options. Structured decisions are relatively straightforward
and made on a regular basis, and an IS can ensure that they are done consistently.

At the other end of the continuum are unstructured decisions that depend on human
intelligence, knowledge, and/or experience―as well as data and models to solve. Examples
include deciding which new products to develop or which new markets to enter. Semistruc-
tured decisions fall in the middle of the continuum. DSSs are best suited to support these types
of decisions, but they are also used to support unstructured ones. To provide such support,
DSSs have certain characteristics to support the decision-maker and the overall decision-
making process.
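A structured decision such as the auto-loan qualification mentioned above can be encoded directly as explicit, repeatable rules; the thresholds below are invented for illustration.

def qualifies_for_auto_loan(credit_score, monthly_income, monthly_debt, loan_payment):
    """Rule-based (structured) decision: every criterion is explicit and applied the same way every time."""
    debt_to_income = (monthly_debt + loan_payment) / monthly_income
    return credit_score >= 660 and debt_to_income <= 0.40

print(qualifies_for_auto_loan(credit_score=700, monthly_income=5200,
                              monthly_debt=900, loan_payment=450))   # True

Unstructured decisions, such as which new markets to enter, cannot be reduced to a fixed rule set like this, which is why a DSS supplies data and models rather than a predetermined procedure.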

The main characteristic that distinguishes a DSS from an MIS is the inclusion of models.
Decision-makers can manipulate models to conduct experiments and sensitivity analyses, for
example, what-if and goal seeking. What-if analysis refers to changing assumptions or data
in the model to observe the impacts of those changes on the outcome. For example, if sales
forecasts are based on a 5% increase in customer demand, a what-if analysis would replace the
5% with higher and/or lower estimates to determine what would happen to sales if demand
changed. With goal seeking, the decision-maker has a specific outcome in mind and needs
to determine how that outcome could be achieved and whether it is feasible to achieve that
desired outcome. A DSS can also estimate the risk of alternative strategies or actions.
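The 5% demand example can be made concrete with a tiny model; the baseline figures are invented, and the goal-seeking step uses a simple algebraic back-solve rather than any particular DSS product.

BASE_UNITS = 40_000          # hypothetical baseline unit sales
PRICE = 25.0
UNIT_COST = 17.0

def profit(demand_growth):
    """Simple sales model: profit as a function of the assumed demand growth rate."""
    units = BASE_UNITS * (1 + demand_growth)
    return units * (PRICE - UNIT_COST)

# What-if analysis: replace the 5% assumption with alternative estimates.
for growth in (-0.02, 0.05, 0.10):
    print(f"growth {growth:+.0%} -> profit {profit(growth):,.0f}")

# Goal seeking: what growth rate is needed to reach a target profit?
target = 360_000
needed_growth = target / (BASE_UNITS * (PRICE - UNIT_COST)) - 1
print(f"growth needed for {target:,}: {needed_growth:.1%}")   # 12.5%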

California Pizza Kitchen (CPK) uses a DSS to support inventory decisions. CPK has over 200
locations in 32 U.S. states and 13 other countries, including 17 California Pizza Kitchen non-
traditional, franchise concepts designed for airports, universities, and stadiums. Maintaining
optimal inventory levels at all its restaurants was challenging and time-consuming. The original
MIS was replaced by a DSS to make it easy for the chain’s managers to maintain updated records,
generate reports as and when needed, and make corporate- and restaurant-level decisions.
Many CPK restaurants reported a 5% increase in sales after the DSS was implemented.

Executive Information System (EIS)
EISs are strategic-level information systems that help executives and senior managers analyze
the environment in which the organization exists. They typically are used to identify long-term
trends and to plan appropriate courses of action. The information in such systems is often
weakly structured and comes from both internal and external sources. EISs are designed to be
operated directly by executives without the need for intermediaries and easily tailored to the
preferences of the individual using them. An EIS organizes and presents data and information
from both external data sources and internal MIS or TPS in an easy-to-use dashboard format to
support and extend the inherent capabilities of senior executives.

Initially, EISs were custom-made for an individual executive. However, a number of
off-the-shelf EIS packages now exist and some enterprise-level systems offer a customizable
EIS module.

The characteristics of the various types of ISs are summarized in Table 2.2.

Here’s an example of how these ISs are used together to add value in an organization.
Day-to-day transaction data collected by the TPS are converted into prescheduled summa-
rized reports by middle managers using an MIS. The findings in these reports are then analyzed
by senior managers who use a DSS to support their semistructured or unstructured decision-
making. DSSs contain models that consist of a set of formulas and functions, such as statistical,
financial, optimization, and/or simulation models. Corporations, government agencies, the
military, health care, medical research, major league sports, and nonprofits depend on their
DSSs to answer what-if questions to help reduce waste in production operations, improve
inventory management, support investment decisions, and predict demand and help sustain
a competitive edge.

Customer data, sales, and other critical data produced by the DSS are then selected for
further analysis, such as trend analysis or forecasting demand and are input into an EIS for

use by top-level management, who add their experience and expertise to make unstructured
decisions that will affect the future of the business.

TABLE 2.2 Characteristics of Types of Information Systems

TPS: Used by operations personnel; produce information for other ISs; use internal and external
data; efficiency oriented.

MIS: Used by lower and middle managers; based on internal information; support structured
decisions; inflexible; lack analytical capabilities; focus on past and present data.

DSS: Used by senior managers; support semistructured or unstructured decisions; contain models
or formulas that enable sensitivity analysis, what-if analysis, goal seeking, and risk analysis; use
internal and external data plus data added by the decision-maker, who may have insights
relevant to the decision situation; predict the future.

EIS: Used by C-level managers; easy-to-use, customizable interface; support unstructured
decisions; use internal and external data sources; focus on the effectiveness of the organization;
very flexible; focus on the future.

Figure  2.9 shows how the major types of ISs relate to one another and how data flow
among them. In this example,

1. Data from online purchases are captured and processed by the TPS and then stored in the
transactional database.

2. Data needed for reporting purposes are extracted from the database and used by the MIS
to create periodic, ad hoc, or other types of reports.

3. Data are output to a DSS where they are analyzed using formulas, financial ratios,
or models.

ISs Exist within Corporate Culture
It is important to remember that ISs do not exist in isolation. They have a purpose and a social
(organizational) context. A common purpose is to provide a solution to a business problem. The
social context of the system consists of the values and beliefs that determine what is admis-
sible and possible within the culture of the organization and among the people involved. For
example, a company may believe that superb customer service and on-time delivery are critical
success factors. This belief system influences IT investments, among other factors.

The business value of ISs is determined by the people who use them, the business processes
they support, and the culture of the organization. That is, IS value is determined by the


relationships among ISs, people, and business processes―all of which are influenced strongly
by organizational culture.

In an organization, there may be a culture of distrust between the technology and business
employees. No enterprise IT architecture methodology or data governance can bridge this
divide unless there is a genuine commitment to change. That commitment must come from
the highest level of the organization―senior management. Methodologies cannot solve people
problems; they can only provide a framework in which those problems can be solved.

Questions

1. Name the six components of an IS.

2. Describe the differences between data, information, knowledge, and wisdom.

3. Define TPS and give an example.

4. Explain why TPSs need to process incoming data before they are stored.

5. Define MIS and DSS and give an example of each.

6. What characteristic distinguishes a DSS from an MIS?

7. What level of personnel typically uses an EIS?

8. What factors determine IS value?

2.2 IT Infrastructure, IT Architecture,
and Enterprise Architecture
Every enterprise has a core set of ISs and business processes that execute the transactions that
keep it in business. Transactions include processing orders, order fulfillment and delivery, pur-
chasing inventory and supplies, hiring and paying employees, and paying bills. To most effec-
tively utilize its IT assets, an organization must create an IT infrastructure, IT architecture, and
an enterprise architecture (EA) as shown in Figure 2.10.

FIGURE 2.9 Flow of data from point of sale (POS) through processing, storage, reporting,
decision support, and analysis, showing the relationships among the different types of ISs:
a TPS processes raw data from online purchases into a database of transactional data; the
data are extracted, transformed, and loaded (ETL) into a data warehouse for analytical
processing to discover trends and learn insights; an MIS produces reports from the data;
and a DSS applies models to the data for analysis.


IT infrastructure is an inventory of the physical IT devices that an organization owns and
operates. The IT infrastructure describes an organization’s entire collection of hardware, soft-
ware, networks, data centers, facilities, and other related equipment used to develop, test,
operate, manage, and support IT services. It does NOT include the people or process compo-
nents of an information system.

IT architecture guides the process of planning, acquiring, building, modifying, inter-
facing, and deploying IT resources in a single department within an organization. The IT
architecture should offer a way to systematically identify technologies that work together
to satisfy the needs of the departments’ users. The IT architecture is a blueprint for how
future technology acquisitions and deployment will take place. It consists of standards,
investment decisions, and product selections for hardware, software, and communications.
The IT architecture is developed first and foremost based on department direction and
business requirements.

Enterprise architecture (EA) reviews all the information systems across all departments in
an organization to develop a strategy to organize and integrate the organization’s IT infrastruc-
tures to help it meet the current and future goals of the enterprise and maximize the value of
technology to the organization. In this way, EA provides a holistic view of an organization with
graphic and text descriptions of strategies, policies, information, ISs, and business processes
and the relationships between them.

The EA adds value in an organization in that it can provide the basis for organizational
change just as architectural plans guide a construction project. Since a poorly crafted enterprise
architecture (EA) can also hinder day-to-day operations and efforts to execute business strategy,
it is more important than ever before to carefully consider the EA within your organization when
deciding on an approach to business, technology, and corporate strategy. Simply put, EA helps
solve two critical challenges: where an organization is going, and how it will get there.

The success of EA is measured not only in financial terms, such as profitability and return
on investment (ROI), but also in nonfinancial terms, for example, improved customer satisfac-
tion, faster speed to market, and lower employee turnover as diagrammed in Figure 2.11 and
demonstrated in IT at Work 2.1.

EA Helps to Maintain Sustainability
As you read in Chapter 1, the volume, variety, and speed of data being collected or generated
have increased dramatically over the past decade. As enterprise ISs become more complex,

FIGURE 2.10 Comparing IT infrastructure, IT architecture, and enterprise architecture.
(The figure shows departmental IT architectures for HR, accounting, production, sales, and
finance built on the shared IT infrastructure, with the enterprise architecture and its policies
spanning all of them as part of an organizational strategy to maximize IT value.)


FIGURE 2.11 Enterprise architecture success. (The figure shows EA supporting the creating,
leveraging, and maintaining of IT, which leads to success measured as profitability, ROI,
increased customer satisfaction, faster speed to market, and lower employee turnover.)

IT at Work 2.1

A New Enterprise Architecture Improves Data
Quality and EIS Use
Executives at a large chemical corporation were supported by an IS
specifically designed for their needs—called an executive information
system (EIS). The EIS was designed to provide senior managers with
internal and external data and key performance indicators (KPIs) that
were relevant to their specific needs. Tech Note 2.1 describes KPIs.
As with any system, the value of the EIS depends on the data quality.

Too Much Irrelevant Data
Unfortunately, the EIS was a failure. Executives soon discovered that
only half of the data available through the EIS related to their level of
analysis and decision-making at the corporate level. A worse prob-
lem was that the data they needed were not available when and
how they wanted them. For example, executives needed to analyze
current detailed sales revenue and cost data for every strategic busi-
ness unit (SBU), product line, and operating business to compare
performance. But, data were not in standardized format as needed,
making analysis difficult or impossible. A large part of the problem
was that SBUs reported sales revenues in different time frames (e.g.,
daily, weekly, monthly, or quarterly), and many of those reports
were not available when needed. As a result, senior management
could not get a trusted view of the company’s current overall perfor-
mance and did not know which products were profitable.

There were two reasons for the failure of the EIS:

1. IT architecture was not designed for customized reporting
The design of the IT architecture had been based on financial
accounting rules. That is, the data were organized to make it
easy to collect and consolidate the data needed to prepare
financial statements and reports that had to be submitted
to the SEC (Securities and Exchange Commission) and other
regulatory agencies. These statements and reports have
well-defined or standardized formats and only need to be pre-
pared at specific times during the year, typically annually or
quarterly. The organization of the data (for financial reporting)

did not have the flexibility needed for the customized ad hoc
(unplanned) data needs of the executives. For example, it was
nearly impossible to generate customized sales performance
(nonfinancial) reports or do ad hoc analyses, such as com-
paring inventory turnover rates by product for each region
for each sales quarter. Because of lags in reports from various
SBUs, executives could not trust the underlying data.

2. Complicated user interface Executives could not easily review
the KPIs. Instead, they had to sort through screens packed with
too much data—some of interest and some irrelevant. To com-
pensate for poor interface design, several IT analysts themselves
had to do the data and KPI analyses for the executives—delaying
response time and driving up the cost of reporting.

Solution: New Enterprise Architecture with
Standardized Data Formats
The CIO worked with a task force to design and implement an entirely
new EA. Data governance policies and procedures were imple-
mented to standardize data formats companywide. Data govern-
ance eliminated data inconsistencies to provide reliable KPI reports
on inventory turns, cycle times, and profit margins of all SBUs.

The new architecture was business-driven instead of financial
reporting-driven. It was easy to modify reports—eliminating the
costly and time-consuming ad hoc analyses. Fewer IT resources are
needed to maintain the system. Because the underlying data are
now relatively reliable, EIS use by executives increased significantly.

IT at Work Questions
1. Why was an EIS designed and implemented at the large

chemical corporation?
2. What problems did the executives have with the EIS?
3. What were the two reasons for those EIS problems?
4. How did the CIO improve the EIS?
5. What are the benefits of the new IT enterprise architecture?
6. What are the benefits of data governance?


long-range IT planning is critical. Companies cannot simply add storage, new apps, or data ana-
lytics on an as-needed basis and expect those additional IT assets to work with existing systems.

The relationship between complexity and planning for the future is easier to see in physical
things such as buildings and transportation systems. For example, if you are constructing a
simple holiday cabin in a remote area, there is no need to create a detailed plan for future
expansion. On the other hand, if you are building a large commercial development in a highly
populated area, you’re not likely to succeed without a detailed project plan. Relating this to
the case of enterprise ISs, if you are building a simple, single-user, nondistributed system, you
would not need to develop a well-thought-out growth plan. However, this approach would not
be feasible to enable you to successfully manage big data, copious content from mobiles and
social networks, and data in the cloud. Instead, you would need a well-designed set of plans,
or blueprints, provided by an EA to align IT with business objectives by guiding and controlling
hardware acquisition, software add-ons and upgrades, system changes, network upgrades,
choice of cloud services, and other digital technology investments that you will need to make
your business sustainable.

There are two specific strategic issues that the EA is designed to address:

1. IT systems’ complexity IT systems have become unmanageably complex and expensive
to maintain.

2. Poor business alignment Organizations find it difficult to keep their increasingly expen-
sive IT systems aligned with business needs.

Business and IT Benefits of EA Having the right EA in place is important for the fol-
lowing reasons:

• EA cuts IT costs and increases productivity by giving decision-makers access to information,
insights, and ideas where and when they need them.

• EA determines an organization’s competitiveness, flexibility, and IT economics for the next
decade and beyond. That is, it provides a long-term view of a company’s processes, sys-
tems, and technologies so that IT investments do not simply fulfill immediate needs.

• EA helps align IT capabilities with business strategy―to grow, innovate, and respond
to market demands, supported by an IT practice that is 100% in accord with business
objectives.

• EA can reduce the risk of buying or building systems and enterprise applications that are
incompatible or unnecessarily expensive to maintain and integrate.

Tech Note 2.1

Key Performance Indicators (KPIs)
KPIs are a set of quantifiable measures used to evaluate factors that
are crucial to the success of an organization. KPIs present data in easy-
to-comprehend and comparison-ready formats to gauge or compare
performance in terms of meeting an organization’s operational and
strategic goals. KPIs are used in four main areas: increasing revenue;
reducing costs; improving process cycle-time; and improving cus-
tomer satisfaction. Examples of key comparisons include actual versus
budget, actual versus forecasted, and the ROI for this year versus prior
years. KPIs help reduce the complex nature of organizational perfor-
mance to a small number of understandable measures, including:

• Financial KPIs: accounts payable turnover; inventory turn-
over; net profit margin; sum of difference between planned
and actual project budgets

• Social media KPIs: social traffic and conversions (number of
visitors who are converted to customers); likes; new followers
per week; social visits and leads

• Sales and marketing KPIs: cost per lead; how much revenue
a marketing campaign generates; number of customer
complaints; cycle time from customer request to delivery,
percentage of correspondence replied to on time

• Operational and supply chain KPIs: units per transaction;
carrying cost of inventory; order status; back order rate

• Environmental and carbon-footprint KPIs: energy, water, or
other resource use; spend by utility; weight of landfill waste.
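
To make these measures concrete, the short Python sketch below computes three of the KPIs
mentioned in this note (inventory turnover, net profit margin, and a social conversion rate)
from purely illustrative figures; the numbers and function names are hypothetical, not taken
from the text.

```python
# Illustrative KPI calculations (hypothetical figures, not from the text).

def inventory_turnover(cost_of_goods_sold: float, average_inventory: float) -> float:
    """Financial KPI: how many times inventory is sold and replaced in a period."""
    return cost_of_goods_sold / average_inventory

def net_profit_margin(net_income: float, revenue: float) -> float:
    """Financial KPI: the portion of revenue kept as profit."""
    return net_income / revenue

def conversion_rate(customers_converted: int, total_visitors: int) -> float:
    """Social media KPI: the share of visitors who become customers."""
    return customers_converted / total_visitors

if __name__ == "__main__":
    print(f"Inventory turnover: {inventory_turnover(800_000, 200_000):.1f} times per year")
    print(f"Net profit margin:  {net_profit_margin(120_000, 1_000_000):.1%}")
    print(f"Conversion rate:    {conversion_rate(450, 15_000):.2%}")
```

Comparing this year’s values against budget or against prior years, as suggested above, then
reduces to comparing a handful of such numbers rather than raw transaction data.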


TABLE 2.3 Components of an Enterprise Architecture

Business architecture: How the business works. Includes broad business strategies and plans for
moving the organization from where it is now to where it wants to be, and the processes the
business uses to meet its goals.

Application architecture: Portfolio of the organization’s applications. Includes descriptions of
automated services that support business processes and descriptions of interactions and
interdependencies between the organization’s ISs.

Information architecture: What the organization needs to know to perform its business processes
and operations. Includes standard data models, data management policies, and descriptions of
patterns of information production and use in the organization.

Technology architecture: Hardware and software that support the organization. Examples include
desktop and server software, OSs, network connectivity components, printers, and modems.

IT at Work 2.2

EA Must Be Dynamic and Evolving
In order to keep IT aligned with the business, the EA must be
a dynamic plan. As shown in the model in Figure  2.12, the EA
evolves toward the target architecture, which represents the
company’s future IT needs. According to this model, EA defines
the following:

1. The organization’s mission, business functions, and future
direction

2. Information and information flows needed to perform the
mission

3. The current baseline architecture

4. The desired target architecture

5. The sequencing plan or strategy to progress from the baseline
to the target architecture.

FIGURE 2.12 The importance of viewing EA as a dynamic and evolving plan. The purpose of
the EA is to maintain IT–business alignment. Changes in priorities and business are reflected
in the target architecture to help keep IT aligned with them (Bloomberg, 2016). (The figure
plots implementation status against the progression from the baseline architecture, through
the sequencing plan and a transition stage, to the target architecture.)

Developing an Enterprise Architecture (EA)
Developing an EA starts with the organization’s goals (for example, where does it want to be in
three years?) and identifies the strategic direction in which it is heading and the business
drivers to which it is responding. The goal is to make sure that everyone understands and shares
a single vision. As soon as managers have defined this single shared vision of the future, they
then consider the impact this vision will have on the business, technical, information, and solu-
tions architectures of the enterprise. This shared vision of the future will dictate changes in all
these architectures, assign priorities to those changes, and keep those changes grounded in
business value.

According to Microsoft, the EA should include the four different perspectives shown in
Table 2.3.

It is important to recognize that the EA must be dynamic, not static. To sustain its effective-
ness, it should be an ongoing process of aligning the creation, operation, and maintenance of IT
across the organization with the ever-changing business objectives. As business needs change,
so must the EA, as demonstrated in IT at Work 2.2.


Questions

1. What is the purpose of the IT infrastructure?

2. How is the IT infrastructure different from the IT architecture?

3. What is the purpose of an EA?

4. What are the business benefits of EA?

5. Why is it necessary to ensure that an EA maintains alignment between IT and business strategy?

6. Explain KPIs and give an example.

2.3 Information Management and Data
Governance
As shown in Figure 2.3, data is the heart of the business and the central component of an IS.
Most business initiatives succeed or fail based on the quality of their data. Effective planning
and decision-making depend on systems being able to make data available in usable formats
on a timely basis. Almost everyone manages information. You manage your social and cloud
accounts across multiple mobile devices and computers. You update or synchronize (“synch”)
your calendars, appointments, contact lists, media files, documents, and reports. Your pro-
ductivity depends on the compatibility of devices and applications and their ability to share
data. Not being able to transfer and synch whenever you add a device or app is bothersome
and wastes your time. For example, when you switch to the latest mobile device, you might
need to reorganize content to make dealing with data and devices easier. To simplify add-ons,
upgrades, sharing, and access, you might leverage cloud services such as iTunes, Instagram,
Diigo, and Box.

This is just a glimpse at some of the information management situations that organiza-
tions face today and shows why a continuous plan is needed to guide, control, and govern IT
growth. As with building construction (Figure  2.13), blueprints and models help guide and
govern future IT and digital technology investments.

Information management is the use of IT tools and methods to collect, process, consolidate,
store, and secure data from sources that are often fragmented and inconsistent.

Career Insight 2.1

Essential Skills of an Enterprise Architect (EA)
Enterprise architects need much more than technology skills. On a
daily basis, an enterprise architect’s activities can change quickly
and significantly. Ideally, enterprise architects should come from
a highly technical background. Even though enterprise architects
deal with many other factors besides technology, it is still impor-
tant to keep technical skills current. The job performance and suc-
cess of such an architect―or anyone responsible for large-scale IT
projects―depend on a broad range of skills.

• Interpersonal or people skills The job requires interacting
with people and getting their cooperation.

• Ability to influence and motivate A large part of the job is
motivating users to comply with new processes and practices.

• Negotiating skills The project needs resources―time,
money, and personnel―that must be negotiated to get things
accomplished.

• Critical-thinking and problem-solving skills Architects face
complex and unique problems. Being able to expedite solu-
tions prevents bottlenecks.

• Business and industry expertise Knowing the business
and industry improves the outcomes and the architect’s
credibility.

• Process orientation Thinking in terms of process is essential
for an enterprise architect. Architects build repeatable and
reusable processes as artifacts of the work they do and of the
way they work.

The most common function an enterprise architect will perform
is that of overseeing a large-scale program. Programs are a
group of related projects and as such, managing EA implementa-
tions requires someone who is able to handle multiple aspects
of a project at one time. Project management is covered in
Chapter 13.


Information Management Harnesses Scattered Data
Business information is generally scattered throughout an enterprise, stored in separate sys-
tems dedicated to specific purposes, such as operations, supply chain management, or cus-
tomer relationship management. Major organizations have over 100 data repositories (storage
areas). In many companies, the integration of these disparate systems is limited―as is users’
ability to access all the information they need. As a result, despite all the information flowing
through companies, executives, managers, and workers often struggle to find the information
they need to make sound decisions or do their jobs. The overall goal of information manage-
ment is to eliminate that struggle through the design and implementation of a sound data gov-
ernance program and a well-planned EA.

Providing easy access to large volumes of information is just one of the challenges facing
organizations. The days of simply managing structured data are over. Now, organizations must
manage semistructured and unstructured content from social and mobile sources even though
that data may be of questionable quality.

Information management is critical to data security and compliance with continually
evolving regulatory requirements, such as the Sarbanes-Oxley Act, Basel III, the Computer
Fraud and Abuse Act (CFAA), the USA PATRIOT Act, and the Health Insurance Portability and
Accountability Act (HIPAA).

Issues of information access, management, and security must also deal with information
degradation and disorder―where people do not understand what data mean or how the data
can be useful.

Reasons for Information Deficiencies
Organizational information and decision support technologies have developed over many dec-
ades. During that time management teams’ priorities have changed along with their under-
standing of the role of IT within the organization; technology has advanced in unforeseeable
ways, and IT investments have been increased or decreased based on competing demands on
the budget. Other common reasons why information deficiencies are still a problem include:

1. Data silos Information can be trapped in departmental data silos (also called information
silos), such as marketing or production databases. Data silos are illustrated in Figure 2.14.
Since silos are unable to share or exchange data, they cannot consistently be updated.
When data are inconsistent across multiple enterprise applications, data quality cannot
(and should not) be trusted without extensive verification. Data silos exist when there is no
overall IT architecture to guide IT investments, data coordination, and communication.
Data silos support a single function and, as a result, do not support an organization’s cross-
functional needs.

Data silos are stand-alone
data stores. Their data are not
accessible by other ISs that need them
or by anyone outside that department.

FIGURE 2.13 Blueprints and models, like those used for
building construction, are needed to guide and govern an
enterprise’s IT assets. (© Martin Barraud/Alamy)


For example, most health-care organizations are drowning in data, yet they cannot
get reliable, actionable insights from these data. Physician notes, registration forms, dis-
charge summaries, documents, and more are doubling every five years. Unlike structured
machine-ready data, these are messy data that take too much time and effort for health-
care providers to include in their business analysis. So, valuable messy data are routinely
left out. Millions of insightful patient notes and records sit inaccessible or unavailable in
separate clinical data silos because historically there has been no easy way to analyze the
information they contain.

2. Lost or bypassed data Data can get lost in transit from one system to another. Or, data
might never get captured because of inadequately tuned data collection systems, such
as those that rely on sensors or scanners. Or, the data may not get captured in sufficient
detail, as described in Tech Note 2.2.

3. Poorly designed interfaces Despite all the talk about user-friendly interfaces, some ISs
are horrible to deal with. Poorly designed interfaces or formats that require extra time
and effort to figure out increase the risk of errors from misunderstanding the data or
ignoring them.

4. Nonstandardized data formats When users are presented with data in inconsistent or
nonstandardized formats, errors increase. Attempts to compare or analyze data are more
difficult and take more time. For example, if the Northeast division reports weekly gross sales
revenues per product line and the Southwest division reports monthly net sales per product,
you cannot compare their performance without converting the data to a common format.
Consider the extra effort needed to compare temperature-related sales, such as air
conditioners, when some temperatures are expressed in degrees Fahrenheit and others in
Centigrade (a short conversion sketch follows this list).

5. Difficult to hit moving targets The information that decision-makers want keeps
changing―and changes faster than ISs can respond to because of the first four reasons in
this list. Tracking tweets, YouTube hits, and other unstructured content requires expensive
investments―which managers find risky in an economic downturn.

These are the data challenges managers have to face when there is little or no information
management. Companies undergoing fast growth or merger activity or those with decentral-
ized systems (each division or business unit manages its own IT) will end up with a patchwork
of reporting processes. As you would expect, patchwork systems are more complicated to
modify, too rigid to support an agile business, and more expensive to maintain.
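
As a toy illustration of the conversion effort that item 4 describes, the sketch below puts
readings and revenue figures reported in different units and time frames onto a common basis.
The divisions, figures, and the rough weeks-per-month factor are all hypothetical, and
differences such as gross versus net revenue would still need to be reconciled separately.

```python
# Hypothetical example: normalizing differently formatted data before comparison.

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a Fahrenheit reading to Celsius so all temperatures share one unit."""
    return (temp_f - 32) * 5 / 9

def weekly_to_monthly(weekly_revenue: float, weeks_per_month: float = 4.33) -> float:
    """Scale a weekly revenue figure to an approximate monthly equivalent."""
    return weekly_revenue * weeks_per_month

# One division reports weekly figures and Fahrenheit; the other reports monthly and Celsius.
northeast = {"weekly_revenue": 52_000, "avg_temp_f": 86.0}
southwest = {"monthly_revenue": 210_000, "avg_temp_c": 31.0}

ne_monthly = weekly_to_monthly(northeast["weekly_revenue"])
ne_temp_c = fahrenheit_to_celsius(northeast["avg_temp_f"])

print(f"Northeast: ~${ne_monthly:,.0f} per month at about {ne_temp_c:.1f} C")
print(f"Southwest: ${southwest['monthly_revenue']:,} per month at {southwest['avg_temp_c']:.1f} C")
```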

FIGURE 2.14 Data (or information) silos are ISs that do not have the
capability to exchange data with other systems, making timely coordination
and communication across functions or departments difficult. (The figure shows
business functions such as procuring, parts replenishment, design, build, ship, sales,
fulfillment, billing, and support locked in sourcing, operations, and customer-facing silos,
each holding data types such as customer, product, procurement, contract, order, parts
inventory, engineering, and logistics data that must meet the information requirements of
being understandable, relevant, timely, accurate, and secure.)


Factors Driving the Shift from Silos to Sharing
and Collaboration
Senior executives and managers are aware of the problems associated with their data silos
and information management problems, but they also know about the huge cost and
disruption associated with converting to newer IT architectures. The “silo effect” occurs when
different departments of an organization do not share data and/or communicate effectively
enough to maintain productivity. Surprisingly, 75% of employers believe teamwork and
collaboration are essential, but only 18% of employees receive communication evaluations
during performance critiques (Marchese, 2016). In the new age of efficiency of service, many
companies like Formaspace, an industrial manufacturing and service corporation, must work
toward complete cloud integration of old silos to increase customer service and generate more
revenue. Enabling applications to interact with one another in an automated fashion to gain
better access to data increases meaningful productivity and decreases time and effort spent in
manual collaboration efforts. In an illustration of how silo integration is essential for a modern
corporation, IT technician at Formaspace, Loddie Alspach, claims that in 2015, the company
managed to increase revenues by 20% using Amazon-based cloud technology (Shore, 2015).
However, companies are struggling to integrate thousands of siloed global applications, while
aligning them to business operations. To remain competitive, they must be able to analyze and
adapt their business processes quickly, efficiently, and without disruption.

Greater investments in collaboration technologies have been reported by the research
firm Forrester (Keitt, 2014). A recent study identified four main factors that have influenced the
increased use of cloud technologies, as shown in Table 2.4 (Rai et al., 2015).

Tech Note 2.2

Need to Measure in Order to Manage
A residential home construction company had two divisions: stand-
ard homes and luxury homes. The company was not capturing
material, labor, and other costs associated with each type of con-
struction. Instead, these costs were pooled, making it impossible
to allocate costs to each type of construction and then to calculate
the profit margins of each division. They had no way of calculating
profit margins on each type of home within the divisions. Without
the ability to measure costs, they did not have any cost control.

After upgrading their ISs, they began to capture detailed data
at the division level. They discovered a wide profit margin on stand-
ard homes, which was hiding the losses occurring in the luxury
home division. Without cost control data, the profitable standard
homes division had been subsidizing the luxury home division for
many years. Based on the cost control data, the company decided
to focus more on standard homes and adjust their pricing on luxury
homes. This new cost control strategy increased the company’s
long-term performance.
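
The point of Tech Note 2.2 can be restated numerically. The sketch below uses invented figures
to show how a pooled margin can look healthy while division-level data reveal that one division
is losing money.

```python
# Invented figures illustrating Tech Note 2.2: pooled versus division-level cost data.

divisions = {
    "standard homes": {"revenue": 12_000_000, "costs": 8_000_000},  # profitable
    "luxury homes":   {"revenue": 6_000_000,  "costs": 7_000_000},  # operating at a loss
}

# Pooled view: a single combined margin hides the losing division.
total_revenue = sum(d["revenue"] for d in divisions.values())
total_costs = sum(d["costs"] for d in divisions.values())
print(f"Pooled margin: {(total_revenue - total_costs) / total_revenue:.1%}")

# Division-level view: detailed data show where profit and loss actually occur.
for name, d in divisions.items():
    margin = (d["revenue"] - d["costs"]) / d["revenue"]
    print(f"{name}: {margin:.1%}")
```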

TABLE 2.4 Key Factors Leading to Increased Migration to the Cloud

• Cost Savings
• Efficient Use of Resources
• Unlimited Scalability of Resources
• Lower Maintenance

Business Benefits of Information Management
Based on the examples you have read, the obvious benefits of information management are:

1. Improves decision quality Decision quality depends on accurate and complete data.
2. Improves the accuracy and reliability of management predictions It is essential for
managers to be able to predict sales, product demand, opportunities, and competitive
threats. Management predictions focus on “what is going to happen” as opposed to
financial reporting on “what has happened.”

3. Reduces the risk of noncompliance Government regulations and compliance require-
ments have increased significantly in the past decade. Companies that fail to comply with
laws on privacy, fraud, anti-money laundering, cybersecurity, occupational safety, and so
on face harsh penalties.

4. Reduces the time and cost of locating and integrating relevant information.

Data Governance: Maintaining Data Quality
and Cost Control
The success of every data-driven strategy or marketing effort depends on data governance.
Data governance policies must address structured, semistructured, and unstructured data
(discussed in Section 2.3) to ensure that insights can be trusted.

Enterprisewide Data Governance With an effective data governance program,
managers can determine where their data are coming from, who owns them, and who is
responsible for what―in order to know they can trust the available data when needed. Data
governance is an enterprisewide project because data cross boundaries and are used by people
throughout the enterprise. New regulations and pressure to reduce costs have increased the
importance of effective data governance. Governance eliminates the cost of maintaining and
archiving bad, unneeded, or inaccurate data. These costs grow as the volume of data grows.
Governance also reduces the legal risks associated with unmanaged or inconsistently managed
information.

Three industries that depend on data governance to comply with regulations or reporting
requirements are the following:

• Food industry In the food industry, data governance is required to comply with food
safety regulations. Food manufacturers and retailers have sophisticated control systems in
place so that if a contaminated food product, such as spinach or peanut butter, is detected,
they are able to trace the problem back to a particular processing plant or even the farm at
the start of the food chain.

• Financial services industry In the financial services sector, strict reporting requirements
of the Dodd−Frank Wall Street Reform and Consumer Protection Act of 2010 are leading
to greater use of data governance. The Dodd−Frank Act regulates Wall Street practices by
enforcing transparency and accountability in an effort to prevent another significant finan-
cial crisis like the one that occurred in 2008.

• Health-care industry Data are health care’s most valuable asset. Hospitals have moun-
tains of electronic patient information. New health-care accountability and reporting obli-
gations require data governance models for transparency to defend against fraud and to
protect patients’ information.

Master Data and Master Data Management (MDM) Master data is the term
used to describe business-critical information on customers, products and services,
vendors, locations, employees, and other things needed for operations and business trans-
actions. Master data are fundamentally different from the high volume, velocity, and vari-
ety of big data and traditional data. For example, when a customer applies for automobile
insurance, data provided on the application become the master data for that customer.
In contrast, if the customer’s vehicle has a device that sends data about his or her driving

Data governance is the control
of enterprise data through formal
policies and procedures to help
ensure data can be trusted and
are accessible.


behavior to the insurer, those machine-generated data are transactional or operational, but
not master data.

Data are used in two ways―both depend on high-quality trustworthy data:

1. For running the business Transactional or operational use
2. For improving the business Analytic use

Master data are typically quite stable and are usually stored in a number of different sys-
tems spread across the enterprise. Master data management (MDM) links and synchronizes
all critical data from those disparate systems into one file called a master file, to provide a
common point of reference. MDM solutions can be complex and expensive. Given their com-
plexity and cost, most MDM solutions are out of reach for small and medium companies. Ven-
dors have addressed this challenge by offering cloud-managed MDM services. For example, in
2013, Dell Software launched its next-generation Dell Boomi MDM. Dell Boomi provides MDM,
data management, and data quality services (DQS)―and they are 100% cloud-based with near
real-time synchronization.
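
A greatly simplified sketch of the linking step an MDM tool performs is shown below. The
records are invented, matching is done on a shared customer ID, and the "keep the first value
seen" rule stands in for the far more sophisticated matching, survivorship, and data-quality
logic of real MDM products such as those mentioned above.

```python
# Greatly simplified, illustrative master data consolidation (invented records).

crm_records = [
    {"customer_id": "C100", "name": "A. Smith", "email": "a.smith@example.com"},
]
billing_records = [
    {"customer_id": "C100", "name": "Alex Smith", "address": "12 Main Street"},
]

def build_master_file(*sources):
    """Merge records that share a customer_id into one master record per customer."""
    master = {}
    for source in sources:
        for record in source:
            merged = master.setdefault(record["customer_id"], {})
            for field, value in record.items():
                # Keep the first value seen for a field; later sources only fill gaps.
                merged.setdefault(field, value)
    return master

master_file = build_master_file(crm_records, billing_records)
print(master_file["C100"])  # one consolidated record combining CRM and billing fields
```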

Data governance and MDM manage the availability, usability, integrity, and security of data
used throughout the enterprise. Strong data governance and MDM are needed to ensure data are
of sufficient quality to meet business needs. The characteristics and consequences of weak or
nonexistent data governance are listed in Table 2.5.

Data governance and MDM are a powerful combination. As data sources and volumes continue
to increase, so does the need to manage data as a strategic asset in order to extract its full
value. Making business data consistent, trusted, and accessible across the enterprise is a criti-
cal first step in customer-centric business models. With data governance, companies are able
to extract maximum value from their data, specifically by making better use of opportunities
that are buried within behavioral data.

TABLE 2.5 Characteristics and Consequences of Weak or Nonexistent Data Governance
and MDM

• Data duplication causes isolated data silos.
• Inconsistency exists in the meaning and level of detail of data elements.
• Users do not trust the data and waste time verifying the data rather than analyzing them for
appropriate decision-making.
• Leads to inaccurate data analysis.
• Bad decisions are made on perception rather than reality, which can negatively affect the
company and its customers.
• Results in increased workloads and processing time.

Questions

1. What is information management?

2. What is the “silo effect” and how does it affect business performance?

3. What three factors are driving collaboration and information sharing?

4. What are the business benefits of information management?

5. Why is it important to develop an effective data governance program?

6. Explain the purposes of master data management.

7. Why has interest in data governance and MDM increased?


2.4 Data Centers and Cloud Computing
Data centers and cloud computing are types of IT infrastructures or computing systems. Data
center also refers to the building or facility that houses the servers and equipment. In the past,
there were few IT infrastructure options. Companies owned their servers, storage, and network
components to support their business applications and these computing resources were on
their premises. Now, there are several choices for an IT infrastructure strategy―including cloud
computing. As is common to IT investments, each infrastructure configuration has strengths,
weaknesses, and cost considerations.

Data Centers
Traditionally, data and database technologies were kept in data centers that were typically run
by an in-house IT department (Figure 2.15) and consisted of on-premises hardware and equip-
ment that store data within an organization’s local area network.

Today, companies may own and manage their own on-premises data centers or pay for the
use of their vendors’ data centers, such as in cloud computing, virtualization, and software-as-
a-service arrangements (Figure 2.16).

FIGURE 2.15 A row of network servers in a data center. (© Oleksiy Mark/Shutterstock)

FIGURE 2.16 Data centers are the infrastructure
underlying cloud computing, virtualization, networking,
security, delivery systems, and software-as-a-service.
(© Michael D Brown/Shutterstock)


In an on-premises data center connected to a local area network, it is easier to restrict
access to applications and information to authorized, company-approved people and equip-
ment. In the cloud, the management of updates, security, and ongoing maintenance are out-
sourced to a third-party cloud provider where data is accessible to anyone with the proper
credentials and Internet connection. This arrangement can make a company more vulnerable
since it increases exposure of company data at many more entry and exit points. Here are some
examples of data centers.

• National Climatic Data Center The National Climatic Data Center is an example of a
public data center that stores and manages the world’s largest archive of weather data.

• U.S. National Security Agency The National Security Agency’s (NSA) data center, shown
in Figure 2.17 is located in Bluffdale, UT. It is the largest spy data center for the NSA. People
who think their correspondence and postings through sites like Google, Facebook, and
Apple are safe from prying eyes should rethink that belief. You will read more about reports
exposing government data collection programs in Chapter 5.

• Apple Apple has a 500,000-square-foot data center in Maiden, NC, that houses servers for
various iCloud and iTunes services. The center plays a vital role in the company’s back-end
IT infrastructure. In 2014 Apple expanded this center with a new, smaller 14,250-square-
foot tactical data center that also includes office space, meeting areas, and breakrooms.

Because the company alone owns the infrastructure, an on-premises data center is more suitable
for organizations that run many different types of applications and have complex workloads. A data center,
like a factory, has limited capacity. Once it is built, the amount of storage and the workload the
center can handle does not change without purchasing and installing more equipment.

FIGURE 2.17 The NSA data center in Bluffdale, UT. (© epa european pressphoto agency b.v./Alamy)

When a Data Center Goes Down, so Does Business Data center failures dis-
rupt all operations regardless of who owns the data center. Here are two examples.

• Uber The startup company Uber experienced an hour-long outage in February 2014 that
brought its car-hailing service to a halt across the country. The problem was caused by an
outage at its vendor’s West Coast data center. Uber users flooded social media sites with
complaints about being unable to launch Uber’s app to summon a driver-for-hire.

• WhatsApp WhatsApp also experienced a server outage in early 2014 that took the ser-
vice offline for 2.5 hours. WhatsApp is a smartphone text-messaging service that had been
bought by Facebook for $19 billion. “Sorry we currently experiencing server issues. We
hope to be back up and recovered shortly,” WhatsApp said in a message on Twitter that
was retweeted more than 25,000 times in just a few hours. The company has grown rapidly
to 450 million active users within five years, nearly twice as many as Twitter. More than
two-thirds of these global users use the app daily. WhatsApp’s server failure drove millions
of users to a competitor. Line, a messaging app developed in Japan, added 2 million new
registered users within 24 hours of WhatsApp’s outage―the biggest increase in Line’s user
base within a 24-hour period.


These outages point to the risks of maintaining the complex and sophisticated technology
needed to power digital services used by millions or hundreds of millions of people.

Integrating Data to Combat Data Chaos
An enterprise’s data are stored in many different or remote locations―creating data chaos
at times. And some data may be duplicated so that they are available in multiple locations
where a quick response is needed. Therefore, the data needed for planning, decision-making, opera-
tions, queries, and reporting are scattered or duplicated across numerous servers, data cent-
ers, devices, and cloud services. Disparate data must be unified or integrated in order for the
organization to function.

Data Virtualization As organizations have transitioned to a cloud-based infrastruc-
ture, data centers have become virtualized. For example, Cisco offers data virtualization, which
gives greater IT flexibility. The process of data virtualization involves abstracting, transforming,
merging, and delivering data from disparate sources. The main goal of data virtualization is to
provide a single point of access to the data. By aggregating data from a wide range of sources,
users can access applications without knowing where the data are actually located. Using data virtualization
methods, enterprises can respond to change more quickly and make better decisions in real
time without physically moving their data, which significantly cuts costs. Cisco Data Virtualiza-
tion makes it possible to:

• Have instant access to data at any time and in any format.
• Respond faster to changing data analytics needs.
• Cut complexity and costs.

Compared to traditional (nonvirtual) data integration and replication methods, data virtu-
alization accelerates time to value with:

• Greater agility Speeds 5–10 times faster than traditional data integration methods
• Streamlined approach 50–75% time savings over data replication and consolidation methods
• Better insight Instant access to data
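
The underlying idea of data virtualization (a single access point over data that stays where it
is) can be sketched in a few lines. The two "sources" below are just in-memory stand-ins for
remote systems, and nothing here represents Cisco's actual product or API.

```python
# Conceptual sketch of data virtualization: one access layer over disparate
# sources, without physically moving or copying the underlying data.

class Source:
    """Stand-in for a remote system, such as a sales database or a cloud service."""
    def __init__(self, name, rows):
        self.name, self.rows = name, rows

    def query(self, **criteria):
        return [row for row in self.rows
                if all(row.get(k) == v for k, v in criteria.items())]

class VirtualView:
    """Single point of access: callers need not know which source holds the data."""
    def __init__(self, *sources):
        self.sources = sources

    def query(self, **criteria):
        results = []
        for source in self.sources:
            results.extend(source.query(**criteria))
        return results

orders = Source("orders_db", [{"region": "West", "order_id": 1, "amount": 120}])
returns = Source("returns_api", [{"region": "West", "order_id": 7, "amount": -40}])

view = VirtualView(orders, returns)
print(view.query(region="West"))  # one query spanning both underlying sources
```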

Software-Defined Data Center Data virtualization has led to the latest development
in data centers—the software-defined data center (SDDC). An SDDC facilitates the integration
of an organization’s various infrastructure silos and optimizes the use
of resources, balances workloads, and maximizes operational efficiency by dynamically dis-
tributing workloads and provisioning networks. The goal of the SDDC is to decrease costs
and increase agility, policy compliance, and security by deploying, operating, managing, and
maintaining applications. In addition, by providing organizations with their own private cloud,
SDDCs provide greater flexibility by allowing organizations to have on-demand access to their
data instead of having to request permission from their cloud provider (see Figure 2.18).

The base resources for the SDDC are computation, storage, networking, and security. Typi-
cally, the SDDC includes limited functionality of service portals, applications, OSs, VM hardware,
hypervisors, physical hardware, software-defined networking, software-defined storage, a
security layer, automation and management layers, catalogs, a gateway interface module, and
third-party plug-ins (Figure 2.19).

It is estimated that the market for SDDCs will grow from its current level of $22
billion to more than $77 billion in the next five years. As the use of SDDCs grows at this extraor-
dinary rate, data center managers will be called upon to scale their data centers exponentially
at a moment’s notice. Unfortunately, this is impossible to achieve using the traditional data
center infrastructure. In the SDDC, software placement and optimization decisions are based
on business logic, not technical provisioning directives. This requires changes in culture,


FIGURE 2.18 Corporate IT infrastructures can consist of an on-premises data center
and off-premises cloud computing. (© Kittichai/Shutterstock)

FIGURE 2.19 SDDC infrastructure (adapted from Sturm et al., 2017). (The figure shows the
components listed in the text: limited-functionality service portals, catalogs, third-party
plug-ins, a gateway interface module, automated policies, management, provisioning and
monitoring, a security layer, software-defined networking, software-defined storage, and
physical and virtual compute.)

processes, structure, and technology. The SDDC isolates the application layer from the physical
infrastructure layer to facilitate faster and more effective deployment, management, and moni-
toring of diverse applications. This is achieved by finding each enterprise application an optimal
home in a public or private cloud environment or by drawing from a diverse collection of resources.

From a business perspective, moving to an SDDC is motivated by the need to improve security,
increase alignment of the IT infrastructure with business objectives, and provision applications
more quickly.


Traditional data centers had dedicated, isolated hardware that resulted in poor utilization
of resources and very limited flexibility. Second-generation virtualized data centers improved
resource use by consolidating virtualized servers. By reducing the steps, and therefore the time,
needed to deploy workloads and by simplifying the definition of applications and their resource
needs, the SDDC creates an even more flexible environment in which enterprise applications
can be quickly reconfigured and supported to provide infrastructure as a service (IaaS). Tran-
sitioning to an SDDC enables an organization to optimize its resource usage, provide capacity
on demand, improve business-IT alignment, improve agility and flexibility of operations, and
save money (Figure 2.20).

• Traditional data center: hardware silos, limited utilization, limited flexibility.
• Virtualized data center: virtualized servers, better resource use, automated and balanced
workloads, but still sub-optimal performance.
• Software-defined data center (SDDC): IaaS, optimized resource use, increased business-IT
alignment, improved agility and flexibility, capacity on demand, and cost savings.

FIGURE 2.20 Evolution of data centers (adapted from Sturm et al., 2017).

Cloud Computing
In a business world where first movers gain the advantage, IT responsiveness and agility pro-
vide a competitive edge and lead to sustainable business practices. Yet, many IT infrastructures
are extremely expensive to manage and too complex to easily adapt. A common solution is
cloud computing. Cloud computing is the general term for infrastructures that use the Inter-
net and private networks to access, share, and deliver computing resources. More specifically,
IBM defines cloud computing as “the delivery of on-demand computing resources—everything
from applications to data centers—over the Internet on a pay-for-use basis” (IBM, 2016).

Cloud computing is the delivery of computing and storage resources as a service to end-users
over a network. Cloud systems are scalable. That is, they can be adjusted to meet changes in
business needs. At the extreme, the cloud’s capacity is unlimited depending on the vendor’s offer-
ings and service plans. A drawback of the cloud is loss of control, because a third party manages it. Unless
the company uses a private cloud within its network, it shares computing and storage resources
with other cloud users in the vendor’s public cloud. Public clouds allow multiple clients to access
the same virtualized services and utilize the same pool of servers across a public network. In con-
trast, private clouds are single-tenant environments with stronger security and control for reg-
ulated industries and critical data. In effect, private clouds retain all the IT security and control
provided by traditional IT infrastructures with the added advantages of cloud computing.

Selecting a Cloud Vendor
Because cloud is still a relatively new and evolving business model, the decision to select a
cloud service provider should be approached with even greater diligence than other IT deci-
sions. As cloud computing becomes an increasingly important part of the IT delivery model,
assessing and selecting the right cloud provider also becomes one of the most strategic decisions that
business leaders undertake. Providers are not created equal, so it is important to investigate
each provider’s offerings prior to subscribing. When selecting and investing in cloud services,
there are several service factors a vendor needs to address. These evaluation factors are listed
in Table 2.6.


Vendor Management and Cloud Service Agreements (CSAs) The move to
the cloud is also a move to vendor-managed services and cloud service agreements (CSAs).
Also referred to as cloud service level agreements (SLAs), the CSA or SLA is a negotiated
agreement between a company and service provider that can be a legally binding contract or
an informal contract. You can review a sample CSA used by IBM by visiting
http://www-05.ibm.com/support/operations/files/pdf/csa_us.

Staff experienced in managing outsourcing projects may have the necessary expertise for
managing work in the cloud and policing SLAs with vendors. The goal is not building the best CSA
terms, but negotiating the terms that align most closely with the business needs. For example, if
a server becomes nonoperational and it does not support a critical business operation, it would
not make sense to pay a high premium for reestablishing the server within one hour. On the
other hand, if the data on the server support a business process that would effectively close
down the business for the period of time that it was not accessible, it would be prudent to nego-
tiate the fastest possible service in the CSA and pay a premium for that high level of service.
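
The restore-time trade-off just described can be framed as a rough expected-cost comparison.
The outage frequency, downtime cost, and premium below are entirely hypothetical; real
negotiations weigh many more factors than this back-of-the-envelope check.

```python
# Hypothetical check: is a faster restore guarantee in the CSA worth its premium?

downtime_cost_per_hour = 25_000   # assumed cost to the business per hour of outage
expected_outages_per_year = 2     # assumed outage frequency
standard_restore_hours = 8        # restore time under the standard service tier
premium_restore_hours = 1         # restore time under the premium service tier
premium_annual_cost = 60_000      # assumed extra annual fee for the premium tier

hours_saved = (standard_restore_hours - premium_restore_hours) * expected_outages_per_year
avoided_loss = hours_saved * downtime_cost_per_hour

print(f"Expected downtime cost avoided: ${avoided_loss:,}")
print(f"Premium tier annual cost:       ${premium_annual_cost:,}")
print("Premium tier pays off" if avoided_loss > premium_annual_cost else "Standard tier is enough")
```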

In April 2015, the Cloud Standards Customer Council (CSCC) published the Practical Guide
to Cloud Service Agreements, Version 2.0, to reflect changes that have occurred since 2012 when
it first published the Practical Guide to Cloud Service Level Agreements. The new guide provides
a practical reference to help enterprise IT and business decision-makers analyze CSAs from dif-
ferent cloud service providers. The main purpose of a CSA is to set clear expectations for service
between the cloud customer (buyer) and the cloud provider (seller), but CSAs should also exist
between a customer and other cloud entities, such as the cloud carrier, the cloud broker, and
even the cloud auditor. Although the various service delivery models, that is, IaaS, PaaS, SaaS,
and so on, may have different requirements, the guide focuses on the requirements that are
common across the various service models (Cloud Standards Customer Council, 2015, p. 4).

Implementing an effective management process is an important step in ensuring internal
and external user satisfaction with cloud services. Table 2.7 lists the 10 steps that should be
taken by cloud customers to evaluate cloud providers’ CSAs in order to compare CSAs across
multiple providers or to negotiate terms with a selected provider.

TABLE 2.6 Service Factors to Consider when Evaluating Cloud Vendors or Service Providers

Delays: What are the estimated server delays and network delays?

Workloads: What is the volume of data and processing that can be handled during a specific
amount of time?

Costs: What are the costs associated with workloads across multiple cloud computing platforms?

Security: How are data and networks secured against attacks? Are data encrypted and how strong
is the encryption? What are network security practices?

Disaster recovery and business continuity: How is service outage defined? What level of
redundancy is in place to minimize outages, including backup services in different geographical
regions? If a natural disaster or outage occurs, how will cloud services be continued?

Technical expertise and understanding: Does the vendor have expertise in your industry or
business processes? Does the vendor understand what you need to do and have the technical
expertise to fulfill those obligations?

Insurance in case of failure: Does the vendor provide cloud insurance to mitigate user losses in
case of service failure or damage? This is a new and important concept.

Third-party audit or an unbiased assessment of the ability to rely on the service provided by
the vendor: Can the vendor show objective proof with an audit that it can live up to the
promises it is making?
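
One common way to turn the factors in Table 2.6 into a side-by-side comparison of providers is
a simple weighted score. The weights, vendor names, and ratings below are made up solely to show
the mechanics; the weights themselves should reflect the business priorities behind each factor.

```python
# Hypothetical weighted scoring of cloud vendors against the Table 2.6 factors.

weights = {  # relative importance of each factor; must be set by the business
    "delays": 0.10, "workloads": 0.15, "costs": 0.20, "security": 0.25,
    "disaster_recovery": 0.15, "expertise": 0.10, "insurance": 0.025, "audit": 0.025,
}

ratings = {  # 1 (poor) to 5 (excellent), assigned during due diligence
    "Vendor A": {"delays": 4, "workloads": 4, "costs": 3, "security": 5,
                 "disaster_recovery": 4, "expertise": 3, "insurance": 2, "audit": 5},
    "Vendor B": {"delays": 3, "workloads": 5, "costs": 5, "security": 3,
                 "disaster_recovery": 3, "expertise": 4, "insurance": 4, "audit": 3},
}

def weighted_score(vendor_ratings):
    """Combine factor ratings into a single 1-to-5 score using the weights above."""
    return sum(weights[factor] * score for factor, score in vendor_ratings.items())

for vendor, factor_ratings in sorted(ratings.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{vendor}: {weighted_score(factor_ratings):.2f} out of 5")
```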


Cloud Infrastructure
The cloud has greatly expanded the options for enterprise IT infrastructures because any device
that accesses the Internet can access, share, and deliver data. Cloud computing is a valuable
infrastructure because:

1. It is dynamic, not static, and provides a way to make applications and computing power
available on demand. Applications and power are available on demand because they are
provided as a service. For example, any software that is provided on demand is referred
to as software as a service (SaaS). Typical SaaS products are Google Apps and www.
Salesforce.com. Section 2.5 discusses SaaS and other cloud services.

2. It helps companies become more agile and responsive while significantly reducing IT costs
and complexity through improved workload optimization and service delivery.

Move to Enterprise Clouds A majority of large organizations have hundreds or thou-
sands of software licenses that support business processes, such as licenses for Microsoft
Office, Oracle database management, IBM CRM (customer relationship management), and var-
ious network security software. Managing software and their licenses involves deploying, provi-
sioning, and updating them―all of which are time-consuming and expensive. Cloud computing
overcomes these problems.

Issues in Moving Workloads from the Enterprise
to the Cloud
Building a cloud strategy is a challenge, and moving existing applications to the cloud is stress-
ful. Despite the business and technical benefits, the risk exists of disrupting operations or cus-
tomers in the process. With the cloud, the network and WAN (wide area network) become an
even more critical part of the IT infrastructure. Greater network bandwidth is needed to sup-
port the increase in network traffic. And, putting part of the IT architecture or workload into
the cloud requires different management approaches, different IT skills, and knowing how to
manage vendor relationships and contracts.

TABLE 2.7 Ten Steps to Evaluate a CSA

1. Understand roles and responsibilities of the CSA customer and provider

2. Evaluate business-level policies and compliance requirements relevant to the CSA customer

3. Understand service and deployment model differences

4. Identify critical performance objectives such as availability, response time, and processing speed.
Ensure they are measurable and auditable

5. Evaluate security and privacy requirements for customer information that has moved into the
provider’s cloud and applications, functions, and services being operated in the cloud to provide
required service to the customer

6. Identify service management requirements such as auditing, monitoring and reporting,
measurement, provisioning, change management, and upgrading/patching

7. Prepare for service failure management by explicitly documenting cloud service capabilities and
performance expectations with remedies and limitations for each

8. Understand the disaster recovery plan

9. Develop a strong and detailed governance plan of the cloud services on the customer side

10. Understand the process to terminate the CSA


Infrastructure Issues There is a big difference because cloud computing runs on a
shared infrastructure, so the arrangement is less customized to a specific company’s require-
ments. A comparison to help understand the challenges is that outsourcing is like renting an
apartment, while the cloud is like getting a room at a hotel.

With cloud computing, it may be more difficult to get to the root of performance problems,
like the unplanned outages that occurred with Google’s Gmail and Workday’s human resources
apps. The trade-off is cost versus control.

Increasing demand for faster and more powerful computers, and increases in the number
and variety of applications are driving the need for more capable IT architectures.

Questions

1. What is a data center?

2. What is the difference between on-premises data centers and cloud computing?

3. What is an SDDC?

4. What are the advantages of using an SDDC?

5. How can cloud computing solve the problems of managing software licenses?

6. What factors should be considered when selecting a cloud vendor or provider?

7. When are private clouds used instead of public clouds?

8. Explain three issues that need to be addressed when moving to cloud computing or services.

2.5 Cloud Services and Virtualization
Managers want streamlined, real-time, data-driven enterprises, yet they may face budget cuts.
Sustaining performance requires the development of new business applications and analytics
capabilities, which comprise the front end and the data stores and digital infrastructure, or back
end, to support them. The back end is where the data reside. The problem is that data may have
to navigate through a congested IT infrastructure that was first designed decades ago. These
network or database bottlenecks can quickly wipe out the competitive advantages from big
data, mobility, and so on. Traditional approaches to increasing database performance―manu-
ally tuning databases, adding more disk space, and upgrading processors―are not enough
when you are dealing with streaming data and real-time big data analytics. Cloud ser-
vices help to overcome these limitations. Cloud services are outsourced to a third-party cloud
provider who manages the updates, security, and ongoing maintenance.

At first glance, virtualization and cloud computing may appear to be quite similar. How-
ever, cloud computing and virtualization are inherently different. Unlike cloud computing that
involves multiple computers or hardware devices sending data through vendor-provided net-
works, virtualization is the replacement of a tangible physical component with a virtual one.
Each of these concepts is described and discussed in the following sections.

Anything as a Service (XAAS) Models
The cloud computing model for on-demand delivery of and access to various types of com-
puting resources also extends to the development of business apps. Figure 2.21 shows four
“as a service” (XaaS) solutions based on the concept that the resource―software, platform,
infrastructure, or data—can be provided on demand regardless of geolocation. As these as ser-
vice solutions develop, the focus is changing from massive technology implementation costs to
business-reengineering programs that enable XaaS platforms (Fresht, 2014).

Cloud services are services made available to users on demand via the Internet from
a cloud computing provider’s servers instead of being accessed through an organization’s


on-premises servers. Cloud services are designed to provide easy, scalable access to applica-
tions, resources, and services, and are fully managed by a cloud services provider.

Cloud computing is often referred to as a “stack” or broad range of services built on top of
each other under the name cloud. These cloud services can be defined as follows:

• Software as a service (SaaS) is a widely used model in which software is available to users
from a service provider as needed. A provider licenses a SaaS application to customers as
an on-demand service, through a subscription, a pay-as-you-go model, or free of charge
(where revenue can be generated by other means, such as through sale of advertisements).

• Platform as a service (PaaS) is a computing platform that enables the quick and easy
creation, testing, and deployment of web applications without the necessity of buying and
maintaining the software and infrastructure underneath it. It is a set of tools and services
that make coding and deploying these applications faster and more efficient.

• Infrastructure as a service (IaaS) is a way of delivering servers, storage, networks, work-
load balancers, and OSs as an on-demand service.

• Data as a service (DaaS) is an information provision and distribution model in which data
files (including text, images, sounds, and videos) are made available to customers over a
network by a service provider.

Software as a Service (SaaS) SaaS is a rapidly growing method of delivering soft-
ware and is particularly useful in applications in which there are considerable interactions bet-
ween the organization and external entities that do not confer a competitive advantage, for
example, e-mail and newsletters. It is also useful when an organization is going to be needing
a particular type of software for a short period of time or for a specific project, and for software
that is used periodically, for example, tax, payroll, or billing software. SaaS is not appropriate
for accessing applications that require fast processing of real-time data or applications where
regulation does not permit data being hosted externally.

Other terms for SaaS are on-demand computing and hosted services. The idea is basically
the same: Instead of buying and installing expensive packaged enterprise applications, users
can access software applications over a network, using an Internet browser. To use SaaS, a
service provider hosts the application at its data center and customers access it via a standard
Web browser.

The SaaS model was developed to overcome the common challenge to an enterprise of
being able to meet fluctuating demands on IT resources efficiently. It is used in many business
functions, primarily customer relationship management (CRM), accounting, human resources
(HR), service desk management, communication, and collaboration.

© Vallepu/Shutterstock

FIGURE 2.21 Four as a service solutions: software, platform,
infrastructure, and data as a service.


There are thousands of SaaS vendors. www.Salesforce.com is one of the most widely
known SaaS providers. Other examples are Google Docs and collaborative presentation soft-
ware Prezi. For instance, instead of installing Microsoft Word on your own computer, and
then loading Word to create a document, you use a browser to log into Google Docs. Only the
browser uses your computer’s resources.

Platform as a Service (PaaS) PaaS provides a standard unified platform for devel-
oping, testing, and deploying software over the Web. This computing platform allows the
creation of Web applications quickly and easily without the complexity of buying and main-
taining the underlying infrastructure. Without PaaS, the cost of developing some applications
would be prohibitive. Examples of PaaS include databases, Web servers, development tools,
and execution runtime. PaaS is particularly useful when multiple software developers are
working on a software development project or when other external parties need to interact
with the development process, and when developers want to automate testing and deploy-
ment services. It is less useful in those instances where application performance needs to be
customized to the underlying hardware and software or an application needs to be highly por-
table in terms of where it is hosted. Some examples of PaaS include Microsoft Azure Service,
www.Force.com, and Google App Engine.

Infrastructure as a Service (IaaS) Rather than purchasing all the components of
their IT infrastructure, organizations buy their computing resources as a fully outsourced Infra-
structure as a Service (IaaS) on demand. Generally, IaaS can be acquired as a Public or Private
infrastructure or a combination of the two (Hybrid). A public IaaS is one that consists of shared
resources deployed on a self-service basis over the Internet. On the other hand, a private IaaS is
provided on a private network. And, a hybrid IaaS is a combination of both public and private.
IaaS is useful where organizations experience significant highs and lows in terms of demand on
the infrastructure, for new or existing organizations who have budgetary constraints on hardware
investment and in situations where an organization has temporary infrastructure needs. Some
IaaS providers you may be familiar with include Amazon Web Services (AWS) and Rackspace.

Data as a Service (DaaS)—The New Kid on the Block DaaS is the newest
entrant into the XaaS arena. DaaS enables data to be shared among clouds, systems, apps, and
so on regardless of the data source or where they are stored. Data files, including text, images,
sound, and video, are made available to customers over a network, typically the Internet. DaaS
makes it easier for data architects to select data from different pools, filter out sensitive data,
and make the remaining data available on demand.
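
To make the DaaS idea concrete, here is a minimal Python sketch of how a client might request a filtered slice of data on demand from a provider over the Internet. The endpoint URL, query parameters, field names, and token are hypothetical placeholders, not any particular vendor's API.

import requests  # third-party HTTP client (pip install requests)

# Hypothetical DaaS endpoint and query parameters, for illustration only.
DAAS_URL = "https://daas.example.com/v1/customers"
params = {"region": "EMEA", "fields": "id,segment"}    # ask only for non-sensitive fields
headers = {"Authorization": "Bearer <api-token>"}      # placeholder credential

response = requests.get(DAAS_URL, params=params, headers=headers, timeout=10)
response.raise_for_status()                            # stop if the provider reports an error
customers = response.json()                            # data arrive as JSON, ready to analyze

for customer in customers:
    print(customer["id"], customer["segment"])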

A key benefit of DaaS is that it transfers the risks and responsibilities associated with data
management to a third-party cloud provider. Traditionally, organizations stored and managed
their data within a self-contained storage system, however, as data become more complex, it is
increasingly difficult and expensive to maintain using the traditional data model. Using DaaS,
organizational data are readily accessible through a cloud-based platform and can be delivered
to users despite organizational or geographical constraints. This model is growing in popularity
as data become more complex, difficult, and expensive to maintain. Some of the most common
business applications currently using DaaS are CRM and enterprise resource planning (ERP).
For an example of DaaS, see IT at Work 2.3.

IT at Work 2.3

Slack
Slack, the successful social chat app for companies and their
executives and/or employees, has announced a “deep product
partnership” with Salesforce (Lunden,  2016). The partnership
includes a new data sharing platform for businesses to easily share

information about conversations they are having within the app.
More specifically, businesses will be able to share details about
client accounts in real time with automatic updates for new leads
about the accounts. The new partnership will allow Slack and its
users to be even more effective in collaboration and data sharing
across many platforms and departments (Lunden, 2016).


As a Service Models Are Enterprisewide and Can Trigger Lawsuits The
various As a Service models are used in various aspects of business. You will read how these
specific services, such as CRM and HR management, are being used for operational and strategic
purposes in later chapters. Companies are frequently adopting software, platform, infrastructure,
and data as a service, and are starting to embrace mobility as a service and big data as a service
because they typically no longer have to worry about the costs of buying, maintaining, or updating their
own data servers. Both hardware and human resources expenses can be cut significantly. Ser-
vice arrangements all require that managers understand the benefits and trade-offs―and how
to negotiate effective SLAs and CSAs. Regulations mandate that confidential data be protected
regardless of whether the data are on-premises or in the cloud. Therefore, a company’s legal
department needs to get involved in these IT decisions. Put simply, moving to cloud services is
not simply an IT decision because the stakes around legal and compliance issues are very high.

Going Cloud
Cloud services can advance the core business of delivering superior services to optimize busi-
ness performance. Cloud can cut costs and add flexibility to the performance of critical busi-
ness apps. And, it can improve responsiveness to end-consumers, application developers, and
business organizations. But to achieve these benefits, there must be IT, legal, and senior man-
agement oversight because a company still must meet its legal obligations and responsibilities
to employees, customers, investors, business partners, and society.

Virtualization and Virtual Machines
There are many types of virtualization, such as virtual storage devices, virtual desktops, virtual
OSs, and virtual servers for network virtualization. You can think of virtualization as a model for
a physical component that is built into computer code, to create a software program that acts
in the same way as the physical component it is modeling. For example, a virtual machine is
a software representation of a computer rather than an actual computer, and a virtual server
sends and receives signals just like a physical one, even though it doesn’t have its own circuitry
and other physical components.

You might ask why organizations want to virtualize their physical computing and net-
working devices. The answer is gross underutilization and inefficient use of resources. Computer
hardware was designed to run a single OS and a single app, which leaves most computers
vastly underutilized. Virtualization is a technique that creates a virtual (i.e., nonphysical) layer
and multiple virtual machines (VMs) to run on a single physical machine. The virtual (or virtual-
ization) layer makes it possible for each VM to share the resources of the hardware. Figure 2.22
shows the relationship among the VMs and physical hardware.

Application

Virtualization Layer

Hardware Layer

Operating
System

Application
Operating

System

Application
Operating

System

Virtual Machines

FIGURE 2.22 Virtual machines running on a
simple computer hardware layer.


What Is a Virtual Machine? Just as virtual reality is not real, but a software-created
world, a virtual machine is a software-created computer. Technically, a virtual machine
(VM) is created by a software layer, called the virtualization layer, as shown in Figure 2.22.
Each VM has its own Windows or other OS and apps, such as Microsoft Office, as if it
were an actual physical computer. A VM behaves exactly like a physical computer and con-
tains its own virtual―that is, software-based―CPU, RAM (random access memory), hard
drive, and network interface card (NIC). An OS cannot tell the difference between a VM
and a physical machine, nor can applications or other computers on a network tell the
difference. Even the VM thinks it is a “real” computer. Users can set up multiple real com-
puters to function as a single PC through virtualization to pool resources to create a more
powerful VM.

Virtualization is a concept that has several meanings in IT and therefore several defini-
tions. The major type of virtualization is hardware virtualization, which remains popular and
widely used. Virtualization is often a key part of an enterprise’s disaster recovery plan. In gen-
eral, virtualization separates business applications and data from hardware resources. This
separation allows companies to pool hardware resources―rather than dedicate servers to
applications―and assign those resources to applications as needed.

Different types of virtualization include:

• Storage virtualization is the pooling of physical storage from multiple network storage
devices into what appears to be a single storage device managed from a central
console.

• Server virtualization consolidates multiple physical servers into virtual servers that run on
a single physical server.

• Desktop virtualization is software technology that separates the desktop environ-
ment and associated application software from the physical machine that is used to
access it.

• Application virtualization is the practice of running software from a remote server rather
than on the user’s computer.

• Network virtualization combines the available resources in a network by splitting the
network load into manageable parts, each of which can be assigned (or reassigned) to a
particular server on the network.

• Hardware virtualization is the use of software to emulate hardware or a total computer
environment other than the one the software is actually running in. It allows a piece of
hardware to run multiple OS images at once. This kind of software is sometimes known as
a virtual machine.

Virtualization Characteristics and Benefits Virtualization increases the flexi-
bility of IT assets, allowing companies to consolidate IT infrastructure, reduce maintenance
and administration costs, and prepare for strategic IT initiatives. Virtualization is not primarily
about cost-cutting, which is a tactical reason. More importantly, for strategic reasons, virtual-
ization is used because it enables flexible sourcing and cloud computing.

The characteristics and benefits of virtualization are as follows:

1. Memory-intensive VMs need a huge amount of RAM (random access memory, or pri-
mary memory) because of their massive processing requirements.

2. Energy-efficient VMs minimize energy consumed running and cooling servers in the
data center―representing up to a 95% reduction in energy use per server.

3. Scalability and load balancing When a big event happens, such as the Super Bowl,
millions of people go to a website at the same time. Virtualization provides load balanc-
ing to handle the demand for requests to the site. The VMware infrastructure automati-
cally distributes the load across a cluster of physical servers to ensure the maximum
performance of all running VMs. Load balancing is key to solving many of today’s IT
challenges (a minimal load-balancing sketch follows this list).
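
The sketch below is a deliberately simplified Python illustration of the load-balancing idea (not VMware's actual algorithm): incoming requests are spread across a pool of virtual servers in round-robin order so that no single machine absorbs the whole burst of traffic. The server names are invented.

from itertools import cycle

# Hypothetical pool of virtual servers sitting behind one busy website.
servers = ["vm-01", "vm-02", "vm-03"]
rotation = cycle(servers)          # endlessly rotate through the pool

def route(request_id):
    """Assign each incoming request to the next server in the rotation."""
    return next(rotation)

# Simulate a burst of six requests arriving during a big event.
for request_id in range(6):
    print(f"request {request_id} -> {route(request_id)}")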


Virtualization consolidates servers, which reduces the cost of servers, makes more effi-
cient use of data center space, and reduces energy consumption. All of these factors reduce the
total cost of ownership (TCO). Over a three-year life cycle, a VM costs approximately 75% less to
operate than a physical server. IT at Work 2.4 describes one example of how virtualization can
help organizations provide higher levels of customer service and improve productivity.

IT at Work 2.4

Business Continuity with Virtualization
Liberty Wines supplies restaurants, supermarkets, and indepen-
dent retailers from its headquarters in central London. Recipient
of multiple international wine awards—including the Interna-
tional Wine Challenge on Trade Supplier of the Year for two years
running—Liberty Wines is one of the United Kingdom’s foremost
wine importers and distributors.

IT Problems and Business Needs
As the business expanded, the existing servers did not have the
capacity to handle increased data volumes, and maintenance of
the system put a strain on the IT team of two employees. Existing
systems were slow and could not provide the responsiveness that
employees expected.

Liberty Wines had to speed up business processes to meet
the needs of customers in the fast-paced world of fine dining. To
provide the service their customers expect, employees at Liberty
Wines needed quick and easy access to customer, order, and stock

information. In the past, the company relied on 10 physical servers
for applications and services, such as order processing, reporting,
and e-mail.

Virtualized Solution
Liberty Wines deployed a virtualized server solution incorporating
Windows Server 2008 R2. The 10 servers were replaced with 3 physical
servers, running 10 virtual servers. An additional server was used as
part of a backup system, further improving resilience and stability.

By reducing the number of physical servers from 10 to 4, power
use and air conditioning costs were cut by 60%. Not only was the
bottom line improved, but the carbon footprint was also reduced,
which was good for the environment.

The new IT infrastructure cut hardware replacement costs by
£45,000 (U.S. $69,500) while enhancing stability with the backup
system. Applications now run faster, too, so employees can pro-
vide better customer service with improved productivity. When
needed, virtual servers can be added quickly and easily to support
business growth.

Questions

1. What is SaaS?

2. What is PaaS?

3. What is IaaS?

4. How might companies risk violating regulation or compliance requirements with cloud services?

5. In what ways is a virtualized information system different from a traditional information system?

6. Describe the different types of virtualization.

7. What is load balancing and why is it important?

Key Terms

ad hoc report 34
batch processing 33
cloud computing 52
cloud service agreements (CSAs) 53
customer-centric 47
data 30
data as a service (DaaS) 56
data center 38
data governance 46
data silo 43
database 33
decision support systems (DSS) 32
dirty data 26
enterprise architecture (EA) 26
exception report 34
executive information systems (EISs) 32
goal seeking 35
information 30
information management 42
information systems (ISs) 28
infrastructure as a service (IaaS) 56
IT infrastructure 38
IPOS 28
knowledge 30
management information systems (MIS) 34
master data 46
master data management (MDM) 47
master file 47
model 26
online transaction processing (OLTP) 33
platform as a service (PaaS) 56
private cloud 52
public cloud 52
real-time processing 33
service level agreement (SLA) 61
software as a service (SaaS) 54
software-defined data center (SDDC) 50
stack 56
structured decisions 35
transaction processing systems (TPS) 32
unstructured decisions 35
virtualization 59
virtual machine (VM) 59
what-if analysis 35
wisdom 31

Assuring Your Learning

Discuss: Critical Thinking Questions

1. Why is a strong market position or good profit performance only
temporary?

2. Assume you had:

a. A tall ladder with a sticker that lists a weight allowance only
five pounds more than you weigh. You know the manufacturer
and model number.

b. Perishable food with an expiration date two days into the
future.

c. A checking account balance that indicates you have sufficient
funds to cover the balance due on an account.

In all three cases, trusting the data to be exactly correct could have
negative consequences. Explain the consequences of trusting the data
in each instance. How might you determine the correct data for each
instance? Which data might not be possible to verify? How does dirty data
impact your decision-making?
3. If business data are scattered throughout the enterprise and not
synched until the end of the month, how does that impact day-to-day
decision-making and planning?

4. Assume a bank’s data are stored in silos based on financial
product―checking accounts, saving accounts, mortgages, auto
loans, and so on. What problems do these data silos create for
the bank’s managers?

5. Why do managers and workers still struggle to find information that
they need to make decisions or take action despite advances in digital
technology? That is, what causes data deficiencies?

6. According to a Tech CEO Council Report, Fortune 500 companies
waste $480 billion every year on inefficient business processes. What
factors cause such huge waste? How can this waste be reduced?

7. Explain why organizations need to implement EA and data
governance.

8. What two problems can EA solve?

9. Name two industries that depend on data governance to comply
with regulations or reporting requirements. Give an example of each.

10. Why is it important for data to be standardized? Give an example
of unstandardized data.

11. Why are TPSs critical systems?

12. Discuss why the cloud acts as the great IT delivery frontier.

13. What are the functions of data centers?

14. What factors need to be considered when selecting a cloud vendor?

15. What protection does an effective SLA or CSA provide?

16. Why is an SLA or a CSA a legal document?

17. How can virtualization reduce IT costs while improving performance?

Explore: Online and Interactive Exercises

1. When selecting a cloud vendor to host your enterprise data and
apps, you need to evaluate the service level agreement (SLA).

a. Research the SLAs of two cloud vendors, such as Rackspace,
Amazon, or Google.

b. For the vendors you selected, what are the SLAs’ uptime per-
centages? Expect them to be 99.9% or less.

c. Does each vendor count both scheduled downtime and
planned downtime toward the SLA uptime percentage?

d. Compare the SLAs in terms of two other criteria.

e. Decide which SLA is better based on your comparisons.

f. Report your results and explain your decision.

2. Many organizations initiate data governance programs because of
pressing compliance issues that impact data usage. Organizations may

need data governance to be in compliance with one or more regula-
tions, such as the Gramm−Leach Bliley Act (GLB), HIPAA, Foreign Cor-
rupt Practices Act (FCPA), Sarbanes−Oxley Act, and several state and
federal privacy laws.

a. Research and select two U.S. regulations or privacy laws.

b. Describe how data governance would help an enterprise com-
ply with these regulations or laws.

3. Visit www.eWeek.com Cloud Computing Solutions Center for news
and reviews at www.eweek.com/c/s/Cloud-Computing. Select one
of the articles listed under Latest Cloud Computing News. Prepare an
executive summary of the article.

4. Visit Rackspace.com and review the company’s three types of cloud
products. Describe each of those cloud solutions.


5. Visit Oracle.com. Describe the types of virtualization services of-
fered by Oracle.

6. Visit YouTube.com and search for two videos on virtualization. For

each video, report what you learned. Specify the complete URL, video
title, who uploaded the video and the date, video length, and num-
ber of views.

Analyze & Decide: Apply IT Concepts to Business Decisions

1. Financial services firms experience large fluctuations in business
volumes because of the cyclical nature of financial markets. These
fluctuations are often caused by crises―such as the subprime mort-
gage problems, the discovery of major fraud, or a slowdown in the
economy. These fluctuations require that executives and IT leaders
have the ability to cut spending levels in market downturns and
quickly scale up when business volumes rise again. Research SaaS
solutions and vendors for the financial services sector. Would invest-
ment in SaaS help such firms align their IT capacity with their business
needs and also cut IT costs? Explain your answer.

2. Despite multimillion-dollar investments, many IT organizations
cannot respond quickly to evolving business needs. Also, they cannot
adapt to large-scale shifts like mergers, sudden drops in sales, or
new product introductions. Can cloud computing help organizations
improve their responsiveness and get better control of their IT costs?
Explain your answer.

3. Describe the relationship between enterprise architecture and
organizational performance.

4. Identify four KPIs for a major airline (e.g., American, United, Delta)
or an automobile manufacturer (e.g., GM, Ford, BMW). Which KPI
would be the easiest to present to managers on an online dashboard?
Explain why.

Case 2.2
Business Case: Data Chaos Creates Risk
Data chaos often runs rampant in service organizations, such as health
care and the government. For example, in many hospitals, each line
of business, division, and department has implemented its own IT
applications, often without a thorough analysis of its relationship with
other departmental or divisional systems. This arrangement leads
to the hospital having IT groups that specifically manage a particu-
lar type of application suite or data silo for a particular department
or division.

Data Management
When applications are not well managed, they can generate terabytes
of irrelevant data, causing hospitals to drown in such data. This data
chaos could lead to medical errors. In the effort to manage excessive
and massive amounts of data, there is increased risk of relevant infor-
mation being lost (missing) or inaccurate—that is, faulty or dirty data.
Another risk is data breaches.

• Faulty data By 2015, 96% of health-care organizations had
adopted electronic health records, or EHRs (Office of the National
Coordinator for HIT, 2016). It is well known that an unintended
consequence of EHR is faulty data. According to a study pub-
lished in the Journal of the American Medical Association, data in
EHR systems may not be as accurate and complete as expected
(Conn, 2016). Incorrect lab values, imaging results, or physi-
cian documentation lead to medical errors, harm patients, and
damage the organization’s accreditation and reputation.

• Data breaches More than 25 million people have been affected
by health-care system data breaches since the Office for Civil
Rights, a division of the U.S. Department of Health and Human
Services, began reporting breaches in 2009. Most breaches
involved lost or stolen data on laptops, removable drives, or
other portable media. Breaches are extremely expensive and
destroy trust.

Accountability in health-care demands compliance with strong
data governance efforts. Data governance programs verify that data
input into EHR, clinical, financial, and operational systems are accu-
rate and complete—and that only authorized edits can be made
and logged.

Vanderbilt University Medical Center Adopts EHR
and Data Governance
Vanderbilt University Medical Center (VUMC) in Nashville, TN, was
an early adopter of EHR and implemented data governance in 2009.
VUMC’s experience provides valuable lessons.

VUMC consists of three hospitals and the Vanderbilt Clinic, which
have 918 beds, discharge 53,000 patients each year, and count 1.6 mil-
lion clinic visits each year. On average, VUMC has an 83% occupancy
rate and has achieved HIMSS Stage 6 hospital EHR adoption. HIMSS
(Healthcare Information and Management Systems Society, himss.org)
is a global, nonprofit organization dedicated to better health-care out-
comes through IT. There are seven stages of EHR adoption, with Stage
7 being a fully paperless environment. That means all clinical data are
part of an electronic medical record and, as a result, can be shared

across and outside the enterprise. At Stage 7, the health-care organi-
zation is getting full advantage of the health information exchange
(HIE). HIE provides interoperability so that information can flow back
and forth among physicians, patients, and health networks (NextGen
Healthcare, 2016).

VUMC began collecting data as part of its EHR efforts in 1997. By
2009, the center needed stronger, more disciplined data management. At
that time, hospital leaders initiated a project to build a data governance
infrastructure.

Data Governance Implementation
VUMC’s leadership team had several concerns.

1. IT investments and tools were evolving rapidly, but they were
not governed by HIM (Healthcare Information and Manage-
ment) policies.

2. As medical records became electronic so they might be trans-
mitted and shared easily, they became more vulnerable
to hacking.

3. As new uses of electronic information were emerging, the medi-
cal center struggled to keep up.

Health Record Executive Committee
Initially, VUMC’s leaders assigned data governance to their traditional
medical records committee, but that approach failed. Next, they hired
consultants to help develop a data governance structure and organ-
ized a health record executive committee to oversee the project. The
committee reports to the medical board and an executive commit-
tee to ensure executive involvement and sponsorship. The commit-
tee is responsible for developing the strategy for standardizing health
record practices, minimizing risk, and maintaining compliance. Mem-
bers include the chief medical information officer (CMIO), CIO, legal
counsel, medical staff, nursing informatics, HIM, administration, risk
management, compliance, and accreditation. In addition, a legal
medical records team was formed to support additions, corrections,
and deletions to the EHR. This team defines procedures for removal of

duplicate medical record numbers and policies for data management
and compliance.

Costs of Data Failure
Data failures incur the following costs:

• Rework

• Loss of business

• Patient safety errors

• Malpractice lawsuits

• Delays in receiving payments because billing or medical codes
data are not available.

Benefits Achieved from Data Governance
As in other industries, in health care, data are the most valuable asset.
The handling of data is the real risk. EHRs are effective only if the data
are accurate and useful to support patient care. Effective ongoing data
governance has achieved that goal at VUMC.

Questions
1. What might happen when each line of business, division, and

department develops its own IT apps?

2. What are the consequences of poorly managed apps?

3. What two risks are posed by data chaos? Explain why.

4. What are the functions of data governance in the health-care
sector?

5. Why is it important to have executives involved in data gover-
nance projects?

6. List and explain the costs of data failure.

7. Why are data the most valuable asset in health care?

Sources: Compiled from NextGen Healthcare (2016), Office of the National Coor-
dinator for HIT (2016), and Conn (2016).

Case 2.3
Video Case: Cloud Computing at Coca-Cola Is
Changing Everything
When organizations say they are “using the cloud,” they can mean
a number of very different things. Using an IaaS service such as
Amazon EC2 or Terremark is different from using Google Apps to
outsource e-mail, which is different again from exposing an API
in Facebook.

In this video, Alan Boehme, CIO of the Coca-Cola Company, discusses
how Coca-Cola uses cloud computing to more effectively interact with its
customers and describes the challenges Coca-Cola is facing in establish-
ing SaaS partnerships with new start-ups.

Complete these three steps:

1. Visit https://www.youtube.com/watch?v=hCxmsSED2DY
2. View the 13-minute video.

3. Answer each of the three parts of the following question.

Question
1. Explain the value of Coca-Cola’s cloud partnerships with start-up

companies to:
a. Coca-Cola

b. The start-up companies

c. Coca-Cola’s customers



IT Toolbox

Accurately Measuring the Value of Data
Governance
When developing a data governance program, it’s important to pre-
sent a strong business case to get buy-in from top executives and
stakeholders. A crucial part of the business case is an estimate of
the data governance program’s return on investment (ROI) to show
how it will add value to the company. You will need to justify the
ROI based on both business and IT strategy to ensure that available
funds are used to best meet the business objectives.

To do this you will need to carefully analyze the IT infrastruc-
ture with regard to how different components of the IT infrastructure
work together to support business processes, how data needed by
one system can be received and used by another, and how easily data can
be communicated and/or repurposed. You will also need to factor in
risks and adverse events such as costs associated with rework in data
collection, costs associated with unreliable or unfit data, and delays
associated with untimely or unavailable data. Now, all of these costs
must be quantified and your level of confidence in the corporate data
has to be calculated to ensure your business case accurately reflects
the value of a data governance program.

One metric used to make this calculation is the confidence in data-
dependent assumptions metric, or CIDDA (Reeves & Bowen, 2013). The
CIDDA identifies specific areas of deficiency.

So, to sum up, when building a data governance model, it is
necessary to:

1. Establish a leadership team

2. Define the program’s scope

3. Calculate the ROI using the CIDDA.

CIDDA is computed by multiplying three confidence estimates
using the following formula:

CIDDA = G × M × TS

where

G = Confidence that data are good enough for their intended
purpose

M = Confidence that data mean what you think they do

TS = Confidence that you know where the data come from and
trust the source.

CIDDA is a subjective metric for which there are no industry
benchmarks, yet it can be evaluated over time to gauge improve-
ments in data quality confidence.

To ensure your understanding of this IT Toolbox item, calculate
the CIDDA of Company A over time, using the stated levels of confi-
dence in the different aspects of its corporate data over Q1–Q4 2017:

Q1_2017: G = 40%, M = 50%, TS = 20%
Q2_2017: G = 50%, M = 55%, TS = 30%
Q3_2017: G = 60%, M = 60%, TS = 40%
Q4_2017: G = 60%, M = 70%, TS = 45%
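
As a quick check on your own answers, the short Python sketch below multiplies out the three confidence estimates for each quarter, assuming the stated percentages convert directly to decimal confidence levels.

# Confidence levels (G, M, TS) for Company A, expressed as decimals.
quarters = {
    "Q1_2017": (0.40, 0.50, 0.20),
    "Q2_2017": (0.50, 0.55, 0.30),
    "Q3_2017": (0.60, 0.60, 0.40),
    "Q4_2017": (0.60, 0.70, 0.45),
}

for quarter, (g, m, ts) in quarters.items():
    cidda = g * m * ts                      # CIDDA = G x M x TS
    print(f"{quarter}: CIDDA = {cidda:.1%}")

# Prints roughly 4.0%, 8.3%, 14.4%, and 18.9% -- confidence improves each quarter.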

References
Bloomberg, J. “Change as Core Competency: Transforming the Role

of the Enterprise Architect.” Forbes, June 16, 2016.
Cailean, I. “What Role Do Algorithms Play in Programmatic Advertis-

ing?” Trade Mod, January 6, 2016. http://www.trademob.com/what-
role-do-algorithms-play-in-programmatic-advertising

Cloud Standards Customer Council. Practical Guide to Cloud Service
Agreements, Version 2.0. April 2015. http://www.cloud-council.org/
deliverables/CSCC-Practical-Guide-to-Cloud-Service-Agreements

Conn, J. “EHRs vs. Paper: A Split-decision on Accuracy.” Modern
Healthcare, July 8, 2016.

Fresht, P. “The Ten Tenets Driving the As-a-service Economy.” Horses
for Sources, October 6, 2014. http://www.horsesforsources.com/as
a service-economy_100614

IBM. “What is Cloud Computing?” IBM, June 6, 2016. https://www.
ibm.com/cloud-computing/learn-more/what-is-cloud-computing

Jarousse, L. A. “Information Governance for Hospitals.” Hospitals &
Health Networks, February 18, 2016.

Keitt, T. J. “Collaboration Technology Should Be Part of Your Cus-
tomer Experience Tool Kit.” Forrester.com, June 30, 2014.

Lunden, I. “Enterprise Chat App Slack Ties up with Salesforce in
a Deep Product Partnership.” Tech Crunch, September 27, 2016.
https://techcrunch.com/2016/09/27/enterprise-chat-app-slack-ties-
up-with-salesforce-in-a-deep-platform-partnership

Marchese, L. “How the ‘Silo Effect’ Is Hurting Cross Team Collabora-
tion.” Trello, May 10, 2016.

NextGen Healthcare. “Health Information Exchange (HIE).” NextGen
Healthcare, March 31, 2016.

Office of the National Coordinator for Health Information Technology.
“Percent of Hospitals, By Type, that Possess Certified Health IT.”
Office of the National Coordinator for Health Information Technology,
May 31, 2016.

Porter, M. Competitive Advantage: Creating and Sustaining Superior
Performance. Free Press, 1998.

Rai, R., G. Sahoo, and S. Mehfuz. “Exploring the Factors Influencing
the Cloud Computing Adoption: A Systematic Study on Cloud Migra-
tion.” Springerplus, April 25, 2015, 4, 197.

Reeves, M. G. and R. Bowen. “Developing a Data Governance Model
in Health Care.” Healthcare Financial Management, February 2013,
67(2): 82–86.

Schneider, M. “Case Study: How MEDIATA Increased Campaign Perfor-
mance with Hyperlocal Targeting.” Skyhook Wireless, July 22, 2014.

Schneider, M. “Solving the Dirty Data Problem in Location-Based
Advertising.” Street Fight, January 7, 2015.

Shore, J. “Cloud-Based Integration Seeks to Tear Down Data Silos.”
Tech Target, August 19, 2015.

Sturm, R., C. Pollard, and J. Craig. Application Performance Manage-
ment in the Digital Enterprise. Elsevier, March 2017.

Zuckerman, M.P.H., Sheingold, Ph.D., Orav, Ph.D., Ruhter, M.P.P.,
M.H.S.A., and Epstein, M.D. “Readmissions, Observation, and the
Hospital Readmissions Reduction Program.” The New England Jour-
nal of Medicine, April 21, 2016.


CHAPTER 3

Data Management, Data Analytics,
and Business Intelligence

LEARNING OBJECTIVES

3.1 Describe the purpose and benefits of data management
and how database technologies help support business
processes.

3.2 Describe the differences between centralized and
distributed database architectures and the importance of
creating and maintaining data that can be trusted.

3.3 Understand the concepts of data analytics and
data warehousing and evaluate their tactical and
strategic benefits.

3.4 Explain benefits of data and text mining and business
intelligence and how they benefit an organization.

3.5 Describe electronic records management and how it helps
companies to meet their compliance, regulatory, and legal
obligations.

CHAPTER OUTLINE

Case 3.1 Opening Case: Coca-Cola Strategically
Manages Data to Retain Customers and
Reduce Costs

3.1 Data Management and
Database Technologies

3.2 Centralized and Distributed Database
Architectures

3.3 Data Warehouses

3.4 Data Analytics and Data Discovery

3.5 Business Intelligence and Electronic
Records Management

Case 3.2 Business Case: Big Data Analytics is the
“Secret Sauce” for Revitalizing McDonald’s

Case 3.3 Video Case: Verizon Improves its
Customer Experience with Data-Driven
Decision-Making


Introduction
As discussed in Chapter 2, collecting and maintaining trusted data is a critical aspect of any
business. Knowing how and where to find data, store it efficiently, analyze it in new ways to
increase the organization’s competitive advantage, and enable the right people to access it at
the right time are all fundamental components of managing the ever-increasing amounts of
corporate data. Indeed, data analytics is the primary differentiator when doing business in the
21st century. Transactional, social, mobile, cloud, Web, and sensor data offer enormous poten-
tial. But without tools to analyze these data types and volumes, there would not be much dif-
ference between business in the 20th century and business today—except for mobile access.
High-quality data and human expertise are essential to the value of analytics.

Human expertise is necessary because analytics alone cannot explain the reasons for
trends or relationships; know what action to take; or provide sufficient context to determine
what the numbers represent and how to interpret them.

Database, data warehouse, data analytics, and business intelligence (BI) technologies
interact to create a new biz-tech ecosystem. Data analytics and BI discover insights or rela-
tionships of interest that otherwise might not have been recognized. They make it possible for
managers to make decisions and act with clarity, speed, and confidence. Data analytics is not
just about managing more or varied data. Rather, it is about asking new questions, formulating
new hypotheses, exploration and discovery, and making data-driven decisions. Ultimately, a
big part of data analysis efforts is the use of new analytics techniques.

Mining data or text taken from day-to-day business operations reveals valuable
information, such as customers’ desires, products that are most important, or processes that
can be made more efficient. These insights expand the ability to take advantage of opportu-
nities, minimize risks, and control costs.

While you might think that physical pieces of paper are a relic of the past, in most offices
the opposite is true. Aberdeen Group’s survey of 176 organizations worldwide found that the
volume of physical documents is growing by up to 30% per year. Document management tech-
nology archives digital and physical data to meet business needs, as well as regulatory and
legal requirements (Eisenhauer, 2015).

Case 3.1 Opening Case

© Katherine Welles/Shutterstock; Fernando Madeira/Shutterstock; guyerwood/Getty Images

Coca-Cola Strategically Manages Data to Retain
Customers and Reduce Costs

Coca-Cola’s Data Management Challenges
The Coca-Cola Company is a Fortune 100 company with over $43.7
billion in sales revenue and $7.35 billion in profit (Figure 3.1). The
market leader manages and analyzes several petabytes (PB) of
data generated or collected from more than 500 brands and con-
sumers in 206 countries. To understand the size of one petabyte of
data, it would take 223,000 DVDs (4.7 GB each) to hold 1 PB of data!

Coca-Cola’s bottling partners provide sales and shipment data,
while retail customers transmit transaction and merchandising data.
Other data sources are listed in Table 3.1. Before the introduction of
its newest BI system, Coca-Cola knew there were BI opportunities
in the mountains of data its bottlers were storing, but finding and
accessing all of that data for analytics proved to be nearly impossi-
ble. The disparate data sources caused long delays in getting analyt-
ics reports from IT to sales teams. The company decided to replace
the legacy software at each bottling facility and standardize them
on a new BI system—a combination of MicroStrategy and Microsoft
BI products.


Brand: World’s largest nonalcoholic beverage company with more than 500 brands of beverages,
ready-to-drink coffees, juices, and juice drinks; has the world’s largest beverage distribution
system, with consumers in more than 200 countries; products consumed at a rate of 1.9 billion
servings a day worldwide.

Business Ethics & Sustainability: Focused on initiatives that reduce their environmental footprint;
support active, healthy living; create a safe work environment; and enhance the economic
development of the communities where they operate.

Digital Technology: Centralized database; enterprise data warehouse (EDW); big data analytics;
decision models; 70 million Facebook followers.

FIGURE 3.1 The Coca-Cola Company overview.

TABLE 3.1 Opening Case Overview

Company • The Coca-Cola Company, www.coca-cola.com
• Sustainability: www.coca-colacompany.com/sustainability
• $43.7 billion in sales revenue and profits of $7.35 billion, 2016

Industry • The global company manufactures, sells, and distributes nonalcoholic beverages

Product lines • More than 500 brands of still and sparkling beverages, ready-to-drink coffees, juices, and juice drinks

Digital technology • Enterprise data warehouse (EDW)
• Big data and analytics
• Business intelligence
• In 2014, moved from a decentralized approach to a centralized approach, where the data are combined
centrally and available via the shared platforms across the organization

Business challenges • Coca-Cola had 74 unique databases, many of which used different software to store and analyze data. Dealing
with incompatible databases and reporting systems was a major problem. Coca-Cola had to take a strategic
approach instead of a tactical approach with big data

Global data sources • Transaction and merchandising data
• Data from nationwide network of more than 900 bottlers and manufacturing facilities
• Multichannel retail data
• Customer profile data from loyalty programs
• Social media data
• Supply chain data
• Competitor data
• Sales and shipment data from bottling partners

Taglines “Taste the feeling!”

Website www.coca-cola.com

Enterprise Data Management
Like most global companies, Coca-Cola relies on sophisticated enter-
prise data management, BI, and analytic technologies to sustain its
performance in fiercely competitive markets. Data are managed in a
centralized database. They use data warehousing, data analytics,

data modeling, and social media to respond to competitors’ activity,
market changes, and consumer preferences.

To support its business strategy and operations, Coca-Cola
changed from a decentralized database approach to a centralized
database approach (Figure 3.2). Now its data are combined centrally


and accessible via shared platforms across the organization to help
its major retail customers, such as Walmart, sell more Coca-Cola
products and to improve the consumer experience. The company also
implemented a data governance program to ensure that cultural data
sensitivities are respected.

Sustaining Business Performance
All data are standardized through a series of master data management
(MDM) processes. An enterprise data warehouse (EDW) generates a sin-
gle view of all multichannel retail data and creates a trusted view of
customers, sales, and transactions. This enables Coca-Cola to respond
quickly and accurately to changes in market conditions.

Throughout Coca-Cola huge volumes of data are analyzed to make
more and better time-sensitive, critical decisions about products, shop-
per marketing, the supply chain, and production. Point-of-sale (POS)
data are captured from retail channels and communicated via a central-
ized iPad reporting system to created customer profiles. POS data are
analyzed to support collaborative planning, forecasting, and replenish-
ment processes within its supply chain.

Coca-Cola’s Approach to Big Data and Decision Models Coca-
Cola takes a strategic approach instead of a tactical approach to big
data. The company is far advanced in the use of big data to manage
its products, sales revenue, and customer experiences in near real
time and reduce costs. For example, it cut overtime costs almost in
half by analyzing service center data. Big data help Coca-Cola relate
to its millions of Facebook followers—many of whom bolster the Coke
brand.

Big data play a key role in ensuring that its orange juice tastes the
same year-round and is readily available anywhere in the world. Oranges
used by Coca-Cola have a peak growing season of only three months.
Producing orange juice with a consistent taste year-round despite the

inconsistent quality of the orange supply is complex. To deal with this
complexity, an orange juice decision model was developed, the Black
Book model. A decision model quantifies the relationship between vari-
ables to reduce uncertainty. Black Book combines detailed data on the
600 flavors that make up an orange, weather, customer preferences,
expected crop yields, cost pressures, regional consumer preferences, and
acidity or sweetness rate. The model specifies how to blend the orange
juice to create a consistent taste. Coke’s Black Book juice model is consid-
ered one of the most complex business analytics apps. It requires ana-
lyzing up to 1 quintillion (10^18) decision variables to consistently deliver
the optimal blend.
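
The Black Book model itself is proprietary, but a toy example helps show what a decision model does: it quantifies a relationship between variables so a blending question can be answered by calculation rather than guesswork. The Python sketch below, with invented sweetness numbers, solves for the mix of two juice batches that hits a target sweetness; the real model answers far richer versions of this question at enormous scale.

# Invented sweetness scores for two orange-juice batches (illustrative only).
batch_a = 12.5      # sweeter, peak-season fruit
batch_b = 10.0      # tarter, off-season fruit
target = 11.5       # the consistent taste consumers expect

# For a blend of x parts A and (1 - x) parts B, solve x so the blend hits the target.
x = (target - batch_b) / (batch_a - batch_b)

print(f"Blend {x:.0%} of batch A with {1 - x:.0%} of batch B")   # -> 60% A, 40% B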

With the power of big data and decision models, Coca-Cola is
prepared for disruptions in supply far in advance. According to Doug
Bippert, Coca-Cola’s vice president of business acceleration, “If we have
a hurricane or a freeze, we can quickly re-plan the business in 5 or 10
minutes just because we’ve mathematically modeled it” (www.BusinessIntelligence.com, 2013b).

Questions
1. Why does the Coca-Cola Company have petabytes of data?

2. Why is it important for Coca-Cola to be able to process POS data
in near real time?

3. How does Coca-Cola attempt to create favorable customer
experiences?

4. What is the importance of having a trusted view of the data?

5. What is the benefit of a decision model?

6. What is the Black Book model?

7. Explain the strategic benefit of the Black Book model.

Sources: Compiled from Burns (2013), BusinessIntelligence.com (2013),
CNNMoney (2014), HBS (2015), Liyakas (2015), and Ransbothom (2015).

alphaspirit/Shutterstock

FIGURE 3.2 Data from online and offline transactions are stored in databases. Data about
entities such as customers, products, orders, and employees are stored in an organized way.


3.1 Data Management and Database
Technologies
Due to the incredible volume of data that the typical organization creates, effective data manage-
ment is vital to prevent storage costs from spiraling out of control, to control data growth, and
to support greater performance. Data management oversees the end-to-end lifecycle of
data from creation and initial storage to the time when it becomes obsolete and is deleted.

The objectives of data management include the following:

1. Mitigating the risks and costs of complying with regulations.
2. Ensuring legal requirements are met.
3. Safeguarding data security.
4. Maintaining accuracy of data and availability.
5. Certifying consistency in data that come from or go to multiple locations.
6. Ensuring that data conform to organizational best practices for access, storage, backup,

and disposal.

Typically, newer data and data that are accessed more frequently are stored on faster but
more expensive storage media, while less critical data are stored on cheaper, slower media.

The main benefits of data management include greater compliance, higher security, less
legal liability, improved sales and marketing strategies, better product classification, and
improved data governance to reduce risk. The following data management technologies keep
users informed and support the various business demands:

• Databases store data generated by business apps, sensors, operations, and transaction-
processing systems (TPS). Data in some databases can be extremely volatile. Medium
and large enterprises typically have many databases of various types—centralized and
distributed.

• Data warehouses integrate data from multiple databases and data silos across the orga-
nization, and organize them for complex analysis, knowledge discovery, and to support
decision-making. For example, data are extracted from a database, processed to stan-
dardize their format, and then loaded into data warehouses at specific times, such as
weekly. As such, data in data warehouses are nonvolatile—and are ready for analysis.

• Data marts are small-scale data warehouses that support a single function or one
department. Enterprises that cannot afford to invest in data warehousing may start with
one or more data marts.

• Business intelligence (BI)—tools and techniques process data and do statistical analysis
for insight and discovery—that is, to discover meaningful relationships in the data, keep
informed in real time, detect trends, and identify opportunities and risks.

Each of these database management technologies will be discussed in greater detail later
in this chapter.

Database Management Systems and SQL
Data-processing techniques, processing power, and enterprise performance management
capabilities have undergone revolutionary advances in recent years for reasons you are already
familiar with—big data, mobility, and cloud computing. The last decade, however, has seen the
emergence of new approaches, first in data warehousing and, more recently, for transaction
processing. Given the huge number of transactions that occur daily in an organization, the data
in databases are constantly in use or being updated. The volatility of databases makes it impos-
sible to use them for complex decision-making and problem-solving tasks. For this reason, data
are extracted from the database, transformed (processed to standardize the data), and then
loaded into a data warehouse.

Data management is the
management of the flow of data
from creation and initial storage
to the time when the data become
obsolete and are deleted.

Databases are collections of
data sets or records stored in a
systematic way.
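
The extract-transform-load flow just described can be sketched in a few lines. The Python example below is a minimal, hypothetical illustration using SQLite for both sides (invented table and column names): volatile transaction rows are extracted from the operational database, transformed into standardized daily summaries, and loaded into a separate warehouse table that analysts can query without touching the live system.

import sqlite3

# In-memory stand-ins for the operational database and the data warehouse.
oltp = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

oltp.execute("CREATE TABLE sales (sale_date TEXT, amount REAL)")
oltp.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("2017-03-01", 19.99), ("2017-03-01", 5.50), ("2017-03-02", 7.25)])
warehouse.execute("CREATE TABLE daily_sales (sale_date TEXT, total REAL)")

# Extract the raw transactions and transform them into a standardized daily summary.
rows = oltp.execute(
    "SELECT sale_date, SUM(amount) FROM sales GROUP BY sale_date").fetchall()

# Load the cleaned, summarized rows into the warehouse for analysis.
warehouse.executemany("INSERT INTO daily_sales VALUES (?, ?)", rows)
print(warehouse.execute("SELECT * FROM daily_sales").fetchall())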


Database management systems (DBMSs) integrate with data collection systems such as
TPS and business applications; store the data in an organized way; and provide facilities for
accessing and managing that data. Factors to consider when evaluating the performance of a
database management system are listed in Tech Note 3.1. Over the past 25 years, the relational
database has been the standard database model adopted by most enterprises. Relational data-
bases store data in tables consisting of columns and rows, similar to the format of a spreadsheet,
as shown in Figure 3.3.

Database management
systems (DBMSs) are software
used to manage the additions,
updates, and deletions of data as
transactions occur, and to support
data queries and reporting. They
are online transaction-processing
(OLTP) systems.

© Alexander Fediachov/Alamy

FIGURE 3.3 Illustration of structured data format. Numeric
and alphanumeric data are arranged into rows and predefined
columns similar to those in an Excel spreadsheet.

Tech Note 3.1

Factors That Determine the Performance
of a DBMS
Factors to consider when evaluating the performance of a database
management system include:

• Data latency Latency is the elapsed time (or delay) between
when data are created and when they are available for a query
or report. Applications have different tolerances for latency.
Database systems tend to have shorter latency than data ware-
houses. Short latency imposes more restrictions on a system.

• Ability to handle the volatility of the data The database
must have the processing power to handle the volatility of the data.
The rates at which data are added, updated, or deleted deter-
mine the workload that the database must be able to control
to prevent problems with the response rate to queries.

• Query response time The volume of data impacts response
times to queries and data explorations. Many databases

pre-stage data—that is, summarize or precalculate results—so
queries have faster response rates (a pre-staging sketch follows this Tech Note).

• Data consistency Immediate consistency means that as
soon as data are updated, responses to any new query will
return the updated value. With eventual consistency, not all
query responses will reflect data changes uniformly. Inconsis-
tent query results could cause serious problems for analyses
that depend on accurate data.

• Query predictability The greater the number of ad hoc or
unpredictable queries, the more flexible the database needs to
be. Database or query performance management is more diffi-
cult when the workloads are so unpredictable that they cannot
be prepared for in advance. The ability to handle the workload
is the most important criterion when choosing a database.

• Query processing capabilities Database queries are pro-
cessed in real time and results are transmitted via wired or
wireless networks to computer screens or handheld devices.

Queries are ad hoc (unplanned)
user requests for specific data.
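
To illustrate the pre-staging point from Tech Note 3.1 (summarize or precalculate results ahead of time so queries come back quickly), the hypothetical Python sketch below computes daily order totals once, so every later query is a simple lookup instead of a fresh scan of the raw records.

from collections import defaultdict

# Raw, volatile transaction records (invented sample data).
orders = [("2017-05-01", 40.0), ("2017-05-01", 15.0), ("2017-05-02", 22.5)]

# Pre-stage: build the summary once, before any queries arrive.
daily_totals = defaultdict(float)
for order_date, amount in orders:
    daily_totals[order_date] += amount

def query_daily_total(order_date):
    """Answer from the precalculated summary -- no rescan of the raw data."""
    return daily_totals.get(order_date, 0.0)

print(query_daily_total("2017-05-01"))   # 55.0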

Relational database management systems (RDBMSs) provide access to data using a declarative
language—structured query language (SQL). Declarative languages simplify data access by
requiring that users only specify what data they want to access without defining how access
will be achieved. The format of a basic SQL statement is

SELECT column_name(s)
FROM table_name
WHERE condition

An instance of SQL is shown in Figure 3.4.

Structured query language
(SQL) is a standardized query
language for accessing databases.


© Piotr Adamowicz/Shutterstock

FIGURE 3.4 An instance of SQL to access employee information based on date of hire.
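
In the spirit of Figure 3.4, the minimal Python sketch below uses the built-in sqlite3 module to run a SELECT that retrieves employee information based on date of hire. The table, columns, and sample rows are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")       # throwaway in-memory database
conn.execute("CREATE TABLE employees (name TEXT, hire_date TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ava", "2014-06-01"), ("Ben", "2016-09-15"), ("Cara", "2012-01-20")])

# Declarative query: state WHAT rows are wanted, not HOW to find them.
cursor = conn.execute(
    "SELECT name, hire_date FROM employees WHERE hire_date >= '2015-01-01'")
for name, hire_date in cursor:
    print(name, hire_date)               # prints only Ben's record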

Data Filtering and Profiling: Process and store data efficiently. Inspect data for errors,
inconsistencies, redundancies, and incomplete information.

Data Integrity and Maintenance: Correct, standardize, and verify the consistency and integrity
of the data.

Data Synchronization: Integrate, match, or link data from disparate sources.

Data Security: Check and control data integrity over time.

Data Access: Provide authorized access to data in both planned and ad hoc ways within
acceptable time.

FIGURE 3.5 DBMS functions.

DBMS Functions An accurate and consistent view of data throughout the enterprise is
needed so one can make informed, actionable decisions that support the business strategy.
Functions performed by a DBMS to help create such a view are shown in Figure 3.5.

Online Transaction Processing and Online Analytics Processing When
most business transactions occur—for instance, an item is sold or returned, an order is sent or
cancelled, a payment or deposit is made—changes are made immediately to the database.
These online changes are additions, updates, or deletions. DBMSs record and process transac-
tions in the database, and support queries and reporting. Given their functions, DBMSs are
referred to as online transaction processing (OLTP) systems. OLTP is a database design that

Online transaction processing
(OLTP) systems are designed to
manage transaction data, which
are volatile.


breaks down complex information into simpler data tables to strike a balance between
transaction-processing efficiency and query efficiency. OLTP databases process millions of
transactions per second. However, databases cannot be optimized for data mining, complex
online analytics processing (OLAP) systems, and decision support. These limitations led to
the introduction of data warehouse technology. Data warehouses and data marts are optimized
for OLAP, data mining, BI, and decision support. OLAP is a term used to describe the analysis of
complex data from the data warehouse. In summary, databases are optimized for extremely fast
transaction processing and query processing. Data warehouses are optimized for analysis.
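As a rough illustration of the contrast, the sketch below runs an OLTP-style update of a single record and an OLAP-style aggregation over accumulated history against the same toy table; the orders table and its values are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, item TEXT, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(101, "widget", 19.99, "sold"), (102, "widget", 19.99, "sold"), (103, "gadget", 5.00, "sold")],
)

# OLTP-style work: a small, immediate change to one record (an item is returned).
conn.execute("UPDATE orders SET status = 'returned' WHERE order_id = 101")

# OLAP-style work: summarizing the accumulated data for analysis.
summary = conn.execute(
    "SELECT item, COUNT(*), SUM(amount) FROM orders WHERE status = 'sold' GROUP BY item ORDER BY item"
).fetchall()
print(summary)   # [('gadget', 1, 5.0), ('widget', 1, 19.99)]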

DBMS and Data Warehousing Vendors
Respond to Latest Data Demands
One of the major drivers of change in the data management market is the increased amount
of data to be managed. Enterprises need powerful DBMSs and data warehousing solutions,
analytics, and reporting. The four vendors that dominate this market—Oracle, IBM, Microsoft,
and Teradata—continue to respond to evolving data management needs with more intelligent
and advanced software and hardware. Advanced hardware technology enables scaling to much
higher data volumes and workloads than previously possible, or it can handle specific work-
loads. Older general-purpose relational databases DBMSs lack the scalability or flexibility for
specialized or very large workloads, but are very good at what they do.

Trend Toward NoSQL Systems RDBMSs are still the dominant database
engines, but the trend toward NoSQL (short for “not only SQL”) systems is clear. NoSQL sys-
tems increased in popularity by 96% from 2014 to 2016. Although NoSQL have existed for
as long as relational DBMS, the term itself was not introduced until 2009. That was when
many new systems were developed in order to cope with the unfolding requirements for
DBMS—namely, handling big data, scalability, and fault tolerance for large Web applications.
Scalability means the system can increase in size to handle data growth or the load of an
increasing number of concurrent users. To put it differently, scalable systems efficiently meet
the demands of high-performance computing. Fault tolerance means that no single failure
results in any loss of service.

NoSQL systems are such a heterogeneous group of database systems that attempts to
classify them are not very helpful. However, their general advantages are the following:

• higher performance
• easy distribution of data on different nodes, which enables scalability and fault tolerance
• greater flexibility
• simpler administration
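To make the flexibility point concrete, the sketch below uses plain Python dictionaries as stand-ins for documents in a NoSQL document store; no particular NoSQL product or API is implied, and the records are invented.

# In a relational table, every row must fit one fixed schema. In a document store,
# records in the same collection can carry different fields.
customer_documents = [
    {"id": 1, "name": "Acme Corp", "industry": "manufacturing"},
    {"id": 2, "name": "J. Smith", "loyalty_tier": "gold", "mobile_app_user": True},
]

# The application, not a rigid database schema, decides how to handle missing fields.
for doc in customer_documents:
    print(doc["name"], "- loyalty tier:", doc.get("loyalty_tier", "none"))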

Starting in 2010 and continuing through 2016, Microsoft has been working on the first
rewrite of SQL Server’s query execution since Version 7 was released in 1998. The goal is to offer
NoSQL-like speeds without sacrificing the capabilities of a relational database.

With most NoSQL offerings, the bulk of the cost does not lie in acquiring the database, but
rather in implementing it. Data need to be selected and migrated (moved) to the new database.
Microsoft hopes to reduce these costs by offering migration solutions.

DBMS Vendor Rankings The top five enterprise database systems of 2016 are Oracle’s
12c Database, Microsoft SQL Server, IBM DB2, SAP Sybase ASE, and PostgreSQL:

1. Oracle 12c Database consolidates and manages databases as cloud services via Oracle’s
multitenant architecture and in-memory data processing capabilities and can be rapidly
provisioned.


2. Microsoft SQL Server's ease of use, availability, and Windows operating system integration make it an easy choice for firms that choose Microsoft products for their enterprises.

3. IBM DB2 is widely used in large data centers and runs on Linux, UNIX, Windows, IBM iSeries,
and mainframes.

4. SAP Sybase ASE is a major force after 25 years of success and improvements. It supports partition locking, relaxed query limits, query plan optimization, and dynamic thread assignment.

5. PostgreSQL is the most advanced open source database, often used by online gaming applications and by companies such as Skype, Yahoo!, and MySpace. This database runs on a wide variety of operating systems including Linux, Windows, FreeBSD, and Solaris.

Questions

1. Describe a database and a database management system (DBMS).

2. Explain what an online transaction-processing (OLTP) system does.

3. Why are data in databases volatile?

4. Describe the functions of a DBMS.

5. Describe the purpose and benefits of data management.

6. What is a relational database management system?

3.2 Centralized and Distributed Database
Architectures
Databases can be centralized or distributed, as shown in Figure 3.6. Both types of databases
need one or more backups and should be archived on- and offsite in case of a crash or secu-
rity incident.

For decades the main database platform consisted of centralized database files on
massive mainframe computers. Benefits of centralized database configurations include the
following:

1. Better control of data quality Data consistency is easier when data are kept in one
physical location because data additions, updates, and deletions can be made in a super-
vised and orderly fashion.

2. Better IT security Data are accessed via the centralized host computer, where they can
be protected more easily from unauthorized access or modification.

A major disadvantage of centralized databases, like all centralized systems, is transmission
delay when users are geographically dispersed. More powerful hardware and networks com-
pensate for this disadvantage.

In contrast, distributed databases use client/server architecture to process information
requests. The databases are stored on servers that reside in the company’s data centers, a
private cloud, or a public cloud (Figure 3.7). Advantages of a distributed database include reli-
ability—if one site crashes, the system will keep running—and speed—it’s faster to search a part
of a database than the whole. However, if there’s a problem with the network that the distrib-
uted database is using, it can cause availability issues and the appropriate hardware and soft-
ware can be expensive to purchase.

Centralized database stores all
data in a single central computer
such as a mainframe or server.
Distributed database stores
portions of the database on
multiple computers within
a network.


FIGURE 3.6 Comparison of (a) centralized and (b) distributed databases.


FIGURE 3.7 Distributed database architecture for headquarters, manufacturing, and
sales and marketing.


Garbage In, Garbage Out
Data collection is a highly complex process that can create problems concerning the quality
of the data being collected. Therefore, regardless of how the data are collected, they need to
be validated so users know they can trust them. Classic expressions that sum up the situation
are “garbage in, garbage out” (GIGO) and the potentially riskier “garbage in, gospel out.” In the
latter case, poor-quality data are trusted and used as the basis for planning. For example, you
have probably encountered data safeguards, such as integrity checks, to help improve data
quality when you fill in an online form, such as when the form will not accept an e-mail address
or a credit card number that is not formatted correctly.
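As a small illustration of such integrity checks, the sketch below tests an e-mail address against a basic format pattern and runs the Luhn checksum that is commonly used to catch mistyped credit card numbers; the sample values are invented.

import re

def looks_like_email(value):
    # Very basic format check: something@something.domain
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def passes_luhn(card_number):
    # Luhn checksum: double every second digit from the right, subtract 9 from
    # results over 9, and require the total to be divisible by 10.
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(looks_like_email("jane.doe@example.com"))   # True
print(looks_like_email("jane.doe-at-example"))    # False
print(passes_luhn("4111111111111111"))            # True for this common test number
print(passes_luhn("4111111111111112"))            # False: one digit mistyped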

Table 3.2 lists the characteristics typically associated with dirty or poor-quality data.

TABLE 3.2 Characteristics of Poor-Quality or Dirty Data

• Incomplete Missing data

• Outdated or invalid Too old to be valid or useful

• Incorrect Too many errors

• Duplicated or in conflict Too many copies or versions of the same data―and the versions are inconsistent or in conflict with each other

• Nonstandardized Data are stored in incompatible formats―and cannot be compared or summarized

• Unusable Data are not in context to be understood or interpreted correctly at the time of access

Dirty Data Costs and Consequences As discussed in Chapter 2, too often man-
agers and information workers are actually constrained by data that cannot be trusted because
they are incomplete, out of context, outdated, inaccurate, inaccessible, or so overwhelming
that they require weeks to analyze. In such situations, the decision-maker is facing too much
uncertainty to make intelligent business decisions.

On average, an organization experiences 40% data growth annually, and 20% of that
data is found to be dirty. Each dirty data point, or record, costs $100 if not resolved (Ring-
Lead, 2015). The costs of poor-quality data spread throughout a company, affecting systems
from shipping and receiving to accounting and customer service. Data errors typically arise
from the functions or departments that generate or create the data—and not within the IT
department. When all costs are considered, the value of finding and fixing the causes of data
errors becomes clear. In a time of decreased budgets, some organizations may not have the
resources for such projects and may not even be aware of the problem. Others may be
spending most of their time fixing problems, thus leaving them with no time to work on pre-
venting them. However, the benefits of acting preventatively against dirty data are
astronomical. It costs $1 to prevent and $10 to correct dirty data. While the short-run cost of cleaning and preventing dirty data may seem unaffordable for some companies, the long-term cost of neglecting it is far higher (Kramer, 2015).

Bad data are costing U.S. businesses hundreds of billions of dollars a year and affecting
their ability to ride out the tough economic climate. Incorrect and outdated values, missing
data, and inconsistent data formats can cause lost customers, sales, and revenue; misalloca-
tion of resources; and flawed pricing strategies.

Consider a corporation that follows the cost structure associated with clean/dirty data
explained above with 100,000 data points. Over a three-year span, by cleaning the 20% of dirty
data during the first year and using prevention methods for the following years, the corporation
will save $8,495,000. Purely based on the quality of its data, a corporation with a large amount
of data can hypothetically increase its revenue by 70% (RingLead, 2015).

Dirty data is poor-quality data
that lacks integrity and cannot
be trusted.


The cost of poor-quality data may be expressed as a formula:

Cost of Poor-Quality Data = Lost Business + Cost to Prevent Errors + Cost to Correct Errors

Examples of these costs include the following:

• Lost business Business is lost when sales opportunities are missed, orders are returned
because wrong items were delivered, or errors frustrate and drive away customers.

• Time spent preventing errors If data cannot be trusted, then employees need to spend
more time and effort trying to verify information in order to avoid mistakes.

• Time spent correcting errors Database staff need to process corrections to the data-
base. For example, the costs of correcting errors at U-rent Corporation are estimated
as follows:
a. Two database staff members spend 25% of their workday processing and verifying data corrections each day:

2 people × 25% of 8 hours/day = 4 hours/day correcting errors

b. Hourly salaries are $50 per hour based on pay rate and benefits:

$50/hour × 4 hours/day = $200/day correcting errors

c. 250 workdays per year:

$200/day × 250 days = $50,000/year to correct errors
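The same arithmetic can be expressed as a short script; the lost-business and prevention terms in the formula are left as zero placeholders because only the correction cost is quantified in this example.

# Cost to correct errors at U-rent, using the figures above.
staff = 2
share_of_day = 0.25            # 25% of each workday spent on corrections
hours_per_day = 8
hourly_rate = 50               # dollars per hour, pay rate plus benefits
workdays_per_year = 250

hours_correcting = staff * share_of_day * hours_per_day     # 4 hours/day
daily_cost = hourly_rate * hours_correcting                 # $200/day
annual_correction_cost = daily_cost * workdays_per_year     # $50,000/year

# Cost of Poor-Quality Data = Lost Business + Cost to Prevent Errors + Cost to Correct Errors
lost_business = 0              # placeholder: not quantified in this example
prevention_cost = 0            # placeholder: not quantified in this example
total_cost = lost_business + prevention_cost + annual_correction_cost
print(annual_correction_cost, total_cost)   # 50000.0 50000.0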

For a particular company, it is difficult to calculate the full cost of poor-quality data and its
long-term effects. Part of the difficulty is the time delay between the mistake and when it is detected.
Errors can be very difficult to correct, especially when systems extend across the enterprise.
Another concern is that the impacts of errors can be unpredictable, far-reaching, and serious.

Data Ownership and Organizational Politics
Compliance with numerous federal and state regulations relies on rock-solid data and trusted
metrics used for regulatory reporting. Data ownership, data quality, and formally managed
data are high priorities on the agenda of CFOs and CEOs who are held personally accountable if
their company is found to be in violation of regulations.

Despite the need for high-quality data, organizational politics and technical issues make
that difficult to achieve. The source of the problem is data ownership—that is, who owns or is
responsible for the data. Data ownership problems exist when there are no policies defining
responsibility and accountability for managing data. Inconsistent data formats of various
departments create an additional set of problems as organizations try to combine individual
applications into integrated enterprise systems.

The tendency to delegate data-quality responsibilities to the technical teams who have no
control over data quality, as opposed to business users who do have such control, is another
common pitfall that stands in the way of accumulating high-quality data.

Those who manage a business or part of a business are tasked with trying to improve
business performance and retain customers. Compensation is tied to improving profitability,
driving revenue growth, and improving the quality of customer service. These key performance
indicators (KPIs) are monitored closely by senior managers who want to find and eliminate
defects that harm performance. It is strange then that so few managers take the time to under-
stand how performance is impacted by poor-quality data. Two examples make a strong case for
investment in high-quality data.

Retail banking: For retail bank executives, risk management is the number one issue. Disre-
gard for risk contributed to the 2008 financial services meltdown. Despite risk management strat-
egies, many banks still incur huge losses. Part of the problem in many banks is that their ISs enable
them to monitor risk only at the product level—mortgages, loans, or credit cards. Product-level risk
management ISs monitor a customer’s risk exposure for mortgages, or for loans, or for credit cards,
and so forth—but not for a customer for all products. With product-level ISs, a bank cannot see the
full risk exposure of a customer. The limitations of these siloed product-level risks have serious
implications for business performance because bad-risk customers cannot be identified easily,


and customer data in the various ISs may differ. However, banks are beginning to use big data to analyze risk more effectively. Although these efforts are still largely limited to credit card, loan, and mortgage risk data, cheaper and faster computing power allows banks to keep better and more inclusive
records of customer data. Portfolio monitoring offers earlier detection and predictive analytics for
potential customers, and more advanced risk models show intricate patterns unseen by the naked
eye in large data sets. Also, more fact-based inputs and standardized organizational methods are
being implemented to reduce loan and credit officer bias to take risks on undesirable customers.

Marketing: Consider what happens when each product-level risk management IS feeds
data to marketing ISs. Marketing may offer bad-risk customers incentives to take out another
credit card or loan that they cannot repay. And since the bank cannot identify its best cus-
tomers either, they may be ignored and enticed away by better deals offered by competitors.
This scenario illustrates how data ownership and data-quality management are critical to risk
management. Data defects and incomplete data can quickly trigger inaccurate marketing and
mounting losses. Banks’ increasing dependence on business modeling requires that risk man-
agers understand and manage model risk better. Although losses often go unreported, the con-
sequences of errors in the model can be extreme. For instance, a large Asia–Pacific bank lost
$4 billion when it applied interest-rate models that contained incorrect assumptions and data-
entry errors. Risk mitigation will entail rigorous guidelines and processes for developing and val-
idating models, as well as the constant monitoring and improvement of them (Harle et al., 2016).

Manufacturing: Many manufacturers are at the mercy of a powerful customer base—large
retailers. Manufacturers want to align their processes with those of large retail customers to
keep them happy. This alignment makes it possible for a retailer to order centrally for all stores
or to order locally from a specific manufacturer. Supporting both central and local ordering
makes it difficult to plan production runs. For example, each manufacturing site has to collect
order data from central ordering and local ordering systems to get a complete picture of what to
manufacture at each site. Without accurate, up-to-date data, orders may go unfilled, or manu-
facturers may have excess inventory. One manufacturer who tried to keep its key retailer happy
by implementing central and local ordering could not process orders correctly at each manu-
facturing site. No data ownership and lack of control over how order data flowed throughout
business operations had negative impacts. Conflicting and duplicate business processes at
each manufacturing site caused data errors, leading to mistakes in manufacturing, packing,
and shipments. Customers were very dissatisfied.

These examples demonstrate the consequences of a lack of data ownership and data
quality. Understanding the impact mismanaged data can have on business performance high-
lights the need to make data ownership and data accuracy a high priority.

Data Life Cycle and Data Principles
The data life cycle is a model that illustrates the way data travel through an organization, as shown in Figure 3.8. The data life cycle begins with data being stored in a database, then loaded into a data warehouse for analysis, and finally reported to knowledge workers or used in business apps.

[Figure content: internal and external data from data sources and databases flow into data storage (a data warehouse and data marts); data analysis tools such as OLAP, queries, EIS/DSS, data mining, and business analytics, combined with personal expertise and judgment, produce results such as data visualization, decision support, and knowledge and its management, which feed business applications including SCM, CRM, e-commerce, and strategy.]

FIGURE 3.8 Data life cycle.


Supply chain management (SCM), customer relationship management (CRM), and e-commerce are enterprise applications that require up-to-date, readily accessible data to function properly.

Three general data principles relate to the data life cycle perspective and help to guide IT
investment decisions:

1. Principle of diminishing data value The value of data diminishes as they age. This is
a simple, yet powerful principle. Most organizations cannot operate at peak performance
with blind spots (lack of data availability) of 30 days or longer. Global financial services
institutions rely on near real-time data for peak performance.

2. Principle of 90/90 data use According to the 90/90 data-use principle, a majority of
stored data, as high as 90%, is seldom accessed after 90 days (except for auditing pur-
poses). That is, roughly 90% of data lose most of their value after three months.

3. Principle of data in context The capability to capture, process, format, and distribute
data in near real time or faster requires a huge investment in data architecture (Chapter 2)
and infrastructure to link remote POS systems to data storage, data analysis systems, and
reporting apps. The investment can be justified on the principle that data must be inte-
grated, processed, analyzed, and formatted into “actionable information.”

Master Data and Master Data Management
As data become more complex and their volumes explode, database performance degrades.
One solution is the use of master data and master data management (MDM) as introduced
in Chapter 2. MDM processes integrate data from various sources or enterprise applications to
create a more complete (unified) view of a customer, product, or other entity. Figure 3.9 shows
how master data serve as a layer between transactional data in a database and analytical data
in a data warehouse. Although vendors may claim that their MDM solution creates “a single ver-
sion of the truth,” this claim is probably not true. In reality, MDM cannot create a single unified
version of the data because constructing a completely unified view of all master data is simply
not possible.

Transactional data support the applications. Master data describe the enterprise's business entities upon which transactions are conducted and the dimensions (customer, product, supplier, account, site) around which analyses are performed. Analytical data support decision-making and planning.

FIGURE 3.9 An enterprise has transactional, master, and analytical data.


Each department has distinct master data needs. Marketing, for example, is concerned
with product pricing, brand, and product packaging, whereas production is concerned with
product costs and schedules. A customer master reference file can feed data to all enterprise
systems that have a customer relationship component, thereby providing a more unified pic-
ture of customers. Similarly, a product master reference file can feed data to all the production
systems within the enterprise.

An MDM system includes tools for cleaning and auditing the master data elements as well as tools for integrating and synchronizing data to make them more accessible. MDM offers a solution for managers who are frustrated with how fragmented and dispersed their data sources are.
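A highly simplified sketch of the consolidation idea follows: customer records from two hypothetical source applications are matched on a shared customer_id and merged into one master record. Real MDM tools add matching rules, cleansing, auditing, and ongoing synchronization; the source names, fields, and values here are invented.

# Hypothetical customer records from two source applications.
crm_records = [
    {"customer_id": "C100", "name": "Dana Reyes", "email": "dana@example.com"},
]
billing_records = [
    {"customer_id": "C100", "name": "D. Reyes", "billing_address": "12 Main St"},
]

def build_master(sources):
    """Merge records that share a customer_id into a single master record."""
    master = {}
    for records in sources:
        for rec in records:
            merged = master.setdefault(rec["customer_id"], {})
            for field, value in rec.items():
                # Keep the first value seen for each field; later sources only
                # contribute fields the master record does not yet have.
                merged.setdefault(field, value)
    return master

master_data = build_master([crm_records, billing_records])
print(master_data["C100"])
# {'customer_id': 'C100', 'name': 'Dana Reyes', 'email': 'dana@example.com', 'billing_address': '12 Main St'}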

Questions

1. Describe the data life cycle.

2. What is the function of master data management (MDM)?

3. What are the consequences of not cleaning “dirty data”?

4. Describe the differences between centralized and distributed databases.

5. Discuss how data ownership and organizational politics affect the quality of an organization’s data.

3.3 Data Warehouses
Data warehouses are the primary source of cleansed data for analysis, reporting, and business
intelligence (BI). Often the data are summarized in ways that enable quick responses to queries.
For instance, query results can reveal changes in customer behavior and drive the decision to
redevelop the advertising strategy.

Master Reference File and Data Entities Realistically, MDM consolidates data
from various data sources into a master reference file, which then feeds data back to the appli-
cations, thereby creating accurate and consistent data across the enterprise. In IT at Work 3.1,
participants in the health-care supply chain essentially developed a master reference file of
its key data entities. A data entity is anything real or abstract about which a company wants
to collect and store data. Master data entities are the main entities of a company, such as cus-
tomers, products, suppliers, employees, and assets.

IT at Work 3.1

Data Errors Increase Costs Downstream
At an insurance company, the cost of processing each claim is $1,
but the average downstream cost due to errors in a claim is $300.
The $300 average downstream costs included manual handling of
exceptions, customer support calls initiated due to errors in claims,
and reissuing corrected documents for any claims processed incor-
rectly the first time. In addition, the company faced significant soft
costs from regulatory risk, lost revenues due to customer dissat-
isfaction, and overpayment on claims due to claims-processing
errors. These soft costs are not included in the hard cost of $300.

Every day health-care administrators and others throughout
the health-care supply chain waste 24–30% of their time correct-
ing data errors. Each transaction error costs $60 to $80 to correct.

In addition, about 60% of all invoices among supply chain partners
contain errors, and each invoice error costs $40 to $400 to recon-
cile. Altogether, errors and conflicting data increase supply costs by
3–5%. In other words, each year billions of dollars are wasted in the
health-care supply chain because of supply chain data disconnects,
which refer to one organization’s IS not understanding data from
another’s IS.

IT at Work Questions
1. Why are the downstream costs of data errors so high?
2. What are soft costs?
3. Explain how soft costs might exceed hard costs. Give an example.


Three technologies involved in preparing raw data for analytics include ETL, change
data capture (CDC), and data deduplication (“deduping the data”). CDC processes capture
the changes made at data sources and then apply those changes throughout enterprise data
stores to keep data synchronized. CDC minimizes the resources required for ETL processes by
only dealing with data changes. Deduping processes remove duplicates and standardize data
formats, which helps to minimize storage and data synch.
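A minimal sketch of the deduping step is shown below; it treats records as duplicates when they share the same normalized e-mail address, which is only one of many matching rules a production tool would apply, and the records themselves are invented.

raw_records = [
    {"name": "Ana Silva", "email": "Ana.Silva@Example.com "},
    {"name": "Ana Silva", "email": "ana.silva@example.com"},
    {"name": "Lee Wong",  "email": "lee.wong@example.com"},
]

def dedupe(records):
    seen = set()
    unique = []
    for rec in records:
        # Standardize the format before comparing, then keep the first occurrence.
        key = rec["email"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append({**rec, "email": key})
    return unique

print(dedupe(raw_records))   # two unique records remain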

Building a Data Warehouse
Figure 3.11 diagrams the process of building and using a data warehouse. The organization's data from operational transaction processing systems are stored in operational databases

Data warehouses that pull together data from disparate sources and databases across an
entire enterprise are called enterprise data warehouses (EDWs).

Data warehouses store data from various source systems and databases across an enterprise
in order to run analytical queries against huge datasets collected over long time periods.

The high cost of data warehouses can make them too expensive for a company to imple-
ment. Data marts are lower-cost, scaled-down versions of a data warehouse that can be imple-
mented in a much shorter time, for example, in less than 90 days. Data marts serve a specific
department or function, such as finance, marketing, or operations. Since they store smaller
amounts of data, they are faster and easier to use, and navigate.

Procedures to Prepare EDW Data for Analytics
Consider a bank’s database. Every deposit, withdrawal, loan payment, or other transaction
adds or changes data. The volatility caused by constant transaction processing makes data
analysis difficult—and the demands to process millions of transactions per second consume
the database’s processing power. In contrast, data in warehouses are relatively stable, as
needed for analysis. Therefore, select data are moved from databases to a warehouse. Specifi-
cally, data are as follows:

1. Extracted from designated databases.
2. Transformed by standardizing formats, cleaning the data, integrating them.
3. Loaded into a data warehouse.

These three procedures—extract, transform, and load—are referred to by their initials ETL
(Figure 3.10). In a warehouse, data are read-only; that is, they do not change until the next ETL.
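A compact sketch of the three steps follows, using Python's sqlite3 module; the source table, the transformation rules, and the warehouse table are invented for illustration.

import sqlite3

source = sqlite3.connect(":memory:")      # stands in for an operational database
warehouse = sqlite3.connect(":memory:")   # stands in for the data warehouse

source.execute("CREATE TABLE sales (sale_id INTEGER, region TEXT, amount TEXT)")
source.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                   [(1, " east ", "100.50"), (2, "WEST", "75"), (3, "East", "20.00")])

# 1. Extract: pull the designated rows out of the operational database.
rows = source.execute("SELECT sale_id, region, amount FROM sales").fetchall()

# 2. Transform: standardize formats and clean the data.
clean_rows = [(sale_id, region.strip().title(), float(amount))
              for sale_id, region, amount in rows]

# 3. Load: write the cleansed rows into the warehouse, where they stay read-only
#    until the next ETL run.
warehouse.execute("CREATE TABLE sales_fact (sale_id INTEGER, region TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", clean_rows)

print(warehouse.execute(
    "SELECT region, SUM(amount) FROM sales_fact GROUP BY region ORDER BY region").fetchall())
# [('East', 120.5), ('West', 75.0)]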

Enterprise data warehouse (EDW)
is a data warehouse that
integrates data from databases
across an entire enterprise.


FIGURE 3.10 Data enter databases from transaction systems.
Data of interest are extracted from databases, transformed
to clean and standardize them, and then loaded into a data
warehouse. These three processes are called ETL.


[Figure content: transaction systems feed operational databases; ETL processes move the data into a data warehouse and data marts, which supply a business intelligence environment of analytics, reporting, queries, and data mining that delivers information.]

FIGURE 3.11 Database, data warehouse and marts, and BI architecture.

These operational databases appear on the left side of Figure 3.11. Not all data are transferred to the data warehouse. Frequently, only
summary data are transferred. The warehouse organizes the data in multiple ways—by
subject, functional area, vendor, and product. As shown, the data warehouse architecture
defines the flow of data that starts when data are captured by transaction systems; the
source data are stored in transactional (operational) databases; ETL processes move data
from databases into data warehouses or data marts, where the data are available for access,
reports, and analysis.

Real-Time Support from an Active Data Warehouse
Early data warehouse technology primarily supported strategic applications that did not
require instant response time, direct customer interaction, or integration with operational
systems. ETL might have been done once per week or once per month. But demand for information to support real-time customer interaction and operations has led to real-time data warehousing and analytics, known as an active data warehouse (ADW). Massive increases in computing power, processing speeds, and memory made ADWs possible. ADWs are not designed to support executives' strategic decision-making, but rather to support operations.
For example, shipping companies like DHL use huge fleets of trucks to move millions of pack-
ages. Every day and all day, operational managers make thousands of decisions that affect
the bottom line, such as: “Do we need four trucks for this run?” “With two drivers delayed by
bad weather, do we need to bring in extra help?” Traditional data warehousing is not suited
for immediate operational support, but active data warehousing is. For example, companies
with an ADW are able to:

• Interact with a customer to provide superior customer service.
• Respond to business events in near real time.
• Share up-to-date status data among merchants, vendors, customers, and associates.

Here are some examples of how two companies use ADW.
Capital One. Capital One uses its ADW to track each customer’s “profitability score” to

determine the level of customer service to provide for that person. Higher-cost personalized
service is only given to those with high scores. For instance, when a customer calls Capital One,


he or she is asked to enter a credit card number, which is linked to a profitability score. Low-
profit customers get a voice response unit only; high-profit customers are connected to a live
customer service representative (CSR) because the company wants to minimize the risk of los-
ing those customers.

Travelocity. If you use Travelocity, an ADW is finding the best travel deals especially for you.
The goal is to use “today’s data today” instead of “yesterday’s data today.” The online travel
agency’s ADW analyzes your search history and destinations of interest; then predicts travel
offers that you would most likely purchase. Offers are both relevant and timely to enhance your
experience, which helps close the sale in a very competitive market. For example, when a cus-
tomer is searching flights and hotels in Las Vegas, Travelocity recognizes the interest—the cus-
tomer wants to go to Vegas. The ADW searches for the best-priced flights from all carriers, builds
a few package deals, and presents them in real time to the customer. When customers see a
personalized offer they are already interested in, the ADW helps generate a better customer
experience. The real-time data-driven experience increases the conversion rate and sales.

Data warehouse content can be delivered to decision-makers throughout the enterprise
via the cloud or company-owned intranets. Users can view, query, and analyze the data and
produce reports using Web browsers. These are extremely economical and effective data
delivery methods.

Data Warehousing Supports Action as well as Decisions Many organiza-
tions built data warehouses because they were frustrated with inconsistent data that could not
support decisions or actions. Viewed from this perspective, data warehouses are infrastruc-
ture investments that companies make to support ongoing and future operations, including
the following:

• Marketing Keeps people informed of the status of products, marketing program effec-
tiveness, and product line profitability; and allows them to take intelligent action to maxi-
mize per-customer profitability.

• Pricing and contracts Calculates costs accurately in order to optimize pricing of a
contract. Without accurate cost data, prices may be below or too near to cost; or prices
may be uncompetitive because they are too high.

• Forecasting Estimates customer demand for products and services.

• Sales Calculates sales profitability and productivity for all territories and regions; analyzes results by geography, product, sales group, or individual.

• Financial Provides real-time data for optimal credit terms, portfolio analysis, and actions that reduce risk or bad debt expense.

Table 3.3 summarizes several successful applications of data warehouses.

TABLE 3.3 Data Warehouse Applications by Industry

Industry Applications
Airline Crew assignment, aircraft deployment, analysis of route profitability, and customer loyalty promotions

Banking and financial Customer service, trend analysis, product and service services promotions, and reduction of IS expenses

Credit card Customer service, new information service for a fee, fraud detection

Defense contracts Technology transfer, production of military applications

E-business Data warehouses with personalization capabilities, marketing/shopping preferences allowing for up-selling
and cross-selling

Government Reporting on crime areas, homeland security

Health care Reduction of operational expenses

Investment and insurance Risk management, market movements analysis, customer tendencies analysis, and portfolio management

Retail chain Trend analysis, buying pattern analysis, pricing policy, inventory control, sales promotions, and optimal
distribution channel decision


Questions

1. What are the differences between databases and data warehouses?

2. What are the differences between data warehouses and data marts?

3. Explain ETL.

4. Explain CDC.

5. What is an advantage of an enterprise data warehouse (EDW)?

6. Why might a company invest in a data mart instead of a data warehouse?

7. What types of decisions can benefit from a data warehouse?

3.4 Big Data Analytics and Data Discovery
Like mobile and cloud, big data and advanced data analytics are reshaping organizations and
business processes to increase efficiency and improve performance. Research firm IDC fore-
casts that big data and analytics spending will reach $187 billion in 2019 (Ovalsrud, 2016).

Data analytics is an important tool across organizations that helps users discover meaningful real-time insights to meet customer expectations, achieve better results, and stay
competitive. These deeper insights combined with human expertise enable people to recog-
nize meaningful relationships more quickly or easily; and furthermore, realize the strategic
implications of these situations. Imagine trying to make sense of the fast and vast data gener-
ated by social media campaigns on Facebook or by sensors attached to machines or objects.
Low-cost sensors make it possible to monitor all types of physical things—while analytics
makes it possible to understand those data in order to take action in real time. For example,
sensor data can be analyzed in real time:

• To monitor and regulate the temperature and climate conditions of perishable foods as
they are transported from farm to supermarket.

• To sniff for signs of spoilage of fruits and raw vegetables and detect the risk of E. coli con-
tamination.

• To track the condition of operating machinery and predict the probability of failure.
• To track the wear of engines and determine when preventive maintenance is needed.

In this section, you will learn about the value, challenges, and technologies involved in putting
data and analytics to use to support decisions and action, together with examples of skill sets
currently in high demand by organizations expanding their efforts to train, hire and retain com-
petent data professionals (Career Insight 3.1).

Big data is an extremely large
data set that is too large or
complex to be analyzed using
traditional data processing
techniques.

Data analytics is a technique
of qualitatively or quantitatively
analyzing a data set to reveal
patterns, trends, and associations
that often relate to human
behavior and interactions,
to enhance productivity and
business gain.

Career Insight 3.1

Managing and Interpreting Big Data are High
Demand Skills
Concerns about the analytics skills gap have existed for years. It is
increasingly clear that the shortage isn’t just in data scientists, but
also data engineers, data analysts, and even the executives required
to manage data initiatives. As a result, organizations and institu-
tions are expanding their efforts to train, hire, and retain data pro-
fessionals. Here are two of those skill sets that are in high demand.

Big data specialists manage and package big data collec-
tions, analyze, and interpret trends and present their findings in
easy to understand ways to “C”-level executives. Those who can
present the data through user-friendly data visualizations will be

particularly sought after. Skills required of these big data profes-
sionals include big data visualization, statistical analysis, Big Data
reporting and presentation, Apache Hadoop, NoSQL Database
Skills, and machine learning.

Business intelligence (BI) analysts use tools and techniques
to go beyond the numbers of big data and take action based on
the findings of the big data analyses. Successful BI professionals
use self-service BI platforms, like Tableau, SAP, Oracle BI, Micro-
soft BI, and IBM Cognos, to create BI reports and visualizations
to streamline the process and reduce reliance on additional staff.
Additional skills of critical thinking, creative problem solving,
effective communication, and presentations further enhance their
attractiveness to employers (Hammond, 2015).


When the data set is too large or complex to be analyzed using traditional data processing
applications, big data analytics tools are used. One of the biggest sectors of customer rela-
tions relative to big data is customer value analytics (CVA). CVA studies the recent phenomenon
that customers are more willing to use and purchase innovative products, services, and cus-
tomer service channels while demanding an increasing amount of high-quality, personalized
products. Companies and producers use big data analytics to capture this combination to
transform the information into usable data to track and predict trends. If companies know
what customers like, what makes them spend more, and when they are happy, they can
leverage the information to keep them happy and provide better products and services.

Companies can also use big data analytics to store and use their data across the supply
chain. To maximize the effectiveness of data analytics, companies usually complete these
objectives throughout their input transformation process:

• Invest heavily in IT to collect, integrate, and analyze data from each store and sales unit.

• Link these data to suppliers' databases, making it possible to adjust prices in real time, to reorder hot-selling items automatically, and to shift items from store to store easily.

• Constantly test, integrate, and report information instantly available across the organization—from the store floor to the CFO's office.

These big data programs enable them to pinpoint improvement opportunities across the
supply chain—from purchasing to in-store availability management. Specifically, the companies
are able to predict how customers will behave and use that knowledge to be prepared to respond
quickly. According to Louis Columbus at Forbes, the market demand for big data analytics is about
to hit its largest increase in history. Software for business analytics will increase by more than
50% by 2019. Prescriptive analytics software will be worth $1.1B in 2019, compared to its value of
$415M in 2014. Since increasing the focus on customer demand trends, effectively entering new
markets and producing better business models, and enhancing organizational performance are
the most important goals for 21st-century companies, business analytics will be needed in almost
every instance. Taking advantage of the benefits of business intelligence is allowing sectors like
health care to compete in areas they would have not been able to enter before (Columbus, 2016).

To be effective in using data analysis, organizations must pay attention to the four Vs of analytics—variety, volume, velocity, and veracity—shown in Figure 3.12.

Big data can have a dramatic impact on the success of any enterprise, or it can be a major expense that contributes little. However, success is not achieved with technology alone. Many com-
panies are collecting and capturing huge amounts of data, but spending very little effort to ensure
the veracity and value of data captured at the transactional stage or point of origin. Emphasis in
this direction will not only increase confidence in the datasets, but also significantly reduce the
efforts for analytics and enhance the quality of decision-making. Success depends also on ensuring
that you avoid invalid assumptions, which can be done by testing the assumptions during analysis.

Big data analytics is the process
of examining large and varied data
sets to identify hidden patterns
and correlations, market trends,
customer preferences, and other
useful information to enable
better business decisions.

• Variety The analytic environment has expanded from pulling data from enterprise systems to include big data unstructured sources.

• Volume Large volumes of structured and unstructured data are analyzed.

• Velocity Speed of access to reports that are drawn from data defines the difference between effective and ineffective analytics.

• Veracity Validating data and extracting insights that managers and workers can trust are key factors of successful analytics. Trust in analytics has grown more difficult with the explosion of data sources.

FIGURE 3.12 The four Vs of data analytics.


Human Expertise and Judgment are Needed
Human expertise and judgment are needed to interpret the output of analytics (refer to
Figure 3.13). Data are worthless if you cannot analyze, interpret, understand, and apply the
results in context. This brings up several challenges:

• Data need to be prepared for analysis For example, data that are incomplete or dupli-
cated need to be fixed.

• Dirty data degrade the value of analytics The “cleanliness” of data is very important
to data mining and analysis projects. Analysts have complained that data analytics is like
janitorial work because they spend so much time on manual, error-prone processes to
clean the data. Large data volumes and variety mean more data that are dirty and harder
to handle.

• Data must be put into meaningful context If the wrong analysis or datasets are used,
the output would be nonsense, as in the example of the Super Bowl winners and stock
market performance. Stated in reverse, managers need context in order to understand
how to interpret traditional and big data.

[Figure content: data analytics plus high-quality data reveal trends or relationships; human expertise supplies the context to understand what the numbers represent, how to interpret them, and what action to take.]

FIGURE 3.13 Data analytics, human expertise, and high-
quality data are needed to obtain actionable information.

IT at Work 3.2 describes how big data analytics, collaboration, and human expertise have
transformed the new drug development process.

Machine-generated sensor data are becoming a larger proportion of big data (Figure 3.14),
according to a research report by IDC (2015). It is predicted that these data will increase to two-
thirds of all data by 2020, representing a significant increase from the 11% level of 2005. In
addition to its growth as a portion of analyzed data, the market for sensor data will increase to
$1.7 trillion in 2020.

On the consumer side, a significant factor in this market is the boom in wearable
technology—products like FitBit and the Apple Watch. Users no longer even have to input
data to these devices as it is automatically gathered and tracked in real time. On the
public sector and enterprise side, sensor data and the Internet of Things (IoT) are being
used in the advancement of IT-enabled business processes like automated factories and
distribution centers and IT-enabled products like the wearable tech (IDC,  2015). Federal
health reform efforts have pushed health-care organizations toward big data and ana-
lytics. These organizations are planning to use big data analytics to support revenue
cycle management, resource utilization, fraud prevention, health management, and quality
improvement.

Hadoop and MapReduce Big data volumes exceed the processing capacity of
conventional database infrastructures. A widely used processing platform is Apache Hadoop.


It places no conditions on the structure of the data it can process. Hadoop distributes com-
puting problems across a number of servers. Hadoop implements MapReduce in two stages:

1. Map stage MapReduce breaks up the huge dataset into smaller subsets; then distributes
the subsets among multiple servers where they are partially processed.

2. Reduce stage The partial results from the map stage are then recombined and made
available for analytic tools.
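The sketch below imitates the two stages on a single machine with plain Python; it is not Hadoop's actual API, and the page-visit log is invented. Each subset is counted separately (map), and the partial counts are then recombined (reduce).

from collections import Counter
from itertools import islice

# A toy dataset: log entries naming the page each visitor viewed.
log_lines = ["home", "products", "home", "checkout", "home", "products"]

def chunks(items, size):
    it = iter(items)
    while True:
        block = list(islice(it, size))
        if not block:
            return
        yield block

# Map stage: break the dataset into smaller subsets and partially process each one.
# On Hadoop, these subsets would be processed in parallel on different servers.
partial_counts = [Counter(subset) for subset in chunks(log_lines, 2)]

# Reduce stage: recombine the partial results into one answer for analytic tools.
total_counts = Counter()
for partial in partial_counts:
    total_counts.update(partial)

print(total_counts)   # Counter({'home': 3, 'products': 2, 'checkout': 1})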

IT at Work 3.2

Researchers Use Genomics and Big Data in
Drug Discovery
Drug development is a high-risk business. Almost 90% of
new drugs ultimately fail to reach the market. One of the
challenges has been the amount, variety, and complexity of the
data that need to be systematically analyzed. Big data tech-
nologies and private–public partnerships have made biomedical
analytics feasible.

New Drug Development Had Been Slow and Expensive
Biotechnology advances have produced massive data on the
biological causes of disease. However, analyzing these data
and converting discoveries into treatments are much more dif-
ficult. Not all biomedical insights lead to effective drug targets,
and choosing the wrong target leads to failures late in the drug
development process, costing time, money, and lives. Devel-
oping a new drug—from early discovery through Food and
Drug Administration (FDA) approval—takes over a decade. As a
consequence, each success ends up costing more than $1 bil-
lion. Sometimes much more! For example, by the time Pfizer
Inc., Johnson & Johnson, and Eli Lilly & Co. announced their
new drugs had only limited benefit for Alzheimer’s patients in
late-stage testing, the industry had spent more than $30 billion
researching amyloid plaque in the brain.

Reducing Risk of Failure
Drug makers, governments, and academic researchers have
partnered to improve the odds of drug success and after years of
decline, the pharmaceutical industry is beginning to experience
a greater rate of success with its clinical trials. Partnerships bring
together the expertise of scientists from biology, chemistry, bio-
informatics, genomics, and big data. They are using big data to
identify biological targets for drugs and eliminate failures before
they reach the human testing stage and many anticipate that
big data and the analytics that go with it could be a key element
in further increasing the success rates in pharmaceutical R&D
(Cattell et al., 2016).

GlaxoSmithKline, the European Bioinformatics Institute (EBI),
and the Wellcome Trust Sanger Institute established the Centre for
Therapeutic Target Validation (CTTV) near Cambridge, England.
CTTV partners combine cutting-edge genomics with the ability to
collect and analyze massive amounts of biological data. By not

developing drugs that target the wrong biological pathways, they
avoid wasting billions of research dollars.

With biology now a data-driven discipline, collaborations such
as CTTV are needed to improve efficiencies, cut costs, and pro-
vide the best opportunities for success. Other private–public part-
nerships that had formed to harness drug research and big data
include the following:

• Accelerating Medicines Partnership and U.S. National Insti-
tutes of Health (NIH) In February 2014 the NIH announced
that the agency, 10 pharmaceutical companies, and nonprofit
organizations were investing $230 million in the Accelerating
Medicines Partnership.

• Target Discovery Institute and Oxford University Oxford
University opened the Target Discovery Institute in 2013. Target
Discovery helps to identify drug targets and molecular inter-
actions at a critical point in a disease-causing pathway—that is,
when those diseases will respond to drug therapy. Researchers
try to understand complex biological processes by analyzing
image data that have been acquired at the microscopic scale.

“The big data opportunity is especially compelling in com-
plex business environments experiencing an explosion in the types
and volumes of available data. In the health-care and pharma-
ceutical industries, data growth is generated from several sources,
including the R&D process itself, retailers, patients and caregivers.
Effectively utilizing these data will help pharmaceutical companies
better identify new potential drug candidates and develop them
into effective, approved and reimbursed medicines more quickly”
(Cattell et al., 2016).

IT at Work Questions
1. What are the consequences of new drug development failures?
2. What factors have made biomedical analytics feasible? Why?
3. Large-scale big data analytics are expensive. How can the drug makers justify investments in big data?
4. Why would drug makers such as Glaxo and Pfizer be willing to share data given the fierce competition in their industry?

Sources: Compiled from Cattell et al. (2016), HealthCanal (2014), Kitamura
(2014), and NIH (2014).


To store data, Hadoop has its own distributed file system, the Hadoop Distributed File System (HDFS). Processing data with Hadoop involves three stages:

• Load data into HDFS.
• Perform the MapReduce operations.
• Retrieve results from HDFS.

Figure 3.15 diagrams how Facebook uses database technology and Hadoop. IT at Work
3.3 describes how First Wind has applied big data analytics to improve the operations of its
wind farms and to support sustainability of the planet by reducing environmentally damaging
carbon emissions.


FIGURE 3.14 Machine-generated data from physical objects are becoming a much larger portion of
big data and analytics.

1. MySQL databases capture and store Facebook's data.
2. Data are loaded into Hadoop where processing occurs, such as identifying recommendations for you based on your friends' interests.
3. Results are transferred back into MySQL for use in pages that are loaded for members.
4. Members see customized Facebook pages.

FIGURE 3.15 Facebook’s MySQL database and Hadoop technology provide customized
pages for its members.


Data and Text Mining
Data and text mining are different from DBMS and data analytics. As you have read earlier in this
chapter, a DBMS supports queries to extract data or get answers from huge databases. But, in
order to perform queries in a DBMS, you must first know the question you want answered. You also have read that data analytics describes the entire function of applying technologies,
algorithms, human expertise, and judgment. Data and text mining are specific analytic tech-
niques that allow users to discover knowledge that they didn’t know existed in the databases.

Data mining software enables users to analyze data from various dimensions or angles, cat-
egorize them, and find correlations or patterns among fields in the data warehouse. Up to 75%
of an organization’s data are nonstructured word-processing documents, social media, text mes-
sages, audio, video, images and diagrams, faxes and memos, call center or claims notes, and so on.

IT at Work 3.4 describes one example of how the U.S. government is using data mining
software to continuously improve its detection and deterrence systems.

Text mining is a broad category that involves interpreting words and concepts in con-
text. Any customer becomes a brand advocate or adversary by freely expressing opinions and
attitudes that reach millions of other current or prospective customers on social media. Text
mining helps companies tap into the explosion of customer opinions expressed online. Social
commentary and social media are being mined for sentiment analysis or to understand
consumer intent. Innovative companies know they could be more successful in meeting their
customers’ needs, if they just understood them better. Tools and techniques for analyzing
text, documents, and other nonstructured content are available from several vendors.
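As a toy illustration of the sentiment-analysis idea, the sketch below scores comments against small hand-made lists of positive and negative words; commercial text mining tools use far richer linguistic models, and the word lists and comments here are invented.

POSITIVE = {"love", "great", "fast", "helpful"}
NEGATIVE = {"hate", "slow", "broken", "rude"}

def sentiment_score(comment):
    """Crude sentiment score: count of positive words minus count of negative words."""
    words = comment.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "Love the new app, support was helpful",
    "Checkout is slow and the update is broken",
]
for comment in comments:
    print(sentiment_score(comment), comment)
# 2 Love the new app, support was helpful
# -2 Checkout is slow and the update is broken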

Combining data and text mining can create even greater value. Burns (2016) pointed out that
mining text or nonstructured data enables organizations to forecast the future instead of merely
reporting the past. He also noted that forecasting methods using existing structured data and non-
structured text from both internal and external sources provide the best view of what lies ahead.

Creating Business Value
Enterprises invest in data mining tools to add business value. Business value falls into three
categories, as shown in Figure 3.16.

IT at Work 3.3

Industrial Project Relies on Big Data Analytics
Wind power can play a major role in meeting America’s rising
demand for electricity—as much as 20% by 2030. Using more
domestic wind power would reduce the nation’s dependence on
foreign sources of natural gas and also decrease carbon dioxide
(CO2) emissions that contribute to adverse climate change.

First Wind is an independent North American renewable energy
company focused on the development, financing, construction,
ownership, and operation of utility-scale power projects in the
United States. Based in Boston, First Wind has developed and oper-
ates 980 megawatts (MW) of generating capacity at 16 wind energy
projects in Maine, New York, Vermont, Utah, Washington, and
Hawaii. First Wind has a large network of sensors embedded in the
wind turbines, which generate huge volumes of data continuously.
The data are transmitted in real time and analyzed on a 24/7 real
time basis to understand the performance of each wind turbine.

Sensors collect massive amounts of data on the temperature,
wind speeds, location, and pitch of the blades. The data are ana-
lyzed to study the operation of each turbine in order to adjust them
to maximum efficiency. By analyzing sensor data, highly refined

measurements of wind speeds are possible. In wintry conditions,
turbines can detect when they are icing up, and speed up or change
pitch to knock off the ice. In the past, when it was extremely windy,
turbines in the entire farm had been turned off to prevent damage
from rotating too fast. Now First Wind can identify the specific por-
tion of turbines that need to be shut down. Based on certain alerts,
decisions often need to be taken within a few seconds.

Upgrades on 123 turbines on two wind farms have improved
energy output by 3%, or about 120 megawatt hours per turbine
per year. That improvement translates to $1.2 million in additional
revenue a year from these two farms.

IT at Work Questions
1. What are the benefits of big data analytics to First Wind?
2. What are the benefits of big data analytics to the environment and the nation?
3. How do big data analytics impact the performance of wind farms?

Sources: Compiled from www.FirstWind.com (2014) and U.S. Department of
Energy (2015).


Here are some brief cases illustrating the types of business value created by data and
text mining.

1. Using pattern analysis, Argo Corporation, an agricultural equipment manufacturer based
in Georgia, analyzed product configuration options for its farm machinery together with
real-time customer demand to determine the optimal base configurations for its machines.
As a result, Argo reduced product variety by 61% and cut days of inventory by 81% while
still maintaining its service levels.

2. The mega-retailer Walmart wanted its online shoppers to find what they were looking for
faster. Walmart analyzed clickstream data from its 45 million monthly online shoppers;
then combined that data with product- and category-related popularity scores. The popu-
larity scores had been generated by text mining the retailer’s social media streams. Lessons
learned from the analysis were integrated into the Polaris search engine used by custom-
ers on the company’s website. Polaris has yielded a 10% to 15% increase in online shop-
pers completing a purchase, which equals roughly $1 billion in incremental online sales.

3. McDonald’s bakery operation replaced manual inspection with high-speed photo analysis
that inspects thousands of buns per minute for color, size, and sesame seed distribution.
Ovens and baking processes adjust automatically to create uniform buns and reduce
thousands of pounds of waste each year. Another food products company uses photo
analysis to sort every french fry produced in order to optimize quality.

4. Infinity Insurance discovered new insights that it applied to improve the performance of
its fraud operation. The insurance company text mined years of adjuster reports to look
for key drivers of fraudulent claims. As a result, the company reduced fraud by 75%, and
eliminated marketing to customers with a high likelihood of fraudulent claims.

FIGURE 3.16 Business value falls into three buckets:
• Making more informed decisions at the time they need to be made.
• Discovering unknown insights, patterns, or relationships.
• Automating and streamlining or digitizing business processes.

IT at Work 3.4

DoD and Homeland Security Use Data Mining Spy
Machine for Threat Intelligence
Digital Reasoning, a large player in the field of big data analytics, has
upgraded its software, which is currently under contract with the Department
of Defense and Homeland Security. The brand-new version, Synthesys 4,
allows the agencies to monitor threats in the
homeland and gather data about potential attacks. Ironically, one
of the main tactics employed with this software is to track and deter
potential employees or contractors who have access to it. Vice
President of Federal Programs Eric Hansen says that the software
excels at monitoring behavioral patterns, language, and data to act
and respond like a human detective would to a potential threat.

While Digital Reasoning also serves other organizations
such as Goldman Sachs, the US government is probably its most
interesting and important client. Using automatic computer software

to analyze data is much more effective at hindering attacks and
quicker for analyzing large amounts of data about potential threats
domestically and abroad. For instance, the software knows exactly
what to look for without being bogged down and distracted by super-
fluous data. As available data and analytical capabilities increase, the
US government is continuously aiming to improve its detection and
deterrence systems using software like Synthesys 4 (Bing, 2016).

IT at Work Questions
1. What is Synthesys 4?
2. How does data mining help the DoD achieve its mission?
3. What are the main threats to the government’s data sources?
4. Why does the government see Synthesys 4 as essential to its threat deterrence measures?

Sources: Compiled from Bing (2016) and syntheses.net (2017).


Text Analytics Procedure
With text analytics, information is extracted from large quantities of various types of textual
information. The basic steps involved in text analytics include the following (a brief illustrative
sketch follows the list):

1. Exploring First, documents are explored. This might occur in the form of simple word
counts in a document collection, or by manually creating topic areas to categorize docu-
ments after reading a sample of them. For example, what are the major types of issues
(brake or engine failure) that have been identified in recent automobile warranty claims? A
challenge of the exploration effort is misspelled or abbreviated words, acronyms, or slang.

2. Preprocessing Before analysis or the automated categorization of content, the text may
need to be preprocessed to standardize it to the extent possible. As in traditional analy-
sis, up to 80% of preprocessing time can be spent preparing and standardizing the data.
Misspelled words, abbreviations, and slang may need to be transformed into consistent
terms. For instance, BTW would be standardized to “by the way” and “left voice message”
could be tagged as “lvm.”

3. Categorizing and modeling Content is then ready to be categorized. Categorizing
messages or documents from information contained within them can be achieved using
statistical models and business rules. As with traditional model development, sample
documents are examined to train the models. Additional documents are then processed
to validate the accuracy and precision of the model, and finally new documents are evalu-
ated using the final model (scored). Models can then be put into production for the auto-
mated processing of new documents as they arrive.
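The following minimal sketch (in Python, using hypothetical warranty-claim notes and hand-written rules) walks through the explore, preprocess, and categorize steps described above. A production system would train statistical models on labeled documents rather than rely on fixed rules.

from collections import Counter

claims = [
    "brakes squeal when stopping BTW dealer already inspected",
    "engine stalls at idle lvm for customer",
    "brake pedal feels soft",
]

# 1. Exploring: simple word counts across the document collection.
word_counts = Counter(word.lower() for doc in claims for word in doc.split())
print(word_counts.most_common(5))

# 2. Preprocessing: standardize abbreviations and slang into consistent terms.
replacements = {"btw": "by the way", "lvm": "left voice message"}
cleaned = [
    " ".join(replacements.get(w.lower(), w.lower()) for w in doc.split())
    for doc in claims
]

# 3. Categorizing: a stand-in business rule; real systems score documents
#    against trained statistical models instead.
def categorize(doc):
    if "brake" in doc:
        return "brake issue"
    if "engine" in doc:
        return "engine issue"
    return "other"

for doc in cleaned:
    print(categorize(doc), "->", doc)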

Analytics Vendor Rankings
Analytics applications cover business intelligence functions sold as a standalone application
for decision support or embedded in an integrated solution. The introduction of intuitive
decision support tools, dashboards, and data visualization (discussed in detail in Chapter 11) has

TABLE 3.4 Top Analytics Vendors

1. SAP. Focus: markets lines of analytics products that cover BI and reporting, predictive analysis, performance management, and governance, risk, and compliance applications. Products: SAP Business Objects Predictive Analytics; SAP Business Objects BI; SAP Business Objects Planning and Consolidation.

2. SAS. Focus: offers everything from simple desktop solutions to high-performance distributed processing solutions. Products: SAS Analytics Pro; SAS Enterprise Miner; SAS Visual Analytics; SAS Customer Intelligence 360.

3. IBM. Focus: allows users to quickly discover patterns and meanings in data with guided data discovery, automated predictive analytics, one-click analysis, self-service dashboards, and a natural language dialogue. Products: Watson Analytics.

4. Oracle. Focus: offers a complete solution for connecting and collaborating with analytics in the cloud; products allow users to aggregate, experiment, manage, and analyze/act. Products: Oracle Data Integrator; Oracle Big Data Cloud Service; Oracle R Advanced Analytics for Hadoop; BI Cloud Service; Oracle Stream Explorer.

5. Microsoft. Focus: provides a broad range of products, from standalone solutions to integrated tools, with data preparation, data discovery, and interactive dashboard capabilities in a single tool. Products: Excel; HDInsight; Machine Learning; Stream Analytics; Power BI Embedded.


added some interesting interactive components to big data analytics to bring the data to life
and enable nonexperts to use it.

Organizations invest in analytics, BI, and data/text mining applications based on new fea-
tures and capabilities beyond those offered by their legacy systems. Analytics vendors offer
everything from simple-to-use reporting tools to highly sophisticated software for tackling the
most complex data analysis problems. A list of the top five analytics and BI application vendors
is shown in Table 3.4.

Questions

1. Why are human expertise and judgment important to data analytics? Give an example.

2. What is the relationship between data quality and the value of analytics?

3. Why do data need to be put into a meaningful context?

4. How can manufacturers and health care benefit from data analytics?

5. How does data mining provide value? Give an example.

6. What is text mining?

7. What are the basic steps involved in text analytics?

3.5 Business Intelligence and
Electronic Records Management
Continuing developments in data analytics and business intelligence (BI) make it increas-
ingly necessary for organizations to be aware of the differences between these terms and the
different ways in which they add value in an organization. The field of BI started in the late
1980s and has been a key to competitive advantage across industries and in enterprises of all
sizes. Unlike data analytics, which has predictive capabilities, BI is a comprehensive term that
refers to analytics and reporting tools traditionally used to determine trends in historical data.

The key distinction between data analytics and BI is that analytics uses algorithms to statis-
tically determine the relationships between data whereas BI presents data insights established
by data analytics in reports, easy-to-use dashboards, and interactive visualizations. BI can also
make it easier for users to ask data-related questions and obtain results that are presented in a
way that they can easily understand.

What started as a tool to support sales, marketing, and customer service departments
has evolved into an enterprise-wide strategic platform. While BI software is used in
the operational management of divisions and business processes, it is also used to
support strategic corporate decision-making. The dramatic change that has taken effect
over the last few years is the growth in demand for operational intelligence across multiple
systems and businesses—increasing the number of people who need access to increasing
amounts of data. Complex and competitive business conditions do not leave much slack
for mistakes.

Unfortunately, some companies are not able to use their data efficiently, so gathering
information costs more than the benefit it provides. BI software, however, can bring decision-
making information to businesses in as little as two clicks. Small businesses have a shared
interest with large corporations to enlist BI to help with decision-making, but they are usually
unequipped to build data centers and use funds to hire analysts and IT consultants. However,
small business BI software is rapidly growing in the analytics field, and it is increasingly cheaper
to implement it as a decision-making tool. Small businesses do not always have workers spe-
cialized in certain areas, but BI software makes it easy for all employees to analyze the data and
make decisions (King, 2016).

Business intelligence (BI) is
a set of tools and techniques
for acquiring and transforming
raw data into meaningful and
useful information for business
analysis purposes in the form of
reports, dashboards, or interactive
visualizations.


Business Benefits of BI
BI provides data at the moment of value to decision-makers, enabling them to extract crucial facts
from enterprise data in real time or near real time. A BI solution with a well-designed dash-
board, for example, provides retailers with better visibility into inventory to make better deci-
sions about what to order, how much, and when in order to prevent stock-outs or minimize
inventory that sits on warehouse shelves.

Companies use BI solutions to determine what questions to ask and find answers to them.
BI tools integrate and consolidate data from various internal and external sources and then
process them into information to make smart decisions. BI answers questions such as these:
Which products have the highest repeat sales rate in the last six months? Do customer likes on
Facebook relate to product purchase? How does the sales trend break down by product group
over the last five years? What do daily sales look like in each of my sales regions?

According to The Data Warehousing Institute, BI “unites data, technology, analytics, and
human knowledge to optimize business decisions and ultimately drive an enterprise’s suc-
cess. BI programs usually combine an enterprise data warehouse and a BI platform or tool
set to transform data into usable, actionable business information” (The Data Warehousing
Institute, 2014). For many years, managers have relied on business analytics to make better-
informed decisions. Multiple surveys and studies agree on BI’s growing importance in analyzing
past performance and identifying opportunities to improve future performance.

Common Challenges: Data Selection and Quality
Companies cannot analyze all of their data, and much of the data would not add value. There-
fore, an unending challenge is how to determine which data to use for BI from what seems
like unlimited options (Oliphant, 2016). One purpose of a BI strategy is to provide a framework
for selecting the most relevant data without limiting options to integrate new data sources.
Information overload is a major problem for executives and for employees. Another common
challenge is data quality, particularly with regard to online information, because the source
and accuracy might not be verifiable.

Aligning BI Strategy with Business Strategy
Reports and dashboards are delivery tools, but they may not be delivering business intelligence.
To get the greatest value out of BI, the CIO needs to work with the CFO and other business lead-
ers to create a BI governance program whose mission is to achieve the following (Ladley, 2016):

1. Clearly articulate business strategies.
2. Deconstruct the business strategies into a set of specific goals and objectives—the targets.
3. Identify the key performance indicators (KPIs) that will be used to measure progress

toward each target.
4. Prioritize the list of KPIs.
5. Create a plan to achieve goals and objectives based on the priorities.
6. Estimate the costs needed to implement the BI plan.
7. Assess and update the priorities based on business results and changes in business strategy.

After completing these activities, BI analysts can identify the data to use in BI and the
source systems. This is a business-driven development approach that starts with a business
strategy and works backward to identify data sources and the data that need to be acquired
and analyzed.

Businesses want KPIs that can be utilized by both departmental users and management. In
addition, users want real-time access to these data so that they can monitor processes with the
smallest possible latency and take corrective action whenever KPIs deviate from their target


values. To link strategic and operational perspectives, users must be able to drill down from
highly consolidated or summarized figures into the detailed numbers from which they were
derived to perform in-depth analyses.
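As a concrete illustration of drill-down, the minimal Python sketch below rolls store-level sales up to a consolidated regional figure and then drills back into the stores behind any region that misses its target. The store names, sales figures, and target are invented for illustration.

from collections import defaultdict

sales = [  # (region, store, weekly_sales)
    ("East", "Store 101", 42_000),
    ("East", "Store 102", 18_000),
    ("West", "Store 201", 51_000),
    ("West", "Store 202", 47_000),
]
TARGET_PER_STORE = 35_000

# Consolidated view: total sales per region.
by_region = defaultdict(int)
for region, _, amount in sales:
    by_region[region] += amount
print(dict(by_region))

# Drill-down: when a region misses its target, inspect the stores behind the number.
for region, total in by_region.items():
    stores = [s for s in sales if s[0] == region]
    if total < TARGET_PER_STORE * len(stores):
        for _, store, amount in stores:
            flag = "BELOW TARGET" if amount < TARGET_PER_STORE else "ok"
            print(region, "/", store, ":", amount, "(" + flag + ")")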

BI Architecture and Analytics
BI architecture is undergoing technological advances in response to big data and the perfor-
mance demands of end-users (Wise, 2016). BI vendors are facing the challenges of social, sen-
sor, and other newer data types that must be managed and analyzed. One technology advance
that can help handle big data is BI in the cloud, which can be hosted on a public or private
cloud. Figure 3.17 lists the key factors contributing to the increased use of BI. Although cloud
services come with more upkeep, optimizing the service and customizing it for one’s company
brings undeniable benefits in data security. With a public cloud, a service provider hosts the
data and/or software, which are accessed via an Internet connection. With a private cloud, the
company hosts its own data and software but uses cloud-based technologies.

FIGURE 3.17 Four factors contributing to increased use of BI:
• Smart devices everywhere have created demand for effortless 24/7 access to insights.
• Data are big business when they provide insight that supports decisions and action.
• Advanced BI and analytics help to ask questions that were previously unknown and unanswerable.
• Cloud-enabled BI and analytics are providing low-cost and flexible solutions.

For cloud-based BI, a popular option offered by a growing number of BI tool vendors is
software as a service (SaaS). MicroStrategy offers MicroStrategy Cloud, which provides fast
deployment with reduced project risks and costs. This cloud approach appeals to small and
midsized companies that have limited IT staff and want to carefully control costs. The potential
downsides include slower response times, security risks, and backup risks.

Competitive Analytics in Practice: CarMax CarMax, Inc. is the nation’s larg-
est retailer of used cars and for a decade has remained one of FORTUNE Magazine’s “100 Best
Companies to Work For.” CarMax was the fastest retailer in U.S. history to reach $1 billion in
revenues. In 2016 the company had over $15 billion in net sales and operating revenues, rep-
resenting a 6.2% increase over the prior year’s results. The company grew rapidly because of
its compelling customer offer—no-haggle prices and quality guarantees backed by a 125-point
inspection that became an industry benchmark—and auto financing. As of November 30, 2016,
CarMax operated in 169 locations across 39 U.S. states and had more than 22,000 full- and
part-time employees.

CarMax continues to enhance and refine its information systems, which it believes to be a
core competitive advantage. CarMax’s IT includes the following:

• A proprietary IS that captures, analyzes, interprets, and distributes data about the cars
CarMax sells and buys.

• Data analytics applications that track every purchase; number of test drives and credit
applications per car; color preferences in every demographic and region.


• Proprietary store technology that provides management with real-time data about every
aspect of store operations, such as inventory management, pricing, vehicle transfers,
wholesale auctions, and sales consultant productivity.

• An advanced inventory management system that helps management anticipate future
inventory needs and manage pricing.

Throughout CarMax, analytics are used as a strategic asset and insights gained from ana-
lytics are available to everyone who needs them.

Electronic Records Management
All organizations create and retain business records. A record is documentation of a business
event, action, decision, or transaction. Examples are contracts, research and development,
accounting source documents, memos, customer/client communications, hiring and promo-
tion decisions, meeting minutes, social posts, texts, e-mails, website content, database records,
and paper and electronic files. Business documents such as spreadsheets, e-mail messages,
and word-processing documents are a type of record. Most records are kept in electronic for-
mat and maintained throughout their life cycle—from creation to final archiving or destruction
by an electronic records management system (ERMS).

One application of an ERMS would be in a company that is required by law to retain finan-
cial documents for at least seven years, product designs for many decades, and e-mail mes-
sages about marketing promotions for a year. The major ERM tools are workflow software,
authoring tools, scanners, and databases. ERM systems have query and search capabilities
so documents can be identified and accessed like data in a database. These systems range
from those designed to support a small workgroup to full-featured, Web-enabled enterprise-
wide systems.

Legal Duty to Retain Business Records
Companies need to be prepared to respond to an audit, federal investigation, lawsuit, or any
other legal action against them. Types of lawsuits against companies include patent violations,
product safety negligence, theft of intellectual property, breach of contract, wrongful termina-
tion, harassment, discrimination, and many more.

Because senior management must ensure that their companies comply with legal and
regulatory duties, managing electronic records (e-records) is a strategic issue for organizations
in both the public and private sectors. The success of ERM depends greatly on a partnership of
many key players, namely, senior management, users, records managers, archivists, adminis-
trators, and most importantly, IT personnel. Properly managed, records are strategic assets.
Improperly managed or destroyed, they become liabilities.

ERM Best Practices
Effective ERM systems capture all business data and documents at their first touchpoint—data
centers, laptops, the mailroom, at customer sites, or remote offices. Records enter the enter-
prise in multiple ways—from online forms, bar codes, sensors, websites, social sites, copiers,
e-mails, and more. In addition to capturing the entire document as a whole, important data
from within a document can be captured and stored in a central, searchable repository. In this
way, the data are accessible to support informed and timely business decisions.
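The following minimal Python sketch illustrates the capture-and-index idea: each document is stored whole and the words within it are indexed in a central, searchable structure. Real ERM systems add metadata, retention schedules, access controls, and full-text search engines; the document IDs and text here are invented.

from collections import defaultdict

repository = {}             # document id -> full text
index = defaultdict(set)    # keyword -> set of document ids

def capture(doc_id, text):
    # Store the whole document and index the words it contains.
    repository[doc_id] = text
    for word in text.lower().split():
        index[word.strip(".,:;")].add(doc_id)

def search(keyword):
    return sorted(index.get(keyword.lower(), set()))

capture("invoice-001", "Invoice for consulting services, March.")
capture("memo-017", "Memo: marketing promotion budget approved.")
print(search("marketing"))   # ['memo-017']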

In recent years, organizations such as the Association for Information and Image
Management (AIIM), National Archives and Records Administration (NARA), and ARMA
International (formerly the Association of Records Managers and Administrators) have created
and published industry standards for document and records management. Numerous best

Electronic records management
system (ERMS) consists of
hardware and software that
manage and archive electronic
documents, convert paper
documents to digital images, and
then index and store them
according to company policy.


practices articles, and links to valuable sources of information about document and records
management, are available on their websites. The IT Toolbox describes ARMA’s eight generally
accepted recordkeeping principles framework.

ERM Benefits
Departments or companies whose employees spend most of their day filing or retrieving docu-
ments or warehousing paper records can reduce costs significantly with ERM. These systems
minimize the inefficiencies and frustration associated with managing paper documents and
workflows. However, they do not create a paperless office as had been predicted.

An ERM can help a business to become more efficient and productive by the following:

• Enabling the company to access and use the content contained in documents.
• Cutting labor costs by automating business processes.
• Reducing the time and effort required to locate information the business needs to support

decision-making.
• Improving the security of content, thereby reducing the risk of intellectual property theft.
• Minimizing the costs associated with printing, storing, and searching for content.

When workflows are digital, productivity increases, costs decrease, compliance obli-
gations are easier to verify, and green computing becomes possible. Green computing is an
initiative to conserve our valuable natural resources by reducing the effects of our computer
usage on the environment. You can read about green computing and the related topics of
reducing an organization’s carbon footprint, sustainability, and ethical and social responsibil-
ities in Chapter 14.

ERM for Disaster Recovery,
Business Continuity, and Compliance
Businesses also rely on their ERM system for disaster recovery and business continuity, secu-
rity, knowledge sharing and collaboration, and remote and controlled access to documents.
Because ERM systems have multilayered access capabilities, employees can access and change
only the documents they are authorized to handle.

When companies select an ERM to meet compliance requirements, they should ask the
following questions:

1. Does the software meet the organization’s needs? For example, can the ERM system be
installed on the existing network? Can it be purchased as a service?

2. Is the software easy to use and accessible from Web browsers, office applications, and
e-mail applications? If not, people will not use it.

3. Does the software have lightweight, modern Web and graphical user interfaces that effec-
tively support remote users?

4. Before selecting a vendor, it is important to examine workflows and how data, documents,
and communications flow throughout the company. For example, know which informa-
tion on documents is used in business decisions. Once those needs and requirements are
identified, they guide the selection of technology that can support the input types—that
is, capture and index them so they can be archived consistently and retrieved on-demand.

IT at Work 3.5 describes how several companies currently use ERM. Simply creating
backups of records is not sufficient because the content would not be organized and indexed
to retrieve them accurately and easily. The requirement to manage records—regardless of
whether they are physical or digital—is not new.


Key Terms
active data warehouse (ADW) 81
big data 83
big data analytics 84
business analytics 68
business intelligence (BI) 91
business record 94
business-driven development approach 92
centralized database 73
change data capture (CDC) 80
data analytics 83
data entity 79
data management 69
data marts 80
data mining 88
data warehouse 67
database 69

database management system (DBMS) 70
decision model 68
declarative language 70
dirty data 75
distributed database 73
electronic records management system
(ERMS) 94
extract, transform and load (ETL) 80
enterprise data warehouses (EDWs) 80
eventual consistency 70
fault tolerance 72
Hadoop 85
information overload 92
immediate consistency 70
latency 70
MapReduce 86

master data management (MDM) 78
NoSQL 72
online transaction processing (OLTP)
systems 71
online analytical processing (OLAP)
systems 72
petabyte 66
query 70
relational database 70
relational management systems
(RDBMSs) 70
sentiment analysis 88
scalability 72
structured query language (SQL) 70
text mining 88

IT at Work 3.5

ERM Applications
Here are some examples of how companies use ERM in the health-care,
finance, and education sectors:

• The Surgery Center of Baltimore stores all medical records
electronically, providing instant patient information to doc-
tors and nurses anywhere and at any time. The system also
routes charts to the billing department, which can then scan
and e-mail any relevant information to insurance providers
and patients. The ERM system helps maintain the required
audit trail, including the provision of records when they are
needed for legal purposes. How valuable has ERM been to
the center? Since it was implemented, business processes
have been expedited by more than 50%, the costs of these
processes have been significantly reduced, and the morale of
office employees in the center has improved noticeably.

• American Express (AMEX) uses TELEform, developed by
Alchemy and Cardiff Software, to collect and process more
than 1 million customer satisfaction surveys every year. The
data are collected in templates that consist of more than
600 different survey forms in 12 languages and 11 countries.
AMEX integrated TELEform with AMEX’s legacy system, which
enables it to distribute processed results to many managers.
Because the survey forms are now readily accessible, AMEX has
reduced the number of staff who process these forms from 17
to 1, thereby saving the company more than $500,000 a year.

• The University of Cincinnati provides authorized access to the
personnel files of 12,000 active employees and tens of thou-
sands of retirees. The university receives more than 75,000
queries about personnel records every year and then must
search more than 3 million records to answer these queries.
Using a microfilm system to find answers took days. The
solution was an ERM that digitized all paper and microfilm
documents, without help from the IT department, making
them available via the Internet and the university’s intranet.
Authorized employees access files using a browser.

IT at Work Questions
1. What are the business benefits of BI?
2. What are two related challenges that must be resolved for BI to produce meaningful insights?
3. What are the steps in a BI governance program?
4. What does it mean to drill down into data, and why is it important?
5. What four factors are contributing to increased use of BI?
6. Why is ERM a strategic issue rather than simply an IT issue?
7. Why might a company have a legal duty to retain records? Give an example.
8. Why is creating backups an insufficient way to manage an organization’s documents?


Assuring Your Learning

Discuss: Critical Thinking Questions

1. What are the functions of databases and data warehouses?

2. How does data quality impact business performance?

3. List three types of waste or damages that data errors can cause.

4. What is the role of a master reference file?

5. Give three examples of business processes or operations that
would benefit significantly from having detailed real-time or near real-
time data and identify the benefits.

6. What are the tactical and strategic benefits of big data analytics?

7. Explain the four Vs of data analytics.

8. Select an industry. Explain how an organization in that indus-
try could improve consumer satisfaction through the use of data
warehousing.

9. Explain the principle of 90/90 data use.

10. Why is master data management (MDM) important in companies
with multiple data sources?

11. Why would a company invest in a data mart instead of a data
warehouse?

12. Why is data mining important?

13. What are the operational benefits and competitive advantages of
business intelligence?

14. How can ERM decrease operating costs?

Explore: Online and Interactive Exercises

1. Visit www.YouTube.com and search for SAS Enterprise Miner
Software Demo in order to assess the features and benefits of SAS
Enterprise Miner. The URL is https://www.youtube.com/watch?v=
Nj4L5RFvkMg.

a. View the SAS Enterprise Miner Software demo, which is about
seven minutes long.

b. Based on what you learn in the demo, what skills or expertise
are needed to build a predictive model?

c. At the end of the demo, you hear the presenter say that “SAS
Enterprise Miner allows end-users to easily develop predictive
models and to generate scoring to make better decisions about
future business events.” Do you agree that SAS Enterprise Miner
makes it easy to develop such models? Explain.

d. Do you agree that if an expert develops predictive models, it
will help managers make better decisions about future business
events? Explain.

e. Based on your answers to (c) and (d), under what conditions
would you recommend SAS Enterprise Miner?

2. Research two electronic records management vendors, such as
Iron Mountain.

a. What are the retention recommendations made by the
vendors? Why?

b. What services or solutions does each vendor offer?

3. View the “Edgenet Gain Real time Access to Retail Product Data
with In-Memory Technology” video on YouTube. Explain the benefit of
in-memory technology.

Analyze & Decide: Apply IT Concepts to Business Decisions

1. Visit www.Oracle.com. Click the Solutions tab to open the menu;
then click Data Warehousing under Technology Solutions.

a. Scroll down to view “Procter & Gamble Drives 30X Performance
Gains with Oracle Exadata.”

b. Describe Procter & Gamble’s challenges, why it selected
Oracle Exadata, and how that solution met those challenges.

2. Visit www.Teradata.com. Click Resources and open “Videos.” Select
one of the videos related to data analytics. Explain the benefits of the
solution chosen.

3. Spring Street Company (SSC) wanted to reduce the “hidden costs”
associated with its paper-intensive processes. Employees jokingly pre-
dicted that if the windows were open on a very windy day, total chaos


would ensue as thousands of papers started to fly. If a flood, fire, or
windy day occurred, the business would literally grind to a halt. The
company’s accountant, Sam Spring, decided to calculate the costs of
its paper-driven processes to identify their impact on the bottom line.
He recognized that several employees spent most of their day filing
or retrieving documents. In addition, there were the monthly costs to
warehouse old paper records. Sam measured the activities related to
the handling of printed reports and paper files. His average estimates
were as follows:

a. Dealing with a file: It takes an employee 12 minutes to walk
to the records room, locate a file, act on it, refile it, and return to
his or her desk. Employees do this 4 times per day (five days
per week).

b. Number of employees: 10 full-time employees perform the
functions.

c. Lost document replacement: Once per day, a document gets
“lost” (destroyed, misplaced, or covered with massive coffee
stains) and must be recreated. The total cost of replacing each lost
document is $200.

d. Warehousing costs: Currently, document storage costs are $75
per month.

Sam would prefer a system that lets employees find and work with
business documents without leaving their desks. He’s most concerned
about the human resources and accounting departments. These person-
nel are traditional heavy users of paper files and would greatly benefit

from a modern document management system. At the same time, how-
ever, Sam is also risk averse. He would rather invest in solutions that
would reduce the risk of higher costs in the future. He recognizes that the
U.S. PATRIOT Act’s requirements that organizations provide immediate
government access to records apply to SSC. He has read that manufactur-
ing and government organizations rely on efficient document manage-
ment to meet these broader regulatory imperatives. Finally, Sam wants
to implement a disaster recovery system.

Prepare a report that provides Sam with the data he needs to evalu-
ate the company’s costly paper-intensive approach to managing docu-
ments. You will need to conduct research to provide data to prepare this
report. Your report should include the following information:

1. How should SSC prepare for an ERM if it decides to implement one?

2. Using the data collected by Sam, create a spreadsheet that calculates the costs of handling
paper at SSC based on an average hourly rate per employee of $28. Add the cost of lost
documents to this. Then add the costs of warehousing the paper, which increase by 10% every
month due to increases in volume. Present the results showing both monthly totals and a
yearly total. Prepare graphs so that Sam can easily identify the projected growth in
warehousing costs over the next three years. (A rough calculation sketch follows this exercise.)

3. How can ERM also serve as a disaster recovery system in case of fire,
flood, or break-in?

4. Submit your recommendation for an ERM solution. Identify two
vendors in your recommendation.
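The minimal Python sketch below works through the cost calculation requested in item 2 of the Spring Street exercise, using the figures Sam collected. The number of working days per month (20) and the 12-month horizon are simplifying assumptions that should be adjusted before drawing conclusions; a spreadsheet version would follow the same arithmetic.

MINUTES_PER_FILE = 12
TRIPS_PER_DAY = 4
EMPLOYEES = 10
HOURLY_RATE = 28          # dollars per hour
LOST_DOC_COST = 200       # dollars per lost document, one per working day
WORKDAYS_PER_MONTH = 20   # assumption: 5 days/week * 4 weeks

labor_per_month = ((MINUTES_PER_FILE / 60) * TRIPS_PER_DAY
                   * WORKDAYS_PER_MONTH * EMPLOYEES * HOURLY_RATE)
lost_docs_per_month = LOST_DOC_COST * WORKDAYS_PER_MONTH

warehousing = 75.0        # dollars in month 1, growing 10% per month
yearly_total = 0.0
for month in range(1, 13):
    monthly_total = labor_per_month + lost_docs_per_month + warehousing
    print("Month", month, ": $", round(monthly_total, 2),
          "(warehousing $", round(warehousing, 2), ")")
    yearly_total += monthly_total
    warehousing *= 1.10   # 10% monthly increase in storage volume
print("Year total: $", round(yearly_total, 2))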

Case 3.2
Business Case: Big Data Analytics is the “Secret
Sauce” for Revitalizing McDonald’s
With 62 million daily customers and an annual revenue of $27 billion,
McDonald’s has a virtually unrivaled amount of data at its disposal to
analyze. In order to dominate the market, retain its loyal customers,
and attract new customers who are skeptical of McDonald’s practices
and quality, it turns to its data, becoming an “information-centric
organization.” What does it mean to be information centric? Instead
of using a fixed process of production, service, etc. as a business plan
that is product driven, McDonald’s uses customer data to dictate its
next move as a customer-driven corporation. During the inception of
McDonald’s in 1940, the McDonald brothers derived a product-driven
business centered around fast service and tasty food. While that
method was successful before other restaurants entered the fast food
market, growth was stunted due to a lack of innovation and change.
So, the organization began to collect customer data as a means to
monitor successful products, customer demands, and the results of
marketing campaigns.

This venture led to McDonald’s becoming the premier fast food
chain across the United States in the 1980s. Soon after becoming a
customer-driven corporation, McDonald’s introduced the Happy Meal
so families with small children could reduce costs and waste at dinner
time, released the Egg McMuffin as the most successful breakfast item of
all time, equipped professionals and teenagers with free Wi-Fi to expand
its customer segmentation, and provided nutrition details to become the
most transparent fast food chain at the time. All of these improvements
derived from McDonald’s using its immense amount of data to set its
chain apart from the rest.

In 2008, to further improve its ability to leverage big data, McDonald’s
made the transition from average-based metrics to trend analytics.
The issue with average-based metrics is that they make it hard to compare
regions and stores. A store could be growing in sales and productivity
yet have the same average metrics as a store that is declining.
Using trend analytics allowed McDonald’s to combine multiple
datasets from multiple data sources to visualize and understand
cause-and-effect relationships in individual stores and regions. The
correlations it found enabled its analysts to prescribe solutions to
problems in sales, production, turnover, and supply chain manage-
ment to reduce costs and save time. The variables it studies allow
McDonald’s to create a standardized experience across the world.
At the same time, analyzing local data for each store produces minor
variations across the organization. For example, most McDonald’s locations
look the same, but each restaurant is slightly different and optimized
for the local market.

A great example of McDonald’s big data analysis in action is its
updated drive-thru system. All fast food chains have bottlenecks in their
drive-thru lanes, but McDonald’s average customer wait time is about 3
minutes, which is close to the industry’s longest wait time of 214 seconds.
One of the most prominent issues in its drive-thru was that customers
going through the line for dinner, ordering large meals and searching over
the menu for an extended period of time, created a negative experience
for each car in line behind them. In response, McDonald’s optimized the
drive-thru across three components: design, information, and people.
Design focused on improvements to the drive-thru itself, including better
speaker quality and higher-resolution digital menu boards. Information
centered around what was on the menu board. In order to decrease order
times, McDonald’s removed about 40% of the drive-thru menu board. In


its third aspect, people, the fast food chain attempted to reduce the
negative experiences for those in line by creating a second drive-thru
line with a designated order taker for each line, a third drive-thru win-
dow, and two production lines.

Another example showing McDonald’s commitment to being a
customer-driven corporation is its introduction of all day breakfast,
which was the highest priority for customers across the United
States. Being the corporation with by far the largest share of the
fast food market, McDonald’s will continue to use its growing data
sets to provide the best experience and food to its customers (van
Rijmenam, 2016).

Questions
1. Explain McDonald’s mission and responsibilities.

2. What limitation did McDonald’s face in gaining data that was
meaningful to decision-making?

3. Describe trend analytics.

4. Is McDonald’s product oriented or customer oriented?

5. Why is the ability to identify patterns and relationships critical to
McDonald’s operations?

Case 3.3
Video Case: Verizon Improves Its
Customer Experience with Data Driven
Decision-Making
Verizon leverages Teradata’s data analytics platform to shift its opera-
tions from qualitative decision-making to evidence-based and data-
driven decision-making to improve the customer experience. Visit
www.Teradata.com and search for the video “Verizon: Using Advanced
Analytics to Deliver on Their Digital Promise to Help Customers Inno-
vate Their Lifestyle.”

1. How does Verizon use Teradata to make decisions?
2. How do the three sectors of Verizon work together to create value for the customer?
3. How does Verizon use data analytics to “penetrate the market”?
4. What impact does customer behavior data have on Verizon’s marketing strategy?

“IT Matters” Discussion Board
Research the concept of Big Data. Find at least one company that main-
tains a “Big Data” database.

1. What is Big Data? Give at least three examples of organizations
that have Big Data sets. What are the applications?

2. Discuss one of the organizations that you identified that uses Big
Data and briefly describe the use of Big Data in the organization.

3. Discuss as much as you can find about the amount of data they
have and how they process it.

4. What did you learn or what lessons did you take away from
this research?

Provide at least two hyperlinked references to back up your findings
(one for the organization you chose to discuss and one for the concept
of big data in general). Post your findings and respond to at least two
comments posted by your fellow students.

IT Toolbox

Framework for Generally Accepted Recordkeeping
Principles
The Framework for generally accepted recordkeeping principles
is a useful tool for managing business records to ensure that they
support an enterprise’s current and future regulatory, legal, risk
mitigation, environmental, and operational requirements.

The framework consists of eight principles or best practices,
which also support data governance. These principles were created
by ARMA International and legal and IT professionals. (A small
rule-based sketch of the retention and disposition principles follows the list.)

1. Principle of accountability Assign a senior executive to
oversee a recordkeeping program; adopt policies and proce-
dures to guide personnel; and ensure program auditability.

2. Principle of transparency Document the processes and
activities of an organization’s recordkeeping program in an
understandable manner and make them available to all
personnel and appropriate parties.

3. Principle of integrity Ensure the recordkeeping program
can reasonably guarantee the authenticity and reliability
of records and data.

4. Principle of protection Construct the recordkeeping
program to ensure a reasonable level of protection to records
and information that are private, confidential, privileged,
secret, or essential to business continuity.

5. Principle of compliance Ensure recordkeeping program
complies with applicable laws, authorities, and the organiza-
tion’s policies.

6. Principle of availability Maintain records in a manner that
ensures timely, efficient, and accurate retrieval of needed
information.

7. Principle of retention Maintain records and data for an
appropriate time based on legal, regulatory, fiscal, opera-
tional, and historical requirements.

8. Principle of disposition Securely dispose of records when
they are no longer required to be maintained by law or orga-
nizational policies.
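As a small illustration of the retention and disposition principles, the Python sketch below keeps each record type for a required period and flags it for secure disposal afterward. The record types, retention periods, and dates are invented; real retention schedules come from legal, regulatory, fiscal, operational, and historical requirements.

from datetime import date

RETENTION_YEARS = {        # illustrative retention periods per record type
    "financial": 7,
    "product_design": 30,
    "marketing_email": 1,
}

def disposition(record_type, created, today):
    # Decide whether a record may be securely disposed of or must be retained.
    years_kept = (today - created).days / 365.25
    if years_kept >= RETENTION_YEARS[record_type]:
        return "eligible for secure disposal"
    return "retain"

today = date(2017, 1, 1)
print(disposition("financial", date(2009, 3, 15), today))        # eligible for secure disposal
print(disposition("marketing_email", date(2016, 6, 1), today))   # retain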

Sources: Compiled from Van Rijmenam (2016) and McDonald’s (2017).


References
Bing, C. “Data Mining Software Used by Spy Agencies just got more

Powerful.” FedScoop, June 21, 2016.
Burns, E. “Coca-Cola Overcomes Challenges to Seize BI Opportuni-

ties.” TechTarget.com. August 2013.
Burns, E. “Text Analysis Tool Helps Lenovo Zero in on the Customer.”

Business Analytics, April 8, 2016.
BusinessIntelligence.com. “Coca-Cola’s Juicy Approach to Big Data.”

July 29, 2013b. http://businessintelligence.com/bi-insights/coca-
colas-juicy-approach-to-big-data

Cattell, J., S. Chilukuri, and M. Levy. “How Big Data Can Revolutionize
Pharmaceutical R&D.” 2016. http://www.mckinsey.com/industries/
pharmaceuticals-and-medical-products/our-insights/how-big-
data-can-revolutionize-pharmaceutical-r-and-d

CNNMoney, The Coca-Cola Co (NYSE:KO) 2014.
Columbus, L. “Ten Ways Big Data Is Revolutionizing Marketing and

Sales.” Forbes, May 9, 2016.
Eisenhauer, T. “The Undeniable Benefits of Having a Well-Designed

Document Management System.” Axero Solutions, August 5, 2015.
FirstWind website www.firstwind.com, 2017.
Forbes. “Betting on Big Data.” 2015.
Hammond, T. “Top IT Job Skills for 2014: Big Data, Mobile, Cloud,

Security.” TechRepublic.com, January 31, 2014.
Harle, P., A. Havas, and H. Samandari. “The Future of Bank Risk

Management.” McKinsey & Company, July 2016.
Harvard Business School. “How Coca-Cola Controls Nature’s Oranges.”

November 22, 2015.
HealthCanal. “Where Do You Start When Developing a New Medicine?”

March 27, 2014.
IDC. “Explosive Internet of Things Spending to Reach $1.7 Trillion in

2020, According to IDC.” June 02, 2015.
King, L. “How Business Intelligence Helps Small Businesses Make

Better Decisions.” Huffington Post, July 28, 2016.

Kitamura, M. “Big Data Partnerships Tackle Drug Development
Failures.” Bloomberg News, March 26, 2014.

Kramer, S. “The High Costs of Dirty Data.” Digitalist, May 1, 2015.
Ladley, J. “Business Alignment Techniques for Successful and

Sustainable Analytics.” CIO, May 13, 2016.
Liyakasa, K. “Coke Opens Data-Driven Happiness, Builds Out Market-

ing Decision Engine.” Ad Exchanger, October 14, 2015.
McDonald’s website 2017. https://www.mcdonalds.com/us/en-us/

about-us/our-history.html
NIH (National Institute of Health). “Accelerating Medicines Partner-

ship.” February 2014. http://www.nih.gov/science/amp/index.htm
Oliphant, T. “How to Make Big Data Insights Work for You.” Business

Intelligence, February 24, 2016.
Ovalsrud, T. “Big Data and Analytics Spending to Hit $1.87 billion”

CIO, May 24, 2016.
Ransbothom, S. “Coca-Cola’s Unique Challenge: Turning 250

Datasets into One.” MIT Sloan Management Review, May 27, 2015.
RingLead, Inc. “The True Cost of Bad (And Clean) Data.” July 17, 2015.
syntheses.net 2017.
The Data Warehousing Institute (TDWI). tdwi.org/portals/business-

intelligence.asp. 2014
U.S. Department of Energy. “Wind Vision: A New Era for Wind Power in

the United States.” http://energy.gov/eere/wind/maps/wind-vision,
March 12, 2015.

Van Rijmenam, M. “From Big Data to Big Mac; how McDonalds lever-
ages Big Data.” DataFloq.com, August 15, 2016.

Van Rijmenam, M. “How Coca-Cola Takes a Refreshing Approach on
Big Data.” DataFloq, July 18, 2016.

Wise, L. “Evaluating Business Intelligence in the Cloud.” CIO,
March 9, 2016.


CHAPTER 4

Networks, Collaborative
Technology, and the
Internet of Things

CHAPTER OUTLINE

Case 4.1 Opening Case: Sony Builds an IPv6
Network to Fortify Competitive Edge

4.1 Network Fundamentals

4.2 Internet Protocols (IP), APIs,
and Network Capabilities

4.3 Mobile Networks

4.4 Collaborative Technologies and
the Internet of Things (IoT)

Case 4.2 Business Case: Google Maps API for
Business

Case 4.3 Video Case: Small Island Telecom
Company Goes Global

LEARNING OBJECTIVES

4.1 Describe the different types of networks and the basic
functions of business networks.

4.2 Understand the purpose of IPs and APIs and compare wireless
3G, 4G, and 5G networks and how they support businesses.

4.3 Describe the growth in mobile data traffic and understand the
components of the mobile infrastructure including near-field
communication. List the business functions that near-field
communication supports.

4.4 Evaluate performance improvements gained from
collaborative technology and understand the concept of the
Internet of Things (IoT).


Introduction
Across all types and sizes of organizations, the Internet and networks have changed the way
that business is conducted. Twenty years ago, computers were glorified typewriters that could
not communicate with one another. If we wanted to communicate we used the telephone.
Today computers constantly exchange data with each other over distance and time to provide
companies with a number of significant advantages. The convergence of access technologies,
cloud, 5G networks, multitasking mobile operating systems, and collaboration platforms con-
tinues to change the nature of work, the way we do business, how machines interact, and other
things not yet imagined. In this chapter you will learn about the different types of networks,
how they affect the way that businesses communicate with customers, vendors, and other
businesses, and how the largest network, the Internet, is enabling massive automatic data col-
lection efforts from “things” rather than from people.

Case 4.1 Opening Case


Sony Builds an IPv6 Network to Fortify
Competitive Edge

Sony’s Rapid Business Growth
In the early 2000s, Sony Corporation had been engaged in strate-
gic mergers and acquisitions to strengthen itself against intensifying
competition (Figure 4.1). By 2007 Sony’s enterprise network (internal
network) had become too complex and was incapable of supporting
communication, operations, and further business growth (Table 4.1).
The enterprise network was based on IPv4. A serious limitation was
that the IPv4 network could not provide real-time collaboration among
business units and group companies.

Expansion efforts were taking too long because of the complicated
structure of the network, and total cost of ownership (TCO) was increas-
ing. Also, a number of technical limitations were blocking internal com-
munications. To eliminate these limitations, Sony decided to invest in
IPv6-based networks.

Network Limitations
Many of the Sony Group companies had developed independently—
and had independent networks. Devices connected to the independent
networks were using the same IP addresses. That situation is compara-
ble to users having duplicate telephone numbers—making it impossi-
ble to know which phone was being called. Also, phones with the same
number could not call each other.

Once these networks were integrated, the duplicate IP address
caused traffic-routing conflicts. Routing conflicts, in turn, led to the fol-
lowing problems:

1. Sony’s employee communication options were severely limited,
which harmed productivity.

2. File sharing and real-time communication were not possible.

3. Introducing cloud services was difficult and time-consuming.

Migration to IPv6 Networks: An Investment
in the Future
With its virtually unlimited number of IP addresses, IPv6 would support
Sony’s long-term, next-generation information and communications
technology (ICT) infrastructure strategy and improve collaboration
and productivity.

Migrating from IPv4 to IPv6 involved 700 sites, hundreds of thou-
sands of networking devices, and hundreds of thousands of network
users spread around the globe. During the transition, Sony realized
that it was necessary to support both IP protocols. That is, while Sony
wanted to eventually migrate completely to IPv6, IPv6 would sup-
plement and coexist with the existing enterprise IPv4 network, rather
than replace it. Running both protocols on the same network at the
same time was necessary because Sony’s legacy devices and apps only
worked on IPv4.
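A small illustration, using Python’s standard ipaddress module, of why duplicate private IPv4 ranges collide when independently built networks are merged and how IPv6’s much larger address space avoids the problem. The subnets shown are documentation examples, not Sony’s actual addressing plan.

import ipaddress

# Two business units that grew independently and both chose the same
# private IPv4 range -- after integration their addresses overlap.
unit_a = ipaddress.ip_network("10.1.0.0/16")
unit_b = ipaddress.ip_network("10.1.0.0/16")
print("IPv4 ranges overlap:", unit_a.overlaps(unit_b))        # True

# With IPv6, each unit can be given its own globally unique prefix,
# so there is no conflict and no need for NAT between them.
unit_a6 = ipaddress.ip_network("2001:db8:a::/48")
unit_b6 = ipaddress.ip_network("2001:db8:b::/48")
print("IPv6 prefixes overlap:", unit_a6.overlaps(unit_b6))    # False

# The size difference explains "virtually unlimited" IPv6 addresses.
print("IPv4 addresses total:", 2 ** 32)
print("IPv6 addresses total:", 2 ** 128)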

Sony selected Cisco as a key partner in the migration and inte-
gration of IPv4 and IPv6 traffic because of the maturity of its IPv6


technology. The integrated network has been used by Sony as infra-
structure for product development. Sony also upgraded its Cisco net-
work switches at the corporate data center, campuses, and remote
offices to handle concurrent IPv4 and IPv6 traffic.

Business Results
The use of IPv6 eliminated the issue of conflicting IP addresses, ena-
bling Sony employees in all divisions to take advantage of the produc-
tivity benefits of real-time collaboration applications. Other business
improvements are as follows:

• Flexibility to launch new businesses quickly.

• Reduced TCO of enterprise network.

• Network without communications constraints, supporting “One
Sony” through information systems:

Decreased lead time of connecting a new group to the
enterprise network.

Automated network processes by eliminating manually configured NAT devices.

However, Sony’s networks are far from perfect, especially when
it comes to its PlayStation Network Service. Unfortunately for gamers,
the PSN consistently crashes without warning and for relatively long
periods of time. The first crash of 2016 on January 4th caused the ser-
vice to be down for about 8 hours for all users. During that time, many
users could not play their games, use streaming services, or access the
online store.

Questions
1. Explain how Sony’s IPv4 enterprise network was restricting the productivity of its workers.

FIGURE 4.1 Sony Corporation overview.
Sony aims to accelerate global collaboration and business across business units to achieve the goal of “One Sony.”
• Brand: Sony Corporation.
• Global reach: Consumer electronics equipment and services; music, pictures, computer entertainment.
• Network solution: Cisco enterprise IPv6 network integrated with IPv4 network.
• Business results: More versatile network; network without communications constraints, supporting “One Sony” through information systems.

TABLE 4.1 Opening Case Overview

Company: Sony Corporation, Sony.com
Location: Headquartered in Tokyo, Japan. Over 700 total network sites worldwide.
Industries: One of the largest consumer electronics and entertainment companies in the world, including audio/video equipment, semiconductors, computers, and video games. Also engaged in production and distribution of recorded music, motion picture, and video.
Business challenges:
• Network expansion required too much time due to complexity of enterprise network.
• Networking TCO (total cost of ownership) was continually increasing.
• Numerous constraints on networks obstructing communication between companies in Sony Group.
Network technology:
• Integrated its IPv4 networks with new IPv6 solutions from Cisco. The integrated IPv4/IPv6 network has been used by Sony as infrastructure for the development of new products and enterprise-wide collaboration.
• Sony also upgraded its Cisco switches at the corporate data center, campuses, and remote offices to handle concurrent IPv4 and IPv6 traffic.


4.1 Network Fundamentals
Today’s managers need to understand the technical side of computer networks to make intel-
ligent investment decisions that impact operations and competitive position. Enterprises run
on networks—wired and mobile—and depend upon their ability to interface with other net-
works and applications. Computer networks are changing significantly in their capacity and
capabilities.

Network Types
Computers on a network are called nodes. The connection between computers can be
done via cabling, most commonly through Ethernet, or wirelessly through radio waves.
Connected computers share resources, such as the Internet, printers, file servers, and other
devices. The multipurpose connections enabled by a network allow a single computer to
do more than if it were not connected to other devices. The most well-known network is
the Internet.

Computer networks are typically categorized by their scope. Common types of networks
are shown in Table 4.2. Of these, LAN and WAN are the two primary and best-known categories
of networks.

Computer networks are sets of
computers connected together for
the purpose of sharing resources.

TABLE 4.2 Types of Networks

• LAN (Local Area Network): Connects network devices over a relatively short distance; owned, controlled, and managed by one individual or organization. Examples: office building, school, home.
• WAN (Wide Area Network): Spans a large physical distance; a geographically dispersed collection of LANs; owned and managed by multiple entities. Examples: the Internet, a large company.
• WLAN (Wireless Local Area Network): A LAN based on Wi-Fi wireless network technology. Examples: the Internet, a large company.
• MAN (Metropolitan Area Network): Spans a physical area larger than a LAN but smaller than a WAN; owned and operated by a single entity, e.g., a government agency or large company. Examples: a city, a network of suburban fire stations.
• SAN (Storage Area Network/Server Area Network): Connects servers to data storage devices. Example: a high-performance database.
• CAN (Campus Area Network/Cluster Area Network): Spans multiple LANs but is smaller than a MAN. Examples: a university, a local business campus.
• PAN (Personal Area Network): Spans a small physical space, typically 35 feet or less; connects the personal IT devices of a single individual. Example: a laptop, smartphone, and portable printer connected together.

2. What problems did duplicate IP addresses cause at Sony? Give
an analogy.

3. Why did Sony need to run both protocols on its network instead of
replacing IPv4 with IPv6?

4. Describe the strategic benefit of Sony’s IPv6 implementation.

5. Do research to determine the accuracy of this prediction:
“Today, almost everything on the Internet is reachable over
IPv4. In a few years, both IPv4 and IPv6 will be required for
universal access.”

Sources: Cisco (2016) and Neal (2016).


Intranets, Extranets, and Virtual Private Networks
Intranets are used within a company for data access, sharing, and collaboration. They are por-
tals or gateways that provide easy and inexpensive browsing and search capabilities. Colleges
and universities rely on intranets to provide services to students and faculty. Using screen shar-
ing and other groupware tools, intranets can support team work.

An extranet is a private, company-owned network that can be logged into remotely via
the Internet. Typical users are suppliers, vendors, partners, or customers. Basically, an extranet
is a network that connects two or more companies so they can securely share information.
Since authorized users remotely access content from a central server, extranets can drastically
reduce storage space on individual hard drives.

A major concern is the security of the transmissions that could be intercepted or compro-
mised. One solution is to use virtual private networks (VPNs), which encrypt the packets before
they are transferred over the network. VPNs consist of encryption software and hardware that
encrypt, send, and decrypt transmissions, as shown in Figure 4.2. In effect, instead of using a
leased line to create a dedicated, physical connection, a company can invest in VPN technology
to create virtual connections routed through the Internet from the company’s private net-
work to the remote site or employee. Extranets can be expensive to implement and maintain
because of hardware, software, and employee training costs if hosted internally rather than by
an application service provider (ASP).
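To make the encrypt-send-decrypt idea in Figure 4.2 concrete, the minimal Python sketch below encrypts a message with a shared symmetric key before it crosses the public Internet and decrypts it at the other end. It assumes the third-party cryptography package is installed; a real VPN also negotiates keys and tunnels entire IP packets, which this illustration does not attempt.

# Conceptual sketch of the encrypt-send-decrypt step a VPN performs.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the shared secret both VPN endpoints would hold
tunnel = Fernet(key)

plaintext = b"Quarterly sales figures for the remote office"
ciphertext = tunnel.encrypt(plaintext)   # this is what travels over the Internet
print(ciphertext[:32])                   # unreadable to anyone who intercepts it

recovered = tunnel.decrypt(ciphertext)   # decrypted at the company's network edge
assert recovered == plaintext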

© tungphoto/iStockphoto
FIGURE 4.2 Virtual private networks (VPNs) create encrypted
connections to company networks.

Network Terminology
To be able to evaluate the different types of networks and the factors that determine their func-
tionality, you need to be familiar with the following network terminology:

• Modem A device designed to adapt/modify information signals so that they can be transported by the media. The word modem combines two terms, Modulator and Demodulator: the modulator adapts the information signal so it can be transported by the media, and the demodulator performs the inverse process at the receiving end. Digital modems are called CSU/DSUs (Channel Service Unit/Data Service Units).

• Modulation and coding These are the specific techniques the modem uses to adapt the signal to the media. There are several ways to do this, such as amplitude modulation, phase modulation, and frequency modulation. In short, modulation/coding decides how the 1s and 0s are represented in terms of voltages and/or frequencies.


• Signal The information we want to send. Every signal is composed of a combination of 1s and 0s, and every signal has a frequency spectrum.

• Signal frequency spectrum All the frequency components of a signal. The more 1s and 0s transmitted per unit of time (i.e., per second), the higher the frequency components of the signal. The bandwidth of the signal is measured in hertz, or the number of variations per second: the more 1s and 0s transmitted within one second, the higher the frequency spectrum, or signal bandwidth.

• Media bandwidth Every medium (i.e., copper, coaxial cable, and fiber optics) has a limitation in the range of signal frequencies that can move through it without significant attenuation. The bandwidth of the media varies by type, is limited, and typically cannot accept the entire signal frequency spectrum (Figure 4.3). The range of frequencies that can move through the media without significant attenuation is called bandwidth, and it is also measured in hertz.

The mission of a modem/CSU-DSU is to adapt the information signal so that it can move through the media without significant attenuation. Typically, "significant attenuation" means that the signal has lost more than half of its original power.

Generally speaking, the media bandwidth (in hertz) can be defined as the range of frequencies (i.e., fmax − fmin) over which the signal has not lost more than 50% of its power. Depending on the coding-modulation technique, it is possible to pack many binary symbols into one hertz (many binary symbols per second). For example, if 5 bits can be packed into each hertz and the bandwidth is 200,000 hertz, then up to 1,000,000 bits/s (200,000 hertz × 5 bits/hertz) can be transmitted.

A different modulation/coding technique (i.e., for the same signal and the same media) might pack 10 bits into every hertz of bandwidth, yielding up to 2 Mbits/s (200,000 hertz × 10 bits/hertz = 2 Mbits/s). The media bandwidth provided should be capable of transporting this coded-modulated signal without significant attenuation. (A short worked example follows this list.)

• Capacity or digital bandwidth The maximum number of bits per second that can be transmitted over the media. Under ideal conditions it is possible to reach the maximum capacity of a connection, although this seldom happens (see Figure 4.4).
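Here is the worked example promised above, expressed as a short Python sketch. It simply multiplies the media bandwidth in hertz by the number of bits each hypothetical modulation scheme packs into one hertz, reproducing the 1 Mbit/s and 2 Mbit/s figures from the text.

# Capacity (bits/s) = media bandwidth (Hz) x bits packed into each hertz
def channel_capacity(bandwidth_hz: int, bits_per_hz: int) -> int:
    return bandwidth_hz * bits_per_hz

bandwidth_hz = 200_000   # 200,000 Hz of usable media bandwidth, as in the text

print(channel_capacity(bandwidth_hz, 5))    # 1,000,000 bits/s = 1 Mbit/s
print(channel_capacity(bandwidth_hz, 10))   # 2,000,000 bits/s = 2 Mbit/s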

Functions Supported by Business Networks
Figure 4.5 describes the basic business functions supported by business networks: communication, mobility, collaboration, relationships, and search. These functions depend on network
switches and routers—devices that transmit data packets from their source to their destina-
tion based on IP addresses. A switch acts as a controller, enabling networked devices to talk to
each other efficiently. For example, switches connect computers, printers, and servers within an
office building. Switches create a network. Routers connect networks. A router links computers

FIGURE 4.3 Bandwidth variation by media type (e.g., 1 Gbps Ethernet, 100 Mbps Ethernet, Gigabit Passive Optical Network (GPON), Ethernet Passive Optical Network (EPON), cable, DSL, and ISDN).


to the Internet, so users can share the connection. Routers act like a dispatcher, choosing the
best paths for packets to travel.

Investments in network infrastructure, including data networks, IP addresses, routers, and
switches are business decisions because of their impact on productivity, security, user experi-
ences, and customer service.

Quality of Service
An important management decision is the network’s quality of service (QoS), especially for
delay-sensitive data such as real-time voice and high-quality video. The higher the required
QoS, the more expensive the technologies needed to manage organizational networks.
Bandwidth-intensive apps are important to business processes, but they also strain network
capabilities and resources. Regardless of the type of traffic, networks must provide secure, pre-
dictable, measurable, and sometimes guaranteed services for certain types of traffic. For exam-
ple, QoS technologies can be applied to create two tiers of traffic:

• Prioritize traffic Data and apps that are time-delay-sensitive or latency-sensitive, such as voice and video, are given priority on the network.

FIGURE 4.4 Bandwidth capacity monitor, showing current bandwidth usage (in MBytes) over the last five minutes against the enterprise's current policy threshold.

Communication Provides sufficient capacity for human and machine-generated transmissions. Delays are frustrating, such as when large video files pause during download waiting for the packets to arrive. Buffering means the network cannot handle the speed at which the video is being delivered and therefore stops to collect packets.

Mobility Provides secure, trusted, and reliable access from any mobile device anywhere at satisfactory download and upload speeds.

Collaboration Supports teamwork that may be synchronous or asynchronous; brainstorming; and knowledge and document sharing.

Relationships Manages interaction with customers, supply chain partners, shareholders, employees, regulatory agencies, and so on.

Search Able to locate data, contracts, documents, spreadsheets, and other knowledge within an organization easily and efficiently.

FIGURE 4.5 Basic functions of business networks.


• Throttle traffic In order to give latency-sensitive apps priority, other types of traffic need
to be held back (throttled).

The ability to prioritize and throttle network traffic is referred to as traffic shaping and
forms the core of the hotly debated Net neutrality issue, which is discussed in IT at Work 4.1.
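As a rough illustration of the two-tier idea, the Python sketch below shapes a queue of packets with a priority queue: latency-sensitive voice and video packets are always transmitted before bulk transfers. The traffic classes and priorities are hypothetical simplifications, not a real QoS implementation.

import heapq

# Lower number = higher priority; voice and video jump ahead of bulk data.
PRIORITY = {"voice": 0, "video": 0, "web": 1, "bulk": 2}

queue = []
for order, (kind, payload) in enumerate([("bulk", "backup chunk"),
                                         ("voice", "VoIP frame"),
                                         ("web", "HTTP request"),
                                         ("video", "stream segment")]):
    # "order" preserves arrival order within the same priority tier.
    heapq.heappush(queue, (PRIORITY[kind], order, kind, payload))

while queue:
    _, _, kind, payload = heapq.heappop(queue)
    print(f"transmit {kind}: {payload}")   # voice/video leave the queue first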

Net neutrality is the principle that Internet service providers (ISPs) and their regulators should treat all Internet traffic the same way. It is essentially equal opportunity for Internet speeds and access to websites—no unfair fast or slow lanes and no blocking of anything that is legal on your phone, computer, or tablet.

IT at Work 4.1

Net Neutrality Debate Intensifies
In 2016, the battle over the complicated issue of net neutrality heated up due to AT&T's purchase of Time Warner. With AT&T's takeover of Time Warner, which owns HBO and DC Comics, it is almost certain that AT&T will give priority to customers who try to access its newfound property (Pachal, 2016). On the opposing
side of that issue is traffic shaping. Traffic shaping creates a two-tier
system for specific purposes such as:

1. Time-sensitive data are given priority over traffic that can be
delayed briefly with little-to-no adverse effect. Companies like
Comcast and AT&T argue that Net neutrality rules hurt con-
sumers. Certain applications are more sensitive to delays than
others, such as streaming video and Internet phone services.
Managing data transfer makes it possible to assure a certain
level of performance or QoS.

2. In a corporate environment, business-related traffic may be
given priority over other traffic, in effect, by paying a premium
price for that service. Proponents of traffic shaping argue that
ISPs should be able to charge more to customers who want to
pay a premium for priority service.

Specifically, traffic is shaped by delaying the flow of less important
network traffic, such as bulk data transfers, P2P file-sharing pro-
grams, and BitTorrent traffic.

Traffic shaping is hotly debated by those in favor of Net neu-
trality. They want a one-tier system in which all Internet data
packets are treated the same, regardless of their content, destina-
tion, or source. In contrast, those who favor the two-tiered system
argue that there have always been different levels of Internet ser-
vice and that a two-tiered system would enable more freedom of
choice and promote Internet-based commerce.

Federal Communications Commission’s 2010 Decision
On December 21, 2010, the Federal Communications Commission
(FCC) approved a compromise that created two classes of Inter-
net access: one for fixed-line providers and the other for the
wireless Net. In effect, the new rules are Net semi-neutrality.
The FCC banned any outright blocking of and “unreasonable
discrimination” against websites or applications by fixed-line
broadband providers. But the rules do not explicitly forbid “paid
prioritization,” which would allow a company to pay an ISP for
faster data transmission. Net neutrality supporters include major
internet companies who provide the content you read and watch
online, including AOL, Facebook, Netflix, Twitter, and Vimeo who
don’t want to be discriminated against by network owners. Those

against it include AT&T, Comcast, Time Warner Cable, Verizon, and
other internet service providers who own the networks and fear
price controls.

Net Neutrality Overturned in 2014
In January 2014, an appeals court struck down the FCC’s 2010
decision. The court allowed ISPs to create a two-tiered Internet, but
promised close supervision to avoid anticompetitive practices, and
banned “unreasonable” discrimination against providers.

On April 24, 2014 FCC Chairman Tom Wheeler reported that
his agency would propose new rules to comply with the court’s
decision. These new rules were approved by the FCC in 2015.
Wheeler stated that these rules “would establish that behavior
harmful to consumers or competition by limiting the openness of
the Internet will not be permitted” (Wheeler, 2014). But Wheeler’s
proposal would allow network owners to charge extra fees to
content providers. This decision has angered consumer advocates
and Net neutrality advocates who view Wheeler with suspicion
because of his past work as a lobbyist for the cable industry and
wireless phone companies.

Rolling Back Net Neutrality Protections in 2017
The process to overhaul how the Internet is regulated is now offi-
cially underway. On May 18, 2017 the FCC voted 2-1 to move forward
with a proposal to roll back net neutrality protections. The contro-
versial vote is the first step in a lengthy process to overturn the rules
put into place during the Obama administration. Longtime net neu-
trality advocates predict there will be negative consequences for
businesses and consumers if net neutrality is overturned. Michael
Cheah, general counsel at Vimeo, summed it up by saying that net
neutrality is about “allowing consumers to pick the winners and
losers and not [having] the cable companies make those decisions
for them” (Fiegerman, 2017).

IT at Work Questions
1. What is Net neutrality?
2. What tiers are created by traffic shaping?
3. Why did the battle over Net neutrality intensify in 2014?
4. Did the FCC's 2015 net neutrality rules favor either side of the debate? Explain.
5. What consequences may occur when the 2015 net neutrality rules are overturned?

Sources: Compiled from Federal Communications Commission (fcc.gov, 2017),
Fiegerman (2017), Pachal (2016), Wheeler (2014), and various blog posts.


Questions

1. Name the different types of networks.

2. What is meant by “bandwidth”?

3. What is the difference between an intranet and an extranet?

4. How does a virtual private network (VPN) provide security?

5. What is the purpose of a modem?

6. Describe the basic functions of business networks.

7. How do investments in network infrastructure impact an organization?

8. Name the two tiers of traffic to which quality of service is applied.

4.2 Internet Protocols (IP), APIs,
and Network Capabilities
The basic technology that makes global communication possible is a network protocol com-
monly known as an Internet Protocol (IP). Each device attached to a network has a unique
IP address that enables it to send and receive files. Files are broken down into blocks known
as packets in order to be transmitted over a network to their destination’s IP address. Ini-
tially, networks used IP Version 4 (IPv4). In April 2014 ARIN, the group that oversees Internet
addresses, reported that IPv4 addresses were running out—making it urgent that enterprises
move to the newer IP Version 6 (IPv6) (Figure 4.6).

The IPv6 Internet protocol has features that are not present in IPv4. For example, IPv6 simplifies aspects of how addresses are assigned and how networks are renumbered, and it shifts responsibility for packet fragmentation from the routers to the sending hosts. The IPv6 protocol does not offer direct interoperability with IPv4; instead, it creates a parallel, independent network. Fortunately, several transition mechanisms, such as NAT64 and 6rd, have been developed to allow IPv6 hosts to communicate with IPv4 servers.
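For a hands-on sense of the two address formats and of why IPv6 relieves the address shortage, the short Python sketch below uses the standard-library ipaddress module to parse one address of each version and compare the sizes of the two address spaces. The example addresses come from the reserved documentation ranges, not real hosts.

import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # IPv4 documentation-range address
v6 = ipaddress.ip_address("2001:db8::1")    # IPv6 documentation-range address

print(v4.version, v6.version)               # 4 6

ipv4_space = 2 ** 32                        # about 4.3 billion possible addresses
ipv6_space = 2 ** 128                       # about 3.4 x 10**38 possible addresses
print(ipv4_space, ipv6_space)
print(ipv6_space // ipv4_space)             # how many times larger the IPv6 space is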

Network protocols serve the following three basic functions:

1. Send data to the correct recipient(s).
2. Physically transmit data from source to destination, with security protected as needed.
3. Receive messages and send responses to the correct recipient(s).

The capacity and capabilities of data networks provide opportunities for more automated
operations and new business strategies. M2M communications over wireless and wired

Internet Protocol (IP) is the
method by which data are sent
from one device to another over
a network.

IP address is a unique
identifier for each device that
communicates with a network
that identifies and locates
each device. An IP address is
comparable to a telephone
number or home address.

Packet is a piece of a message
that is collected and re-assembled
with the other pieces of the same
message at their destination.
To improve communication
performance and reliability, each
larger message sent between
two network devices is often
subdivided into packets.

IP Version 4 (IPv4) has been the Internet protocol for over three decades, but it has reached the limits of its 32-bit address design. It is difficult to configure, it is running out of addressing space, and it provides no features for site renumbering to allow for an easy change of Internet Service Provider (ISP), among other limitations.

IP Version 6 (IPv6) is the most recent version of the Internet Protocol. IPv6 is replacing IPv4 because of IPv4's limitations in the number of IP addresses it can generate. IPv6 has a 128-bit address and allows 7.9 × 10^28 times as many addresses as IPv4, which provides about 4.3 billion addresses.

FIGURE 4.6 An IPv4 address is 32 bits long, which allows for 2^32, or roughly 4.3 billion, unique IP addresses. An IPv6 address is 128 bits long and is written as eight groups of four hexadecimal characters, which allows for 2^128, or 340 trillion, trillion, trillion addresses. IPv6 also offers enhanced quality of service that is needed by the latest in video, interactive games, and e-commerce.


networks automate operations, for instance, by triggering action such as sending a message
or closing a valve. The speed at which data can be sent depends on several factors, including
capacity, server usage, computer usage, noise, and the amount of network traffic. Transfer rate
or speed is an instantaneous measurement.

Comparing 3G, 4G, 4G LTE, and 5G Network Standards
Over the past 20 years, networks have evolved from 3G networks designed for voice and
data to 4G and 5G networks that support broadband Internet connectivity. In its 2016 report,
SNS Research, a major market analysis and consulting firm, announced its forecast of 5G
network contribution to the world economy. Experts predict that by 2020, “LTE and 5G infra-
structure investments are expected to account for a market worth $32 billion” (PRNews-
wire, 2016).

3G networks support multimedia and broadband services over a wider distance
and at faster speeds than prior generation networks. 3G networks have far greater ranges
than 1G and 2G networks since they use large satellite connections to telecommunica-
tion towers.

4G networks are digital, or IP, networks that enable even faster data transfer rates. 4G
delivers average realistic download rates of 3 Mbps or higher (as opposed to theoretical rates,
which are much higher). In contrast, today’s 3G networks typically deliver average download
speeds about one-tenth of that rate.

5G networks are the coming generation of broadband technology. 5G builds on the foundation created by 4G and will dramatically increase the speed at which data are transferred across the network.

Unlike its predecessors 2G and 3G, which have a circuit-switched subsystem, 4G is based purely on the packet-based IP. Users can obtain 4G wireless connectivity through one of the
following standards:

1. WiMAX is a technology standard for long-range wireless networks. WiMax is based on the
IEEE 802.16 standard. IEEE 802.16 specifications are as follows:
• Range: 30 miles (50 km) from base station.
• Speed: 70 megabits per second (Mbps).
• Line-of-sight not needed between user and base station.
WiMAX operates on the same basic principles as Wi-Fi in that it transmits data from one
device to another via radio signals.

2. Long-Term Evolution (LTE) is a GSM-based technology that provides the fastest and most consistent download speeds and most closely follows the United Nations' technical standard for 4G networks. In the United States, LTE is deployed by Verizon, AT&T, and T-Mobile.
LTE capabilities include the following:
• Speed: Downlink data rates of 100 Mbps and uplink data rates of 50 Mbps.

Improved network performance, which is measured by its data transfer capacity, pro-
vides fantastic opportunities for mobility, mobile commerce, collaboration, supply chain
management, remote work, and other productivity gains.

5G mobile networks will offer huge gains in both speed and capacity over existing 4G net-
works—along with opportunities at the operations and strategic levels. In the short term, the
5G infrastructure build-out will create new jobs. In the longer term, 5G will create entirely new
markets and economic opportunities driven by superior mobile capabilities in industries rang-
ing from health care to automotive.

5G networks are designed to support the escalation in mobile data consumption, with
users demanding higher data speeds and traffic volumes expected to increase by hundreds or
even thousands of times over the next 10 years. It is likely that 5G networks will have to deliver
baseline data speeds of 100 Mbit/s and peak speeds of up to 10 Gbit/s. 5G will make it easier to


send texts, make calls, and download and upload Ultra HD and 3D videos. 5G operates with a 5-GHz signal and is set to offer speeds of up to 1 gigabit per second for tens of connections, or tens of megabits per second for tens of thousands of connections.

The move to 5G is being driven by the significant increase in the number of devices to
be supported. Mobile networks will no longer be concerned primarily with person-to-person
communications, as the Internet of Things (IoT) creates billions of new devices for remote
sensing, telemetry, and control applications which will lead to huge numbers of machine-to-
machine and person-to-machine interactions. Although 5G isn’t expected until 2020, many
organizations are already investing in the infrastructure required to run this new mobile
wireless standard.

Circuit versus Packet Switching
All generations of networks are based on switching. Prior to 4G, networks included
circuit switching, which is slower than packet switching. 4G was the first generation to be fully packet switched, which significantly improved performance. The two basic types of switching are
as follows:

Circuit switching A circuit is a dedicated connection between a source and destination.
In the past, when a call was placed between two landline phones, a circuit or connection
was created that remained until one party hung up. Circuit switching is older technology
that originated with telephone calls; it is inefficient for digital transmission.
Packet switching Packet switching transfers data or voice in packets. Files are broken into packets, numbered sequentially, and routed individually to their destination. When received at the destination, the packets are reassembled into their proper sequence (a minimal sketch of this idea appears below).
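Here is the minimal sketch referred to above: a toy Python example that breaks a message into numbered packets, shuffles them to mimic independent routing, and reassembles them by sequence number at the destination. Real packet switching adds headers, addressing, and error handling that are omitted here.

import random

def to_packets(message: str, size: int = 8) -> list:
    # Break the message into numbered chunks ("packets").
    return [(seq, message[seq:seq + size]) for seq in range(0, len(message), size)]

def reassemble(packets: list) -> str:
    # Sort by sequence number, then stitch the payloads back together.
    return "".join(chunk for _, chunk in sorted(packets))

message = "Packet switching routes each piece of a file independently."
packets = to_packets(message)
random.shuffle(packets)                 # packets may arrive out of order
assert reassemble(packets) == message   # the destination restores the original
print(reassemble(packets))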

Wireless networks use packet switching and wireless routers whose antennae transmit
and receive packets. At some point, wireless routers are connected by cables to wired net-
works. The first real network to run on packet-switching technology was ARPAnet, described in Tech Note 4.1.

Tech Note 4.1

Origin of the Internet, E-mail, and TCP/IP
The Advanced Research Projects Agency network (ARPAnet) was
the first real network to run on packet-switching technology. In
October 1969, computers at Stanford University, UCLA, and two
other U.S. universities connected for the first time—making them
the first hosts on what would become the Internet. ARPAnet was
designed for research, education, and government agencies.
ARPAnet provided a communications network linking the country
in the event that a military attack or nuclear war destroyed conven-
tional communications systems.

In 1971 e-mail was developed by Ray Tomlinson, who used
the @ symbol to separate the username from the network’s name,
which became the domain name.

On January 1, 1983, ARPAnet computers switched over to
the transmission control protocol/Internet protocols (TCP/IPs)
developed by Vinton Cerf. A few hundred computers were affected
by the switch. The original ARPAnet protocol had been limited to
1,000 hosts, but the adoption of the TCP/IP standard made larger
numbers of hosts possible. The number of Internet hosts in the
domain name system (DNS) topped 1.05 billion in 2016, almost
double the number reported in 2010.

Application Program Interfaces and Operating Systems
When software developers create applications, they must write and compile the code for a spe-
cific operating system (OS). Figure  4.7 lists the common OSs. Each OS communicates with
hardware in its own unique way; each OS has a specific API that programmers must use. Video
game consoles and other hardware devices also have application program interfaces (APIs)
that run software programs.

Application program interface
(API) An interface is the
boundary where two separate
systems meet. An API provides a
standard way for different things,
such as software, content, or
websites, to talk to each other in
a way that they both understand
without extensive programming.


What Is an API? An API consists of a set of functions, commands, and protocols used
by programmers to build software for an OS. The API allows programmers to use predefined
functions or reusable codes to interact with an OS without having to write a software program
from scratch. APIs simplify the programmer’s job.

APIs are the common method for accessing information, websites, and databases.
They were created as gateways to popular apps such as Twitter, Facebook, and Amazon and
enterprise apps provided by SAP, Oracle, NetSuite, and many other vendors.
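As a purely hypothetical illustration of how a developer calls a web API, the Python sketch below uses the standard-library urllib to request JSON from a placeholder endpoint and read two fields from the response. The URL and field names are invented for illustration; a real vendor API documents its own endpoints, parameters, and authentication.

import json
import urllib.request

# Hypothetical REST endpoint; substitute the URL and authentication scheme
# published in a real provider's developer documentation.
url = "https://api.example.com/products/123"

with urllib.request.urlopen(url) as response:            # send an HTTP GET request
    product = json.loads(response.read().decode("utf-8"))

print(product.get("name"), product.get("price"))         # use the returned business data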

Automated API The current trend is toward automatically created APIs that are making
innovative IT developments possible. Here are two examples of the benefits of automated
APIs:

• Websites such as the European Union Patent office have mappings of every one of their
pages to both URLs for browser access and URLs for REST APIs. Whenever a new page is
published, both access methods are supported.

• McDonald's, along with Unilever and Gatorade, is using automated APIs to bring advertisements to Snapchat users. The social network app is using an auction-based system and targeting to choose which users see which advertisements (Joseph, 2016).

API Value Chain in Business APIs deliver more than half of all the traffic to major
companies like Twitter and eBay. APIs are used to access business assets, such as cus-
tomer information or a product or service, as shown in Figure 4.8. IT developers use APIs
to quickly and easily connect diverse data and services to each other. APIs from Google,
Twitter, Amazon, Facebook, Accuweather, Sears, and E*Trade are used to create many
thousands of applications. For example, Google Maps API is a collection of APIs used by
developers to create customized Google Maps that can be accessed on a Web browser or
mobile devices. Tech Note 4.2 describes a new API that Amazon developed for its Internet
assistant, Alexa.

FIGURE 4.8 API value chain in business. Business assets (data, information, products, and services) provide quick, easy access through APIs; API developers use APIs to create new business applications; and customers, employees, and end-users use the business apps that give them access to the assets.

FIGURE 4.7 Common mobile and desktop operating systems. Common mobile OSs include Android, iOS, and Windows Phone; common desktop OSs include Windows, Mac OS X, and Linux. Each computer OS provides an API for programmers. Mobile OSs are designed around touchscreen input.


Questions

1. Why has IPv6 become increasingly important?

2. What is the difference between IPv4 and IPv6?

3. What is the purpose of an IP address?

4. What are the benefits of using an API?

5. What is the difference between 4G and 5G?

6. What is the most current network standard?

7. What benefits will the upcoming 5G network standard offer businesses?

8. What is the difference between circuit switching and packet switching?

Tech Note 4.2

Amazon Develops New API for Alexa
Online retail giant Amazon announced in 2016 its new and improved
API for its voice-automated speaker Alexa, along with other devel-
oped applications. The new API allows software developers to
increase the efficiency of Alexa’s list feature, which allows users to
add items to their lists within Alexa. For example, a user can ask
Alexa to “add buy soccer cleats” to their to-do lists (Zeman, 2016).
This application extends to shopping lists, buy lists, and even
music playlists.

The new API called the List Skills API means that developers
will have a standardized voice interaction model instead of having
to create one of their own. In other words, applications like Alexa
will all have standardized instructions that users can take advan-
tage of universally. Similarly, Apple developed Siri for iPhones,
Apple computers, and iPads. List Skills API gives customers the
ability to add anything to their lists or give commands across any
device or application that uses it.

The API value chain takes many forms because the organiza-
tion that owns the business asset may or may not be the same as
the organization that builds the APIs. Different people or organiza-
tions may build, distribute, and market the applications. At the end
of the chain are end-users who benefit from the business asset.
Often, many APIs are used to create a new user experience. The
business benefits of APIs are listed in Table 4.3.

TABLE 4.3 Business Benefits of APIs

• APIs are channels to new customers and markets: APIs enable partners to use business assets to extend the reach of a company's products or services to customers and markets they might not otherwise reach.

• APIs promote innovation: Through an API, people who are committed to a challenge or problem can solve it themselves.

• APIs are a better way to organize IT: APIs promote innovation by allowing everyone in a company to use each other's assets without delay.

• APIs create a path to lots of apps: Apps are going to be a crucial channel in the next 10 years. Apps are powered by APIs, and developers use APIs and combinations of APIs to create new user experiences.

4.3 Mobile Networks and Near-Field
Communication
In the 21st-century global economy, advanced wireless networks are a foundation on which
global economic activity takes place. Current 4G and 5G networks and technologies provide
that foundation for moving entire economies. For any nation to stay competitive and prosper-
ous, it is imperative that investment and upgrades in these technologies continue to advance
to satisfy demand. Cisco forecasts that the average global mobile connection speed will more


than double from the current 1.4 Mbps to 3 Mbps, and 5G networks promise speeds that will be 100 times faster than current speeds. The factors that are driving global mobile traffic are shown in Figure 4.9.

FIGURE 4.9 Four drivers of global mobile traffic through 2020: more mobile connections (over 11.6 billion), faster mobile speeds (3 megabits per second), more mobile users (approximately 6.1 billion), and more mobile video (75% of mobile traffic).

FIGURE 4.10 Mobile data traffic milestones by 2020: busy-hour Internet traffic will increase by a factor of 4.6, while average traffic will increase by a factor of only 2; PCs will account for 29% of network traffic and smartphones for 30%; wireless and mobile devices will carry 67% of total network traffic; devices connected to IP networks will number three times the entire global population; broadband speeds will increase 100%; global Internet traffic will be equivalent to 95 times the volume of the entire global Internet in 2005; and it would take an individual 5 million years to watch the amount of video that will cross global IP networks each month.

Increase in Mobile Network Traffic and Users
In its most recent Visual Networking Index Forecast (VNI), Cisco reported that mobile data
traffic has grown 400 million times over the past 15 years. They also predict that by 2020, monthly global mobile data traffic will be 30.6 Exabytes, the number of mobile-connected devices will exceed 11.6 billion (exceeding the world's projected population of 7.8 billion), and smartphones will account for 81% of total mobile traffic. This includes a major
increase in machine-to-machine communications and the number of wearable technol-
ogy devices.

Smartphone users are expected to rise from the 2.6 billion reported in 2014 to 6.1 billion
in 2020 and 80% of these new smartphone users will be located in Asia Pacific, the Middle East,
and Africa. Much of that traffic will be driven by billions of devices talking to other devices wire-
lessly and consumers’ growing demand for more and more videos.

According to the Cisco Visual Networking Index (VNI): Forecast and Methodology, 2015–2020 (Cisco, 2016), annual global IP traffic will reach 2.3 Zettabytes, or 194 Exabytes per month, and smartphone traffic will exceed PC traffic by 2020. Figure 4.10 lists the milestones that mobile
data traffic will reach by 2020.

Exabyte is one quintillion bytes
(1,000,000,000,000,000,000 Bytes)
which is the equivalent of 1,000
petabytes of data or 7 trillion
online video clips. Five Exabytes is
equal to all words ever spoken by
human beings.

Zettabyte is one sextillion bytes
(1,000,000,000,000,000,000,000
Bytes) which is approximately
equal to 1,000 Exabytes.
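As a quick sanity check on the forecast above, and assuming 1 Zettabyte equals 1,000 Exabytes, the short calculation below converts 2.3 Zettabytes of annual traffic into a monthly figure, which lands close to the 194 Exabytes per month cited in the text.

annual_traffic_zb = 2.3               # forecast annual global IP traffic, in Zettabytes
exabytes_per_zettabyte = 1_000

monthly_traffic_eb = annual_traffic_zb * exabytes_per_zettabyte / 12
print(round(monthly_traffic_eb, 1))   # ~191.7 Exabytes per month, close to the cited 194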


Higher Demand for High-Capacity Mobile Networks
This increase in mobile network capacity and use is driving demand for high-capacity mobile networks at unprecedented rates. The four drivers of the increase in global mobile traffic demand are shown in Figure 4.9. Examples of high-capacity networks are wireless mobile, satellite, wireless sensor, and VoIP (voice over
Internet Protocol) such as Skype. Voice over IP (VoIP) networks carry voice calls by converting
voice (analog signals) to digital signals that are sent as packets. With VoIP, voice and data trans-
missions travel in packets over telephone wires. VoIP has grown to become one of the most
used and least costly ways to communicate. Improved productivity, flexibility, and advanced
features make VoIP an appealing technology.

Mobile Infrastructure
Enterprises are moving away from the ad hoc adoption of mobile devices and network infra-
structure to a more strategic planning build-out of their mobile capabilities. As technologies
that make up the mobile infrastructure evolve, identifying strategic technologies and avoid-
ing wasted investments require more extensive planning and forecasting. Factors to consider
are the network demands of multitasking mobile devices, more robust mobile OSs, and their
applications. Mobile infrastructure consists of the integration of technology, software, support,
standards, security measures, and devices for the management and delivery of wireless com-
munications, including the following.

Wi-Fi and Bluetooth Bluetooth is a short-range—up to 100 meters or 328 feet—wireless
communications technology found in billions of devices, such as smartphones, computers,
medical devices, and home entertainment products. When two Bluetooth-enabled devices
connect to each other, this is called pairing.

Wi-Fi is the standard way computers connect to wireless networks. Nearly all computers
have built-in Wi-Fi chips that allow users to find and connect to wireless routers. The router
must be connected to the Internet in order to provide Internet access to connected devices.

Wi-Fi technology allows devices to share a network or Internet connection without the
need to connect to a commercial network. Wi-Fi networks beam packets over short distances
using part of the radio spectrum, or they can extend over larger areas, such as municipal Wi-Fi
networks. However, municipal networks are not common because of their huge costs. See
Figure 4.11 for an overview of how Wi-Fi works.

Wi-Fi Networking Standards

• 802.11ac This is the newest generation of Wi-Fi signaling in popular use. 802.11ac utilizes dual-band wireless technology and supports simultaneous connections on both the 2.4 and 5 GHz Wi-Fi bands. 802.11ac offers backward compatibility to 802.11b/g/n and bandwidth rated up to 1,300 Mbps on 5 GHz, plus up to 450 Mbps on 2.4 GHz.

• 802.11b This standard shares spectrum with 2.4-GHz cordless phones, microwave
ovens, and many Bluetooth products. Data are transferred at distances up to 100 meters
or 328 feet.

• 802.11a This standard runs on 12 channels in the 5-GHz spectrum in North America,
which reduces interference issues. Data are transferred about 5 times faster than 802.11b,
improving the quality of streaming media. It has extra bandwidth for large files. Since the
802.11a and b standards are not interoperable, data sent from an 802.11b network cannot
be accessed by 802.11a networks.

• 802.11g This standard runs on three channels in the 2.4-GHz spectrum, but at the speed
of 802.11a. It is compatible with the 802.11b standard.

• 802.11n This standard improves upon prior 802.11 standards by adding multiple-input
multiple-output (MIMO) and newer features. Frequency ranges from 2.4 to 5 GHz with a
data rate of about 22 Mbps, but perhaps as high as 100 Mbps.

Bluetooth is a short-range
wireless communications
technology.

Wi-Fi is the standard
way computers connect to
wireless networks.


Two Components of Wireless Infrastructure
There are three general types of mobile networks: wide area networks (WANs), WiMAX, and
local area networks (LANs). WANs for mobile computing are known as wireless wide area
networks (WWANs). The range of a WWAN depends on the transmission media and the wire-
less generation, which determines which services are available. The two components of wire-
less infrastructures are wireless LANs and WiMAX.

WLANs Wireless LANs use high-frequency radio waves to communicate between computers, devices, or other nodes on the network. A wireless LAN typically extends an existing wired LAN by attaching a wireless access point (AP) to a wired network.

WiMAX Wireless broadband WiMAX transmits voice, data, and video over high-frequency
radio signals to businesses, homes, and mobile devices. It was designed to bypass traditional
telephone lines and is an alternative to cable and DSL. WiMAX is based on the IEEE 802.16 set of
standards and the metropolitan area network (MAN) access standard. Its range is 20–30 miles
and it does not require a clear line of sight to function. Figure 4.12 shows the components of a
WiMAX/Wi-Fi network.

FIGURE 4.11 Overview of Wi-Fi. (1) A radio-equipped wireless network access point, connected to the Internet (or via a router), generates and receives radio waves (up to 400 feet). (2) Several client devices, equipped with wireless network PC cards, generate and receive radio waves. (3) The router is connected to the Internet via a cable or DSL modem, or is connected via a satellite.


Mashup of GPS and Bluetooth The mashup of GPS positioning and short-range
wireless technologies, such as Bluetooth and Wi-Fi, can provide unprecedented intelligence.
These technologies create opportunities for companies to develop solutions that make a
consumer’s life better. They could, for example, revolutionize traffic and road safety. Intelligent
transport systems being developed by car manufacturers allow cars to communicate with
each other and send alerts about sudden braking and will even allow for remote driving in the
future. In the event of a collision, the car’s system could automatically call emergency services.
The technology can also apply the brakes automatically if it determines that two cars are getting too close to each other, or alert the driver to a car in their blind spot in the next lane.

Advancements in networks, devices, and RFID sensor networks are changing enterprise
information infrastructures and business environments dramatically. The preceding exam-
ples and network standards illustrate the declining need for a physical computer, as other
devices provide access to data, people, or services at anytime, anywhere in the world, on
high-capacity networks.

Business Use of Near-Field Communication
If you have used tap-to-pay with your smartphone, you have engaged in near-field communication.
NFC is a location-aware technology that is more secure than other wireless technolo-
gies  like Bluetooth and Wi-Fi. And, unlike RFID, NFC is a two-way communication tool.
An NFC tag contains small microchips with tiny aerials which can store a small amount of
information for transfer to another near-field communication (NFC) device, such as a
mobile phone.

Location-aware NFC technology can be used to transfer photos and files, make pur-
chases in restaurants, resorts, hotels, theme parks and theaters, at gas stations, and
on buses and trains. Here are some examples of NFC applications and their potential
business value.

• The Apple iWatch wearable device with NFC communication capabilities could be ideal
for mobile payments. Instead of a wallet, users utilize their iWatch as a credit card
or wave their wrists to pay for their Starbucks coffee. With GPS and location-based
e-commerce services, retailers could send a coupon alert to the iWatch when a user

Mashup is a general term
referring to the integration of two
or more technologies.

Near-field communication
(NFC) enables two devices
within close proximity to establish
a communication channel and
transfer data through radio waves.

FIGURE 4.12 Components of a WiMAX/Wi-Fi network: a base station with a WiMAX hub connects the Internet and the WiMAX network to Wi-Fi hotspots and to notebooks with built-in WiMAX adapters.


passes their store. Consumers would then see the coupon and pay for the product with
the iWatch.

• The self-healthcare industry is being radically transformed by the growing use of NFC tech-
nology. Wearable devices such as Fit-Bits, smart glucose monitors, and electrical nerve
stimulators are becoming increasingly cheap and popular due to the proliferation of NFC
tech. These devices can not only monitor, but they also can provide “automated or remote
treatment” to users (Patrick, 2016). Remote control of health-care devices allows for smarter preventive care without the need for doctor or hospital visits and can increase the well-being of those living with chronic illnesses.

• Passengers on public transportation systems can pay fares by waving an NFC smartphone
as they board.

Another interesting near-field application is described in IT at Work 4.2 when technology
was used as an incentive in a marketing campaign by Warner Music.

IT at Work 4.2

NFC-Embedded Guitar Picks
Fans attending gigs by The Wild Feathers were given guitar picks
embedded with an NFC tag. Warner Music had distributed the
guitar picks for fans to enter a competition, share content via social
media, and vote at the gig simply by tapping with an NFC phone.
NFC-embedded picks were inserted into the band’s promotional
flyers at six European venues. Each pick was encoded with a unique
URL and also printed with a unique code for iPhone users to enable
tracking and monitoring.

Marketing Campaign Success Shows an Exciting
Future for NFC
The tags generated a high response rate. Over 65% of the NFC guitar
picks had registered in the competition. And 35% of the fans had
shared content on social media—spending an average of five min-
utes on the site.

NFC is being used in marketing campaigns because the
technology offers slick one-tap interaction. NFC allows brands to

engage with their customers in unique ways and create exciting
user experiences. With millions of NFC-equipped smartphones set
to reach users over the next few years and the technology’s advan-
tages for shoppers and businesses, NFC is emerging as a major
technology.

IT at Work Questions
1. Assume you attended a concert and were given a brochure

similar to the one distributed to fans at The Wild Feathers
concert. Would you use the guitar pick or comparable
NFC-embedded item to participate in a contest? To post
on Facebook or tweet about the concert? Explain why
or why not.

2. How can NFC be applied to create an interesting user
experience at a sporting event? At a retail store or coffee
shop?

3. Refer to your answers in Question 2. What valuable informa-
tion could be collected by the NFC tag in these businesses?

Bluetooth and Wi-Fi seem similar to near-field communication on the surface. All three allow wireless communication and data exchange between digital devices like smartphones. The difference is that near-field communication works through magnetic field induction between devices held a few centimeters apart, while Bluetooth and Wi-Fi rely on longer-range radio transmissions.

Choosing Mobile Network Solutions
When you are choosing a mobile network solution, it’s important to carefully consider the four
factors shown in Figure 4.13.

1. Simple Easy to deploy, manage, and use.
2. Connected Always makes the best connection possible.
3. Intelligent Works behind the scenes, easily integrating with other systems.
4. Trusted Enables secure and reliable communications.


Questions

1. What factors contribute to mobility?

2. Why is mobile global traffic increasing?

3. What accounts for the increase in mobile traffic?

4. Give some examples of VoIP networks.

5. How is NFC different from RFID?

6. What are the two components of a wireless network infrastructure?

7. What is near-field communication and how is it used in business?

8. What factors should be considered when evaluating mobile networks?

FIGURE 4.13 Four important factors to consider when choosing a mobile network solution: simple (easy to deploy, manage, and use); connected (always makes the best connection possible); intelligent (works behind the scenes and integrates easily with other systems); and trusted (enables secure and reliable communications).

4.4 Collaborative Technologies and the
Internet of Things
Now more than ever, business gets done through information sharing and collaborative plan-
ning. Business performance depends on broadband data networks for communication, mobil-
ity, and collaboration. For example, after Ford Motor Company began relying on UPS Logistics
Group’s data networks to track millions of cars and trucks and to analyze any potential prob-
lems before they occur, Ford realized a $1 billion reduction in vehicle inventory and $125 mil-
lion reduction in inventory carrying costs annually.

More and more people need to work together and share documents over time and distance. Teams make most of the complex decisions in organizations, and many teams are geographically dispersed, which makes organizational decision-making difficult when team members are spread across locations and time zones.

Messaging and collaboration tools include older communications media such as e-mail,
videoconferencing, fax, and texts—and blogs, Skype, Web meetings, and social media. Yam-
mer is an enterprise social network that helps employees collaborate across departments,
locations, and business apps. These private social sites are used by more than 400,000 enter-
prises worldwide. Yammer functions as a communication and problem-solving tool and is rap-
idly replacing e-mail. You will read about Yammer in detail in Chapter 7.


Virtual Collaboration
Leading businesses are moving quickly to realize the benefits of virtual collaboration. Several
examples appear below.

Information Sharing Between Retailers and Their Suppliers One of the
most publicized examples of information sharing exists between Procter & Gamble (P&G) and
Walmart. Walmart provides P&G with access to sales information on every item Walmart buys
from P&G. The information is collected by P&G on a daily basis from every Walmart store, and
P&G uses that information to manage the inventory replenishment for Walmart.

Retailer–Supplier Collaboration: Asda Corporation Supermarket chain Asda
has rolled out Web-based electronic data interchange (EDI) technology to 650 suppliers. Web
EDI technology is based on the AS2 standard, an internationally accepted HTTP-based pro-
tocol used to send real-time data in multiple formats securely over the Internet. It promises
to improve the efficiency and speed of traditional EDI communications, which route data over
third-party, value-added networks (VANs).

Lower Transportation and Inventory Costs and Reduced Stockouts:
Unilever Unilever’s 30 contract carriers deliver 250,000 truckloads of shipments annually.
Unilever’s Web-based database, the Transportation Business Center (TBC), provides these car-
riers with site specification requirements when they pick up a shipment at a manufacturing
or distribution center or when they deliver goods to retailers. TBC gives carriers all of the vital
information they need: contact names and phone numbers, operating hours, the number of
dock doors at a location, the height of the dock doors, how to make an appointment to deliver
or pick up shipments, pallet configuration, and other special requirements. All mission-critical
information that Unilever’s carriers need to make pickups, shipments, and deliveries is now
available electronically 24/7.

Reduction of Product Development Time Caterpillar, Inc. is a multinational
heavy-machinery manufacturer. In the traditional mode of operation, cycle time along the
supply chain was long because the process involved paper—document transfers among man-
agers, salespeople, and technical staff. To solve the problem, Caterpillar connected its engi-
neering and manufacturing divisions with its active suppliers, distributors, overseas factories,
and customers through an extranet-based global collaboration system. By means of the collab-
oration system, a request for a customized tractor component, for example, can be transmitted
from a customer to a Caterpillar dealer and on to designers and suppliers, all in a very short
time. Customers also can use the extranet to retrieve and modify detailed order information
while the vehicle is still on the assembly line.

Group Work and Decision Processes
Managers and staff continuously make decisions as they develop and manufacture products,
plan social media marketing strategies, make financial and IT investments, determine how
to meet compliance mandates, design software, and so on. By design or default, group pro-
cesses emerge, referred to as group dynamics, and those processes can be productive or dys-
functional.

Group Work and Dynamics Group work can be quite complex depending on the fol-
lowing factors:

• Group members may be located in different places or work at different times.
• Group members may work for the same or different organizations.


• Needed data, information, or knowledge may be located in many sources, several of which
are external to the organization.

Despite the long history and benefits of collaborative work, groups are not always
successful.

Online Brainstorming in the Cloud Brainstorming ideas is no longer limited to a
room full of people offering their ideas that are written on a whiteboard or posters. Companies
are choosing an alternative—online brainstorming applications, many of them cloud-based. An
advantage is the avoidance of travel expenses if members are geographically dispersed, which
often restricts how many sessions a company can afford to hold. The following are two exam-
ples of online brainstorming apps:

• Evernote is a cloud-based tool that helps users gather and share information, and brain-
storm ideas. One function is Synch, which keeps Evernote notes up-to-date across a user’s
computers, phones, devices, and the Web. A free version of Evernote is available for down-
load from www.evernote.com.

• iMindmap Online, from UK-based ThinkBuzan, relies on mind mapping and other
well-known structured approaches to brainstorming. iMindmap Online helps streamline
work processes, minimize information overload, generate new ideas, and boost
innovation.

The Internet of Things (IoT)
The Internet of Things has the potential to impact how we live and how we work. The IoT is a subset of the Internet in which the objects we interact with every day send and receive signals to and from each other to exchange data about almost everything. The IoT can best be described as a collection technology in that it collects data from millions of sensors embedded in everything from cars to refrigerators to space capsules. This aggregation of data points through smart meters, sensors, and similar devices makes up the “Internet of Things” (IoT).

Analytics, big data, and sensor integrations are revolutionizing how we live and work. A
recent study conducted by IndustryWeek (2016), reported that more than half of U.S. manufac-
turers report they are currently using IoT technology to collect machine data, and a significant
but smaller percentage (44%) are collecting data from sensors embedded in their products.

Several things have created the “perfect storm” for the creation and growth of the IoT. These
include more widely available broadband Internet, lower cost of connecting, development of more
devices with Wi-Fi capabilities and embedded sensors, and the overwhelming popularity of the
smartphone. In layperson’s terms, the IoT is the concept of connecting any device that has an on/
off switch to the Internet or each other. This includes everything from everyday items such as cell-
phones, coffee makers, washing machines, lamps, and headphones to airplane jet engines or an
oil rig drill, smart traffic signals, smart parking, traffic congestion monitoring, air pollution sensors,
potable water monitoring, and river, dam, and reservoir water level monitors. In other words, if
it can be connected, it will be connected. Just think of the IoT as a giant network of connected
“things” with relationships between people-to-people, people-to-things, and things-to-things.

The primary driver for IoT is the broader adoption and deployment of sensors and smart
devices. Some industries have had IoT in place for quite some time, but for others it is an
entirely new concept. Lately, IoT has been gaining in popularity and use. Compared with traditional IT infrastructure, these smaller sensors enable companies to gain more computing capacity and reduce power consumption at lower cost. All in all, it is a win-win situation.

IoT Sensors, Smart Meters, and the Smart Grid
It has been estimated that the number of network-connected sensors and devices could triple
to 21 billion by 2020 (IndustryWeek, 2016).

Internet of Things is the
network of physical objects
or “things” embedded with
electronics, software, sensors,
and network connectivity, that
enables these objects to collect
and exchange data.


Sensors The heart of IoT resides in the source of the data, that is, the sensors. Sensors
generate data about activities, events, and influencing factors that provide visibility into perfor-
mance and support decision processes across a variety of industries and consumer channels.
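To illustrate how sensor data can feed a simple decision process, here is a small, self-contained Python sketch that simulates readings from a few hypothetical smart meters and flags any reading that exceeds a utility's demand threshold. The device names and the threshold value are invented for illustration.

import random

THRESHOLD_KW = 5.0   # hypothetical peak-demand threshold set by the utility

# Simulated instantaneous readings (kW) from three hypothetical smart meters
readings = {meter: round(random.uniform(1.0, 8.0), 2)
            for meter in ("meter-101", "meter-102", "meter-103")}

for meter, kw in readings.items():
    status = "ALERT: curtail load" if kw > THRESHOLD_KW else "normal"
    print(f"{meter}: {kw} kW -> {status}")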

Smart Grid and Smart Cities With a combination of smart meters, wireless tech-
nology, sensors, and software, the smart grid allows utilities to accurately monitor the power
grid and cut back on energy use when electricity supplies are stressed. Consumers, in turn,
gain insight into their power consumption and can make more intelligent decisions about how
to use energy.

A fully deployed smart grid has the potential of saving between $39.69 and $101.57, and
up to 592 pounds of carbon dioxide emissions, per consumer per year in the United States,
according to the Smart Grid Consumer Collaborative (SGCC).

On a broader scale, the IoT can be applied to things like “smart cities” that can help reduce
waste and improve efficiency. IT at Work 4.3 describes how a town in Spain is using the IoT to
improve everyday life for its citizens, or is it?

IT at Work 4.3

Smart City or Police State?
In the small city of Santander on Spain’s Atlantic coast, Mayor
Iñigo de la Serna raised $12 million, mostly from the European
Commission, to launch SmartSantander. SmartSantander is a
smart city experiment that is improving the quality of life,
reducing energy consumption, and engaging its citizens in civic
duties.

20,000 Sensors Embedded
The city implemented wireless sensor networks and embedded
20,000 sensors in its streets and municipal vehicles to monitor gar-
bage collection, crime, and air quality and manage street lighting
for better energy efficiency. “The internet of Things unites all the
data coming from sensors, along with the data the city already has
and data provided by citizens,” says Joaquin Gonzalez, director of
Telefonica in Cantabria (Frangoul,  2016). Sensors communicate
with smartphone apps to inform drivers and commuters on parking
availability, bus delays, road closures, and the current pollen count
in real time. Parking apps direct drivers to available spaces via
cell phone alerts. Drivers benefit from a reduction in the time and
annoyance of finding parking spots. Anyone can feed his or her own
data into the system by, for example, snapping a smartphone photo
of a pothole or broken streetlight to notify the local government
that a problem needs to be fixed. Users can even point their smart-
phones at landmarks in the city to learn more about them and
events happening around the city.

Build-Out of Smart City Applications
This mobile technology can help cities contribute to a greener
planet. Municipal landscape sprinklers can send data to city
agencies for analysis to conserve water usage. Sensors can monitor
weather and pollen counts as well as water and power leaks. City
officials also claim that the development has saved money through
automated, data-driven applications such as dimming streetlights
at optimal times, resulting in 25% savings on electricity bills
and 20% on garbage collection.

Police State
The data streams and mobile apps that keep citizens informed also
keep the government informed. What is the difference between a
smart city and a police state? Many see the new sensor saturation as
a sort of “Big Brother” experiment. Consider how data collected from
sensors mounted outside a bar to track noise levels might be used.

• Scenario #1: Instances of loud noises and squealing tires are
transmitted to local police. The city uses the information to
enforce public nuisance laws and make arrests.

• Scenario #2: People who live in the neighborhood show civic
leaders what is keeping them up at night and receive help in
resolving the problem.

• Scenario #3: Landlords could use data showing less noise and
cleaner air to promote their apartments or office buildings.

The Dark Side of Smart
The wireless networks and sensors need to be maintained. Thou-
sands of batteries embedded in roadways could have expensive
and disruptive maintenance requirements.

Parking space alerts might create other annoyances. If
everyone becomes aware of a parking spot up the street, the rush
of cars converging on a few open locations could lead to rage and
defeat the purpose of such an alert.

IT at Work Questions
1. What are the benefits of a smart city?
2. What are the potential abuses of data collected in this way?
3. Consider the dark side of smart. Are you skeptical of the benefits of a smart city?
4. Would you want to live in a smart city? Explain.
5. How would you prevent Santander from becoming a police state?

Sources: Compiled from Eggers (2016), Frangoul (2016), O’Connor (2013),
and Edwards (2014).


Security and Privacy in the IoT Network security and data privacy are manufac-
turers’ top concerns about IoT technology. With billions of devices connected together there are
a multitude of end-points where security breaches can occur and individuals or organizations
can be hacked.

Advantages and Disadvantages of IoT Organizations are struggling with the
advantages and disadvantages associated with the IoT and seeking to understand how it will
impact their business.

Wireless hospitals and remote patient monitoring, for example, are growing IoT trends.
Tracking medical equipment and hospital inventory, such as gurneys, is done with RFID tag-
ging at a number of hospitals. Remote monitoring apps are making health care easier and more
comfortable for patients while reaching patients in remote areas.

Organizations can expect to gain from using the IoT in a number of ways. Expected benefits
include the following:

1. Monitoring the performance, quality, and reliability of products and services
2. Gaining insight into potential new products and services
3. Supporting sales
4. Better understanding how products are used
5. Troubleshooting products remotely
6. Delivering revenue-generating post-sales services
7. Delivering post-sales services more efficiently

Similarly, organizations have concerns about using the IoT and about their ability to collect
and analyze the massive amounts of data it generates. The main concerns include the following:

1. Network security
2. Data privacy
3. Data analysis capabilities
4. Data collection capabilities
5. Identifying realistic efficiency opportunities
6. Identifying realistic new revenue opportunities
7. Cost

Questions

1. Why has group work become more challenging?

2. What might limit the use of face-to-face brainstorming?

3. How can online brainstorming tools overcome those limits?

4. List ways in which virtual collaboration can be used in business.

5. What devices do you have that take advantage of the IoT? Describe how they impact the way that you
live and work.

6. What is driving the rise of the IoT?

7. What is the main concern that organizations have about the IoT?

8. Do you think the advantages outweigh the disadvantages of the IoT? Explain.


Key Terms
3G 110
4G 110
5G 110
application program interface (API) 111
Bluetooth 115
circuit switching 111
computer networks 104
Exabyte 114
extranet 105
fixed-line broadband 108
group dynamics 120
information and communications
technology (ICT) 102
Internet of Things 121
Internet Protocol (IP) 109

Intranet 105
IP address 109
IP Version 4 (IPv4) 109
IP Version 6 (IPv6) 109
latency-sensitive apps 107
local area network (LAN) 116
Long-Term Evolution (LTE) 110
mashup 117
near-field communication (NFC) 117
Net neutrality 108
Net semi-neutrality 108
packet 109
packet switching 111
protocol 109
quality of service (QoS) 107

router 106
sensors 122
smart grid 122
smart city 122
switch 106
traffic shaping 108
transmission control protocol/Internet
protocols (TCP/IPs) 111
virtual private networks (VPNs) 105
voice over IP (VoIP) 115
wide area network (WAN) 116
Wi-Fi 115
WiMAX 116
Zettabyte 114

Assuring Your Learning

Discuss: Critical Thinking Questions

1. Explain how network capacity is measured.

2. How are devices identified to a network?

3. Explain how digital signals are transmitted.

4. Explain the functions of switches and routers.

5. QoS technologies can be applied to create two tiers of traffic. What
are those tiers? Give an example of each type of traffic.

6. Typically, networks are configured so that downloading is faster
than uploading. Explain why.

7. What are the differences between 3G, 4G, and 5G networks?

8. What are two 4G wireless standards?

9. How is network performance measured?

10. Discuss two applications of near-field communication (NFC).

11. What are the benefits of APIs?

12. Describe the components of a mobile communication infrastructure.

13. What is the range of WiMAX? Why does it not need a clear
line of sight?

14. Why are VPNs used to secure extranets?

15. How can group dynamics improve group work? How can it disrupt
what groups might accomplish?

16. What are the benefits of using software to conduct brainstorming
in the cloud (remotely)?

Explore: Online and Interactive Exercises

1. Visit the Google apps website. Identify three types of collaboration
support and their value in the workplace.

2. Compare the various features of broadband wireless networks
(e.g., 3G, Wi-Fi, and WiMAX). Visit at least three broadband wireless
network vendors.

a. Prepare a list of capabilities for each network.

b. Prepare a list of actual applications that each network can
support.

c. Comment on the value of such applications to users. How can
the benefits be assessed?


Case 4.2
Business Case: Google Maps API for Business
A restaurant owner has a website where customers can place orders for
delivery. When a customer inputs a delivery address, a software script
verifies whether the address is within the delivery range of the restau-
rant. If the address is not in the delivery range, the site does not let
the customer check out and sends a message informing the customer
that he or she is outside of the delivery range. The script requests infor-
mation from Google Maps via an API to calculate whether or not the
address is in range. The free version, called the Google Maps API,
allows up to 2,500 requests per day from a single IP address and is lim-
ited to noncommercial purposes.
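A minimal sketch of the kind of check the restaurant's script performs is shown below, using Google's Distance Matrix web service. The endpoint and parameters follow Google's documented pattern, but the API key, restaurant address, and 5 km delivery radius are placeholders we chose for illustration, and error handling is omitted.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"          # placeholder credential
RESTAURANT = "1 Example Plaza, Springfield"   # placeholder origin address
MAX_DELIVERY_METERS = 5_000                   # assumed 5 km delivery radius


def within_delivery_range(customer_address):
    """Ask the Distance Matrix service for the driving distance and compare it
    to the assumed delivery radius."""
    params = urllib.parse.urlencode({
        "origins": RESTAURANT,
        "destinations": customer_address,
        "key": API_KEY,
    })
    url = "https://maps.googleapis.com/maps/api/distancematrix/json?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    element = data["rows"][0]["elements"][0]
    return element["status"] == "OK" and element["distance"]["value"] <= MAX_DELIVERY_METERS


# Example use in the checkout flow (hypothetical address and helper):
# if not within_delivery_range("742 Evergreen Terrace, Springfield"):
#     reject_order("Sorry, that address is outside our delivery range.")
```

Each call to the function counts against the request quota described above, which is why a busy site would exceed the free tier and need the business license.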

The owner needs to purchase a Google Maps API for Business license
because any requests in excess of 2,500 will be ignored. The Google Maps
API for Business provides better resolution, scale, and enhanced features
and support to businesses that add maps to their websites, mobile apps,
or asset-tracking applications.

Directions and Routing Features
The Google Maps API delivers the full power of Google’s routing engine
to applications. Among other features, it:

• Generates routes between up to 23 locations for driving, walking,
or cycling.

• Generates routes to avoid toll roads or highways.

• Reduces travel time by calculating the optimal order to visit
each location.

• Calculates travel time and distance between locations, for
example, to offer users a way to filter search results by drive time.
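To illustrate the "optimal order" feature listed above, the sketch below builds a request to Google's Directions web service asking it to reorder intermediate stops for the shortest round trip. The addresses and API key are placeholders, and the example call is left commented out; treat it as an illustrative sketch rather than production code.

```python
import json
import urllib.parse
import urllib.request


def optimized_route(origin, stops, api_key):
    """Request a driving route that reorders the intermediate stops for the
    shortest overall trip, then return the suggested visiting order and legs."""
    params = urllib.parse.urlencode({
        "origin": origin,
        "destination": origin,                       # round trip back to the start
        "waypoints": "optimize:true|" + "|".join(stops),
        "mode": "driving",
        "key": api_key,
    })
    url = "https://maps.googleapis.com/maps/api/directions/json?" + params
    with urllib.request.urlopen(url) as resp:
        route = json.load(resp)["routes"][0]
    return route["waypoint_order"], route["legs"]


# order, legs = optimized_route(
#     "Depot, Leeds",
#     ["Stop A, York", "Stop B, Harrogate", "Stop C, Bradford"],
#     "YOUR_GOOGLE_MAPS_API_KEY",
# )
```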

Data Visualization
The Google Maps API lets managers visualize data using heat maps,
symbols, and custom styles. For U.S. maps, companies have access to
a demographics layer containing up-to-date census data provided by
Nielsen and five-year projections of many data fields. The demograph-
ics layer may only be used on intranets or internal websites.

Advanced Analytics
The Google Maps API for Business offers an analytics tool that shows
how visitors interact with the maps—for example, how many visitors
switched to satellite view, what they zoomed, and which map features
were used the most. Using this information, businesses can customize
the user experience based on their preferences and better engage with
customers.

Automobile Association
Google Maps is the most widely used online mapping service in
the world, with more than 800,000 sites using the Google Maps
API and over 250 million active users on mobile devices alone. In
the United Kingdom, the Automobile Association (AA) provides
roadside assistance and directions to motorists. AA invested in the
Google Maps API for Business to offer interactive route planning
and improve visitors’ experiences. The value AA derived from the
API was a 12% increase in the number of routes downloaded, hit-
ting an average of 4 million downloads per week of its routing or
trip-planning service. Approximately 20% of site visitors remained
on the site for at least five minutes—up from only 6% prior to imple-
mentation. The API also cuts the time and cost of IT support for the
mapping platform.

Questions
1. Describe Google Maps API.

2. Why do you think Google provides free noncommercial use of
its Maps API?

3. How many times have you used a website’s mapping feature for
directions or to calculate distance? How did having a familiar
interface improve your experience?

4. Google claims that its Maps API helps a company’s customers and
employees make better business and purchasing decisions by
visualizing important information on a familiar map. Explain how
data visualization provides these benefits. Give two examples in
your explanation.

Analyze & Decide: Apply IT Concepts to Business Decisions

1. Visit www.Youtube.com and search for tutorials on the latest ver-
sion of iMindMap. Watch a few of the tutorials. As an alternative, watch
the video at http://www.youtube.com/watch?v=UVt3Qu6Xcko&list
=PLA42C25431E4EA4FF. Describe the potential value of sharing maps
online and synching maps with other computers or devices. What is
your opinion of the ease or complexity of the iMindMap interface?

2. Visit the AT&T website and read the article “What you Need to
Know about IoT Wide Area Networks.” Write a short report discussing
the benefits of each type of network that can be used in an organiza-
tion’s IoT and make a choice for your “business.”


References

Cisco. "Sony Adopts Cisco Solution for Global IPv6 Project." Cisco Public Information, Customer Case Study. October 28, 2014.

Cisco. "Cisco Visual Networking Index: Forecast and Methodology, 2015–2020." 2016.

Edwards, J. "The Connected Life." Teradata Magazine, Q1, 2014.

Eggers, W. D. "8 Ways Digital is Transforming Governments around the World." The Huffington Post, July 18, 2016.

Fiegerman, S. "Trump's FCC May Try to Roll Back Net Neutrality. Here's Why it Matters." 2017. Accessed at: http://money.cnn.com/2017/01/24/technology/fcc-net-neutrality/index.html

Frangoul, A. "Thousands of Sensors are Making This Famous City Smarter." CNBC, May 5, 2016.

IndustryWeek. "The Internet of Things: Finding the Path to Value." IndustryWeek, 2016.

Joseph, S. "McDonald's, Unilever and Gatorade Among the First to Run Snapchat API Campaigns." TheDrum, October 6, 2016.

Neal, D. "Sony's Playstation Network Is Down in the UK, Again." The Inquirer, October 26, 2016.

O'Connor, M. C. "Santander: Test Bed for Smart Cities and Open Data Policies." SmartPlanet.com, May 8, 2013.

Pachal, P. "How the AT&T-Time Warner Deal Threatens Net Neutrality." Mashable, October 23, 2016.

Patrick, M. "How Will the Internet of Medical Things Change Healthcare?" Electronic Design, October 20, 2016.

PRNewswire. "LTE and 5G Infrastructure Investments are Expected to Account for a Market Worth $32 Billion by 2020—Research and Markets." October 20, 2016.

Wheeler, T. "Setting the Record Straight on the FCC's Open Internet Rules." FCC blog, April 24, 2014.

Zeman, E. "Amazon Opens Beta for Alexa List Skills API." ProgrammableWeb, October 13, 2016.

Case 4.3
Video Case: Small Island Telecom Company
Goes Global
Go online to research the Isle of Man, a small island in the Irish Sea
off the coast of Great Britain. Visit the Cisco website. Search for
the video “Island Telecom Competes on a Global Level.” Watch the
video to learn how this small telecom company was able to evolve
from a traditional local service provider to a global cloud services

innovator thanks to Cisco’s networking technology (video runs
2:09 minutes).

Questions
1. Describe the benefits that Island Telecom achieved through using Cisco's networking product.

2. What factors allowed Island Telecom to make the transition from
local to global?


CHAPTER 5

Cybersecurity and Risk
Management Technology

CHAPTER OUTLINE

Case 5.1 Opening Case: Yahoo wins the gold and
silver medal for the worst hacks in history!

5.1 The Face and Future of Cyberthreats

5.2 Cyberattack Targets and Consequences

5.3 Cyber Risk Management

5.4 Internal Audits and Controls

5.5 Frameworks, Standards, and Models

Case 5.2 Business Case: Lax Security at LinkedIn
Exposed

Case 5.3 Video Case: Botnets, Malware Security,
and Capturing Cybercriminals

LEARNING OBJECTIVES

5.1 Describe the extent of incidents and data breaches in
organizations and the sources of cyberthreats that are putting
organizations in jeopardy.

5.2 Describe the targets of cyberattacks and the impact these
attacks have on both public and private sector organizations.

5.3 Explain why cyber risk management must be a top business
priority and outline an organizational model for cybersecurity.

5.4 Describe the internal audits and controls that are used
to defend against occupational fraud at all levels of an
organization.

5.5 Explain how risk management frameworks, standards, and
models help ensure compliance with industry and federal
regulations. Assess the risk associated with a network crash,
debilitating hacker attack, or other IT disruption.

a. Explain how compliance and security can diverge such
that being compliant is not necessarily equivalent to being
secure. (Home Depot, Target, and a myriad of others were
all PCI compliant.)


Introduction
Today, most business leaders know they are responsible for cybersecurity and privacy threats,
wherever they occur. What most don’t understand is how to design, implement, and manage
threat-intelligent business strategies and risk management plans to prevent data breaches and
protect IT and business resources.

In the digital economy, organizational data is typically available on demand 24/7 to enable
companies to benefit from opportunities for productivity improvement and data sharing with
customers, suppliers, and business partners. The concept of data on demand is an operational
and competitive necessity for global companies, but unfortunately, it also opens them up to
cyberattacks.

New vulnerabilities are continuously being found in operating systems, applications, and
wired and wireless networks. Left unaddressed, vulnerabilities provide an open door for cyber-
attacks that can cause business disruptions and devastating financial consequences. Managers
no longer question whether their networks will be breached, but when it will happen, how
much damage will be done, how long the investigation will take, and how much the investiga-
tion and fines will cost.

For example, after detecting a network hack, credit card processing company Global
Payments, Inc. spent 14 months investigating the resulting data breach that exposed 1.5 million
U.S. debit and credit card accounts. Global’s damages totaled $93 million. This loss consisted of
$36 million in fraud losses and fines and $77 million for the investigation, remediation, credit
monitoring, and identity theft insurance for affected consumers. And this is not an unusual
occurrence: according to a global study conducted by the Ponemon Institute, the average cost
of a breached record is $141 and the average cost of an overall data breach is $3.62 million
(Ponemon Institute, 2017).

These reports of data breaches focus primarily on what companies are required to
report publicly—theft of personally identifiable information (PII), payment data, and
personal health information (PHI). Consequently, the costs commonly associated with data
breaches only take into consideration these more easily understood impacts. But these
are not always an attacker’s objective. Rarely brought into full view are theft of intellectual
property (IP), espionage, data destruction, attacks on core operations, or attempts to dis-
able critical infrastructure. These attacks can have a much more significant impact on orga-
nizations. But the damage they cause is not widely understood and is much more difficult
to quantify.

As a result, organizations need to acquire a deeper knowledge of cyberattacks and com-
bine it with business context, valuation techniques, and financial quantification to establish the
true costs of their losses. Applying this more accurate knowledge of potential business impacts,
leaders can be much more effective in managing and controlling cyber risk and improve their
ability to recover from a cyberattack.

In Chapter  5, you will learn about cybersecurity terminology, the rising number of
data breaches, sources of cyberthreats, damage caused by cybercriminals’ aggressive tac-
tics and their impacts on organizations. You will also learn how organizations can defend
against cyberattacks, correctly assess the damage they cause, and ensure the actions
needed for business continuity. But, first, let’s take a look at two of the biggest cyberattacks
ever reported.


Case 5.1 Opening Case

Yahoo Wins the Gold and Silver Medal for the Worst
Hacks in History!
It wasn't until fall 2016 that Yahoo alerted its users and the public to
the first of two of the largest known breaches of user information in
history, which had occurred two to three years earlier. On September 22, 2016,
Yahoo publicly disclosed that the account information of over 500 million
Yahoo account holders had been breached in a 2014 attack. A second news
release, on December 15, 2016, revealed an earlier attack, in mid-2013, in
which over 1 billion Yahoo account records were stolen.
The delay in reporting is partly due to the fact that Yahoo itself did
not know of the breaches until shortly before releasing these statements
to the public. The information leaked in the attacks included e-mail
accounts, telephone numbers, street addresses, and unencrypted security
questions and answers, but no financial information.

To add insult to injury, at the time of the first news release, Yahoo
was in negotiations with mega-corporation Verizon to acquire Yahoo for
$4.83 billion. After the first news release, Verizon said that the announce-
ment could have a negative impact on their purchasing decision. The
second news release caused Verizon to further review the financial impli-
cations of the two breaches and reduce its offer by $350 million.

The 2013 breach was conducted by an unknown unauthorized third
party. The information stolen in the 2014 attack was sold by a “state-
sponsored actor” on the Dark Web for 3 Bitcoins (approx. $1,900). The actor,
who used the name “Peace” is of Russian origin and attempted to sell data
from 200 million Yahoo users online. Yahoo urged all of its users to change
their passwords and security questions and to review their accounts for
suspicious activity. To date, little information has been released on the
2013 breach, but more is known about the incident that occurred in 2014.

How the Second Attack was Carried Out
The data theft was carried out much like a typical online attack on a
database. The protections used for the database containing the login and
personal information were insufficient to protect against the advanced
methods used by the hackers. In this case, the hacker broke the encryption
method employed in the database. Additionally, cybercrime analyst Vitali
Kremez maintains that the hacker stole the information from Yahoo slowly
and methodically so as not to draw attention to the breach taking place.

Since the breach was not immediately detected, the hacker had
plenty of time to leverage the information in a financially, personally, or
politically beneficial manner. It is not clear whether the seller is the original hacker.

Impact of the Data Breach
Since the breaches were so devastating and far reaching to most of
Yahoo’s customer base, Verizon is having second thoughts about the
acquisition. Craig Silliman, general counsel to Verizon, said Verizon has

“a reasonable basis” to believe that the data breach will have a sig-
nificant impact on the deal proceedings and the likelihood that it will
actually happen (Fiegerman, 2016). He further explained that Yahoo
will have to convince Verizon that the breach will not affect future pro-
cesses in the company and that more security features have been and
will be implemented. Also, the incidents could make the Yahoo deal
worth about $200 million less than the $4.8 billion initially settled
upon. In addition to the decreased value of Yahoo's core assets, the
company's stock fell about 2% after Silliman's comments.

Justice is Served
On March 17, 2017, the U.S. Department of Justice indicted two Russian
Intelligence agents and two state-sponsored hackers, Alexsey Belan
and Karim Baratov, for the theft of the Yahoo user data in 2014. Belan,
one of the FBI’s most notorious criminal hackers, had been previously
indicted in two other cases. In the indictments it was revealed that the
targets of the theft included Russian journalists, U.S. and Russian gov-
ernment officials, military personnel, and private-sector employees of
financial, transportation, and other companies (Balakrishnan, 2017).

The obvious issue surrounding the Yahoo data breaches is Internet
security. A simple username, password, and set of security questions
are not enough to keep hackers at bay. UC Davis professor Hemant
Bhargava notes that two-factor authentication (TFA) has been successful
at many other companies and that Yahoo should follow suit (Matwyshyn
& Bhargava, 2016). In a typical TFA flow, a user is first asked to enter
information such as a username and password; then a mobile app
generates and sends a random number code that the user must enter before
being granted access to his or her account. Both the Yahoo account and
the mobile app are linked to a common, secure account. This method is
exceptionally popular and practical since over 50% of Web users access the
Web through their mobile phones.
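The random code in such a scheme is commonly produced with a time-based one-time password (TOTP) algorithm. The following minimal sketch, built only from Python's standard library, shows the general idea behind the second factor described above; it illustrates TOTP (RFC 6238), not Yahoo's or any particular vendor's actual implementation, and the shared secret shown is a made-up placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6):
    """Compute the current time-based one-time code (RFC 6238 / RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)


# Hypothetical base32 secret shared between the server and the user's
# authenticator app at enrollment time.
SHARED_SECRET = "JBSWY3DPEHPK3PXP"

print("One-time code:", totp(SHARED_SECRET))
```

Because the server and the mobile app both hold the shared secret and the current time, each can compute the same short-lived code independently, which is what makes a stolen password alone insufficient.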

Questions
1. Why do you think Yahoo was targeted for these data breaches?

2. Why did Yahoo keep the breaches from the public eye? How did
their nondisclosure affect Yahoo’s relationship with its customers
and partners?

3. In addition to the data theft, what else was damaged by
this incident?

4. Were these cybersecurity incidents foreseeable? Were they
avoidable?

5. Assuming that the CEO and CIO were forced to resign, what mes-
sage does that send to senior management at Yahoo?

Sources: Compiled from Fiegerman (2016), Hackett (2016a), Kan (2016), Lee
(2016), Matwyshyn and Bhargava (2016), Murgia (2016), Sterling (2015), and
Balakrishnan (2017).

[Case-opening photo credits: G Fiume/Getty Images; Tetiana Vitsenko/Alamy Stock Photo; Ryan Anson/Bloomberg/Getty Images]


5.1 The Face and Future of Cyberthreats
Over the past several years, the number of cyberattacks in which data records have been stolen by
hackers has increased at an alarming rate. In 2016, the total number of U.S. data breaches hit an all-
time record high of 1,093 according to a report released on January 19, 2017, by the Identity Theft
Resource Center (ITRC) (Goldman, 2017). This represents a 40% increase over the previous year. The
general business sector reported the highest number of cyberattacks with 494 reported incidents,
followed by the healthcare/medical industry with 377, education sector with 98, government/
military with 72, and the banking/credit/financial sector with 52 breaches (see Figure 5.1).

Vulnerability is a gap in the IT security defenses of a network, system, or application that can
be exploited by a threat to gain unauthorized access. Vulnerabilities typically stem from a
lack of controls around people (inadequate user training, inadequate policies), process (inadequate sep-
aration of duties, poor process controls), or tools (lack of technical controls enforcement or
monitoring).

Data incidents and breaches in 2016 exposed everything from usernames to passwords to
Social Security numbers and are caused by the successful exploitation of vulnerabilities in
information systems by a threat (risk = threat × vulnerability). Vulnerabilities threaten the confi-
dentiality, integrity, or availability (CIA) of data and information systems, as defined in Figure 5.2.
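To make the risk = threat × vulnerability relationship concrete, here is a toy illustration in Python; the scales and the specific numbers are assumptions chosen for teaching, not an established scoring standard.

```python
# Both scales below are illustrative assumptions, not a formal risk framework.
threat_likelihood = 0.6        # estimated chance the threat is attempted this year (0-1)
vulnerability_severity = 8     # how exploitable/damaging the gap is (0-10)

risk_score = threat_likelihood * vulnerability_severity
print(f"Risk score: {risk_score:.1f} out of 10")   # 4.8 -> worth prioritizing remediation
```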

Data incident is an attempted
or successful unauthorized
access to a network, system, or
application; unwanted disruption
or denial of service; unauthorized
use of a system for processing or
storage of data; changes to system
without the owners knowledge,
instruction, or consent.

Data breach is the successful
retrieval of sensitive information
by an individual, group, or
software system.

FIGURE 5.1 Number of 2016 U.S. data breaches by industry sector (total incidents = 1,093):
Business, 494 (45%); Health care, 377 (34%); Education, 98 (9%); Government, 72 (7%);
Finance, 52 (5%).

Confidentiality: No unauthorized data disclosure.

Integrity: Data, documents, messages, and other files
have not been altered in any unauthorized way.

Availability: Data is accessible when needed by those
authorized to do so.

FIGURE 5.2 The three objectives of data and information
systems security.


Hacks of high-tech companies like Yahoo, LinkedIn, Google, Amazon, eBay, and Sony,
and top security agencies like the CIA and FBI are proof that no one is safe. Cyberwarriors
are too well funded and motivated. Taking a global perspective, Verizon's 2016 Data Breach
Investigations Report (DBIR) examined over 100,000 incidents, including 3,141 confirmed
data breaches across 82 countries. Of these, 89% were motivated by financial gain or
espionage. In over 90% of the breaches, it took attackers mere minutes (or less) to compromise
a system. On the other hand, it took companies weeks to months to discover that a breach had
occurred, and in most cases it was external sources, such as customers or law enforcement,
that sounded the alarm. Cyberthreats can be intentional or unintentional.

Table 5.2 lists eight sources of intentional and unintentional cyberthreats that account for
the vast majority of data breaches and other cybersecurity incidents.

Cyberthreat is a threat posed
by means of the Internet (a.k.a.
cyberspace) and the potential
source of malicious attempts to
damage or disrupt a computer
network, system, or application.

Fifty-six percent of all breaches were phishing attacks, in which hackers trick an employee
into clicking a specially crafted e-mail link or attachment that then gives the hackers
access to the user's system and, ultimately, the corporate network and data. These attacks were
up 38% from 2015. Table 5.1 lists the top five data breaches worldwide in 2016. Although
these numbers are high, it’s important to remember that a vast majority of data breaches go
unreported, according to cybersecurity experts, because corporate victims fear that disclo-
sure would damage their stock price, or because they never knew they were hacked in the
first place.

The consequences of insufficient cybersecurity include damaged reputations, consumer
backlash, lost market share, falling share prices, financial penalties, and federal and state
government fines. As a result, companies are investing heavily in security-related technol-
ogies—worldwide spending on security-related hardware, software, and services rose to $73.7
billion in 2016 from $68.2 billion a year earlier and that number is expected to approach $90
billion in 2018.

TABLE 5.1 2016 Biggest Data Breaches Worldwide, in Terms of Number of Data Records Breached

Anthem Insurance (78.8 million records breached): The attack against U.S.-based health insurer Anthem was an identity theft breach that resulted in the theft of 78.8 million records, making it the largest data breach of the year in terms of records compromised. Current and former members of one of Anthem's affiliated health plans, as well as some members of other independent Blue Cross and Blue Shield plans who received health-care services in any of the areas that Anthem serves, were said to be affected.

Turkish General Directorate of Population and Citizenship Affairs (50 million records breached): The Turkish government agency experienced an identity theft attack at the hands of a malicious outsider. The attack exposed 50 million records, and information pertaining to citizens was stolen.

Korean Pharmaceutical Information Center (43 million records breached): The South Korean organization that distributes pharmacy management software to many of the country's pharmacies was hit by an identity theft breach launched by a malicious insider. The result was the exposure of 43 million records. According to the Korea Herald, medical information of nearly 90% of the South Korean population was sold to a multinational firm, which processed and sold the data.

U.S. Office of Personnel Management (22 million records breached): The state-sponsored attack, which was described by federal officials as being among the largest breaches of government data in the history of the United States, scored a 9.6 on the risk assessment scale. The attack exposed data including PII such as Social Security numbers, names, dates and places of birth, and addresses.

Experian (15 million records breached): The U.S.-based credit bureau and consumer data broker experienced an identity theft breach by a malicious outsider that resulted in the theft of 15 million records. The data included some PII about consumers in the United States, including those who applied for T-Mobile services or device financing.

Source: Breach Level Index (2016).


Intentional Threats
Examples of intentional threats include data theft such as inappropriate use of data (e.g.,
manipulating inputs); theft of computer time; theft of equipment and/or software; deliberate
manipulation in handling, entering, programming, processing, or transferring data; sabotage;
malicious damage to computer resources; destruction from malware and similar attacks; and
miscellaneous computer abuses and Internet fraud.

Unintentional Threats
Unintentional threats fall into three major categories: human error, environmental hazards,
and computer system failures.

• Human error can occur in the design of the hardware or information system. It can also
occur during programming, testing, or data entry. Neglecting to change default passwords
in applications or on systems, or failing to manage patches, creates security holes. Human
error also includes untrained or unaware users falling prey to social engineering, such as
phishing scams, or ignoring security procedures. Human errors contribute to the majority
of internal control and information security problems.

• Environmental hazards include volcanoes, earthquakes, blizzards, floods, power failures
or strong fluctuations, fires (the most common hazard), defective heating, ventilation,
and air-conditioning (HVAC) systems, explosions, radioactive fallout, and water-cooling-
system failures. In addition to the primary damage, computer resources can be damaged
by the side effects of a hazard, such as smoke and water. Such hazards may disrupt normal
computer operations, resulting in extended data inaccessibility and exorbitant restoration
and recovery costs.

• Computer systems failures can occur as the result of poor manufacturing, defective
materials, or poor maintenance. Unintentional malfunctions can also occur for other
reasons, ranging from administrator inexperience to inadequate testing.

In the next sections, you will learn more about the various sources of cyberthreats and
their potential impact on organizations.

TABLE 5.2 Major Sources of Cyberthreats

Intentional Cyberthreat

Hacking. Characteristics: Unauthorized access of networks, systems, or applications for economic, social, or political gain; use of programs such as backdoor services to promote reentry or further incursion into the target environment. Solutions: Train your staff; change passwords frequently; require "strong" passwords.

Phishing. Characteristics: Social engineering, targeting human behavior rather than computer technology. Solutions: Train your staff; monitor activity.

Crimeware. Characteristics: Use of malware and ransomware. Solutions: Use antimalware/AV software; patch promptly; monitor change and watch key indicators; back up systems regularly; capture data on attacks; practice the principle of least privilege.

Distributed denial-of-service. Characteristics: Use of compromised systems to overwhelm a system with malicious traffic. Solutions: Segregate key servers; choose your providers carefully; test your anti-DDoS service.

Insider and privilege misuse. Characteristics: Employees, contractors, partners, suppliers, and other external entities with specific insider roles abusing access granted to systems for legitimate business purposes. Solutions: Monitor user behavior; track mobile media usage; know your data.

Physical theft. Characteristics: Theft of laptops, tablets, peripherals, printed material, etc. Solutions: Encrypt your data; train your staff; reduce use of paper.

Unintentional Cyberthreat

Physical loss. Characteristics: Loss of laptops, tablets, and peripheral devices. Solutions: Encrypt your data; train your staff.

Miscellaneous errors. Characteristics: Any unintentional action that compromises security, except theft and loss of assets. Solutions: Learn from your mistakes; strengthen controls; ensure all assets go through a rigorous check by IT before they are decommissioned or disposed of.

Source: Verizon (2016).



Hacking
Hacking is a very profitable industry. In 2016, 56% of reported data breaches were the result
of hacking, 18% higher than in 2015 (Verizon, 2016).
Hacking is a big part of underworld cybercrime, and a way for hacktivists to protest. Both the
anonymity of the Internet and lack of international treaties provide hackers with a feeling of
near invincibility because they face very low risk of capture and punishment.

It is important to note that within hacker culture there are three classes of hackers, as shown
in Table 5.3.

Hacking is broadly defined
as intentionally accessing a
computer without authorization
or exceeding authorized access.
Various state and federal laws
govern computer hacking.

Hacktivist is short for hacker-
activist or someone who performs
hacking to promote awareness
for or otherwise support a social,
political, economic, or other
cause. Hacking an application,
system, or network without
authorization, regardless of
motive, is a crime.

TABLE 5.3 Three Classes of Hackers

White hat. Characteristics: A computer security specialist who breaks into protected systems and networks to test and assess their security. Outcome: Uses these skills to improve security by exposing vulnerabilities before malicious hackers (black hats) can detect and exploit them.

Black hat. Characteristics: A person who attempts to find computer security vulnerabilities and exploit them for personal financial gain or other malicious reasons. Outcome: Can inflict major damage on both individual computer users and large organizations by stealing personal financial information, compromising the security of major systems, or shutting down or altering the function of websites and networks.

Gray hat. Characteristics: A person who may violate ethical standards or principles, but without the malicious intent ascribed to black hat hackers. Outcome: May engage in practices that are less than ethical, but often operates for the common good, e.g., exploits a security vulnerability to spread public awareness that the vulnerability exists.


An Inside Look at How the Hacking Industry Operates Hacking is an
industry with its own way of operating, a workforce, and support services. Hackers use social
networks, underground forums, and the Deep Web to rate and promote services, share exploits,
and recruit others. In certain forums and on the Deep Web, hackers can purchase the use of any
number of services. These include the following:

Educational services
Software platforms for building and distributing hacking tools and malware/ransomware
Sale or purchase of stolen data, ranging from items as simple as e-mail accounts to credit cards, PII, and corporate data
Contract hackers available for hire, or complete hack attacks that can be bought
Hacking help desks that provide 24/7 support—making sophisticated attacks easier to manage and execute
Organized crime groups quickly learned that cybercrime has better payoffs, with substan-
tially lower risks to life, limb, and liberty, than other activities like human trafficking, smug-
gling, extortion, and the drug trade. They become virtually untouchable by law enforcement
because often no one sees the crime and, if it is identified, the lack of international treaties
and cooperation makes capture and trial across non-extradition countries virtually
impossible. Given this, it is not surprising that almost every survey identifies the same
troubling trend: the recovery costs and frequency of cybercrimes are increasing while
the costs of execution are declining. This means much stronger IT security practices and
defenses are obviously needed. One of the greatest cybersecurity weaknesses is users who
ignore the dangers of weak passwords; more than half of all confirmed data breaches
involve weak or stolen passwords. The capture and misuse of credentials, such as user IDs
and passwords, is one of the foundations cybercriminals and nation-state hackers rely on
in executing numerous other types of cyberthreats, including phishing (discussed in
more detail later in the chapter). Proper credential management is essential to security.
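One concrete piece of credential management is never storing passwords in plain text. The minimal sketch below, using only Python's standard library, stores a random salt and a slow PBKDF2 hash and compares candidates in constant time; it is an illustrative example under our own assumptions, not a prescription from the text, and the iteration count is a placeholder that real systems would tune.

```python
import hashlib
import hmac
import os


def hash_password(password, iterations=200_000):
    """Return (salt, digest); only these two values are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest


def verify_password(password, salt, stored_digest, iterations=200_000):
    """Recompute the hash and compare it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)


salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein123", salt, digest))                    # False
```

Even with hashing, stolen weak passwords can be guessed offline, which is why strong passwords and the multifactor authentication discussed earlier remain essential.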

Cyber Social Engineering and Other Related
Web-Based Threats
Experts believe the greatest cybersecurity dangers over the next few years will involve persis-
tent threats, mobile computing, and the use of social media for social engineering. From an IT
security perspective, social engineering is a hacker’s clever use of deception or manipulation
of people’s tendency to trust, be helpful, or simply follow their curiosity. Powerful IT security
systems cannot defend against what appears to be authorized access.

Notorious hacker Kevin Mitnick, who served time in jail for hacking, used social engineering
as his primary method to gain access to computer networks. In most cases, the criminal never
comes face-to-face with the victim, but communicates via the phone or e-mail.

Humans are easily hacked, making them and their social media posts high-risk attack vec-
tors. For instance, it is often easy to get users to infect their corporate network or mobile devices
by tricking them into downloading and installing malicious applications or backdoors.

Phishing Phishing is the term used to describe a social-engineering attack that can use
e-mail sent to the recipient under false pretense to steal confidential information from the
target. This is done by the sender pretending to be a known person or legitimate organization,
such as PayPal, a bank, credit card company, or other trusted source and asking the user to
perform an action that would expose his or her computer to a cyberthreat or reveal credentials,
personal, financial, or business-related private information. Phishing messages are either sent
in mass campaigns or specifically targeted at a particular group of people or a particular person.
The former requires no up-front work to gain context about the target but relies on sheer volume of
messages (millions to tens of millions) to achieve returns.

The latter requires more effort to gather relevant context about the message target and
is therefore sent out in far smaller batches but has a higher rate of return on both the number
of opened messages and the payback per message for that effort. The latter approach is dis-
cussed later in this section.

Phishing messages include a request to respond with information of some kind or a link
to a fraudulent website that often looks like an authentic site the user works with. When the
user clicks the link to the site, he or she falls victim to a malware download, drive-by attack, or
information skimming such as being asked for a credit card number, Social Security number,
account number, or password.
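One simple, defense-side heuristic (our own illustration, not a technique described in the text) is to flag links whose visible text names one host while the underlying href points to another, a hallmark of the fraudulent websites described above. A minimal sketch in Python using only the standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect (href, visible_text) pairs from the <a> tags in an HTML body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def suspicious_links(html_body):
    """Flag links whose visible text names a different host than the real href."""
    parser = LinkExtractor()
    parser.feed(html_body)
    flagged = []
    for href, text in parser.links:
        if "." not in text or " " in text:
            continue  # only compare when the visible text itself looks like a host
        real_host = urlparse(href).netloc.lower()
        shown_host = urlparse(text if "//" in text else "//" + text).netloc.lower()
        if shown_host and real_host and shown_host != real_host:
            flagged.append((text, href))
    return flagged


body = '<p>Please verify your account: <a href="http://paypa1-verify.example.net">www.paypal.com</a></p>'
print(suspicious_links(body))  # [('www.paypal.com', 'http://paypa1-verify.example.net')]
```

A check like this catches only one phishing pattern; user training and monitoring, as Table 5.2 notes, remain the primary defenses.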

Criminals use the Internet and private networks to hijack large numbers of systems,
including PCs, mobile devices, servers, and Internet of Things (IoT) devices, to spy on users,
spam them, shake down businesses, and steal identities. Once captured, these systems are called bots,


short for robots or Internet Robots. But why are they so successful? The Information Security
Forum, a self-help organization that includes many Fortune 100 companies, compiled a list of
the top information problems and discovered that nine of the top 10 incidents were the result
of three factors:

1. Mistakes or human errors leading to misconfigured systems, applications, or networks
2. Malfunctioning systems
3. Failure to patch or otherwise properly maintain software on existing systems

Unfortunately, these factors can too easily create gaps in cybersecurity controls that com-
panies and individuals use to protect their information.

Spear Phishing Spear phishing targets select groups of people who have something in
common. They can work at the same company, bank at the same financial institution, use a
specific Internet provider, or attend the same church or university. The scam e-mails appear to
be sent from organizations or people the potential victims normally receive e-mails from, mak-
ing them even more deceptive.

Here is how spear phishing works:

1. Spear phishers gather information about people's activities, social groups, companies,
and/or jobs from general media announcements, social media, or compromised accounts;
from poorly designed applications that leak information; or by stealing it from web-
sites, computers, or mobile devices they have compromised. They then use that informa-
tion to customize messages.

2. Next, they send the customized e-mails to targeted victims, creating some sort of pretext
requiring the user to act or respond. These can be threats of account closure, loss of access
or privilege, loss of funds or additional charges, legal action, impacts to friends or family
members, and so on. With the background information gained, the message creates a very
legitimate-sounding and compelling explanation as to why they need your personal data.

3. Finally, the victims are asked to click on a link inside the e-mail that takes them to a phony
but realistic-looking website, where they are asked to provide passwords, account num-
bers, user IDs, access codes, PINs, and so on.

When spear phishing targets are executives or persons of significant wealth, power,
influence, or control, the activity is known as "whaling."

Crimeware IT security researchers discover almost 1 million malicious programs every
day. Why would so many hackers be spending so much time generating or launching these pro-
grams? The answer is simple—it pays well! Crimeware can be broken down into several cate-
gories, including spyware, adware, malware, and ransomware.

Malware Assaults are Part of Everyday Operations There have been
numerous test cases of malware overheating devices, causing them to physically distort or
worse. These attacks, bundled into a cyberattack, could have devastating and lasting effects
beyond what we commonly associate with an aggravating distributed denial-of-service
(DDoS) attack.

Viruses, worms, trojans, rootkits, backdoors, and keyloggers are types of malware. Most
viruses, trojans, and worms are activated when an attachment is opened or a link is clicked.
But when features are automated, they may trigger malware automatically, too. For example:

• If an e-mail client, such as Microsoft Outlook or Gmail, is set to allow scripting, then virus
infection occurs by simply opening a message or attachment.

• Viewing e-mail messages in HTML, instead of in plain text, can trigger virus infections.

Malware is not just about e-mail. It also includes rogue applications and malicious websites.

Spyware is tracking software
that is not designed to
intentionally damage or disable
a system. For example, an
employer may install spyware
on corporate laptops to monitor
employee browsing activities, or
an advertiser might use cookies to
track what Web pages a user visits
in order to target advertising in a
marketing campaign.

Adware is software that
embeds advertisements in the
application. It is considered a
legitimate alternative offered to
consumers who do not wish to pay
for software.

Malware refers to hostile or
intrusive software, including
computer viruses, rootkits, worms,
trojan horses, ransomware, and
other malicious programs used
to disrupt computer or mobile
operations, gather sensitive
information, or gain access to private
computer systems.

Ransomware is a type of
malware that is designed to block
access to a computer system until
a sum of money has been paid.


Remote access trojans (RATS) are a form of Trojan horse that creates an unprotected
backdoor into a system through which a hacker can remotely control that system. As the name
implies, a backdoor provides easy access to a system, computer, or account by creating the
access that may or may not require authentication.

However, hackers are very territorial and don’t want someone else using systems they
worked to compromise, so RATS often require some form of access control to eliminate the
need to authenticate with a username and password.

A malware’s payload is code that is dropped on the system that performs any or all of
the following functions: facilitates the infection or communicates with the command and con-
trol server or downloads more code. In doing so, the payload carries out the purpose of the
malware. The payload could cause damage that is visible or operate in stealth mode so as to
remain undetected. A vector is the specific method that malware uses to propagate, or spread,
to other machines or devices. Malware may also replicate to make copies of itself.

Malware creators often use social engineering to maximize the effective distribution of
their creations. For example, the ILoveYou worm, released in May, 2000, used social engineering
to entice people to open malware-infected e-mail messages. It successfully attacked tens of
millions of Windows computers when it was sent as an e-mail attachment with the subject line:
ILOVEYOU. Within nine days, the worm had spread worldwide, crippling networks, destroying
files, and causing an estimated $5.5 billion in damages.

Malware Reinfection, Signatures, Mutations, and Variants When a host
computer is infected, attempts to remove the malware may fail—and the malware may reinfect
the host for these two reasons:

1. Malware is captured in backups or archives. Restoring the infected backup or archive
also restores the malware.

2. Malware infects removable media. Months or years after the initial infection, the remov-
able media may be accessed, and the malware could attempt to infect the host.

Most antivirus (AV) software relies on signatures to identify and then block malware.
According to the Worldwide Malware Signature Counter, at the start of 2013, there were an esti-
mated 19 million malware signatures. Detecting and preventing infections are not always a pos-
sibility. Zero-day exploits—malware so new their signatures are not yet known—are an example.
Malware authors also evade detection by AV software and firewalls by altering malware code to cre-
ate variants, which have new signatures. But not all procedures or AV tools are capable of removing
every trace of the malware. Even if the malicious parts of the infection can be cleaned from a system,
the remaining pieces of code could make the system unstable or expose it to future infection.
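As a simplified illustration of the signature idea, the sketch below hashes files and compares the results against a set of known-bad SHA-256 values. Real AV engines use far richer signatures plus heuristics and behavioral analysis, and the signature entry shown here is only a placeholder.

```python
import hashlib
from pathlib import Path

# A stand-in signature set; a real AV feed would list SHA-256 hashes of known malware.
KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder entry, not an actual malware hash
}


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan(directory):
    """Report any file whose hash matches a known-bad signature."""
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print("Signature match (quarantine candidate):", path)


scan(".")
```

The limitation described above is visible in the sketch: a one-byte change to the malware produces a new hash, which is exactly why variants and zero-day exploits slip past purely signature-based tools.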

Botnets Today’s malware is often designed for long-term control of infected machines.
Advanced malware sets up outbound communication channels in order to upload stolen data,
download payloads, or do reconnaissance.

In contrast, a botnet is a group of external attacking entities and is a totally different attack
method/vector from malware which is internal to the system. Infected computers, called zom-
bies, can be controlled and organized into a network of zombies on the command of a remote
botmaster (also called bot herder). Storm worm, which is spread via spam, is a botnet agent
embedded inside over 25 million computers. Storm’s combined power has been compared to
the processing might of a supercomputer. Storm-organized attacks are capable of crippling any
website. Zombies can be commanded to monitor and steal personal or financial data—acting
as spyware. Botnets are used to send spam and phishing e-mails and launch DDoS attacks. Bot-
nets are extremely dangerous because they scan for and compromise other computers, which
then can be used for every type of crime and attack against computers, servers, and networks.

Ransomware Is Increasingly Becoming a Problem Ransomware has been
around for more than a decade. The problem began on a fairly small scale, targeting individual
users, but the ransomware cyberthreat has been growing in the last couple of years and the

Trojan horse is a program that
appears harmless, but is, in fact,
malicious.


attacks have become large scale. Now, some company executives fear entire companies will be
shut down by ransomware until they pay up, or risk losing all their data.

Ransomware works by first infiltrating a computer with malware and then encrypting
all the files on the disk. The malware used to encrypt files can be difficult to defend against,
and the encryption in most cases can’t be broken. Then, the user is presented with a limited
time offer: Lose all your data or send money with the promise the data will be unlocked. The
fee typically varies from a few dollars to hundreds of dollars and often has to be transmitted
in Bitcoin. One hospital in Los Angeles, whose electronic medical record system was locked
out for 10 days, was forced to pay cyberattackers 40 Bitcoins to get its system unlocked
when law enforcement and computer experts were unable to help in restoring the hospital’s
data files.

Computer security experts have theorized that this type of attack has a higher rate of
success than other cybercrime activities, which have become more difficult to carry out. The best insurance
against ransomware is to have offline or segregated backups of data.
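A simple way to make such backups trustworthy, sketched below under our own assumptions rather than as a method from the text, is to record checksums when the offline copy is made and re-verify them before relying on the copy for recovery, so a silently encrypted or tampered backup is detected in time.

```python
import hashlib
import json
from pathlib import Path


def checksum(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(backup_dir, manifest_file="backup_manifest.json"):
    """Record a SHA-256 checksum for every file in the backup."""
    manifest = {str(p): checksum(p)
                for p in Path(backup_dir).rglob("*") if p.is_file()}
    Path(manifest_file).write_text(json.dumps(manifest, indent=2))


def verify_manifest(manifest_file="backup_manifest.json"):
    """Re-hash every recorded file and report anything changed or missing."""
    manifest = json.loads(Path(manifest_file).read_text())
    bad = [p for p, digest in manifest.items()
           if not Path(p).is_file() or checksum(Path(p)) != digest]
    print("Backup intact" if not bad else f"Changed or missing: {bad}")


# write_manifest("/mnt/offline_backup")   # hypothetical mount point for the offline copy
# verify_manifest()
```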

Denial-of-Service
Cybersecurity experts warn that battling the increasing number of Denial-of-Service (DoS)
threats needs to be a top priority. DoS threats come in a number of “flavors,” depending on
their target. The three most prominent forms are:

Distributed Denial-of-Service (DDoS)—crashes a network or website by bombarding it
with traffic (i.e., requests for service) and effectively denying services to all those legiti-
mately using it and leaving it vulnerable to other threats.
Telephony Denial-of-Service (TDoS)—floods a network with phone calls and keeps the
calls up for long durations to overwhelm an agent or circuit and prevents legitimate callers
such as customers, partners, and suppliers from using network resources.
Permanent Denial-of-Service (PDoS)—completely prevents the target's system or device
from working. This attack type is unique: instead of collecting data or performing some
ongoing perverse function, its objective is to completely prevent its target's device(s) from
functioning. The damage PDoS causes is often so extensive that hardware must be rein-
stalled or replaced. PDoS is also known as "phlashing."

A “chilling” example of the havoc that PDoS can cause was demonstrated when a PDoS
attack took the building management system offline in a block of residential apartments in
Finland. The system’s Internet connection was blocked causing the system to repeatedly try to
reconnect by rebooting itself. During this downtime, the system was unable to supply heat at
a time when temperatures were below freezing! Fortunately, the energy company was able to
find alternate accommodations for residents until the system was brought back online.

Insider and Privilege Misuse
Threats from employees, referred to as internal threats, are a major challenge largely due to
the many ways an employee can carry out malicious activity. Insiders may be able to bypass
physical security (e.g., locked doors) and technical security (e.g., passwords) measures that
organizations have put in place to prevent unauthorized access. Why? Because defenses such
as firewalls, intrusion detection systems (IDSs), and locked doors mostly protect against
external threats. Despite the challenges, insider incidents can be minimized with a layered
defense-in-depth strategy consisting of security procedures, acceptable use policies (AUPs),
and technology controls.

Data tampering is a common means of attack that is overshadowed by other types of
attacks. It refers to an attack during which someone enters false or fraudulent data into a com-
puter, or changes or deletes existing data. Data tampering is extremely serious because it may
not be detected. This is the method often used by insiders.
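
One technical safeguard against undetected tampering, consistent with the layered defenses
described above, is to store a keyed hash (HMAC) with each record and recompute it whenever the
record is read. The sketch below is illustrative only; the record fields and the way the secret
key is handled are assumptions, not a prescription from the text.

# Minimal sketch: detect tampering of stored records with an HMAC.
# Field names and the key-management approach are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"store-this-in-a-key-vault-not-in-code"  # placeholder only

def sign_record(record: dict) -> str:
    """Compute an HMAC over the record's fields in a fixed order."""
    message = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()

def is_untampered(record: dict, stored_mac: str) -> bool:
    """True if the record still matches the HMAC saved when it was written."""
    return hmac.compare_digest(sign_record(record), stored_mac)

payment = {"payee": "Acme Supplies", "amount": "1200.00", "approved_by": "jdoe"}
mac = sign_record(payment)           # saved alongside the record at write time

payment["amount"] = "9200.00"        # a fraudulent after-the-fact edit
print(is_untampered(payment, mac))   # False: the change is detectable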


Physical Theft or Loss
The threat of an information asset going missing, whether through negligence or malice, can
send companies into a panic. The "miniaturization" of computing has led to an increase in
physical theft or loss. Laptops, tablets, modems, routers, and USB drives are much more easily
transportable than mainframes or servers! When a laptop or tablet with unencrypted sensitive
documents on it goes missing, it is difficult to determine whether a data breach has actually
occurred, but precautions must always be taken. Theft of laptops occurs primarily in victims'
own work areas or from their vehicles. On the positive side, lost items are much more prevalent
than stolen ones; theft is more likely to involve readily taken items such as USB drives and
printer paper.

Miscellaneous Errors
The main concern related to this source of cyberthreat is a shortage of capacity, which prevents
information from being available when needed. Other threat actions that fall within this
category of miscellaneous errors are shown in Table 5.4.

New Attack Vectors
Vulnerabilities exist in networks, operating systems, applications, databases, mobile devices,
and cloud environments. These vulnerabilities are attack vectors or entry points for malware,
hackers, hacktivists, and organized crime. Mobile devices and apps, social media, and cloud
services introduce even more attack vectors for malware, phishing, and hackers. As a result,
new cyberthreats are on the horizon.

Malicious (Rogue) Mobile Applications The number of malicious Android applica-
tions is growing at an alarming rate. According to a report by AV provider and software analysis
group Trend Micro, more than 850,000 Android phones worldwide have been infected by the
new “Godless” malware, as of June, 2016 (Goodin, 2016). The malware is transferred to users’
phones through rogue applications in the Google Play store. According to mobile security cloud
service providers Marble Security and Trend Micro, over 42% of the more than 300 rogue mobile
applications found in the Google Play store are published in the United States (RT.com, 2015;
Duan, 2016). Almost all of these applications were found in unreliable third-party stores. Rogue
mobile applications can serve up trojan attacks, other malware, or phishing attacks.

Companies offering legitimate applications for online banking, retail shopping, gaming, and
other functions might not be aware of threats lurking in their app stores. And despite their best
efforts, legitimate app store operators cannot reliably police their own catalogs for rogue apps.

With a single click on a malicious link, users can launch a targeted attack against their
organizations.

Attack vector is a path or
means by which a hacker can
gain access to a computer or
network server in order to deliver
a malicious outcome.

TABLE 5.4 Threat Actions Classified as Miscellaneous Errors

Misdelivery Information delivered to the wrong person, when e-mails or documents
are sent to the wrong people

Publishing error Information published to an unintended audience, such as the entire
Internet, enabling them to view it

Misconfiguration A firewall rule is mistyped allowing access to a sensitive file server from all
internal networks rather than a specific pool of hosts

Disposal error A hard drive is not “wiped” on decommissioned devices

Programming error Code is mistyped or logic is flawed

Data entry error Data is entered incorrectly or into the incorrect file or duplicated

Omission Data is not entered; document is not sent


Questions

1. Define and give an example of an intentional threat and an unintentional threat.

2. Why might management not treat cyberthreats as a top priority?

3. Describe the differences between distributed denial-of-service (DDoS), telephony denial-of-service
(TDoS), and permanent denial-of-service (PDoS).

4. Why is social engineering a technique used by hackers to gain access to a network?

5. List and define three types of malware.

6. What are the risks caused by data tampering?

7. Define botnets and explain why they are dangerous.

8. Why is ransomware on the rise? How might companies guard against ransomware attacks?

5.2 Cyberattack Targets and Consequences
Every enterprise has data that profit-motivated criminals want. Customer data, networks, web-
sites, proprietary information systems, and patents are examples of assets—things of value
that need to be protected. However, it would appear that management may not be doing
enough to defend against cyberattacks. Even high-tech companies and market leaders appear
to be detached from the value of the confidential data they store and the ways in which highly
motivated hackers will try to steal them.

One of the biggest mistakes managers make is underestimating IT vulnerabilities and
threats. For example, workers use their laptops and mobiles for both work and leisure, and in
an era of multitasking, they often do both at the same time. Yet off-time or off-site use of devices
remains risky because, despite policies, employees continue to engage in dangerous online
and communication habits. Those habits make them a weak link in an organization’s otherwise
solid security efforts.

Some of the most prevalent and deadly targets that cyber criminals will attack in companies
and governmental agencies include: critical infrastructure; theft of IP; identity theft; bring your
own device (BYOD); and social media. Some of these attacks will be conducted as high-profile
attacks while others will fall into the category of “under-the-radar” attacks. Before discussing the
different cyberattack targets, let’s take a look at the differences between these two approaches.

“High-Profile” and “Under-the-Radar” Attacks
Advanced persistent threat (APT) attackers operate "under the radar" so they can continue
to steal data and profit from it, as described in IT at Work 5.1. These APT attackers are profit-
motivated cybercriminals who often operate in stealth mode. In contrast, hackers and hacktiv-
ists with personal agendas carry out high-profile attacks to gain recognition and notoriety.

Hacktivist groups, such as Anonymous, a loosely associated international network of
activist and hacktivist entities and its spin-off hacker group, LulzSec, have committed daring
data breaches, data compromises, data leaks, thefts, threats, and privacy invasions. Consider
the following three examples:

Philippine Commission on Elections A few months before a Philippine election, the
hacker group Anonymous tapped into the commission’s website and released personal
information on 55 million registered voters. The demonstration was in response to the
Philippines’ lax security measures around its voting machines; 1.3 million overseas voters’
information, which included passport numbers, were included in the breach.
Combined Systems, Inc. Proudly displaying its hacktivist flag, Anonymous took credit
for knocking Combined Systems, Inc. offline and stealing personal data from its clients.
Anonymous went after Combined Systems, which sells tear gas and crowd-control devices
to law enforcement and military organizations, to protest war profiteers.


CIA Twice in one year, Anonymous launched a DoS attack that forced the CIA website
offline. The CIA takedown followed a busy week for the hacktivists. Within 10 days, the
group also went after Chinese electronics manufacturer Foxconn, American Nazi groups,
AV firm Symantec, and the office of Syria’s president.

In contrast, APTs typically steal corporate and government secrets. Most APT attacks are
launched through phishing. Typically, this type of attack begins with some reconnaissance on
the part of attackers. This can include researching publicly available information about the
company and its employees, often from social networking sites. This information is then used
to create targeted phishing e-mail messages. A successful attack could give the attacker access
to the enterprise’s network.

APTs are designed for long-term espionage. Once installed on a network, APTs transmit
copies of documents, such as Microsoft Office files and PDFs, in stealth mode. APTs collect and
store files on the company’s network; encrypt them; then send them in bursts to servers often in
China or Russia. This type of attack has been observed in other large-scale data breaches that
exposed significant numbers of identities.

Both high-profile and under-the-radar attacks can be launched against a number of differ-
ent targets. We will discuss those next.

Critical Infrastructure Attacks
Hackers, hacktivists, crime syndicates, militant groups, industrial spies, fraudsters, and hostile
governments continue to attack networks for profit, fame, revenge, or an ideology; to wage
warfare and terrorism, fight against a terrorist campaign, or disable their target. For example,
the Department of Homeland Security (DHS) Industrial Control Systems Cyber Emergency
Response Team (ICS-CERT) warned that attacks against critical infrastructure are growing. In
2015, more than 427 vulnerability incidents were reported, far surpassing the 245 total attacks
reported in 2014. The most affected industry was the energy sector.

Figure  5.3 shows the 16 critical infrastructure sectors whose assets, systems, and net-
works, whether physical or virtual, are considered so vital to the United States that their
incapacitation or destruction would have a debilitating effect on security, national economic
security, national public health or safety, or any combination thereof.

Critical infrastructure is defined as "systems and assets, whether physical or virtual, so vital
to a country that the incapacity or destruction of such systems and assets would have a
debilitating impact on security, national economic security, national public health or safety,
or any combination of those matters" (Department of Justice, 2001).

FIGURE 5.3 U.S. critical infrastructure sectors: Chemical; Commercial Facilities; Communications;
Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial
Services; Food and Agriculture; Government Facilities; Health Care and Public Health; Information
Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and
Wastewater Systems.


Attacks on critical infrastructure sectors can significantly disrupt the functioning of
government and business—and trigger cascading effects far beyond the targeted sector and
physical location of the incident. These cyberattacks could compromise a country’s critical
infrastructure and its ability to provide essential services to its citizens.

For example, the first cyberattack against a nation's power grid occurred in December 2015,
when cyberattackers successfully seized control of the Prykarpattyaoblenergo Control Center
(PCC) in western Ukraine, leaving 230,000 citizens without power for up to six hours. The
attackers carefully planned their assault over many months. They studied the networks, siphoned
operator credentials, and finally launched their devastating synchronized assault in the middle
of winter. The PCC operated a supervisory control and data acquisition (SCADA) system, a common
form of industrial control system, to distribute electricity. The critical devices at 16
substations became unresponsive to any remote command by their operators after attackers
overwrote the devices' firmware. Surprisingly, this type of control system is more secure than
some used in the United States because it sits behind robust firewalls that separate it from
the control center's business networks. Governments around the world have plans in place to
deal with the consequences of natural disasters, yet none have disaster relief plans for a
downed power grid. Clearly, this must change. Local and state governments must work together
with their national counterparts to produce and quickly implement plans to address future
attacks.

In response to the consistently growing number of cyberattacks over the past decade,
the Inter-American Committee Against Terrorism (CICTE) issued a formal declaration to
protect critical infrastructure from emerging threats and a Presidential executive order
was signed in May 2017 to strengthen the cybersecurity of Federal networks and critical
infrastructure.

Theft of Intellectual Property
Intellectual property (IP) can represent more than 80% of a company’s value and as such is a
critical part of all 21st-century organizations. Losing customer data to hackers can be costly
and embarrassing but losing IP, commonly known as trade secrets, could threaten a company’s
existence. It’s a business leaders’ nightmare—that gut-wrenching realization that a corporate
network has been breached and valuable intellectual assets have been stolen by unknown
cybercriminals (Gelinne et al., 2016).

Theft of IP has always been a threat from corporate moles, disgruntled employees, and
other insiders. While some IP may still be obtainable exclusively through physical means, dig-
itization has made theft easier. Advancements in technology, increased mobility, rapid global-
ization, and the anonymous nature of the Internet create growing challenges in protecting IP.
Hackers’ preferred modus operandi is to break into employees’ mobile devices and leapfrog
into employers’ networks—stealing trade secrets without a trace.

Cybersecurity experts and government officials are increasingly concerned about breaches
from other countries into corporate and government networks either through mobile devices
or other means. For example, a government agency could have blueprints for a secret new
weapon system stolen by foreign agents, or an employee of a popular game developer might
steal their latest game before it is released to the public.

In May of 2016, President Barack Obama signed the Defend Trade Secrets Act (DTSA), to
allow “the owners of trade secrets to bring a civil action in federal court for trade secret mis-
appropriation” (Gibson Dunn, 2016). Until the signing of the DTSA, corporations had to rely
on state law regarding trade secrets. Now, every American corporation is equally protected
under federal law. Moreover, it extends the power of the federal government in regulation
of trade secrets through interstate and foreign commerce while maintaining existing trade
secret laws.

A famous example of theft of IP is the APT attack named Operation Aurora perpetrated
against Google, described in IT at Work 5.1.

Intellectual property is a
work or invention that is the
result of creativity that has
commercial value, including
copyrighted property such as a
blueprint, manuscript, or a design,
and is protected by law from
unauthorized use by others.


Identity Theft
One of the worst and most prevalent cyberthreats is identity theft. Thefts where individuals’
Social Security and credit card numbers are stolen and used by thieves are not new. Criminals
have always obtained information about other people—by stealing wallets or dumpster diving.
But widespread electronic sharing and databases have made the crime worse. Because finan-
cial institutions, data-processing firms, and retail businesses are reluctant to reveal incidents in
which their customers’ personal financial information may have been stolen, lost, or compro-
mised, laws continue to be passed that force those notifications.

Bring Your Own Device
Another, more recent, vulnerability is bring your own device (BYOD). Roughly 74% of U.S. organ-
izations are either already using or planning to use BYOD. It’s an appealing concept because
BYOD enables companies to cut costs by not having to purchase and maintain employees’

IT at Work 5.1

Operation Aurora
Operation Aurora was a counterespionage operation being run by
the Chinese government. It was a series of cyberattacks conducted
by APTs with ties to the People’s Liberation Army in China. Attackers
successfully accessed a database that flagged Gmail accounts
marked for court-ordered wiretaps to gain insights into active inves-
tigations being conducted by the FBI and other law enforcement
agencies that involved undercover Chinese operatives.

To access IP, Operation Aurora exploited security flaws in
e-mail attachments to sneak into the networks of major financial,
defense, and technology companies and research institutions in the
United States by performing six steps, as described in Figure 5.4.
Standard IT security technologies at Google failed to prevent these
six steps from occurring and neither Google nor its Gmail account
holders knew they had been hacked.

Once the APTs gained access to Google’s internal systems
(Step 6), they were free to steal corporate secrets. Reportedly, over
30 other large companies from a wide range of industries were sim-
ilarly targeted by Operation Aurora.

Most hack activities do not become headline grabbers until
after the incidents are detected and reported. Even then, victimized
companies are reluctant to discuss them so statistics are scarce. In
the case of Operation Aurora, the attack was not discovered until
almost one year after the fact!

IT at Work Questions
1. Describe the six steps of Operation Aurora.
2. What was the purpose of Operation Aurora?
3. What could Google have done to prevent Operation

Aurora?

1. A targeted user receives a link in an e-mail or text from a "trusted" source.

2. When the user clicks the link, a website hosted in Taiwan loads. It contains malicious
JavaScript.

3. The user's browser downloads and executes the JavaScript, which includes a zero-day
IE exploit.

4. The exploit downloads a binary disguised as an image and executes the malicious payload.

5. The payload sets up a backdoor and connects to C and C servers in Taiwan.

6. Attackers now have complete access to internal systems. They are now a persistent threat.

FIGURE 5.4 Overview of the six steps in the Operation Aurora APT attack.


mobile devices. Unfortunately, many companies have rushed into it without considering issues
relating to security. Mobile devices rarely have strong authentication, access controls, and
encryption even though they connect to mission-critical data and cloud services. For example,
only 20% of Android devices have a security app installed.

The BYOD trend is driven by employees using their own devices for business purposes
because they are more powerful than those the company has provided. Another factor is
mobility. In the past, before the BYOD push, employees worked at their desks on a landline and
on a computer plugged into the wall with a network cable. As more and more people work from
home and on the go, the office-bound, traditional 9-to-5 workday has become a thing of the past,
and this change in exposure requires greater investment to defend against BYOD risks.

Users bringing their personal mobile devices and their own mobile applications to
work and connecting them to the corporate network is part of the larger consumerization
of information technology (COIT) trend. Bring your own device (BYOD) and bring your
own apps (BYOA) are practices that move enterprise data and IT assets to employees’ mo-
bile devices and the cloud, creating a new set of tough IT security challenges. Figure 5.5 sum-
marizes how apps, mobile devices, and cloud services put organizations at a greater risk of
cyberattack. Widely used applications that are outside of the organization’s firewall are Twitter,
Google Analytics, Dropbox, WebEx, and Salesforce.com.

Enterprises take risks with BYOD practices that they never would consider taking with con-
ventional computing devices. One possible reason is that new devices, apps, and systems have
been rolled out so quickly. As a result, smartphones are not being managed as secure devices,
with fewer than 20% of users installing antimalware and 50% using some type of data encryp-
tion. In fact, employees expected instant approval of (or at least no objection to) their new
tablet computers, and support for them, within hours of a product's release.

BYOD Raises Serious and Legitimate Areas of Concern Hackers break into
employees’ mobile devices and leapfrog into employers’ networks—stealing secrets without a
trace. New vulnerabilities are created when personal and business data and communications
are mixed together. All cybersecurity controls—authentication, access control, data confiden-
tiality, and intrusion detection—implemented on corporate-owned resources can be rendered
useless by an employee-owned device. The corporation’s mobile infrastructure may not be
able to support the increase in mobile network traffic and data processing, causing unaccept-
able delays or requiring additional investments.

Another serious problem arises when an employee’s mobile device is lost or stolen. The
company can suffer a data breach if the device is not adequately secured by a strong password
and the data on the BYOD is not encrypted.
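
To make these concerns concrete, a mobile device management (MDM) check often reduces to
evaluating a few device attributes before granting network access. The sketch below uses
invented attribute names and policy thresholds purely for illustration; real MDM products
expose their own APIs and policies.

# Minimal sketch of a BYOD compliance check before granting network access.
# Attribute names and policy thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    os_version: str
    passcode_set: bool
    storage_encrypted: bool
    remote_wipe_enrolled: bool

MIN_OS_VERSION = "14.0"   # assumed policy value

def version_tuple(v: str):
    """Turn '15.2' into (15, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_compliant(device: Device) -> bool:
    """Apply a simple allow/deny policy mirroring the concerns in the text."""
    return (device.passcode_set
            and device.storage_encrypted
            and device.remote_wipe_enrolled
            and version_tuple(device.os_version) >= version_tuple(MIN_OS_VERSION))

phone = Device("a.khan", "15.2", passcode_set=True,
               storage_encrypted=False, remote_wipe_enrolled=True)
print("Grant access" if is_compliant(phone) else "Quarantine device")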

Tech Note 5.1 demonstrates why users should only download applications from trusted
sources and check reviews to verify the legitimacy of the application being downloaded.

• Apps and mobiles create attack vectors: business operations are controlled by apps, systems,
and networks that are so interconnected that anyone's mobile device is an entry point for attacks.

• Cloud services create vulnerabilities: cloud services have created vulnerabilities in systems
and apps that are surprising even the experts.

FIGURE 5.5 Factors that expose companies and users to attack.


Social Media Attacks
Companies’ poor social media security practices put their brands, customers, executives, and
entire organizations at serious risk. According to Cisco, Facebook scams were the most common form
of malware distributed in 2015. The FBI reported that social media-related events had quadrupled
over the past five years and PricewaterhouseCoopers (2015) found that more than one in eight
enterprises has suffered at least one security breach due to a social media-related cyberattack.

Social networks and cloud computing increase vulnerabilities by providing a single point
of failure and attack for organized criminal networks. Critical, sensitive, and private information
is at risk, and like previous IT trends, such as wireless networks, the goal is connectivity, often
with little concern for security. As social networks increase their offerings, the gap between
services and information security also increases. For example, virus and malware attacks on a
well-established service such as e-mail have decreased as e-mail security has improved over
the years. Unfortunately, malware is still finding ways to successfully disrupt new services and
devices, such as e-readers, netbooks, Google's Chrome OS, Facebook, YouTube, Twitter, LinkedIn,
and other cloud-based social media networks. For example, in Twitter and Facebook, where users
build relationships with other users, cybercriminals are hacking in using stolen logins. These
types of attacks, which take advantage of user trust, are very difficult to detect. Facebook
recently reported that up to 2% of its 31 million accounts are false, Twitter estimates 5%, and
LinkedIn openly admitted that it does not have a reliable system for identifying and counting
duplicate or fraudulent accounts.

To combat these cyberthreats, Web filtering, user education, and strict policies are key to
preventing widespread outbreaks.

Networks and Services Increase Exposure to Risk An overriding reason why
these networks and services increase exposure to risk is the time-to-exploitation of today’s
sophisticated spyware and mobile viruses. Time-to-exploitation is the elapsed time between
when a vulnerability is discovered and when it is exploited. That time has shrunk from months to
minutes so IT staff have ever-shorter timeframes to find and fix flaws before they are compro-
mised by an attack. Some attacks exist for as little as two hours, which means that enterprise IT
security systems must have real-time protection.
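
Because time-to-exploitation is now measured in minutes, one simple operational metric is the
exposure window between a vulnerability's disclosure and the moment the patch is actually
applied. A small worked example, using invented timestamps:

# Minimal sketch: measure the exposure window for a patched vulnerability.
# The timestamps below are invented for illustration.
from datetime import datetime

disclosed = datetime(2017, 3, 6, 9, 0)    # vendor publishes the advisory
patched   = datetime(2017, 3, 8, 14, 30)  # patch actually applied in-house

exposure = patched - disclosed
print(f"Exposure window: {exposure.days} days, "
      f"{exposure.seconds // 3600} hours")   # prints: 2 days, 5 hours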

When new vulnerabilities are found in operating systems, applications, or wired and
wireless networks, patches are released by the vendor or security organization. Patches are
software programs that users download and install to fix a vulnerability. Microsoft, for example,
releases patches that it calls service packs to update and fix vulnerabilities in its operating
systems, including Vista, and applications, including Office 2010. Service packs can be down-
loaded from Microsoft’s website.

Left undetected or unprotected, vulnerabilities provide an open door for IT attacks and business
disruptions and their financial damages. Despite the best technology defenses, information secu-
rity incidents will occur mostly because of the users who do not follow secure computing practices
and procedures. IT at Work 5.2 illustrates how Google’s new automated cybersecurity initiative is
poised to reduce Google’s losses suffered due to cyberattacks in the cloud.

Tech Note 5.1

Android Botnet over SMS
A botnet of exploited android phones was sending massive
amounts of spam via Yahoo e-mail servers using the short mes-
saging service (SMS) as the command and control (C&C) chan-
nel. Infected androids, or bots, log into the owner’s Yahoo Mail
account to send spam. Most of the devices were located in Chile,
Venezuela, Thailand, Indonesia, Lebanon, Philippines, Russia,
and Saudi Arabia—in countries where users are less likely to get

their android applications from the Google Play market, which
automatically ensures that the applications are safe. Users down-
loading free phone apps from third-party app stores to avoid
paying for legitimate versions were actually downloading the
android malware.

Users should download applications only from trusted sources and check the reviews to verify
that an application is legitimate, because there are many bogus applications.


IT at Work 5.2

Google’s Automated Game of Monkey in the Middle
Google is the world’s largest Internet-based search engine, servicing
2 trillion Internet searches every year (Burgess,  2016). In addition
to being the largest encyclopedia known to man, Google has also
expanded its services into cloud computing and online advertising.
The Internet security sector is becoming increasingly important as the number of global
cyberattacks grows every year. Encryption technologies hide information from unauthorized
personnel and hackers by scrambling it into binary code (0s and 1s) instead of plain text. In
October 2016, Google made the leap to becoming a security platform for Internet communication
using encryption technology.

To do this, Google created three adversarial neural networks, that is, information-processing
units that act very much like the brain, using interconnected processing "neurons" to draw
conclusions about large data sets or other sources of information. Adversarial neural networks
compete with each other to gain the same information more quickly or efficiently. Google named
its three neural networks Eve, Alice, and Bob. Alice was tasked with sending secret, encrypted
messages to Bob, while Eve attempted to intercept the information before Bob received it. The
purpose of this demonstration was to test the plausibility of neural networks in Internet
security applications and to train them to better encrypt sensitive information.
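
The actual experiment trained the three networks end to end; the toy sketch below only mimics
the three roles with a fixed XOR mask so the reader can see who knows what. It is not Google's
method and involves no machine learning.

# Toy illustration of the Alice/Bob/Eve roles only -- not the neural-network
# approach Google used. Alice and Bob share a secret key; Eve does not.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"WIRE 500 TO ACCT 77"
shared_key = secrets.token_bytes(len(plaintext))   # known only to Alice and Bob

ciphertext = xor_bytes(plaintext, shared_key)      # Alice encrypts
bob_reads  = xor_bytes(ciphertext, shared_key)     # Bob decrypts correctly

# Eve has to guess a key, so her "decryption" is essentially random noise.
eve_guess = xor_bytes(ciphertext, secrets.token_bytes(len(plaintext)))

print(bob_reads == plaintext)    # True: Bob recovers the message
print(eve_guess == plaintext)    # Almost certainly False: Eve learns nothing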

Google’s Neural Network Effectiveness
Throughout 15,000 simulations, Alice and Bob were able to send
and decrypt hidden messages without Eve fully decrypting any of
them. In fact, as the study progressed, Eve made more decryption
errors as Bob and Alice became more effective (Figure 5.6). The
implications of this study are significant to the future of machine
learning and security. While neural networks are relatively
simple in terms of cryptanalysis, because the adversarial neural
networks were able to learn how to better secure information,
other, more complex security technologies can also learn how to
protect information and determine which information is worth
protecting.

IT at Work Questions
1. Why is decryption security important in today’s inter-

connected society?

2. What can Google’s AI teach us about cybersecurity?

3. What are the future implications of this study?

FIGURE 5.6 Effectiveness of Bob and Eve in receiving and decrypting messages over time
(x-axis: training steps, 0 to 40,000; y-axis: bits wrong out of 16; series: Bob and Eve).

Sources: Compiled from Berman (2016), Burgess (2016), and Abadi and
Andersen (2016).

Questions

1. What is a critical infrastructure?

2. List three types of critical infrastructures.

3. How do social networks and cloud computing increase vulnerability?

4. Why are patches and service packs needed?

5. Why is it important to protect IP?

6. How are the motives of hacktivists and APTs different?

7. Explain why data on laptops and computers need to be encrypted.

8. Explain how identity theft can occur.


5.3 Cyber Risk Management
Top management needs to sponsor and promote security initiatives and fund them as a top
priority. As you will read in this section, robust data security is not just the responsibility of IT
and top management, but the ongoing duty of everyone in an organization.

It is becoming more important than ever that security is viewed as a high priority as the
growth of mobile technologies and the IoT threaten to provide attackers with new opportu-
nities. The five key factors contributing to the rising number of data breaches that must be
addressed in a cyber risk management program are listed in Table 5.5.

Keep in mind that security is an ongoing, unending process—something akin to painting
the Golden Gate Bridge in San Francisco—and not a problem that can be solved with just
hardware or software. Hardware and software security defenses cannot protect against irre-
sponsible business practices. These are organizational and people issues.

IT Defenses
Since malware and botnets use many attack methods and strategies, multiple tools are
needed to detect them and/or neutralize their effects. Three essential defenses are the
following:

1. Antivirus Software Antimalware tools are designed to detect malicious codes and pre-
vent users from downloading them. They can also scan systems for the presence of worms,
trojans, and other types of threats. This technology does not provide complete protection
because it cannot defend against zero-day exploits. Antimalware may not be able to detect
a previously unknown exploit.

2. Intrusion Detection Systems (IDSs) As the name implies, an IDS scans for unusual or
suspicious traffic. An IDS can identify the start of a DoS attack by the traffic pattern, alert-
ing the network administrator to take defensive action, such as switching to another IP
address and diverting critical servers from the path of the attack (a minimal rate-based
illustration follows this list).

3. Intrusion Prevention Systems (IPSs) An IPS is designed to take immediate action—
such as blocking specific IP addresses—whenever a traffic-flow anomaly is detected. An
application-specific integrated circuit (ASIC)-based IPS has the power and analysis capa-
bilities to detect and block DDoS attacks, functioning somewhat like an automated cir-
cuit breaker.
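
As a rough illustration of the IDS behavior described in item 2 (the threshold, window length,
and log format are assumptions), the sketch below counts requests per source address in a
sliding window and flags any source whose rate resembles the start of a DoS flood.

# Minimal sketch of rate-based traffic anomaly detection, as an IDS might do.
# The threshold, window length, and log format are illustrative assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100    # assumed policy threshold

recent = defaultdict(deque)      # source IP -> timestamps of recent requests

def observe(source_ip: str, timestamp: float) -> bool:
    """Record one request; return True if the source now looks suspicious."""
    q = recent[source_ip]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:
        q.popleft()              # drop requests outside the sliding window
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulated burst: one host sends 500 requests in about two seconds.
alerts = set()
for i in range(500):
    if observe("203.0.113.7", i * 0.004):
        alerts.add("203.0.113.7")
print("Suspicious sources:", alerts)

A production IDS inspects far more than request counts, but the underlying idea of comparing
observed traffic against a baseline is the same.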

Business policies, procedures, training, and disaster recovery plans as well as hardware
and software are critical to cybersecurity. Table  5.6 lists the characteristics of an effective
cybersecurity program.

To help keep managers updated on the latest cyberthreats and prioritize defenses, KPMG
publishes its Data Loss Barometer. The annual report describes the latest trends and statistics
for data losses worldwide. Key findings and predictions are listed in Table 5.7.

Risk is the probability of a threat successfully exploiting a vulnerability and the estimated
cost of the loss or damage.

TABLE 5.5 Five Key Factors Leading to an Increase in Cyberattacks

1. Interconnected, interdependent, wirelessly networked business environment

2. Smaller, faster, cheaper computers and storage devices

3. Decreasing skills necessary to be a computer hacker

4. International organized crime taking over cybercrime

5. Lack of management support

The higher the value of the asset to the company and to cybercriminals, the greater the risk is
to the company and the higher the level of security needs to be. The smart strategy is to invest
more to protect the company's most valuable assets rather than trying to protect all assets
equally, as discussed in IT at Work 5.2. The IT security field—like sports and law—has its own
terminology, which is summarized for quick reference in Figure 5.7 and Table 5.8.

TABLE 5.6 Characteristics of an Effective Cybersecurity Program

Make data and documents available and accessible 24/7 while simultaneously restricting access.

Implement and enforce procedures and AUPs for data, networks, hardware, and software that are
company or employee owned, as discussed in the opening case.

Promote secure and legal sharing of information among authorized persons and partners.

Ensure compliance with government regulations and laws.

Prevent attacks by having network intrusion defenses in place.

Detect, diagnose, and respond to incidents and attacks in real time.

Maintain internal controls to prevent unauthorized alteration of data/records.

Recover from business disasters and disruptions quickly.

TABLE 5.7 Worldwide Data Loss Key Findings and Predictions

Key findings from KPMG Data Loss Barometer Report and its predictions for the next few years:

• Hacking is the number one cause of data loss.
• Internal threats have reduced significantly, while external threats are increasing significantly.
• The most hacked sectors are technology, financial services, retail, and automotive.
• Expect increased loss of data from mobile devices.
• Expect a steep rise in automated hacking and botnets.
• Expect less tolerant regulators and greater fines and negative consequences.
• Expect greater visibility and reporting of data loss as a result of less tolerant regulators.

Source: KPMG (2016).

Threat Someone or something that can cause loss, damage, or destruction.

Vulnerability Weakness or flaw in a system that allows an attack to be successful. Companies'
IT security defenses influence how vulnerable they are to threats.

Asset Something of value that needs to be protected, such as customer data, trade secrets,
proprietary formulas, and other intellectual property.

Exploit A program (code) that allows attackers to automatically break into a system through a
vulnerability; also, to attack or take advantage of a vulnerability.

Risk Probability of a threat exploiting a vulnerability and the resulting cost of the loss,
damage, disruption, or destruction. Risk = f(Threat, Vulnerability, Cost of the impact)

FIGURE 5.7 Basic IT security concepts.
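
The risk relationship shown in Figure 5.7 can be turned into a simple prioritization exercise.
The sketch below scores each asset as threat likelihood times vulnerability times impact cost
and ranks the results; the asset list and the numbers are invented for illustration.

# Minimal sketch: rank assets by Risk = f(threat, vulnerability, cost of the impact).
# The assets, probabilities, and dollar figures below are invented examples.
assets = [
    # (name, annual threat likelihood, chance the attack succeeds, impact cost in $)
    ("Customer database",    0.60, 0.30, 2_000_000),
    ("Public website",       0.90, 0.10,   150_000),
    ("R&D file server (IP)", 0.30, 0.40, 5_000_000),
]

def expected_loss(likelihood: float, vulnerability: float, impact: float) -> float:
    """A simple expected-loss reading of Risk = f(threat, vulnerability, cost)."""
    return likelihood * vulnerability * impact

ranked = sorted(assets, key=lambda a: expected_loss(*a[1:]), reverse=True)
for name, p_threat, p_vuln, cost in ranked:
    print(f"{name:22s} expected annual loss: "
          f"${expected_loss(p_threat, p_vuln, cost):,.0f}")

Ranking assets this way supports the "protect the most valuable assets first" strategy
described above rather than spreading the security budget evenly.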


Minimum Security Defenses for Mobiles Minimum security defenses for mobile
devices are mobile biometrics, rogue app monitoring, remote wipe capability, and encryption.
For travelers, do-not-carry rules may be a necessary defense.

A biometric control is an automated method of verifying the identity of a person, based
on physical or behavioral characteristics. The most common biometrics are a thumbprint or
fingerprint, voice print, retinal scan, and signature.

Mobile biometrics, such as voice and fingerprint biometrics, can significantly improve the
security of physical devices and provide stronger authentication for remote access or cloud
services. Biometric controls have been integrated into e-business hardware and software prod-
ucts. Biometric controls do have some limitations: They are not accurate in certain cases, and
some people see them as an invasion of privacy. Most biometric systems match some personal
characteristic against a stored profile.
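
Because most biometric systems match a captured sample against a stored profile, verification
usually comes down to a similarity score and an acceptance threshold. The sketch below uses a
generic numeric feature vector and a cosine-similarity threshold as stand-ins; real systems use
sensor-specific templates and matching algorithms.

# Minimal sketch of biometric verification: compare a live sample against a
# stored template and accept only above a similarity threshold. The feature
# vectors and the threshold here are stand-ins, not a real matching algorithm.
import math

MATCH_THRESHOLD = 0.95   # assumed acceptance threshold

def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled_template = [0.12, 0.88, 0.35, 0.61, 0.27]   # stored at enrollment
live_sample       = [0.14, 0.85, 0.33, 0.64, 0.25]   # captured at login

score = cosine_similarity(enrolled_template, live_sample)
print("Access granted" if score >= MATCH_THRESHOLD else "Access denied",
      round(score, 3))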

When Apple acquired Siri, Inc. and integrated the voice-based personal assistant Siri into its
operating system, it gained the potential to move into voice biometrics.

Voice biometrics is an effective authentication solution across a wide range of consumer
devices including smartphones, tablets, and TVs. Future mobile devices are expected to have
fingerprint sensors to add another authentication factor.

Another type of defense is rogue app monitoring to detect and destroy malicious applica-
tions in the wild. Several vendors offer 24/7 monitoring and detection services to monitor major
app stores and shut down rogue applications to minimize exposure and damage.

In the event of loss or theft of a device, a mobile kill switch or remote wipe capability
as well as encryption are needed. All major smartphone platforms have some kind of remote-
erase capability and encryption option.

In response to mobile security threats, many U.S. companies and government agencies
are imposing do-not-carry rules on mobiles to prevent compromise. Travelers can bring only
“clean” devices and are forbidden from connecting to the government’s network while abroad.

Do-Not-Carry Rules The U.S. Chamber of Commerce did not learn for months that it and its member
organizations were the victims of a cybertheft, until the FBI informed the Chamber that servers
in China were stealing data from four of its Asia policy experts, individuals who frequently
travel to Asia. Most likely, the experts' mobile devices had been infected with malware that was
transmitting information and files back to the hackers. By the time the Chamber hardened
(secured) its network, hackers had stolen at least six weeks of e-mails, most of which were
communications with the largest U.S. corporations. Even later, the Chamber learned that its
office printer and a thermostat in one of its corporate apartments were communicating with an
Internet address in China. The Chamber did not disclose how hackers had infiltrated its systems,
but its first step was to implement do-not-carry rules.

TABLE 5.8 IT Security Terminology

Term Definition

Exposure Estimated cost, loss, or damage that can result if a threat exploits a vulnerability

Access control Security feature designed to restrict who has access to a network, IS, or data

Audit Procedure of generating, recording, and reviewing a chronological record of system events
to determine their accuracy

Encryption Transforming data into scrambled code to protect them from being understood by
unauthorized users

Plaintext or clear text Readable text

Ciphertext Encrypted text

Authentication Method (usually based on username and password) by which an IS validates or
verifies that a user is really who he or she claims to be

Biometrics Methods to identify a person based on a biological feature, such as a fingerprint
or retina

Firewall Software or hardware device that controls access to a private network from a public
network (Internet) by analyzing data packets entering or exiting it

Intrusion detection system (IDS) A defense tool used to monitor network traffic (packets) and
provide alerts when there is suspicious traffic, or to quarantine suspicious traffic

Fault tolerance The ability of an IS to continue to operate when a failure occurs, but usually
for a limited time or at a reduced level

U.S. companies, government agencies, and organizations are now imposing do-not-carry rules,
which are based on the assumption that devices will inevitably be compromised, according to
Mike Rogers, then chairman of the House Intelligence Committee. For example,
House members can bring only “clean” devices and are forbidden from connecting to the gov-
ernment’s network while abroad. Rogers said he travels “electronically naked” to ensure cyber-
security during and after a trip. IT at Work 5.3 explains how one cybersecurity expert complies
with do-not-carry rules while traveling.

Business Continuity Planning
Risk management is not complete without a business continuity plan that has been tested to
verify that it works. Business continuity refers to maintaining business functions or restoring
them quickly when there is a major disruption. The plan covers business processes, assets,
human resources, business partners, and more. Fires, earthquakes, floods, power outages,
malicious attacks, and other types of disasters hit data centers. Yet, business continuity plan-
ning capabilities can be a tough sell because they do not contribute to the bottom line—that
is, until it is too late. Compare them to an insurance policy: If and only if a disaster occurs, the
money has been well spent. And spending on business continuity preparedness is an ongoing
process because there is always more that could be done to prepare better.

The purpose of a business continuity plan is to keep the business running after a disaster
occurs. Each function in the business should have a feasible backup plan. For example, if the
customer service center or call center was destroyed by a storm or lost all power, would anyone
know how the reps would continue to answer customer calls? The backup plan could define
how to provide necessary network access to enable business to continue.

Government Regulations
Cyberattacks are now the number one type of danger facing many countries around the globe.
As a result, international, federal, and state laws and industry regulations mandate that enter-
prises invest in cybersecurity defenses, audits, and internal controls to help secure confidential
data, prevent attacks, and defend against fraud and unauthorized transactions such as money
laundering (Morris 2016).

IT at Work 5.3

Traveling Electronically Clean
When Kenneth G. Lieberthal, an expert at the Brookings Institution,
travels to other countries, he follows a routine that seems straight
from a secret agent movie. He leaves his smartphone and laptop
at home. Instead, he brings loaner devices, which he erases before
he leaves the United States and wipes clean the minute he returns.
While traveling, he disables Bluetooth and Wi-Fi and never lets his
phone out of his sight. While in meetings, he not only turns off his
phone, but also removes the battery for fear his microphone could
be turned on remotely.

Lieberthal connects to the Internet only through an encrypted,
password-protected channel. He never types in a password directly,

but copies and pastes his password from a USB thumb drive. By not
typing his password, he eliminates the risk of having it stolen if key-
logging software were to be installed on his device.

IT at Work Questions
1. Many travelers might consider Lieberthal’s method

too inconvenient. Clearly, his electronically clean
methods are time consuming and expensive. In your
opinion, is there a trade-off between cybersecurity and
convenience? Explain.

2. Create a list of best cybersecurity practices for travelers
based on Lieberthal’s methods.


IT defenses must satisfy ever-stricter government and international regulations. All
mandate the protection of PII. To protect consumers, some countries require strict compli-
ance with these regulations. For example, in the United States the director of the Bureau of
Consumer Protection at the Federal Trade Commission (FTC) warned that the agency would
bring enforcement action against small businesses lacking adequate policies and procedures
to protect consumer data. Some examples of major national security regulations are listed in
Figure 5.8. Some of these regulations also apply to occupational fraud that is described in the
next section.

To ensure compliance with these regulations in the United States, the SEC and FTC impose
huge fines for data breaches to deter companies from underinvesting in data protection.

Questions

1. Why is it becoming more important for organizations to make cyber risk management
a high priority?

2. Name four U.S. government regulations that relate to cyber risk management.

3. What is the purpose of rogue application monitoring?

4. Why is a mobile kill switch or remote wipe capability an important part of managing cyber risk?

5. Why does an organization need to have a business continuity plan?

6. Name the three essential cybersecurity defenses.

7. Name three IT defenses.

8. Why do companies impose do-not-carry rules?

5.4 Defending Against Fraud
Not all cybercrimes are “attacks” conducted from outside the organization. Some are con-
ducted by employees within the organization. This is called fraud. Fraudsters carry out their
crime by abusing the power of their position or by taking advantage of the trust, ignorance, or
laziness of others. According to the latest Annual Global Fraud Survey, 81% of organizations
have been victims of fraud perpetrated by insiders. Of these, 36% were carried out by senior or
middle managers and 45% were attributed to junior employees. Only 23% of the reported frauds
resulted from the actions of an agent or nonemployee with access.

Fraud is a nonviolent crime in which fraudsters use deception, confidence, and trickery for
their personal gain.

FIGURE 5.8 Global government regulations of PII. Canada: Personal Information Protection and
Electronic Documents Act (PIPEDA). United States: Sarbanes–Oxley Act (SOX), Gramm–Leach–Bliley
Act (GLB), Federal Information Security Management Act (FISMA), and USA PATRIOT Act. United
Kingdom: Data Protection Act. Japan: Personal Information Protection Act. Australia: Federal
Privacy Act. Global financial services: Basel III.

Occupational Fraud Prevention and Detection
High-profile cases of occupational fraud committed by senior executives have led to an increase
in government regulations. Unfortunately, this increased legislation has not put an end to fraud.

The single most effective fraud prevention tactic is making employees aware that fraud will
be detected by IT-monitoring systems and punished, with the fraudster possibly turned over to
the police or FBI. The fear of being caught and prosecuted is a strong deterrent. IT must play a
visible and major role in detecting fraud. A strong corporate governance program and internal
audits and controls are essential to the prevention and detection of occupational fraud.

Several examples of occupational fraud, their characteristics and the extent to which they
impact corporate financial statements are illustrated in Figure 5.9.

Corporate Governance An enterprise-wide approach that combines risk, security,
compliance, and IT specialists greatly increases the prevention and detection of fraud. Preven-
tion is the most cost-effective approach, since detection and prosecution costs are enormous
in addition to the direct cost of the loss. It starts with corporate governance culture and ethics
at the top levels of the organization.

IT monitoring and control also demonstrate that the company has implemented effec-
tive corporate governance and fraud prevention measures. Regulators look favorably on
companies that can demonstrate best practices in corporate governance and operational risk
management. Management and staff would then spend less time worrying about regulations
and more time adding value to their brands and business.

Internal fraud prevention measures are based on the same controls that are used to pre-
vent external intrusions—perimeter defense technologies, such as firewalls, e-mail scanners,
and biometric access. They are also based on human resource (HR) procedures, such as recruit-
ment screening and training.

Intelligent Analysis and Anomaly Detections Most detection activity can be han-
dled by intelligent analysis engines using advanced data warehousing and analytics techniques.
These systems take in audit trails from key systems and personnel records from the HR and
finance departments. The data are stored in a data warehouse where they are analyzed to detect
anomalous patterns, such as excessive hours worked, deviations in patterns of behavior, copy-
ing huge amounts of data, attempts to override controls, unusual transactions, and inadequate
documentation about a transaction. Information from investigations is fed back into the
detection system so that it learns of any anomalous patterns. Since insiders might work in
collusion with organized criminals, insider profiling is important to find wider patterns of
criminal networks.

Type of Fraud | Impacts Financial Statements? | Typical Characteristics

Operating management corruption | No | Occurs off the books. Median loss due to corruption is
6X the median loss due to misappropriation.

Conflict of interest | No | Breach of confidentiality, such as revealing competitor bids. Often
occurs coincident with bribery.

Bribery | No | Uses positional power or money to influence others.

Embezzlement or "misappropriation" | Yes | Employee theft. Employee access to company property
creates the opportunity for embezzlement.

Senior management financial reporting fraud | Yes | Involves massive breach of trust and
leveraging of positional power.

Accounting cycle fraud | Yes | Also called "earnings management" or "earnings engineering."
Violates generally accepted accounting principles (GAAP) and all other accounting principles.
See aicpa.org.

FIGURE 5.9 Types, impact, and characteristics of occupational fraud.
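
As a concrete reading of the anomaly-detection approach just described, the sketch below flags
employees whose audit-trail metric (here, gigabytes of data copied per week) deviates sharply
from the norm using a robust, median-based score; the data and the cutoff are illustrative
assumptions.

# Minimal sketch of anomaly detection over one audit-trail metric using a
# robust (median-based) z-score. Data and the 3.5 cutoff are illustrative.
import statistics

gb_copied_per_week = {
    "asmith": 1.2, "bjones": 0.9, "cchen": 1.5, "dlee": 1.1,
    "epatel": 1.3, "fkhan": 0.8, "gromero": 64.0,   # one unusual value
}

values = list(gb_copied_per_week.values())
median = statistics.median(values)
mad = statistics.median([abs(v - median) for v in values])  # median absolute deviation

def modified_z(x: float) -> float:
    """Robust z-score; 0.6745 scales MAD to be comparable to a standard deviation."""
    return 0.6745 * (x - median) / mad

for user, amount in gb_copied_per_week.items():
    if abs(modified_z(amount)) > 3.5:
        print(f"Flag for review: {user} copied {amount} GB this week")

Real fraud-detection engines combine many such signals and feed investigator feedback back into
the models, as the text notes.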

General Controls
It is also important to have a set of general controls in place. The major categories of general
controls are physical controls, access controls, data security controls, communication network
controls, and administrative controls.

Physical Controls Physical security refers to the protection of computer facilities and
resources. This includes protecting physical property such as computers, data centers, soft-
ware, manuals, and networks. It provides protection against most natural hazards as well as
against some human hazards. Appropriate physical security may include several physical
controls such as the following:

• Appropriate design of the data center. For example, the data center should be noncombus-
tible and waterproof.

• Shielding against electromagnetic fields.
• Good fire prevention, detection, and extinguishing systems, including a sprinkler system,

water pumps, and adequate drainage facilities.
• Emergency power shutoff and backup batteries, which must be maintained in operational

condition.
• Properly designed and maintained air-conditioning systems.
• Motion detector alarms that detect physical intrusion.

Access Controls Access control is the management of who is and who is not autho-
rized to use a company’s hardware and software. Access control methods, such as firewalls
and access control lists, restrict access to a network, database, file, or data. It is the major line
of defense against unauthorized insiders as well as outsiders. Access control involves authori-
zation (having the right to access) and authentication, which is also called user identification
(proving that the user is who he or she claims to be).

Authentication methods include the following (a minimal password-verification sketch follows
this list):

• Something only the user knows, such as a password
• Something only the user has, for example, a smart card or a token
• Something only the user is, such as a signature, voice, fingerprint, or retinal (eye) scan;

implemented via biometric controls, which can be physical or behavioral
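
To ground the "something only the user knows" factor, the sketch below stores a salted, slow
hash of the password rather than the password itself and verifies each login attempt against
it; the iteration count and other parameters are illustrative.

# Minimal sketch of password-based authentication ("something the user knows"):
# store only a salted, slow hash and compare at login. Parameters are illustrative.
import hashlib
import hmac
import os

ITERATIONS = 200_000   # illustrative work factor

def enroll(password: str) -> tuple:
    """Create the salt and hash stored at account creation."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash for a login attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(attempt, stored_digest)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                   # False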

Administrative Controls While the previously discussed general controls are technical
in nature, administrative controls deal with issuing guidelines and monitoring compliance
with the guidelines. Examples of controls are shown in Table 5.9.

To guard against fraud and protect clients, customers, and constituents, all public and
private enterprises are subject to federal and state laws and regulations, some of which are
shown in Figure 5.8. In the United States, the Sarbanes–Oxley Act requires that companies prove
that their financial applications and systems are controlled (secured) to verify that financial
reports can be trusted. It is intended to discourage fraud at the corporate and executive levels.

Sarbanes–Oxley Act Mandates More Accurate Business Reporting and
Disclosure of Violations The Sarbanes–Oxley Act (SOX) mandates more accurate
business reporting and disclosure of generally accepted accounting principles (GAAP) viola-
tions. Section 302 deters corporate and executive fraud by requiring that the CEO and CFO verify
that they have reviewed the financial report, and, to the best of their knowledge, the report


does not contain an untrue statement or omit any material fact. To motivate honesty, executive
management faces criminal penalties including long jail terms for false reports. Section 805 man-
dates a review of the Sentencing Guidelines to ensure that “the guidelines that apply to organi-
zations . . . are sufficient to deter and punish organizational criminal conduct.” The Guidelines
also focus on the establishment of “effective compliance and ethics” programs. As indicated in
the Guidelines, a precondition to an effective compliance and ethics program is “an organiza-
tional culture that encourages ethical conduct and a commitment to compliance with the law.”

Among other measures, SOX requires companies to set up comprehensive internal controls.
There is no question that SOX, and the complex and costly provisions it requires public com-
panies to follow, have had a major impact on corporate financial accounting. For starters, com-
panies have had to set up comprehensive internal controls over financial reporting to prevent
fraud and catch it when it occurs. Since the collapse of Arthur Andersen, following the accounting
firm’s conviction on criminal charges related to the Enron case, outside accounting firms have
gotten tougher with clients they are auditing, particularly with regard to their internal controls.

SOX and the SEC are making it clear that if controls can be ignored, there is no control.
Therefore, fraud prevention and detection require an effective monitoring system. If a company
shows its employees that it can find out everything that every employee does and use that evi-
dence to prosecute a wrongdoer to the fullest extent possible under the law, then the likelihood
of any employee adopting an “I can get away with it” attitude drops drastically.

Approximately 85% of occupational fraud could have been prevented if proper IT-based internal
controls had been designed, implemented, and followed.

Internal Controls
The internal control environment is the work atmosphere that a company sets for its employees.
Internal control (IC) is a process designed to achieve:

• Reliability of financial reporting, to protect investors
• Operational efficiency
• Compliance with laws, regulations, and policies
• Safeguarding of assets

Cyber Defense Strategies
The objective of IT security management practices is to defend all of the components of an
information system, specifically data, software applications, hardware, and networks, so
they remain in compliance. Before they make any decisions concerning defenses, the people
responsible for security must understand the requirements and operations of the business,
which form the basis for a customized defense strategy.

TABLE 5.9 Representative Administrative Controls

• Appropriately selecting, training, and supervising employees, especially in accounting and
information systems

• Fostering company loyalty
• Immediately revoking access privileges of dismissed, resigned, or transferred employees
• Requiring periodic modification of access controls, such as passwords
• Developing programming and documentation standards (to make auditing easier and to use the

standards as guides for employees)
• Insisting on security bonds or malfeasance insurance for key employees
• Instituting separation of duties, namely, dividing sensitive computer duties among as many

employees as economically feasible in order to decrease the chance of intentional or uninten-
tional damage

• Holding periodic random audits of the system


The defense strategy and controls that should be used depend on what needs to be pro-
tected and a cost–benefit analysis. That is, companies should neither underinvest nor overin-
vest. The major objectives of defense strategies are listed in Table 5.10.

A defense strategy is also going to require several controls, as shown in Figure  5.10.
General controls are established to protect the system regardless of the specific application.
For example, protecting hardware and controlling access to the data center are independent
of the specific application. Application controls are safeguards that are intended to protect
specific applications. In the next two sections, we discuss the major types of these two groups
of information system controls.

TABLE 5.10 Major Objectives of Defense Strategies

Action Details
Prevention and deterrence: Properly designed controls may prevent errors from occurring, deter criminals from attacking the system, and, better yet, deny access to unauthorized people. These are the most desirable controls.

Detection: Like a fire, the earlier an attack is detected, the easier it is to combat, and the less damage is done. Detection can be performed in many cases by using special diagnostic software, at a minimal cost.

Contain the damage: This objective involves minimizing or limiting losses once a malfunction has occurred. It is also called damage control. This can be accomplished, for example, by including a fault-tolerant system that permits operation in a degraded mode until full recovery is made. If a fault-tolerant system does not exist, a quick and possibly expensive recovery must take place. Users want their systems back in operation as fast as possible.

Recovery: A recovery plan explains how to fix a damaged information system as quickly as possible. Replacing rather than repairing components is one route to fast recovery.

Correction: Correcting the causes of damaged systems can prevent a problem from occurring again.

Awareness and compliance: All organization members must be educated about the hazards and must comply with the security rules and regulations.

FIGURE 5.10 Major defense controls, grouped into general controls and application controls. The figure's elements include physical, access, data security, communication, administrative, and other controls; input, processing, output, and Web controls; and examples such as authentication, biometrics, encryption, cable testers, firewalls, and virus protection.


Auditing Information Systems
Some companies rely on surprise audits. But being proactive about searching for problems
is more effective and can stop fraud early on, before the losses mount. An audit is an impor-
tant part of any control system. Auditing can be viewed as an additional layer of controls or
safeguards. It is considered a deterrent to criminal actions, especially for insiders. Auditors
attempt to answer questions such as these:

• Are there sufficient controls in the system? Which areas are not covered by controls?
• Which controls are not necessary?
• Are the controls implemented properly?
• Are the controls effective? That is, do they check the output of the system?
• Is there a clear separation of duties of employees?
• Are there procedures to ensure compliance with the controls?
• Are there procedures to ensure reporting and corrective actions in case of violations

of controls?

Auditing a website is a good preventive measure to manage the legal risk. Legal risk is
important in any IT system, but in Web systems it is even more important due to the content
of the site, which may offend people or be in violation of copyright laws or other regulations
(e.g., privacy protection). Auditing e-commerce is also more complex since, in addition to the
website, one needs to audit order taking, order fulfillment, and all support systems.

Questions

1. What defenses help prevent occupational fraud?

2. What level of employee commits the most occupational fraud?

3. What is the purpose of internal controls?

4. What federal law requires effective internal controls?

5. Explain the concepts of intelligence analysis and anomaly detection.

6. Name the major categories of general controls.

7. Explain authentication and name two methods of authentication.

8. What are the six major objectives of a defense strategy?

5.5 Frameworks, Standards, and Models
A number of frameworks, standards, and models have been developed to guide cyber defense
strategies.

Risk Management and IT Governance Frameworks
Two widely accepted frameworks that guide risk management and IT governance are Enter-
prise Risk Management (ERM) and Control Objectives for Information and Related Tech-
nology (COBIT) 5.

Enterprise Risk Management Framework ERM is a risk-based approach to
managing an enterprise developed by the Committee of Sponsoring Organizations of the
Treadway Commission (COSO). ERM integrates internal control, the Sarbanes–Oxley Act man-
dates, and strategic planning.

ERM consists of eight components, listed in Table 5.11.


These eight components can be viewed from a strategic, operations, reporting, and com-
pliance perspective at all levels of the organization. Taking a portfolio view of risk, management
must consider how individual risks are interrelated and apply a strong system of internal
controls to ensure effective enterprise risk management. Those involved in ERM include
management, the board of directors, risk officers, and internal auditors. ERM is intended to be
part of routine planning processes rather than a separate initiative. The ideal place to start is
with buy-in and commitment from the board and senior leadership.

COBIT 5 COBIT 5 is the internationally accepted IT governance and control framework
created by ISACA (the Information Systems Audit and Control Association) to align IT with
business objectives, deliver value, and manage associated risks. It provides a framework for
management, users, and IS audit, control, and security practitioners that allows them to bridge
the gap between control requirements, technical issues, and business risks.

COBIT 5 is the leading framework for the governance and security of IT. The most current
version of the framework is based on five principles, shown in Figure 5.11.
COBIT 5 contains highly relevant guidance for IT practitioners and business leaders regarding

TABLE 5.11 Enterprise Risk Management Components

Component Description
Internal environment: Assess risk management philosophy and culture

Objective setting: Determine relationship of risk to organizational goals

Event identification: Differentiate between risks and opportunities; negative/positive impact

Risk assessment: Assess risk probability and impact

Risk response: Identify and evaluate risk responses

Control activities: Develop policies and procedures to ensure implementation of risk responses

Information and communication: Identify, capture, and communicate information

Monitoring: Conduct ongoing and separate evaluations of risk-related activities

FIGURE 5.11 COBIT 5 principles: 1. Meeting stakeholder needs; 2. Covering the enterprise end-to-end; 3. Applying a single integrated framework; 4. Enabling a holistic approach; 5. Separating governance from management.


governing and protecting data and information. COBIT 5 encourages each organization to
customize COBIT to fit its priorities and circumstances and can be downloaded from isaca.org.

Three of the five COBIT 5 principles are most applicable to security:

1. A system needs to be in place that considers and effectively addresses enterprise
information security requirements. At a minimum, this would include metrics for the
number of clearly defined key security roles and the number of security-related inci-
dents reported.

2. An established security plan has been accepted and communicated throughout the
organization. This would include the level of stakeholder satisfaction with the security
plan, the number of security solutions that differ from those in the plan, and the number
of security solutions that deviate from the enterprise security architecture, since such
deviations can lead to security gaps and potentially lengthen the time to resolve security
or compliance issues.

3. Information security solutions are implemented throughout the organization. These
should include the number of services and solutions that align with the security plan and
security incidents caused by noncompliance with the security plan.

By following these three principles, using a specified set of IT-enabling processes, and tak-
ing additional steps to move from an application-centric focus to a data-centric focus, organiza-
tions that use COBIT 5 can improve the governance and protection of their data and information.

While COBIT 5 provides sound and comprehensive improvement recommendations to
start the security governance journey, organizations clearly need to move beyond reactive
compliance and security and proactively mandate data privacy and security enterprise-wide.
In this way, data are always protected.

ERM and COBIT 5 can be used separately or jointly. As with most improvement methodol-
ogies, the key to success is to start using them one step at a time.

Industry Standards
Industry groups impose their own standards to protect their customers and their members’
brand images and revenues. One example is the Payment Card Industry Data Security
Standard (PCI DSS) created by Visa, MasterCard, American Express, and Discover. PCI is
required for all members, merchants, or service providers that store, process, or transmit
cardholder data. PCI DSS requires merchants and card payment providers to make certain
their Web applications are secure. If done correctly, this could reduce the number of Web-
related security breaches.

The purpose of the PCI DSS is to improve customers’ trust in e-commerce, especially when
it comes to online payments, and to increase the Web security of online merchants. To encourage
compliance, the penalties for noncompliance are severe. The card brands can
fine the retailer and increase transaction fees for each credit or debit card transaction. A find-
ing of noncompliance can also be the basis for lawsuits.

IT Security Defense-In-Depth Model
The Defense-in-Depth Model encourages a multilayered approach to information security.
The basic principle is that when one defense layer fails, another layer provides protection. For
example, if a wireless network’s security was compromised, then having encrypted data would
still protect the data, provided that the thieves could not decrypt it.
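
To make the layering idea concrete, here is a minimal sketch, assuming the third-party Python cryptography package and its Fernet recipe: data encrypted at the application layer stays protected even if the network layer (for example, Wi-Fi security) is compromised. The key handling shown is deliberately simplified and is an illustration, not a recommendation.

# Minimal sketch: a second defense layer (encryption) protects data even if
# the network layer (e.g., Wi-Fi security) is compromised.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Key generation and storage are simplified here; in practice the key would be
# managed by a key-management system, not kept alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

payload = b"Customer record: account 4455, balance 10200.55"
token = cipher.encrypt(payload)      # what an eavesdropper on the network would see
print(token[:40], b"...")            # unreadable ciphertext

recovered = cipher.decrypt(token)    # only holders of the key can recover the data
assert recovered == payload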

The success of any type of IT project depends on the commitment and involvement of
executive management, also referred to as the tone at the top. The same is true of IT security.
This information security tone makes users aware that insecure practices and mistakes will not
be tolerated. Therefore, an IT security model begins with senior management commitment and
support, as shown in Figure 5.12. The model views information security as a combination of
people, policies, procedures, and technology.


To use the Defense-in-Depth Model an organization must carry out four major steps:

Step 1: Gain senior management commitment and support Senior managers’ influence
is needed to implement and maintain security, ethical standards, privacy practices, and
internal control. IT security is best when it is top-driven. Senior managers decide how stringent
infosec policies and practices should be in order to comply with laws and regulations. Finan-
cial institutions are subject to strict security and anti-money laundering (AML) rules because
they face numerous national and international regulations and have high-value data. Advertis-
ing agencies and less regulated firms tend to have more lenient rules. Other factors influencing
infosec policies are a corporation's culture and how valuable its data are to criminals.

For instance, management may decide to forbid employees from using company
e-mail accounts for nonwork purposes, accessing social media during work hours, or
visiting gambling sites. These decisions will then become rules stated in company policy,
integrated into procedures, and implemented with technology defenses. Sites that are
forbidden, for instance, can be blocked by firewalls.
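
As one small illustration of how such a policy decision becomes a technology control, the hypothetical Python sketch below checks outbound requests against a blocklist derived from the policy; the domain names are invented for the example, and a real firewall or Web proxy would enforce the rule far more robustly.

# Hypothetical sketch: enforcing an acceptable-use rule ("no gambling sites")
# as a simple domain blocklist check, the kind of rule a firewall or Web proxy
# would apply to outbound traffic.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-casino.com", "example-bets.net"}  # invented examples

def is_allowed(url: str) -> bool:
    """Return False if the request targets a domain forbidden by policy."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_allowed("https://www.example-casino.com/poker"))  # False: blocked by policy
print(is_allowed("https://www.supplier-portal.example"))   # True: allowed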
Step 2: Develop acceptable use policies and IT security training Organizations need
to put in place strong policies and processes that make responsibilities and accountabilities
clear to all employees. An acceptable use policy (AUP) explains what management has
decided are acceptable and unacceptable activities, and the consequences of noncompli-
ance. Rules about tweets, texting, social media, e-mail, applications, and hardware should
be treated as extensions of other corporate policies—such as physical safety, equal oppor-
tunity, harassment, and discrimination. No policy can address every future situation, so
rules need to be evaluated, updated, or modified. For example, if a company suffers a mal-
ware infection traced to an employee using an unprotected smartphone connected to the
company network, policies to restrict or prohibit those connections might be advisable.
Step 3: Create and enforce IT security procedures Secure procedures define how poli-
cies will be enforced, how incidents will be prevented, and how incidents will be
responded to. Here are the basic secure procedures to put in place:
a. Define enforcement procedures Rules that are defined in the AUP must be enforced

and enforcement procedures must be applied consistently. Procedures for monitoring
employee Internet and network usage are defined at this stage.

b. Designate and empower an internal incident response team (IRT) The IRT typi-
cally includes the CISO, legal counsel, senior managers, experienced communicators,
and key operations staff. Minimizing the team size and bureaucracy can expedite
decision making and response. Because there may be significant liability issues, legal
counsel needs to be involved in incident response planning and communication.

c. Define notification procedures When a data breach occurs, the local police depart-
ment, the local office of the FBI, the Securities and Exchange Commission (SEC), the U.S.
Secret Service, or another relevant agency needs to be notified immediately. Federal and state
laws or industry regulations may define how and when affected people need to be notified.

FIGURE 5.12 IT security defense-in-depth model: Step 1, senior management commitment and support; Step 2, acceptable use policies and IT security training; Step 3, IT security procedures and enforcement; Step 4, hardware and software (kept up-to-date).


d. Define a breach response communications plan Effective incident response com-
munication plans include the personnel, processes, contact lists, channels, and social
media accounts needed to execute all communications that might be required.

e. Monitor information and social media sources Monitor Twitter, social media, and
news coverage as a standard procedure to understand how people are responding to
the incident and criticizing the company. Damage control procedures may be needed.

When an incident occurs, the organization is ready to respond intelligently—having
the correct information to be honest, open, and accountable, and to communicate with
consumers and other important audiences as quickly as possible.
Step 4: Implement Security Tools: Hardware and software The last step in the
model  is implementation of software and hardware needed to support and enforce the
AUP and secure practices. The selection of hardware and software defenses is based on
risk, security budget, AUP, and secure procedures. Every device that connects to an organi-
zation’s network; every online activity and mobile app of employees; and each file sent or
received are access points. Technology defense mechanisms need to be:
• able to provide strong authentication and access control of industrial grade
• appropriate for the types of networks and operating systems
• installed and configured correctly
• tested rigorously
• maintained regularly
How much does a cyberattack really cost an organization? Regulatory fines, public rela-

tions costs, breach notification and protection costs, and other consequences of large-scale
data breaches are easy to see and quantify. However, the effects of a cyberattack can linger for
years, resulting in a wide range of intangible costs tied to a damaged reputation, disruption of
operations, and loss of IP or other strategic assets. These intangible costs are much more difficult
to measure because they are not easily quantifiable.

No matter which frameworks, standards, and controls are used to assess, monitor, and
control cyber risk, a balanced approach to measuring direct costs and intangible impacts asso-
ciated with cyberattacks must be used to paint an accurate picture of the damage sustained
and to guide the creation of increased security measures going forward.

Questions

1. Who created the Enterprise Risk Management Framework? What is its purpose?

2. What are the five principles of COBIT 5? Explain.

3. What is the difference between internal and external controls?

4. Why do industry groups have their own standards for cybersecurity? Name one standard.

5. Are measurements of direct costs sufficient to reflect total damage sustained by a cyberattack?

6. What four components comprise the IT security defense-in-depth model?

7. What are the four steps in the IT security defense-in-depth security model?

8. Explain why frameworks, standards, and models are important parts of a cybersecurity program.

Key Terms
acceptable use policy (AUP) 158
access control 152
administrative controls 152
advanced persistent threat (APT) 139

adware 135
Anonymous 139
application controls 154
assets 139

attack vector 138
audit 155
backdoor 134
biometric control 148


Explore: Online and Interactive Exercises

1. Visit http://www.informationisbeautiful.net/visualizations/worlds-
biggest-data-breaches-hacks

a. Choose two companies where data breaches have occurred.

b. Explain the reasons for these breaches and discuss how they
could have been avoided.

2. Visit https://www.identityforce.com/resources/quiz and take the
Identity Theft Quiz. What was your score? Explain ways in which you could
improve your score so that you are not as much at risk for identity theft.

3. Visit https://www.identityforce.com/blog/2016-data-breaches and
choose one of the major data breaches listed. Read about the data breach.
What lessons did you learn from the article?

black hat 133
botnet 136
bring your own apps (BYOA) 143
bring your own device (BYOD) 143
business continuity plan 149
business impact analysis (BIA) 163
command and control (C&C) channel 144
consumerization of information technology
(COIT) 143
contract hacker 134
Control Objectives for Information and
Related Technology (COBIT) 5 155
corporate governance 151
critical infrastructure 140
cyberthreat 131
data breach 130
data incident 130
data tampering 137
distributed denial-of-service (DDoS)
attack 135
do-not-carry rules 149
enterprise risk management (ERM) 155
fraud 150

general controls 154
gray hat 133
hacking 133
hacktivist 133
intellectual property 141
internal control (IC) 153
internal threats 137
intrusion detection system (IDS) 137
intrusion prevention system (IPS) 146
IT governance 156
LulzSec 139
malware 135
mobile biometrics 148
occupational fraud 150
patches 144
payload 136
Payment Card Industry Data Security
Standard (PCI DSS) 157
permanent denial-of-service (PDoS) 137
phishing 134
physical controls 152
ransomware 135
remote access trojan (RAT) 136

remote wipe capability 148
risk 146
rogue app monitoring 148
rootkit 135
service pack 144
signature 136
social engineering 133
spam 136
spear phishing 135
spyware 135
telephony denial-of-service (TDoS) 137
threat 130
time-to-exploitation 144
trojan 135
Trojan horse 136
vector 136
virus 135
voice biometrics 148
vulnerability 130
white hat 133
worm 135
zero-day exploit 136
zombie 136

Assuring Your Learning

Discuss: Critical Thinking Questions

1. Why is cybercrime expanding rapidly? Discuss some possible
solutions.

2. In addition to hackers, what kinds of cybercriminals do organiza-
tions need to defend against?

3. What are the major motives of cybercriminals?

4. In what ways do users make themselves vulnerable to cybercrimes?

5. Why do malware creators alter their malware?

6. Why should you set a unique password for each website, service,
and device that you use?

7. How can malware be stopped from stealing or disclosing data from
an organization’s network?

8. What impact might huge fines have on how much a company budg-
ets for IT security defenses?

9. Why are BYOD, BYOA, and do-not-carry rules important to IT secu-
rity? Why might users resist such rules?

10. Why do users refuse to use strong passwords even though they
know how dangerous weak passwords are?

11. How can the risk of occupational fraud be decreased?

12. Why should information control and security be of prime concern
to management?

13. Explain what firewalls protect and what they do not protect.

14. Why are authentication and authorization important in
e-commerce?

15. Some insurance companies will not insure a business unless the
firm has a computer disaster recovery plan. Explain why.

16. Explain why risk management should involve the following ele-
ments: threats, exposure associated with each threat, risk of each
threat occurring, cost of controls, and assessment of their effectiveness.

17. Discuss why the Sarbanes–Oxley Act focuses on internal control.
How does that focus influence information security?


Case 5.2
Business Case: Lax Security at LinkedIn Exposed
On any social network, most users mistakenly believe that their privacy is
only as good as the privacy of their most careless—or temporary—friend.
In fact, weak passwords and hackers can deprive users of all privacy.

When the business social networking site LinkedIn was hacked
(Figure 5.13), hackers stole 6.5 million passwords and e-mail addresses.
This data breach was discovered by IT security experts when they found
millions of LinkedIn passwords posted on a Russian underground web-
site (Figure 5.14). Experts also determined that a hacker named Dwdm
was asking underground members for help in cracking the stolen

passwords. Within only 2 days, most passwords were cracked. Why
were LinkedIn’s passwords cracked so quickly? The simple answer is
that LinkedIn was using an outdated encryption method instead of up-
to-date industry-standard encryption. As a result, members’ passwords
were really only camouflaged—and crackable.
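
The weakness can be illustrated with a short sketch using Python's standard library: a fast, unsalted hash lets attackers test guesses against precomputed tables, while a salted, deliberately slow key-derivation function makes each stolen record far more expensive to crack. This is a simplified illustration of the general technique, not LinkedIn's actual implementation.

# Sketch: why salted, slow password hashing resists cracking better than a
# plain fast hash. Illustrative only; parameters are simplified.
import hashlib, os

password = b"correct horse battery staple"

# Weak approach: one fast, unsalted hash. Identical passwords always produce
# identical digests, so precomputed ("rainbow") tables crack them quickly.
weak = hashlib.sha1(password).hexdigest()

# Stronger approach: a unique random salt plus many iterations of PBKDF2,
# so each guess must be recomputed per user and is deliberately slow.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print("unsalted SHA-1:", weak)
print("salted PBKDF2 :", salt.hex(), strong.hex())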

LinkedIn Criticized for Bad Data Security
What could hackers do to your online accounts if they had your pass-
words for 48 hours and you did not know? That is what LinkedIn
allowed to happen by waiting 2 days before notifying members that
their passwords had been stolen. The company took a lot of criticism

FIGURE 5.13 LinkedIn data breach overview. Hackers breached LinkedIn's network and stole 6.5 million of its customers' passwords, which had been only lightly encrypted; they were posted to a Russian hacker forum for all to see and steal. Data breach: 6.5 million e-mail addresses and passwords compromised. Costs: over $1 million associated with forensic work, investigating, and addressing the breach, plus seven-figure investments in IT infrastructure to update and harden network and data security.

FIGURE 5.14 LinkedIn did not discover its own data breach
and, when informed of it, delayed notifying members.

4. Research vendors of biometrics. Select one vendor and discuss three
of its biometric devices or technologies. Prepare a list of major capabili-
ties. What are the advantages and disadvantages of its biometrics?

5. Visit https://learn-umbrella.cisco.com/product-videos/what-is-
a-secure-internet-gateway to watch the video “What is a Secure In-
ternet Gateway?”, where Dan Hubbard, Cloud Security Product CTO at

Cisco, explains how security needs to adapt to keep up with the evolv-
ing workforce and how Cisco Umbrella can help companies protect
their employees wherever they choose to work.

a. Describe two things that you learned from watching this video.

b. Do you think Cisco Umbrella would be an effective tool for
companies to use? Explain why or why not.

Analyze & Decide: Apply IT Concepts to Business Decisions

1. Many firms concentrate on the wrong questions and end up throw-
ing a great deal of money and time at minimal security risks while ig-
noring major vulnerabilities. Why?

2. Assessing how much a company is legally obligated to invest in cy-
bersecurity remains a challenge. Since there is no such thing as perfect
security (i.e., there is always more that you can do), resolving these
questions can significantly affect cost.

a. When are a company’s security measures sufficient to com-
ply with its obligations? For example, does installing a firewall
and using virus detection software satisfy a company’s legal
obligations?

b. Is it necessary for an organization to encrypt all of its data?

3. Assume that the daily probability of a major earthquake in Los An-
geles is .07%. The chance of your computer center being damaged dur-
ing such a quake is 5%. If the center is damaged, the average estimated
damage will be $1.6 million.

a. Calculate the expected loss (in dollars).

b. An insurance agent is willing to insure your facility for an annual
fee of $15,000. Analyze the offer, and discuss whether to accept it.

4. Should an employer notify employees that their usage of comput-
ers is being monitored? Why or why not?

5. Twenty-five thousand messages arrive at an organization each year.
Currently, there are no firewalls. On average, 1.2 successful hackings
occur each year. Each successful hack attack results in a loss of about
$130,000 to the company. A major firewall is proposed at a cost of
$66,000. The estimated useful life is three years. The chance that an in-
truder will break through the firewall is 0.0002. In such a case, the dam-
age will be $100,000 (30%), $200,000 (50%), or there will be no dam-
age. There is an annual maintenance cost of $20,000 for the firewall.

a. Should management buy the firewall?

b. An improved firewall that is 99.9988% effective and that costs
$84,000, with a life of three years and annual maintenance cost of
$16,000, is available. Should this firewall be purchased instead of
the first one?


for not notifying members via Twitter or Facebook immediately.
According to the chief executive of the Public Relations Consultants
Association, Francis Ingham, LinkedIn ignored the first rule of crisis
management, which is to be first to tell your customers.

What surprised customers and IT security experts was that a com-
pany that collects and profits from vast amounts of data had taken a
negligent approach to protecting it. Figure 5.15 explains why it was sur-
prising and alarming that LinkedIn’s password protection was weak.

E-mail Addresses are Universal Usernames
At most e-commerce and social sites, usernames are e-mail addresses—
making them our universal username for online accounts. If the e-mail
is a work account, then everyone also knows where we work and our
login name. Therefore, knowing users’ usernames and passwords pro-
vides authorized access to corporate accounts with almost no risk of
being detected. Hackers attacked LinkedIn to steal over 161
million members' credentials as a way into much more
valuable business networks and databases.

Business Risks and Collateral Damage
The hack caused the following business risks and collateral damage.

• Takeover of members’ other accounts by hackers, fraudsters, and
other criminals. Hackers know that people reuse passwords; once
their LinkedIn accounts are linked to Facebook and Twitter, far too
much information may be revealed. Knowing where people worked
and their e-mail accounts allowed hackers to quickly use the stolen
LinkedIn passwords to log in to corporate accounts, online bank
accounts, and so on to steal more data or transfer funds.

• Damage to LinkedIn’s biggest revenue source—its advertising
business. LinkedIn’s financial success is tied to its advertising rev-
enues, which in turn are based on the number of active members
and membership growth.

• Fines for violating privacy laws and regulations. Any company
exposing the confidential data of customers or employees faces
steep fines. Regulators impose harsh penalties for breaking pri-
vacy laws and not taking reasonable care to defend against data

breaches. Strict data privacy laws in states such as Massachusetts
and California could keep LinkedIn fighting legal battles for years.

• Cleanup costs. The cleanup cost LinkedIn nearly $1 million and
another $2–$3 million in upgrades. Forensic work on the pass-
word theft cost another $500,000 to $1 million.

Data Security: A Top Management Concern
Data security is a senior management concern and responsibility. It
affects a company’s operations, reputation, and customer trust, which
ultimately impact revenue, profits, and competitive edge. Yet, defenses
that could help to prevent breaches are not always implemented.

Some experts argue that senior management continues to skimp
on basic protections because computer security is not regulated—that
is, until a business suffers a major crisis. After the data breach, LinkedIn
implemented improved password storage encryption, hired private
security and forensics experts, and called in the FBI to help investigate
the security breach.

Comparison with Other Cyberattacks
While 6.5 million leaked passwords represent a serious breach, it
affected a relatively small percent of the more than 175 million mem-
bers LinkedIn had at that time. Overall, the LinkedIn breach, while
somewhat costly, did not do as much harm as those experienced by
other hacked companies such as Global Payments, Sony, and Certifi-
cate Authority DigiNotar, which were literally hacked out of business.

Just the Beginning
Four years after the data breach, the number of released account details
was found to be 117 million rather than 6.5 million. In May 2016, Russian
hacker "Peace," who sold the Yahoo data breach information in the Open-
ing Case, offered the LinkedIn account details for sale on a Dark Web mar-
ketplace for $2,300.
of additional accounts, LinkedIn required the affected accountholders
to change their passwords and urged all other users to change theirs as
well. In addition, LinkedIn spent about $4 million repairing and upgrad-
ing their security infrastructure to combat future leaks (Hackett, 2016b).

Questions
1. LinkedIn does not collect the credit card or other financial

account information of its members. Why then would profit-
motivated hackers be interested in stealing LinkedIn’s stored
data? What data would hackers be most interested in accessing?

2. Companies are often slow to self-detect data breaches so a
cyberattack can occur without a company even knowing it has a
problem. What effect do you think LinkedIn’s failure to self-detect
its massive data breach had on its popularity and credibility?

3. Most corporate security incidents are uncovered by a third party,
like a security firm, that picks up on evidence of malicious activity.
Why do you think IT security experts and not LinkedIn discovered
the data breach?

4. Explain why LinkedIn’s lax approach to members’ information
security and weak passwords was very surprising to members
and information security professionals.

5. Identify and evaluate the actual and potential business risks and
damages from LinkedIn’s data breach.

6. In your opinion, was LinkedIn negligent in protecting its main
asset? Explain.

FIGURE 5.15 Three reasons why LinkedIn's underinvestment in data security did not make business sense: (1) Its most valuable asset is data: LinkedIn's business model is to collect and profit from data. (2) It is a high-tech, public company with a brand image to protect: LinkedIn was not some cash-poor startup company; it had piles of cash from its successful initial public offering (IPO) in May 2011, and once it went public LinkedIn, like all public companies, had to report hack attacks to the SEC. (3) It had a lot of net income to protect: LinkedIn's net income for the first quarter of 2012 was $5 million, more than double its $2.1 million net income in the first quarter of 2011. LinkedIn had a lot to protect . . . and lose.

Sources: Compiled from Franceschi-Bicchierai (2016), Hackett (2016b), and
Ponemon Institute (2017).


Case 5.3
Video Case: Botnets, Malware Security, and
Capturing Cybercriminals
Gunter Ollmann, vice president of research at Damballa, Inc., explains
what companies have learned from the Operation Aurora attacks
against major companies. In the video, you will learn why it is difficult
for law enforcement to track and prosecute cybercriminals, includ-
ing botnet operators who now launch targeted botnet attacks with
the help of automated tools. Also discussed is the effectiveness of
Microsoft’s legal action to shut down the C&C (command and control)

network of the Waledac botnet. Visit searchsecurity.techtarget.com/
video/Botnets-malware-and-capturing-cybercriminals to view the
video, read its transcript, and answer the following questions.

Questions
1. Why are botnets used?

2. What is needed to get started in the botnet industry? Explain why.

3. Given your answers, what should users and organizations do and/
or not do to reduce the threat of botnets?

IT Toolbox

Conducting a Cost–Benefit Analysis
It is usually not economical to prepare protection against every
possible threat. Therefore, an IT security program must provide a
process for assessing threats and deciding which ones to prepare
for, which ones to ignore, and which ones to provide reduced protec-
tion against. Two commonly used cost-benefit analysis tools are risk
assessment and business impact analysis. Risk assessment relies
solely on quantitative measures, while the business impact analysis
takes into account both qualitative and quantitative indicators.

• Risk assessment

Risk assessments are done using an app or spreadsheet. The
basic computations are shown here:

Expected loss = P1 × P2 × L
where

P1 = probability of attack (estimate, based on judgment)

P2 = probability of attack being successful (estimate, based
on judgment)

L = loss occurring if attack is successful

Example:
An organization estimates that the probability of a cyberattack

is 2% and the attack has only a 10% chance of being successful. If the
attack is successful, the company estimates that it will lose $1 million.

This would be expressed as:

P1 = .02, P2 = .10, L = $1,000,000

Then expected loss from this particular attack is

Expected loss = P1 × P2 × L = 0.02 × 0.10 × $1,000,000 = $2,000
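
A minimal computational sketch of the same formula, using the example's figures, might look like this in Python:

# Sketch: expected-loss calculation from the risk assessment formula
# Expected loss = P1 x P2 x L

def expected_loss(p_attack: float, p_success: float, loss: float) -> float:
    """P1 = probability of attack, P2 = probability it succeeds, L = loss if successful."""
    return p_attack * p_success * loss

# Figures from the worked example above
print(expected_loss(0.02, 0.10, 1_000_000))  # 2000.0, i.e., $2,000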

• Business impact analysis

A business impact analysis (BIA) estimates the consequences
of disruption of a business function and collects data to develop
recovery strategies.

Potential loss scenarios are first identified during the risk assess-
ment. Operations may also be interrupted by the failure of a supplier
of goods or services or delayed deliveries. There are many possible
scenarios that should be considered.

The BIA identifies both operational and financial impacts result-
ing from a disruption. The financial impacts are easier to assess, but
the operational impacts are more difficult to determine because of
their qualitative nature. Several examples of operational and finan-
cial impacts to consider are shown in Table 5.12.

The losses assessed using these two methods should be com-
pared with the costs for possible recovery strategies to determine net
risk. The BIA report should also prioritize the order of events for resto-
ration of the business, with processes having the greatest operational
and financial impacts being restored first.
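
As a rough, hypothetical sketch of that comparison, the following Python snippet nets each scenario's assessed loss against the cost of its proposed recovery strategy and orders scenarios by impact; the scenario names and figures are invented for illustration.

# Hypothetical sketch: compare assessed losses with recovery-strategy costs to
# estimate net risk, then prioritize restoration by impact (largest first).
scenarios = [
    # (business function, assessed loss, cost of recovery strategy)
    ("Order processing", 250_000, 60_000),
    ("Supplier portal", 90_000, 40_000),
    ("Internal reporting", 20_000, 25_000),
]

for name, loss, recovery_cost in sorted(scenarios, key=lambda s: s[1], reverse=True):
    net_risk = loss - recovery_cost
    print(f"{name}: assessed loss ${loss:,}, recovery ${recovery_cost:,}, net risk ${net_risk:,}")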

TABLE 5.12 Business Disruption Qualitative and Quantitative Impacts

Financial (quantitative metrics): Lost sales and income; delayed sales or income; increased expenses (e.g., overtime labor, outsourcing, expediting costs); regulatory fines; contractual penalties or loss of contractual bonuses

Operational (qualitative metrics): Customer dissatisfaction or defection; delay of new business plans


References
Abadi, M., and D.G. Andersen. “Learning to Protect Communications

with Adversarial Neural Cryptography.” Cornell University Library,
October 24, 2016.

Balakrishnan, A. “U.S. Accuses Russia of Hacking Yahoo.” CNBC,
March 15, 2017.

Berman, R. “Alice, Bob, and Eve Are Neural Networks. And They Have
Secrets.” Big Think, November 1, 2016.

Breach Level Index. “2015: The Year Data Breaches Got Personal.”
February 18, 2016.

Burgess, M. “How Google’s AI taught itself to create its own encryp-
tion.” Wired, October 31, 2016.

Department of Justice. “The USA Patriot Act: Preserving Life and
Liberty”. 2001. Accessed from https://www.justice.gov/archive/ll/
highlights.htm

Duan, E. “DressCode and its Potential Impact for Enterprises.” Trend-
Micro, September 29, 2016.

Fiegerman, S. “Verizon says Yahoo’s massive breach could impact
deal.” CNN, October 13, 2016.

Franceschi-Bicchierai, L. “Another Day, Another Hack: 117 Million
LinkedIn Emails and Passwords.” Motherboard Vice, May 18, 2016.

Gelinne, J., J. Fancher, and E. Mossburg. “The Hidden Costs of an IP
Breach: Cyber Theft and the Loss of Intellectual Property.” Deloitte
Review, Issue 19, July 25, 2016.

Gibson Dunn. “President Obama Signs Federal Trade Secrets Law.”
May 11, 2016.

Goldman, J. “All-Time High of 1,093 Data Breaches Reported in U.S.
in 2016.” E-Security Planet, January 24, 2017. Accessed from: http://
www.esecurityplanet.com/network-security/all-time-high-of-
1093-data-breaches-reported-in-u.s.-in-2016.html

Goodin, D. “Godless Apps, Some Found in Google Play, Can Root 90%
of Android Phones.” ArsTechnica, June 23, 2016.

Hackett, R. “Yahoo’s Titanic Data Breach Highlights Risk to M&A.” For-
tune, September 23, 2016a.

Hackett, R. “LinkedIn Lost 167 Million Account Credentials in Data
Breach.” Fortune, May 18, 2016b.

Kan, M. “Hackers Now Have a Treasure Trove of User Data with the
Yahoo Breach.” International Data Group, September 22, 2016.

KPMG. Consumer Loss Barometer. 2016. https://assets.kpmg
.com/content/dam/kpmg/cn/pdf/en/2016/08/consumer-loss-
barometer-v1

Lee, D. “‘State’ Hackers Stole Data from 500 Million Users.” BBC, Sep-
tember 23, 2016.

Matwyshyn, A., and H. Bhargava. “Will Yahoo’s Data Breach Help
Overhaul Online Security?” Knowledge@Wharton: University of
Pennsylvania, September 27, 2016. Accessed from http://knowl-
edge.wharton.upenn.edu/article/will-yahoos-data-breach-help-
overhaul-online-security/

Morris, A., D. Nathan, and A. Ayyar. “Broker-Dealers and Their
Auditors Face Increased Regulatory Scrutiny.” Bloomberg Legal,
November 3, 2016.

Murgia, M. “Cyber experts look to usual suspects in Yahoo hack.”
Financial Times, September 25, 2016.

Ponemon Institute. “2017 Cost of Data Breach Study: Global Over-
view” June, 2017.

PricewaterhouseCoopers. “US Cybersecurity: Progress Stalled.”
July, 2015.

RT.com. “Buyer Beware: US Is Biggest Creator of Malicious Mobile
Apps.” February 4, 2015.

Sterling, G. “Bing Reaches 20 Percent Search Market Share Milestone
in US.” SearchEngineLand, April 16, 2015.

Verizon. “2016 Data Breach Investigations Report.” Accessed from:
http://www.verizonenterprise.com/verizon-insights-lab/dbir/2016


CHAPTER 6

Search, Semantic, and
Recommendation Technology

CHAPTER OUTLINE

Case 6.1 Opening Case: Mint.com Uses Search
Technology to Rank Above Established Competitors

6.1 Using Search Technology for Business
Success

6.2 Organic Search and Search Engine
Optimization

6.3 Pay-Per-Click and Paid Search Strategies

6.4 A Search for Meaning—Semantic Technology

6.5 Recommendation Engines

Case 6.2 Business Case: Deciding What to Watch—
Video Recommendations at Netflix

Case 6.3 Video Case: Power Searching with Google

LEARNING OBJECTIVES

6.1 Describe how search engines work and identify ways that
businesses gain competitive advantage by using search
technology effectively.

6.2 Explain how to improve website ranking on search
results pages by optimizing website design and creating
useful content.

6.3 Describe how companies manage paid search advertising
campaigns to increase awareness and drive sales volume.

6.4 Describe how semantic Web technology enhances the accuracy
of search engines results and how businesses can optimize their
websites to take advantage of this emerging technology.

6.5 Describe how recommendation engines are used to enhance
user experience and increase sales on e-commerce websites.

Introduction
Every day, over 1.5 billion people around the world use what seems to be a simple tool to find
information online—a search engine. We sometimes take for granted that behind a relatively
simple user interface, an increasingly complex set of search engine technologies is at work,


helping us find the information we need to do our jobs, conduct research, locate product
reviews, or find information about the television shows we watch. Because most search engine
services are free, people are not generally aware that "Search" has become a multibillion-dollar-
a-year business. More importantly, the way search engines work and how they rank-order the
links displayed when we conduct a search have huge implications for millions of other busi-
nesses. Because consumers typically don’t look past the first few pages of search results, hav-
ing your business appear at the top of a search results page can make a big difference in how
much traffic your website gets. In this chapter, you will read about how search engines work
and how they determine which websites are listed at the top of search results. You will also read
about the strategies companies use to increase their presence on search results pages includ-
ing search engine optimization (SEO) and pay-per-click (PPC) advertising.

Semantic technologies are increasingly being used by search engines to understand
Web page content. In this chapter you will read about the ways that search engines are using
semantic technology to improve performance, increasing relevant pages and decreasing the
number of irrelevant pages that appear in search results.

Finally, you will read about recommendation engines. These tools attempt to anticipate
online information you might be interested in. Netflix uses recommendation engines to sug-
gest movies you might like to watch and news organizations use them to recommend stories
you might want to read on their websites. Amazon credits its recommendation technology for
increasing sales by suggesting products that customers might want to buy.

Business managers need to understand search and recommendation technologies because
their influence in directing potential consumers to business websites is already significant and
expected to grow substantially in the future.

Case 6.1 Opening Case

Mint.com Uses Search Technology to Rank Above
Established Competitors

Company Overview
Mint is a popular, Web-based personal finance service that makes it
easy for users to keep track of bank, credit card, and other financial
accounts using a computer or mobile device. Customers can also use
the service to create budgets and monitor progress toward financial
goals. Since it began in 2006, the company has grown rapidly despite
competition from more established companies. In 2009, Mint was
acquired by Intuit, the maker of TurboTax and Quicken financial soft-
ware. Today, over 20 million people use Mint’s free financial manage-
ment service (Table 6.1).

The Business Challenge
In the months leading up to the 2006 launch of Mint.com, a personal
finance service, the leadership team faced a formidable challenge: How

to establish name awareness and brand equity in a market filled with
established competitors, without spending a lot of money? Mint knew
it would be competing in a market space already populated by familiar
brands like Quicken Online and Microsoft Money Online. Since online
platforms and communication channels tend to favor existing compa-
nies with established audiences and reputations, the team knew they
had to come up with a powerful strategy for overcoming the estab-
lished brands.

Mint’s Content Marketing Strategy
As a Web-based service, it was critical for Mint.com to rank high on
search engine results pages (SERPs) when consumers used sites like
Google or Bing to find information about personal finance services and
related topics. Consumers are more likely to visit websites that appear
at the top of SERPs. While the service was still in the beta (trial) stage
of development, workers at Mint developed an aggressive strategy to
optimize the brand’s ranking on popular search engines. Their strat-
egy involved building the company’s Web presence on criteria used by


TABLE 6.1 Opening Case Overview

Company: Mint

History: Mint was launched in 2006 as a free, Web-based personal finance app by founder Aaron Patzer. In 2009, the company was acquired by the financial software company Intuit.

Growth: Within two years of launch, Mint claimed over 1.5 million users. By 2012, the company claimed 10 million users, and by 2016 the number of users rose to over 20 million.

Product lines: Mint's original service allowed users to track balances and transactions on credit card, investment, and bank accounts as well as to create budgets and establish financial goals. In addition, Mint now offers users a bill pay service and credit score monitoring.

Social technology: Prior to the release of its flagship personal finance app, Mint created a large following of prospective users with MintLife, a blog that offered valuable advice targeted to young professionals.

Search technology: Mint utilized an aggressive SEO strategy to rank highly on search results pages. Specific actions included the following:
• Creation of useful personal finance content on its blog, MintLife
• Use of targeted keywords in website content
• Establishment of audiences on popular social media sites such as Facebook and Twitter
• Use of various strategies, including sponsorship of third-party blogs, to generate links (or "backlinks") to Mint.com from other websites

Website: Mint.com

search engines to determine SERP ranking. The strategy focused on
the following:

• Increasing the number of other websites that linked back to
Mint’s website (called “backlinks”)

• Creating interesting and useful content about personal finance
topics that prospective customers would find helpful

• Identifying keywords and phrases used by prospective customers
when searching for personal financial services, and creatively
inserting these words and phrases into website content

• Regularly updating and adding to their collection of personal
finance content

• Establishing a presence on popular social media sites, expanding
their audience on those sites, and encouraging the audience to
share links to Mint’s website content

Months prior to the launch of its personal finance service, Mint
rolled out a personal finance blog called MintLife and quickly devel-
oped a reputation for providing helpful financial advice targeted to
young professionals. Blog posts on MintLife were creatively seeded
with keywords and phrases the team had identified as likely to be
used by prospective customers when conducting Internet searches
for financial services. Mint also created landing pages on their web-
site containing content optimized for keywords and phrases related
to financial services. As search engines tracked this content, Mint
began to lay a foundation for eventually being viewed by search
engines as a credible authority for personal finance topics. New
posts were regularly added to the blog, which further enhanced
Mint’s ranking since search engines favor websites with lots of
content (content depth) and regular updates. To further establish
its position as a useful and authoritative site, Mint sponsored sev-
eral third-party blogs and cultivated relationships with authors of
established finance and money management blogs. Mint’s founder,
Aaron Patzer, gave hundreds of interviews, resulting in print media
and online articles about the start-up company. These and other
actions resulted in more third-party websites posting links back to

Mint.com. These “backlinks” were tracked by search engines and
resulted in additional increases to the site’s ranking on SERPs. Popu-
lar search engines also track a company’s presence on social media
and the extent to which users share information about the company
and its products. Mint’s blog featured content in a variety of inter-
esting formats: videos, podcasts, infographics, and so on. Users on
social news sites like Reddit.com frequently shared and “upvoted”
interesting infographics from Mint’s blog. Links to other types of blog
content were shared by users on Facebook, Twitter, and other social
media platforms. As a result, Mint’s expanding audiences on Face-
book, Twitter, and other social media platforms further enhanced
the new company’s SERP ranking.

Results
In 2007, Mint launched its new financial services website into a market
where it already enjoyed considerable name recognition and aware-
ness. Within 2 years, the service acquired 1.5 million users and was
purchased by Intuit for $170 million. The company continued its suc-
cessful content marketing strategy, climbing to 10 million users in 2012
and over 20 million users today.

Questions
1. Why did Mint invest the time and effort to publish a financial

services blog almost two years before the launch of its service?

2. How did Mint use social media sites to increase its ranking on the
search results pages of popular search engines?

3. Why did Mint use keywords and phrases associated with personal
finance when creating content for its blog?

4. Why did Mint put so much emphasis on improving the rank of its
website on SERPs?

5. Why did Mint use infographics, videos, and other types of rich
media in its financial services blog?

Sources: Compiled from Sukhraj (2015), Bulygo (2013a), Obi-Azubuike (2016),
Prince (2016), Greene (2016).


6.1 Using Search Technology
for Business Success
Search engines like Google, Bing, Yahoo, and others have traditionally been regarded as a
consumer technology. But search technology has become an important business tool with
many different uses and applications. In this section, you will learn how search engines work
and the role they play in generating revenue and consumer awareness for organizations. You
will also discover how businesses use enterprise search technology to unlock hidden content
within their organizations. Finally, you will read about how search and Internet technology is
evolving to provide more accurate and useful results.

How Search Engines Work
People use the term search engine to refer to many different kinds of information retrieval (IR)
services that find content on the World Wide Web. These services vary in significant ways.
Understanding how these services differ can improve the quality of results obtained when con-
ducting a search for online information. Listed below is a brief description of different IR ser-
vices for finding Web content:

• Crawler search engines rely on sophisticated computer programs called spiders, crawlers,
or bots that surf the Internet, locating Web pages, links, and other content that are then
stored in the search engine’s page repository. The most popular commercial search engines,
Google and Bing, are based on crawler technology.

• Web directories list Web pages organized into hierarchical categories. Originally, Web direc-
tories were created and maintained by human editors who decided how a website would
be categorized. Today, many Web directories use technology to automate new website list-
ings. Web directories are typically classified as "general" directories that cover a wide range of
topical categories or "niche" directories that focus on a narrow range of topics. Examples
of popular general directories include Best of the Web, JoeAnt, and LookSmart. Wikipedia
maintains a list of general and niche Web directories.

• Hybrid search engines combine the results of a directory created by humans and results
from a crawler search engine, with the goal of providing both accuracy and broad coverage
of the Internet.

• Meta-search engines compile results from other search engines. For instance, Dogpile
generates listings by combining results from Google and Yahoo.

• Semantic search engines are designed to locate information based on the nature and
meaning of Web content, not simple keyword matches. The goal of these search engines
is to dramatically increase the accuracy and usefulness of search results. Semantic search
engines are described in more detail in Section 6.4.

Web Directories
Before crawler search engines became the dominant method for finding Web content, people
relied on directories created by human editors to help them find information. Web directories are
typically organized by categories (for instance, see the categories listed on Best of the Web). Web
page content is usually reviewed by directory editors prior to its listing in a category to make sure
it is appropriate. This reduces the number of irrelevant links generated in a search. The review
process, however, is very slow compared to the automated process used by crawlers (described
in the following section). As a result, the listings in a Web directory represent a relatively small
portion of the Web. Directories are particularly useful when conducting searches on a narrow
topic, such as identifying suppliers of a specific type of product or service. Companies that need

Search engine an application
for locating Web pages or other
content (e.g., documents, media
files) on a computer network.
Popular Web-based search
engines include Google, Bing,
and Yahoo.

Spiders also known as
crawlers, Web bots, or simply
“bots,” spiders are small computer
programs designed to perform
automated, repetitive tasks over
the Internet. They are used by
search engines for scanning Web
pages and returning information
to be stored in a page repository.


to identify vendors or suppliers may consult a niche Web directory created for just this purpose.
For example, see the Web directory at business.com.

How Crawler Search Engines Work
The two most popular commercial search engines on the Web, Google and Bing, are based
on crawler technology. Behind the relatively simple interfaces of these two powerful search
engines, a great deal of complex technology is at work (Figure 6.1). Because modern search
engines use proprietary technology in the race to stay ahead of competitors, it is not possible
to tell exactly how they decide what websites will appear in a SERP. While they each produce
different results, it is possible to describe the basic process shared by most crawler search
engines. The following description is based on publications by Grehan (2002) and Oak (2008).

1. The crawler control module assigns Web page URLs to programs called spiders or bots. The
spider downloads these Web pages into a page repository and scans them for links. The
links are transferred to the crawler control module and used to determine where the spi-
ders will be sent in the future. (Most search engines also allow Web masters to submit
URLs, requesting that their websites be scanned so they will appear in search results.
These requests are added to the crawler control queue.)

2. The indexer module creates look-up tables by extracting words from the Web pages and
recording the URL where they were found. The indexer module also creates an inverted
index that helps search engines efficiently locate relevant pages containing keywords
used in a search. (See Figure 6.2 for examples of an inverted index.)

3. The collection analysis module creates utility indexes that aid in providing search results.
The utility indexes contain information about things such as how many pages are in a web-
site, the geographic location of the website, number of pictures on a Web page, Web page
length, or other site-specific information the search engine may use to determine the rel-
evance of a page.

4. The retrieval/ranking module determines the order in which pages are listed in a SERP.
The methods by which search engines determine website listing order vary, and the specific
algorithms they use are often carefully guarded trade secrets. In some cases, a search
engine may use hundreds of different criteria to determine which pages appear at the top
of a SERP. Google, for instance, claims to use over 200 “clues” to determine how it ranks
pages (Google.com, 2014).

Page repository a data
structure that stores and manages
information from a large number
of Web pages, providing a fast and
efficient means for accessing and
analyzing the information at a
later time.

Crawler control module
a software program that controls
a number of “spiders” responsible
for scanning or crawling through
information on the Web.

FIGURE 6.1 Components of a crawler search engine: spiders/crawlers download pages from the
WWW into the page repository; the crawler control module manages submitted and discovered URLs;
the indexer and collection analysis modules build the text, structure, and utility indexes; and a query
interface formulates user queries, ranks matches, and returns results (Adapted from Grehan, 2002).


5. Web pages retrieved by the spiders, along with the indexes and ranking information, are
stored on large servers (see IT at Work 6.1).

6. The query interface is where users enter words that describe the kind of information they
are looking for. The search engine then applies various algorithms to match the query string
with information stored in the indexes to determine what pages to display in the SERP.
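
The process described in steps 1 through 6 can be made concrete with a toy sketch. The Python
code below is only an illustration of the general idea, not how any commercial search engine is
actually built: the sample “Web,” the fetch_page function, and the single-machine data structures
are simplified, hypothetical stand-ins for distributed crawlers, page repositories, and indexes.

from collections import defaultdict, deque

# Hypothetical stand-in for a live Web: each URL maps to (page text, outgoing links).
SAMPLE_WEB = {
    "http://a.example": ("true love kindles the passion in my heart", ["http://b.example"]),
    "http://b.example": ("though passion may cool love remains true", []),
}

def fetch_page(url):
    """Pretend spider download; a real crawler would issue an HTTP request here."""
    return SAMPLE_WEB.get(url, ("", []))

def crawl_and_index(seed_urls):
    frontier = deque(seed_urls)          # crawler control queue of URLs to visit
    page_repository = {}                 # URL -> stored page text
    inverted_index = defaultdict(set)    # term -> set of URLs containing the term
    visited = set()

    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        text, links = fetch_page(url)            # step 1: spider downloads the page
        page_repository[url] = text              # ...and stores it in the page repository
        for term in text.lower().split():        # step 2: indexer extracts terms
            inverted_index[term].add(url)
        frontier.extend(links)                   # new links go back to crawler control

    return page_repository, inverted_index

def search(query, inverted_index):
    """Step 6 (greatly simplified): return pages containing every query term."""
    terms = query.lower().split()
    if not terms:
        return []
    matches = set.intersection(*(inverted_index.get(t, set()) for t in terms))
    return sorted(matches)

repo, index = crawl_and_index(["http://a.example"])
print(search("true love", index))    # both sample pages contain "true" and "love"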

Each search engine utilizes variations and refinements of the aforementioned steps in
an attempt to achieve superior results. The Web search industry is highly competitive and
the proprietary advances in search technology used by each company are closely guarded
secrets. For instance, even the first step in the process, crawling the Web for content, can vary
greatly depending on the strategic goals of the search engine. Some search engines limit the
number of pages scanned at each website, seeking instead to use limited computing power
and resources to cover as many websites as possible. Other search engines program their spi-
ders to scan deep into each website, seeking more complete coverage of each site’s content.
Still other search engines direct their spiders to seek out websites that contain certain types
of content, such as government sites or shopping (e-commerce) sites. Another decision that
search engines make regarding spiders is the amount of resources directed at searching
new websites versus devoting resources to exploring previously indexed pages for updates
or changes.

One of the many challenges faced by large commercial search engines is storage. In the
simplest sense, the crawler approach to search requires a company to store a copy of the Web
in large data centers. In addition to the petabytes of storage required to maintain this copy of
the Web, the search engine must also store the results of its indexing process and the list of
links for future crawls.

Petabyte a unit of measurement
for digital data storage. A petabyte
is equal to one million gigabytes.

FIGURE 6.2 Search engines use inverted indexes to efficiently locate Web content based on search
query terms. In the example, three short documents are indexed by recording, for each term (e.g.,
“heart,” “love,” “passion,” “true”), the documents and word positions where it appears; a query for
“True love” is then answered by retrieving the documents that contain both terms (documents 2
and 3) and ranking them by term position.


IT at Work 6.1

Google Data Centers
Not only does Google maintain a copy of the Internet for its search
engine services, it is also constantly updating a map of the entire
planet for users of its popular Google Earth application. In addition,
the company is making a full-text, searchable copy of over 129,864,880
known books, equal to 4 billion pages or 2 trillion words. And then
there are applications like Gmail, serving roughly 425 million people
and YouTube, where 300 hours of video are uploaded every minute!
Add all this up, and Google is facing perhaps the biggest data storage
challenge ever. So where does Google store all of these data?

Challenges: Energy, Performance, and Security
Information collected by Google is housed on over 1 million servers
spread across 12 different facilities worldwide. The facilities are
large, factory-like installations containing row upon row of racked
and stacked servers. Cooling systems, required to keep servers from
overheating, are a significant component of any large data center
(Figure 6.3). Google pioneered the software systems that connect
the company’s servers and make it possible for various applications
to access data stored on the machines. Unlike other companies that
purchase servers from outside suppliers, Google builds its own.
Based on its experience creating the hardware, software, and facil-
ities necessary to power the company on a global scale, Google is
recognized as a leader in data center operations.

The company’s data centers, including the servers, are built
with energy efficiency, reliability, and performance in mind. As
Google is a leading provider of Internet services, its data infrastruc-
ture must keep up with growing consumer demand for speedy
performance and reliability. A typical Google search delivers millions
of pages of results in less than half a second. Consumer expectations
for performance have grown so high that waiting more than a few
seconds for an e-mail to load or a search to run can cause frustration.

More recently, Google has had to contend with revela-
tions that the U.S. National Security Agency (NSA) breached
its server network security. This follows cyberattacks in 2010
and 2011 by hackers suspected of being associated with the
Chinese government. Protecting company data from criminals
is a significant challenge in itself, but Google is understandably

frustrated by the fact that it must now fight off cyber-attacks from
two world superpowers, one of which is its own government.

Environmental Impact
Industrywide, data centers used 70 billion kilowatt-hours of
electricity in 2014, representing a 4% increase from the amount
used in 2012. Industrywide, data center energy use and the related
environmental impact have become an issue of growing concern.
Google is widely recognized as operating some of the most efficient
data centers in the world, but many critics are disturbed by the
industry’s overall level of energy consumption. According to some
estimates, data centers account for about 2% of the world’s energy
use and the fast rate of growth is cause for concern (see Figure 6.4).
Google has taken an active approach to reducing its environmental
footprint. Beginning in 2017, Google will source 100% of its energy
needs for offices and data centers from renewable sources. See
Google’s data center Web page https://www.google.com/about/
datacenters for additional information.

Google Data Center Statistics
• Number of servers worldwide: Over 1 million

• Number of data centers: Nine in North America, one in South America, two in Asia, and four
in Europe

• 2016 Capital investment in data centers: Approximately $11 billion

• Data processing volume: Over 100 petabytes a day

• Average energy efficiency: PUE* = 1.12

• Energy use: Continual use of about 260 megawatts of electricity, approximately 0.01% of
global energy consumption

• Energy use comparisons: Owns about 3% of servers worldwide, but uses only about 1% of
data center industry energy

• Renewable energy: Claims that 100% of its energy use comes from renewable sources

*PUE stands for Power Usage Effectiveness. A PUE of 2.0 means that for
every watt of power devoted to computing, an additional watt is spent
on cooling, power distribution, and overhead. The Data Center Industry
average PUE falls between 1.8 and 1.89.

Sources: Jacobson (2010), Grifantini (2011), Newman (2011), Schneider
(2011), Glanz (2011, 2012), Gallagher (2012), Venkatraman (2012), Anthony
(2013), Miller (2013), Sverdlik (2016).

Bloomberg/Getty Images
FIGURE 6.3 Pipes pass through the chiller plant at the
Google, Inc., data center in Changhua, Taiwan. Google doubled
its spending plan for its new data center in Taiwan to $600
million amid surging demand from Asia for its Gmail and
YouTube services.

© asharkyu/Shutterstock

FIGURE 6.4 New, large-scale data centers being constructed
for companies like Google, Microsoft, and Facebook house
thousands of servers and are creating concern among
environmentalists over increases in energy consumption.


Why Search Is Important for Business
Search engines have become a part of our everyday life. They are free, easy to use, and become
more powerful and effective every day. Most of us take them for granted and are generally
unaware of the complex technologies that power these tools. For the average Web user,
it may not be vitally important to understand how search technology is evolving. But for
business managers, understanding the potential power of search technology is crucial and
becoming more important every day. It has long been recognized that access to information
is a competitive advantage. Search technology impacts business in each of the following ways:

• Enterprise search—finding information within your organization
• Recommendation engines—presenting information to users without requiring them to conduct
an active search
• Search engine marketing (SEM)—getting found by consumers on the Web
• Web search—finding crucial business information online

Each of these important search technology applications is described in what follows.

Enterprise Search Enterprise search tools are used by employees to search for and
retrieve information related to their work in a manner that complies with the organization’s
information-sharing and access control policies. Information can come from a variety of sources,
including publicly available information, enterprise information (internal records) found in
company databases and intranets, as well as information on individual employee computers
(Delgado, Renaud, & Krishnamurthy,  2014). Enterprise search tools allow companies to gain
competitive advantages by leveraging the value of internal information that would otherwise
remain hidden or “siloed.” Information can be inaccessible as a result of incompatible technol-
ogies in various units, lack of coordination or cooperation between units, security concerns,
and concerns about the cost of making information accessible (Thomas, 2013; Walker, 2014).

In most organizations today, a large portion of employees are “knowledge workers”
(e.g., business analysts, marketing managers, purchasing agents, IT managers, etc.). Access
to information has a significant impact on their productivity. Enterprise search tools allow
workers to extract internal information from databases, intranets, content management sys-
tems, files, contracts, policy manuals, and documents to make timely decisions, adding value
to the company and enhancing its competitive advantage.

Structured versus unstructured data One of the challenges encountered by devel-
opers of enterprise search tools is that information is not always in the same format. Data exist
in two formats: structured or unstructured. Structured data can be defined as highly orga-
nized information, which is easily searchable using simple search engine algorithms or related
procedures. Unstructured data, sometimes called messy data, refers to information that is not
organized in a systematic or predefined way. Unstructured data files are also more likely to con-
tain inaccuracies or errors. Examples of unstructured data include e-mails, articles, books, and
documents. Unstructured data accounts for a majority of all the data present on computers
today. Originally, enterprise search tools worked only with structured data. Many newer sys-
tems claim to work with unstructured information as well, although there is great variability in
terms of how well they actually do this.
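
A brief sketch of the difference: structured records can be filtered with an exact query over named
fields, while unstructured text falls back to keyword matching (or more sophisticated text analytics)
and is less precise. The invoice records and e-mail messages below are hypothetical examples.

# Structured data: organized into predefined fields, so exact queries are easy.
invoices = [
    {"invoice_id": 1001, "vendor": "Acme Supply", "amount": 2500.00},
    {"invoice_id": 1002, "vendor": "Globex", "amount": 480.75},
]
over_1000 = [r for r in invoices if r["amount"] > 1000]   # simple, reliable filter

# Unstructured data: free text with no predefined fields; search falls back to
# keyword matching and is more error-prone.
emails = [
    "Please approve the Acme Supply invoice for $2,500 by Friday.",
    "Lunch next week? Also, Globex called about their outstanding balance.",
]
mentions_acme = [msg for msg in emails if "acme" in msg.lower()]

print(over_1000)
print(mentions_acme)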

Security issues in enterprise search Unlike a Web search, enterprise search tools must
balance the goal of making information widely available throughout the organization with the
need to restrict access based on an employee’s job function or security clearance. Limiting access
to certain documents or data is referred to as access control. Enterprise search tools introduce
the potential for a number of security breaches or access of unauthorized information. Most of
these can be addressed as long as the organization’s IT workers install and maintain the search
system’s security features, including security integrations with other enterprise programs. An
audit of request logs should be conducted regularly to look for patterns or inconsistencies.
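
A minimal sketch of the access-control idea, assuming each indexed document carries a list of roles
permitted to view it; the documents, roles, and filtering function are hypothetical illustrations, not
the mechanism of any particular enterprise search product.

# Each document carries an access-control list (ACL) of roles allowed to see it.
documents = [
    {"title": "2024 salary bands", "acl": {"hr", "executive"}},
    {"title": "Employee travel policy", "acl": {"all_staff"}},
    {"title": "Merger due diligence notes", "acl": {"executive", "legal"}},
]

def permitted_results(matches, user_roles):
    """Drop any matching document the user is not cleared to see."""
    roles = set(user_roles)
    return [d for d in matches if d["acl"] & roles]

# A marketing manager searching the whole index sees only the travel policy.
print(permitted_results(documents, ["marketing", "all_staff"]))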


Enterprise search vendors Market analysts Frost and Sullivan (Prnewswire.com, 2013)
estimate that the global market for enterprise search tools was over $1.47 billion in 2012; it is
predicted that the market will grow to over $5 billion by 2020. Clearly, organizations around
the world recognize the value of this technology. Several different companies make and sell
enterprise search systems, Autonomy, Google, Coveo, and Perceptive Software being the top
contenders (Andrews & Koehler-Kruener, 2014). Vendors can be broken down into the following
three categories:

• Specialized search vendors (for instance, Attivio, Endeca, Vivisimo): Software designed to
target specific user information needs

• Integrated search vendors (for instance, Autonomy, IBM, and Microsoft): Software designed
to combine search capabilities with information management tools

• Detached search vendors (for instance, Google, ISYS): Software designed to target flexi-
bility and ease of use

With so many options available for enterprise search, it is important that organizations
conduct a careful needs analysis prior to acquisition.

Recommendation Engines Recommendation engines represent an interesting
twist on IR technology. Unlike Web search engines that begin with a user query for information,
recommendation engines attempt to anticipate information that a user might be interested in.
Recommendation engines are used by e-commerce sites to recommend products; news orga-
nizations to recommend news articles and videos; Web advertisers to anticipate the ads people
might respond to; and so on. They represent a huge potential for businesses and developers.
While the use of recommendation engines is widespread, there is still much work to be done to
improve the accuracy of these fascinating applications. You can read more about recommen-
dation engines in Section 6.5.
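
One common technique behind such engines is item-to-item collaborative filtering: recommend
items that frequently appear together in past behavior. The sketch below is a general illustration
of that idea only, not any particular vendor's method, and the purchase baskets are hypothetical.

from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: each set is one customer's basket.
baskets = [
    {"tablet", "case", "stylus"},
    {"tablet", "case"},
    {"tablet", "stylus"},
    {"laptop", "case"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, top_n=2):
    """Return the items most often seen together with the given item."""
    scored = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:top_n]]

print(recommend("tablet"))   # items most often bought along with a tablet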

Search Engine Marketing Most traditional advertising methods target customers
who are not actively engaged in shopping for a product. Instead, they are watching television,
listening to the radio, reading a magazine, or driving down the road, paying little attention to
the billboards they pass. To most people, advertising represents an unwelcome interruption.
On the other hand, people using search engines are actively looking for information. As a result,
they are much more likely to be interested in product and service information found in SERPs
as long as it is related to the topic they are searching for. Efforts to reach this targeted audience
are much more likely to produce sales. That’s why search engine marketing (SEM) has become
an important business strategy. Industry experts report that people generally engage in three
basic types of searches:

1. Informational search Using search engines to conduct research on a topic. This is the
most common type of search.

2. Navigational search Using a search engine to locate particular websites or Web pages.

3. Transactional search Using a search engine to determine where to purchase a product
or service.

You might think businesses would be primarily interested in transactional searches, but all
three types are important and play a key role in the buying process. Say you are interested in
purchasing a new tablet computer. Your first step is likely to engage in an informational search,
attempting to learn about the product category of mobile tablet devices. Businesses should
offer content on their websites and social media sites for consumers seeking general product
information. An informational search also represents an opportunity to influence consumers
early in the purchasing process.

After researching a product category, you might try finding websites of particular com-
panies to learn more about individual tablet computer brands (navigational search). Com-
panies need to design their websites so that they can be found easily by search engines.

Search engine marketing
(SEM) a collection of online
marketing strategies and tactics
that promote brands by increasing
their visibility in SERPs through
optimization and advertising.


Finally, you might try to determine where to buy your tablet computer by searching on
terms like “lowest price,” “free shipping,” and so on. This is an example of a transactional search.

Search engine marketing (Figure 6.5) consists of designing and advertising a Web page,
with the goal of increasing its visibility when consumers conduct the three types of searches
just described. SEM strategies and tactics produce two different, but complementary outcomes:

1. Organic search listings are the result of content and website design features intended to
improve a site’s ranking on SERPs that result from specific keyword queries. No payments
are made to the search engine service for organic search listings.

2. Paid search listings are a form of advertising and are purchased from search engine
companies. The placement and effectiveness of paid search ads on SERPs are a function of
several factors in addition to the fees paid by advertisers. You will read more about these
factors in Section 6.3.

Businesses utilize search engine optimization (SEO) to improve their website’s organic
listings on SERPs. SEO specialists understand how search engines work and guide companies
in designing websites and creating content that will produce higher organic SERP rankings than
competitive websites.

Paid search listings are often referred to as pay-per-click (PPC) advertising because adver-
tisers pay search engines based on how many people click on the ads. Typically, PPC ads are
listed separately from organic search results. Managing an effective PPC ad campaign involves
making strategic decisions about what keyword search queries you want to trigger the display
of your ad. You will read more about PPC or paid search advertising in Section 6.3.

Social media optimization refers to strategies designed to enhance a company’s standing
on various social media sites. Increasingly, search engines evaluate a company’s presence on
social media to determine its reputation, which in turn influences how the company is ranked
in SERPs. You will read more about social media strategies in Chapter 7.

Growth of search engine marketing As companies begin to realize the power of SEM,
more money is being spent on this highly effective strategy. In 2016, businesses spent an esti-
mated $65 billion on SEO services to improve the rank or listing order of their organic listings on
SERPs. This figure is expected to rise to almost $80 billion in 2020 (Sullivan, 2016). In addition,
the research firm eMarketer (2016) estimates that spending on PPC search advertising reached
$86.25 billion in 2016, an increase of 15.4% from the year before. Both types of spending, SEO
and PPC, illustrate how important search marketing is to businesses these days. Companies
now spend more on SEM than they do on television or print advertising. Unlike most traditional
advertising methods, return on investment (ROI) can be calculated for SEM by tracking click-
through rates (CTRs), changes in site traffic, and purchasing behavior.

Click-through rates (CTRs) the
percentage of people who click
on a hyperlinked area of a SERP
or Web page.

FIGURE 6.5 Search engine marketing
integrates three different strategies: search
engine optimization, pay-per-click advertising,
and social media optimization.


Mobile Search and Mobile SEO Mobile devices have become ubiquitous. With the
emergence of smartphones and tablet computers, mobile devices now account for over half of
all Web traffic. In some developing countries, mobile devices account for an even larger share
of Internet use since they are less expensive than computers. Since more people are using mo-
bile devices to surf the Web, it should come as no surprise that more Internet searches are
conducted using mobile devices instead of computers.

With the dramatic increase in mobile device usage, companies need to make sure their
websites and content can be found via mobile search. This means optimizing mobile websites
differently from desktop sites. Two issues essential to mobile SEO include:

1. Properly configuring the technical aspects of the mobile site so that it can be crawled and
indexed by search engines.

2. Providing content that is useful to people using mobile devices. Webmasters should con-
sider how people use their mobile devices differently from computers and adjust content
on their mobile websites accordingly. For instance, if consumers are likely to use their
mobile device to check product reviews while shopping in a store, make this information
easy to find on the mobile website.

When designing a mobile site for e-commerce, Web developers should make sure that
information about store location, product reviews, and promotional offers is easily available and
optimized so that it will appear in a mobile SERP. Mobile shoppers also use barcode scanning
apps as a kind of mobile search engine for locating product reviews and price comparisons while
shopping in stores. This practice, called showrooming, is becoming increasingly popular with con-
sumers and creating a great deal of frustration and worry on the part of brick-and-mortar retailers.

Social Search Most major social media websites (i.e., Facebook, YouTube, Twitter,
LinkedIn, etc.) have search engines designed to help users find content on their platforms. Of
course, some search tools are better than others. It probably comes as no surprise that Face-
book users have access to some advanced search features. People can search for friends by
name or find information related to their friends using more complex queries such as “Movies
liked by friends who liked The Godfather” or “Music liked by friends who liked Lady Gaga.” Face-
book search can be used to find services, events, places, and groups. You can use it to find some
place to eat with a search phrase like “Seafood restaurants in New Orleans.” Clearly, Facebook
hopes to leverage the content and connections created by users to power a search tool that
people will use instead of Google, Bing, or some other general Web search engine.

Recently, Facebook added a new image search feature powered by artificial intelligence
that allows users to search for pictures using words that describe what’s in the picture instead
of relying on tags and captions. For instance, you might search for “Santa Claus photo” and
the search engine will be able to find photos with Santa Claus even if no tags or text associate
the picture with Santa Claus. Developers say that eventually the image search will be able to
recognize photos based on objects, actions (e.g., walking, running, dancing), and other descrip-
tive terms. Eventually, this technology could be used to perform similar searches for video and
other immersive formats (Candela, 2017).

Facebook undoubtedly can devote more resources to innovations of this nature than
other social media platforms. However, while other platforms may take longer to develop
sophisticated social search tools, most have the same motivations as Facebook when it comes
to enhancing user experience and providing a mechanism for highlighting content from individ-
uals and organizations with commercial interests. With over 2 billion searches conducted on
Facebook each day, businesses will undoubtedly be willing to pay for ways to reach this sizable
social media audience in much the same way that they currently advertise on Google, Bing, and
other Internet search engines (Kraus, 2015; Constine, 2016).

Personal Assistants and Voice Search Major Internet technology firms Apple,
Amazon, Google, and Microsoft and a host of smaller firms have launched intelligent personal
assistant (IPA) systems that threaten to disrupt conventional SEM paradigms. IPA software
is typically designed to help people perform basic tasks like turning on/off lights and small


appliances, activating household alarm systems, and searching the Internet for music, videos,
weather, and other types of information. While IPAs are still in the growth stages of the product
life cycle, forecasted demand for the foreseeable future seems strong.

The typical IPA system is a voice-activated program that uses commands that approx-
imate natural language. For instance, to learn about the weather, you might ask Amazon’s
IPA, “Alexa, what’s the weather for this weekend?” To get Apple’s IPA to play a specific music
genre, you might say, “Siri, play some R&B music.” In the not too distant future, we can expect
voice-activated IPAs will be integrated with mobile devices, televisions, automobiles, and even
hotel rooms.

Businesses that have become skilled at using SEO and PPC campaigns to drive traffic to their
websites will have to go back to the drawing board to figure out how the rise in voice search will
affect some fundamental marketing strategies. Currently, IPAs act like a kind of filter, screening
search results and often basing answers on a single source. Just as businesses once faced the
challenge of reformatting website content for smaller screens on mobile devices, they must
now determine how to serve up information in a format optimized to make it attractive to a
variety of IPAs acting as proxies for their owners.

Web Search for Business Commercial search engines and Web directories are use-
ful tools for knowledge workers in business. To use search engines effectively, workers should
familiarize themselves with all the features available on the search engine they use. Since
Google is the most popular search engine, we highlight some of those features in the following
list. Many of these features are also available on Bing.com.

• Focused search You can focus your search to information in different formats—Web
pages, videos, images, maps, and the like—by selecting the appropriate navigation button
on the SERP page.

• Filetype If you are looking specifically for information contained in a certain file format,
you can use the “filetype:[file extension]” command following your keyword query. For in-
stance, the search “private colleges filetype:xls” will produce links to MS Excel files with
information related to private colleges. Use this command to find Adobe PDF files (.pdf), MS
Word files (.docx), MS PowerPoint files (.pptx), and so on (see the short sketch after this list).

• Advanced search To narrow your search, go to the Advanced Search panel. From this
page, you can set a wide range of parameters for your search, including limiting the search
to certain domains (e.g., .gov, .org, .edu), languages, dates, and even reading level. You can
also use this to narrow your search to a particular website.

• Search tools button Allows you to narrow your results to listings from specific locations
or time frames.

• Search history Have you ever found a page using a search engine, but later had trouble
finding it again? If you are logged into your Google account while using the search engine,
it’s possible to review your search history. It will show you not only your search queries but
also the pages you visited following each query.
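
The filetype: and site: operators noted in the list above use standard search-engine query syntax
and can be combined into precise queries. The small helper below is a hypothetical illustration of
assembling such query strings; it is not part of any search engine's API.

def build_query(keywords, filetype=None, site=None):
    """Assemble a search query string using common search-engine operators."""
    parts = [keywords]
    if filetype:
        parts.append("filetype:" + filetype)   # e.g., filetype:xls
    if site:
        parts.append("site:" + site)           # e.g., site:edu limits results to .edu sites
    return " ".join(parts)

print(build_query("private colleges", filetype="xls"))
# private colleges filetype:xls
print(build_query("tuition statistics", filetype="pdf", site="edu"))
# tuition statistics filetype:pdf site:edu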

These are just a few of the many features you can use to conduct a power search. While you
are in college, take the time to become proficient with using different search engine features.
Not only will it help with your immediate research needs, it will help you in your career as well.
At the end of this chapter, we include information for a free online Power Search course offered
by Google. This is a good way to enhance your ability to find the information you need.

Finding intellectual property Your business may have an interest in protecting certain
kinds of intellectual property being used without permission on the Web. This might include
confidential reports, images, copyrighted blog posts, creative writing (e.g., poetry, novels, etc.),
and so on. You can use search engines to find where someone may have posted your intellec-
tual property on the Web without permission (see Osher, 2014). You can search for text-based
work by simply using queries containing strings of text from the material you’re looking for.
Images can be found by using Google’s reverse image search engine. Tin Eye is an alternative
reverse image search engine with a number of interesting features.


Real-time search Sometimes you need information about things as they happen. For in-
stance, you may be interested in monitoring news stories written about your company or you
might need to know what people are saying about your brand or a political candidate on Twit-
ter. For these situations, you’ll need a real-time search tool.

Say your company wants to explore accepting Bitcoin payments. (Bitcoin is a digital
currency that was launched in 2009.) After engaging in a traditional Web search to learn about
the currency, you decide you want to learn about public interest in Bitcoin and find news
stories that have recently been published about the currency. You might consider using the
following tools:

• Google Trends This tool will help you identify current and historical interest in the topic
by reporting the volume of search activity over time. Google Trends allows you to view the
information for different time periods and geographic regions.

• Google Alerts Use Google Alerts to create automated searches for monitoring new Web
content, news stories, videos, and blog posts about some topic. Users set up alerts by
specifying a search term (e.g., a company name, product, or topic), how often they want to
receive notices, and an e-mail address where the alerts are to be sent. When Google finds
content that matches the parameters of the search, users are notified via e-mail. Bing has a
similar feature called News Alerts.

• Twitter Search You can leverage the crowd of over 650 million Twitter users to find
information as well as gauge sentiment on a wide range of topics and issues in real time.
Twitter’s search tool looks similar to other search engines and includes an advanced search
mode. In addition to real-time search, the Twitter search tool is also an example of social
search, which was explained earlier in the chapter.

Social bookmarking search Social bookmarking sites like Diigo provide a way for users
to save links to websites they want to access at a later time. When saving page links, users tag
them with keywords that describe the page’s content. The bookmarked links form a graph of
content on the Web that can be used by others. Because the Web pages are tagged by humans,
search results are often more relevant than results from commercial search engines. Pinterest
is a variation on the social bookmarking idea, allowing users to save and share images they find
online. You can find information about various topics by searching Pinterest to see what other
users have collected on the subject.

Vertical search As described previously, large commercial search engines use indicators
of popularity or reputation to determine website quality. This seems to work well for a gener-
alized Web search, but it might not be effective when users search on very specific topics, such
as a rare disease, which, by definition, does not generate a lot of activity on the Web. Crawlers do
not often index pages in the lower levels of less popular websites. Vertical search engines are
programmed to focus on Web pages related to a particular topic and to drill down by crawling
pages that other search engines are likely to ignore. Vertical search engines exist for a variety
of industries. Ironically, the best way to find a vertical search engine is to search for it on a
commercial search engine like Google or Bing.

Questions

1. What is the primary difference between a Web directory and a crawler-based search engine?

2. What is the purpose of an index in a search engine?

3. Why are companies increasingly interested in enterprise search tools capable of handling unstruc-
tured data?

4. What is the difference between SEO and PPC advertising?

5. Describe three different real-time search tools.


6.2 Organic Search and Search Engine
Optimization
The goal of SEO practitioners is to help organizations increase traffic to their websites. They
accomplish this by optimizing websites in an effort to increase visibility and ranking on SERPs.
Using Web analytics programs like Google Analytics, companies can determine how many peo-
ple visit their site, what specific pages they visit, how long they spend on the site, and what
search engines are producing the most traffic (see Figure 6.6). More sophisticated SEO prac-
titioners will also attempt to determine what keywords or phrases generated traffic to their
website. These are just a few of the many metrics used to measure the effectiveness of SEO
strategies. In the sections that follow, we will use the most popular search engine, Google, to
explain the basics of SEO. Most of what we write, however, will also apply to other popular
search engines.
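
A simplified sketch of the kind of aggregation an analytics tool performs behind the scenes:
grouping visits by referring search engine and averaging time on site. The visit records below are
hypothetical; real tools such as Google Analytics gather this data automatically through page tags.

from collections import defaultdict

# Hypothetical visit records: (referrer, landing_page, seconds_on_site)
visits = [
    ("google", "/products", 95),
    ("google", "/blog/seo-tips", 240),
    ("bing", "/products", 30),
    ("direct", "/", 12),
]

traffic = defaultdict(lambda: {"visits": 0, "total_seconds": 0})
for referrer, page, seconds in visits:
    traffic[referrer]["visits"] += 1
    traffic[referrer]["total_seconds"] += seconds

for source, stats in traffic.items():
    avg = stats["total_seconds"] / stats["visits"]
    print(f"{source}: {stats['visits']} visits, {avg:.0f}s average time on site")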

Strategies for Search Engine Optimization
As mentioned at the beginning of this chapter, all search engines use somewhat different pro-
prietary algorithms for determining where a website will appear in search results. As a result,
it is not possible to tell what specific factors will be used or how much weight they will carry
in determining SERP ranking. Over time, there has been a significant increase in the number
of factors that search engines like Google use to determine how a site is listed on a SERP. The
general consensus among SEO experts is that Google probably uses over 200 different factors.
To make things even more challenging, Google updates its algorithm hundreds of times a year.
This presents somewhat of a moving target for SEO professionals hired to improve the organic
SERP listings of their clients.

Why Does Google Keep Changing Its Algorithm? Google’s overall goal is to
constantly improve the experience of people using its search engine. Over time, Google engi-
neers have developed ways to predict if a website will provide a positive experience for people
using its search engine. Whenever a new way to improve user experience is found, they imple-
ment the change by updating the algorithm. Google also employs sophisticated technologies
like artificial intelligence and semantic search algorithms to enhance the search experience.

© franckreporter/iStockphoto

FIGURE 6.6 Tools like Google Analytics are used to monitor changes in website traffic as a result of
SEO practices.


Artificial intelligence constantly monitors how users respond to search results and modifies
the listing algorithm to improve results. Semantic technology helps Google do a better job
of understanding the content on a Web page and matching that content with the words and
phrases people use to conduct a search.

Ranking Factors: On-Page and Off-Page SEO To understand how Google ranks
website listings in search results, we begin by dividing ranking factors into on-page factors and
off-page factors.

On-page factors are elements of the Web page that can be directly controlled by the pub-
lisher or Web page creator. SEO professionals attempt to improve a website’s SERP listing by opti-
mizing on-page factors related to content, functionality, and HTML programming (Sullivan, 2015).

Content Perhaps one of the biggest changes Google has made to its ranking algorithm
over the years is an increased emphasis on high-quality content. Content marketing is a strategy
that has gained popularity in recent years because of the significant weight assigned to high-
quality content when determining search results and its role in attracting increased Web traffic.
Some specific ways that Google determines if a website has high-quality content include the
following:

• The quality of writing on the Web page
• The presence of relevant keywords and phrases associated with the topic
• How “fresh” or up-to-date the content is
• Use of multiple content formats (i.e., news, video, podcast, blog, and social content)
• Depth or quantity of topical content
• Links that point to other well-respected and trustworthy websites
• The proportion of relevant to irrelevant text about a topic
• Barriers to content have a negative impact on user satisfaction. Examples include making
people register, provide names, or fill out forms to get to content.

Functionality and programming Website functionality has an impact on SERP rankings.
Pages that don’t load quickly or display well on mobile devices are less likely to result in a
positive user experience. Information in a page’s HTML (programming language) source code
also influences ranking algorithms. Functionality and programming can be assessed by factors
such as the following:

• How easily search engine programs can “crawl” the Web page.
• How well the Web page works with mobile devices.
• How quickly the Web page loads.
• Availability of secure (https://) connections for visitors.
• Minimal presence of duplicate content on the website.
• Page URLs that contain keywords.
• Use of topical words and phrases in source code metadata (e.g., title tags, page descriptions,
keywords).
• How frequently users click on a listing in search results. Click-through rates (CTR) are
determined in part by how attractive a website’s listing is on a SERP. The way a SERP listing
looks is the result of the Web page’s HTML source code.

• Hacked websites, sites that infect users with malware, and sites that fail to clean up spam
or irrelevant content in comment sections are all factors that negatively impact user
experience.

Off-page factors can be influenced but not directly controlled by SEO professionals. Many
off-page factors are strongly related to a website’s relevance and credibility. Other off-page
factors are related to personalized search, a relatively new effort by Google to improve user
experience (Sullivan, 2015).


Relevance and credibility Google uses a number of different metrics to determine if a
website is a trustworthy source of information on a particular topic. Many of these metrics are
based on user behavior and how the site is represented on other websites and social media
platforms:

• Backlinks to the target website on other well-respected and trustworthy websites. The
use of backlinks is based on the assumption that people who create website content are
more likely to place links to high-quality websites than poor-quality sites on their Web
pages. Google assigns a PageRank score to each Web page based on the quality and
quantity of backlinks associated with the page. Since the PageRank score is believed to be
a heavily weighted factor, SEO professionals have developed several creative strategies for
increasing legitimate backlinks to their websites while avoiding certain tactics that Google
disapproves of. Google downgrades websites that use methods that artificially inflate their
backlink count. (A simplified sketch of the PageRank calculation follows this list.)

• Click-through rate (CTR) is also an indicator of relevance. Users are more likely to click on
SERP listings related to the information they’re searching for.

• Amount of advertising on the website—Too many ads detract from topical website content.
• Dwell time—This is a measure of how long a user remains on a page. Users stay on pages
with useful content longer than pages that lack useful content.
• Sites listed in respected Web directories are more likely to contain quality content because
they have been reviewed by human editors. Positive comments on review sites like
Yelp.com and Zagat.com also have a positive impact on a website’s reputation.

• High-quality or helpful websites are more likely to be discussed on social media. Examples
include comments on Facebook and Google+, shares, Tweets, Likes, and so on.

• Site traffic—Sites with high-quality content tend to get more traffic over time.
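
Because the original PageRank algorithm was published by Google's founders, its core idea can be
sketched: a page's score depends on the scores of the pages that link to it, computed by repeated
iteration. The link graph below is hypothetical, and Google's real ranking system layers hundreds
of additional signals on top of anything this simple.

def simple_pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the rank passed along by every page that links to p.
            inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * inbound
        rank = new_rank
    return rank

# Hypothetical link graph: the home page and blog both link to the product page.
graph = {
    "home": ["blog", "product"],
    "blog": ["product"],
    "product": ["home"],
}
print(simple_pagerank(graph))   # "product", linked to by both other pages, scores highest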

Personalized search Google uses information about the person conducting the search
in an effort to enhance their experience:

• User location—the country, city or area the user is from
• Past experience—Google SERPs can be influenced by search and Web browsing history
• Social experience—the extent to which the user or people in their network engaged with or
discussed the website favorably on social media, including Google+, Facebook, Twitter,
and so on

Content and Inbound Marketing
The ultimate goal of search engines is to help users find information. Sometimes it seems that
SEO practitioners lose sight of this and spend too much time chasing down hundreds of factors
they think are being used by search engine ranking algorithms. At worst, SEO can represent an
attempt to “game the system” or trick search engines into ranking a site higher than its content
deserves (see the discussion of black hat SEO in the next section).

Perhaps the most important action an organization can take to improve its website’s rank-
ing and satisfy website visitors is to provide helpful content that is current and updated reg-
ularly. When SEO practices are combined with valuable content, websites not only become
easier to find but also contribute to building brand awareness, positive attitudes toward the
brand, and brand loyalty.

Inbound marketing represents an alternative approach to traditional outbound
marketing strategies (e.g., mass media advertising). Inbound marketers attract customers
to their websites with content that is informative, useful, or entertaining. Inbound marketing
campaigns are based on strategies that integrate content generation, SEO, and social media
tactics. In Chapter  7, you will read more about how inbound marketers integrate content,
SEO, and social media strategies in powerful marketing campaigns that deliver sales and
profit. See Figure 6.7.


Black Hat versus White Hat SEO: Ethical Issues in Search
Engine Optimization
Search engines regularly update their algorithms to improve results. Two well-known Google
updates called Panda (released in 2011) and Penguin (released in 2012) were designed to
improve the ranking of websites with quality content and downgrade poor-quality sites. Both
updates are designed to defeat what are commonly referred to as “black hat SEO” tactics.
People who employ black hat SEO tactics try to trick the search engine into thinking a web-
site has high-quality content, when in fact it does not. With stronger detection systems now
in place, websites that use these tactics (or even appear to use them) will be severely down-
graded in Google’s ranking system. Some examples of black hat SEO tactics are defined in the
following list:

Link spamming—Generating backlinks for the primary purpose of SEO, not adding value
to the user. Black hat SEOs use tricks to create backlinks. Some examples include adding
a link to a page in the comments section of an unrelated blog post, or building sites called
“link farms” solely for the purpose of linking back to the promoted page.
Keyword tricks—Black hat SEOs will embed several high-value keywords on pages with
unrelated content to drive up traffic statistics. For instance, an e-commerce site might
embed words like “amazon” (a word that frequently shows up in search queries) in an
attempt to get listed on SERPs of people looking for amazon.com.
Ghost text—This tactic involves adding text on a page that will affect how a website is
listed on SERPs. The text may not have anything to do with the real content of the page, or
it may simply repeat certain words to increase the content density. The text is then hidden,
usually by making it the same color as the background.
Shadow pages—Also called “ghost pages” or “cloaked pages,” this black hat tactic involves
creating pages that are optimized to attract lots of people. The pages, however, contain a
redirect command so that users are sent to another page to increase traffic on that page.

These particular tactics are no longer effective as a result of updates to the Google ranking
system. Most likely, other search engines have adopted similar measures. However, there will
always be people who take shortcuts attempting to achieve higher SERP rankings. Businesses
must be careful when hiring SEO consultants or agencies to make sure they do not use prohib-
ited SEO techniques. When these actions are discovered, Google and other search engines will
usually punish the business by dramatically reducing their visibility in search results.

FIGURE 6.7 Inbound marketers use valuable content, search engine optimization, and social media
to attract customers. The figure contrasts the two approaches. Inbound marketing: communication
is interactive (two-way); marketers promote the company by educating or entertaining; typical
strategies are content, social media, and SEO; customers seek out the business. Outbound marketing:
communication is one-way; businesses broadcast messages that interrupt customers; typical
strategies are TV, radio, print, outdoor, and cold calls; businesses seek out customers.


Questions

1. Search engines use many different “clues” about the quality of a website’s content to determine how
a page should be ranked in search results. Explain how a search engine uses specific factors to deter-
mine the quality of a website’s content.

2. SEO professionals strive to increase a Web page’s PageRank score which is based on the quality and
quantity of backlinks. Explain what a backlink is and why search engines use the PageRank score to
determine the order in which websites are listed in SERPs.

3. Explain why the so-called black hat SEO tactics are ultimately short-sighted and can lead to significant
consequences for businesses that use them.

4. What is the fundamental difference between on-page and off-page SEO factors?

5. Explain why providing high-quality, regularly updated content is the most important aspect of any
SEO strategy.

6.3 Pay-Per-Click and Paid Search
Strategies
In addition to organic listings, most search engines display paid or sponsored listings on their
SERPs. These advertisements provide revenue for the search engine and allow it to offer Web
search services to the general public for free. They also provide a way for smaller organizations
with new websites to gain visibility on SERPs while waiting for their SEO strategies to produce
organic results. Most major search engines differentiate organic search results from paid ad list-
ings on SERPs with labels, shading, and placing the ads in a different place on the page. Some
critics have complained that paid advertisements receive preferential page placement and are
not clearly distinguished from organic listings. However, at the time of this publication, it is
easy to distinguish ads from organic results on Google and Bing SERPs. Defenders of the search
engine companies argue that since the paid ads make it possible for everyone to use search
services for free, the preferential page placement is justified.

Creating a PPC Advertising Campaign
There are five steps to creating a PPC advertising campaign on search engines.

1. Set an overall budget for the campaign.
2. Create ads (most search engine ads are text only, but this is likely to change in the future).
3. Select keywords and other parameters associated with the campaign.
4. Set up billing account information.
5. Modify keywords and ad copy based on results.

Search advertising allows businesses to target customers who are likely to purchase their
products. They do this by selecting keywords that correspond to search queries that potentially
identify someone as a customer. For instance, a company that sells women’s purses may want to
appear on a SERP when someone conducts a search using any of the following terms or phrases:

• Purse
• Handbag
• Women’s purses
• Ladies’ purses
• Designer purses
• Designer handbags


Google and other search engines provide advertisers with tools for evaluating the impact
of different keywords or phrases. These tools typically display information about how often
people use the word in a search and also recommend alternative words to consider using in
the campaign. Advertisers “bid” on having their ads appear when someone searches on one
of their keywords. Higher bids result in a greater probability that the ad will appear in search
results associated with the keyword. However, this might also deplete the advertiser’s budget
more quickly. On the other hand, if a bid is too low, the ad might not appear at all. Keyword
tools usually provide information about typical bid prices for each keyword or phrase. Smart
advertisers start with a modest bid and increase it over time to achieve the ad placement rate
they desire.

The likelihood of ad placement is also influenced by a quality score representing the
search engine’s estimate of how successful the ad will be. Quality scores are determined by
factors related to ad relevance and user experience. Relevant ads closely match the
intent of the user’s search. The expected CTR indicates how likely the ad will be clicked on.
The user’s landing page experience is determined by things such as how relevant, transparent,
and easy-to-navigate the page is. According to Google, quality scores are determined by sev-
eral factors:

• Expected keyword CTR
• The past CTR of your URL
• Past effectiveness (overall CTR of ads and keywords in the account)
• Landing page quality (relevance, transparency, ease of navigation, etc.)
• Relevance of keywords to ads
• Relevance of keywords to customer search query
• Geographic performance—account success in geographic regions being targeted
• How well ads perform on different devices (quality scores are calculated for mobile,
desktop/laptop, and tablets)

Relevant ads that produce sales are good for all parties. The search engine makes more
money from clicked ads, and the advertiser benefits from increased revenue, lower costs-per-click,
and more favorable ad placement. When ads are relevant and landing pages are functional and
contain relevant information, customers are more likely to find and purchase what they are
looking for.
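
As a rough illustration of how bids and quality scores interact, consider ranking ads by the product
of the two. This is a deliberate simplification of real ad auctions, which also factor in thresholds,
ad formats, and other signals, and the advertisers and numbers below are hypothetical.

# Hypothetical advertisers bidding on the same keyword.
ads = [
    {"advertiser": "BagWorld", "bid": 2.50, "quality_score": 4},
    {"advertiser": "PurseHub", "bid": 1.75, "quality_score": 9},
    {"advertiser": "StyleMart", "bid": 3.00, "quality_score": 3},
]

# Rank ads by bid multiplied by quality score, a simplified "ad rank".
for ad in ads:
    ad["ad_rank"] = ad["bid"] * ad["quality_score"]

for position, ad in enumerate(sorted(ads, key=lambda a: -a["ad_rank"]), start=1):
    print(position, ad["advertiser"], round(ad["ad_rank"], 2))
# A lower bid with a high quality score (PurseHub) can outrank a higher bid.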

In addition to selecting keywords and setting bid prices, advertisers also set parameters
for the geographic location and time of day they want the ad to appear. These factors allow for
additional customer targeting designed to help advertisers reach the consumers most likely to
purchase their products.

Companies need to consider the fit between ad content and landing page content and
functionality. For instance, sometimes companies create product-oriented ads, but then link
to the main page of their website instead of a page with information about the product in the
ad. Other factors include landing page design, call to action (CTA) effectiveness, and quality
of the shopping cart application. It does not make sense to spend money on a PPC campaign
designed to drive consumers to an unattractive and dysfunctional website.

One of the attractive features of PPC ad campaigns is that managers can monitor results
in real time and make adjustments to the campaign parameters if necessary. Advertisers fre-
quently set up A/B tests to evaluate the relative effectiveness of two different ads. After a period
of time, the advertiser checks to see which ad is producing better results and discontinues
use of the less effective ad for the remainder of the campaign. Some advertisers run A/B tests
throughout the campaign, constantly testing ad copy and other elements in the spirit of con-
tinuous improvement. You can learn more about advertising on the major commercial search
engines by visiting the following websites:

• Google adwords.google.com
• Bing advertise.bingads.microsoft.com
• Yahoo advertising.yahoo.com
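
Returning to the A/B testing described above, the simplest comparison is each variant's click-
through rate over the same period. The figures below are hypothetical, and a rigorous test would
also check statistical significance before retiring a variant.

# Hypothetical results for two ad variants shown during the same period.
variants = {
    "Ad A": {"impressions": 12000, "clicks": 264},
    "Ad B": {"impressions": 11800, "clicks": 377},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["impressions"] * 100
    print(f"{name}: CTR = {ctr:.2f}%")

best = max(variants, key=lambda n: variants[n]["clicks"] / variants[n]["impressions"])
print(f"Keep running {best}; retire the weaker variant.")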


Metrics for Paid Search Advertising
In addition to more effective targeting, one of the key benefits of online advertising is the ability
to evaluate its contribution to sales revenue more effectively. PPC advertisers use the following
metrics to gauge the effectiveness of their campaigns:

Click-through rates (CTRs) By themselves, CTRs do not measure the financial perfor-
mance of an ad campaign. But they are useful for evaluating many of the decisions that go
into a campaign, such as keyword selection and ad copy and ad attractiveness.
Keyword conversion High CTRs are not always good if they do not lead to sales. Since
the cost of the campaign is based on how many people click an ad, you want to select key-
words that lead to sales (conversions), not just site visits. PPC advertisers monitor which
keywords lead to sales and focus on those in future campaigns.
Cost of customer acquisition (CoCA) This metric represents the amount of money
spent to attract a paying customer. To calculate CoCA for a PPC campaign, you divide the
total budget of the campaign by the number of customers who purchased something from
your site. For instance, if you spent $1,000 on a campaign that yielded 40 customers, your
CoCA would be $1,000/40 = $25 per customer.
Return on advertising spend (ROAS) The campaign’s overall financial effectiveness is
evaluated with ROAS (revenue/cost). For example, if $1,000 was spent on a campaign that
led to $6,000 in sales, ROAS would be $6,000/$1,000 = $6. In other words, for every dollar
spent on PPC ads, $6 was earned.
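
The metrics above come down to simple arithmetic once campaign totals are collected. A short
sketch, reusing the $1,000 spend, 40 customers, and $6,000 revenue from the examples in the text
(the impression and click counts are hypothetical additions):

def ppc_metrics(impressions, clicks, spend, customers, revenue):
    """Compute common paid-search metrics from campaign totals."""
    return {
        "CTR": clicks / impressions * 100,   # percent of impressions that were clicked
        "CoCA": spend / customers,           # cost of customer acquisition
        "ROAS": revenue / spend,             # revenue earned per dollar spent
    }

print(ppc_metrics(impressions=50000, clicks=1500, spend=1000, customers=40, revenue=6000))
# CTR = 3.0%, CoCA = $25 per customer, ROAS = $6 per advertising dollar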

Questions

1. What would most people say is the fundamental difference between organic listings and PPC listings
on a SERP?

2. What are the five primary steps to creating a PPC advertising campaign on search engines?

3. In addition to the “bid price” for a particular keyword, what other factor(s) influence whether or not
an advertisement will appear on a search results page? Why don’t search engines use just the adver-
tiser’s bid to determine if an ad will appear on search results pages?

4. How do on-page factors influence the effectiveness of PPC advertisements?

5. What factors determine an ad’s quality score?

6. Describe four metrics that can be used to evaluate the effectiveness of a PPC advertising campaign.

6.4 A Search for Meaning—Semantic
Technology
If there is one thing history has taught us, it is that the future is hard to predict. It might seem
silly to predict what the future Internet will look like when it’s clear so many people are having
trouble understanding all the implications of the present Internet. However, forward-thinking
businesses and individuals are beginning to plan for the next evolution which is sometimes
called Web 3.0.

The current Web is disjointed, requiring us to visit different websites to get content, engage in
commerce, and interact with our social networks (community). The future Web will use context,
personalization, and vertical search to make content, commerce, and community more relevant
and easier to access. With the addition of mobile technology, this Web will be always accessible.

• Context defines the intent of the user; for example, trying to purchase music, to find a job,
to share memories with friends and family.


• Personalization refers to the user’s personal characteristics that impact how relevant the
content, commerce, and community are to an individual.

• Vertical search, as you have read, focuses on finding information in a particular content
area, such as travel, finance, legal, and medical.

What Is the Semantic Web?
Semantic refers to the meaning of words or language. The semantic Web is one in which
computers can interpret the meaning of content (data) by using metadata and natural
language processing to support search and retrieval, analysis and information amalgamation
from both structured and unstructured sources. Semantic technologies are being developed
that will create a new, richer experience for Web users.

Tim Berners-Lee, creator of the technology that made the World Wide Web possible, is
director of the World Wide Web Consortium (W3C). This group develops programming standards
designed to make it possible for data, information, and knowledge to be shared even more
widely across the Internet. The result of these standards is a metadata language, or ways of
describing digital information so that it can be used by a wide variety of applications.

Much of the world’s digital information is stored in files structured so they can only be read
by the programs that created them. With metadata, the content of these files can be labeled
with tags describing the nature of the information, where it came from, or how it is arranged. At
the risk of sounding too dramatic, metadata transforms a connected, but largely uninterpreta-
ble Web (network) of pages into a large database that can be searched, analyzed, understood,
and repurposed by a variety of applications.

It is helpful to think about the semantic Web against the background of earlier Internet
functionality (see Table  6.2). The early Internet allowed programmers and users to access
information and communicate with one another without worrying about the details associ-
ated with the machines they used to connect to the network and store the information. The
semantic Web continues this evolution, making it possible to access information about real
things (people, places, contracts, books, chemicals, etc.) without knowing the details associ-
ated with the nature or structure of the data files, pages, and databases where these things
are described or contained (Hendler & Berners-Lee, 2010). This will greatly expand the ways in
which we search for and find information related to our needs and interests.

The Language(s) of Web 3.0
The early Web was built using hypertext markup language (HTML). Web 2.0 was made
possible, in part, by the development of languages like XML and JavaScript. The semantic Web
utilizes additional languages that have been developed by the W3C. These include resource
description framework (RDF), Web ontology language (OWL), and SPARQL protocol and
RDF query language (SPARQL). RDF is a language used to represent information about
resources on the Internet. It describes these resources with metadata properties identified by
uniform resource identifiers (URIs), such as "title," "author," and "copyright and license
information." It is one of the features that allow data to be used by multiple applications.

TABLE 6.2 Evolution of the Web

Web 1.0 (The Initial Web)
A Web of Pages

Pages or documents are “hyperlinked,” making it easier than ever
before to access connected information.

Web 2.0 (The Social Web)
A Web of Applications

New applications and technologies allow people to easily create,
share, and organize information.

Web 3.0 (The Semantic Web)
A Web of Data

Using metadata tags, artificial intelligence, natural language
processing, and other semantic tools, computers can be used to
access specific information across platforms and applications,
regardless of the original structure of the file, page, or document. It
turns the Web into a giant readable database.


As the acronym SPARQL implies, it is used to write programs that can retrieve and manip-
ulate data stored in RDF format. OWL is the W3C language used to categorize and accurately
identify the nature of things found on the Internet. These three languages, used together, will
enhance the element of context on the Web, producing more fruitful and accurate information
searches. The W3C continues its work, with input by programmers and the broader Internet
community, to improve the power and functionality of these languages.
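
To make these standards a little more concrete, the following sketch uses the open-source
Python library rdflib (our choice for illustration; the W3C standards themselves are
language-neutral) to load two resource descriptions written in RDF's Turtle syntax and query
them with SPARQL. The resource URIs, titles, and author are invented.

# A minimal RDF + SPARQL sketch using the open-source rdflib library.
# The resources, titles, and author below are invented for illustration.
from rdflib import Graph

turtle_data = """
@prefix dc: <http://purl.org/dc/elements/1.1/> .

<http://example.org/report42>
    dc:title   "Quarterly Sales Report" ;
    dc:creator "A. Analyst" .

<http://example.org/whitepaper7>
    dc:title   "Semantic Search for Retailers" ;
    dc:creator "A. Analyst" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")   # metadata triples: subject, predicate, object

# SPARQL: find the titles of every resource created by "A. Analyst".
query = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?title
WHERE { ?doc dc:creator "A. Analyst" ; dc:title ?title . }
"""

for row in g.query(query):
    print(row.title)

Because the query matches meaning (the dc:creator property) rather than keywords, it finds both
documents even though their titles share no words, which is the essential promise of the
semantic Web.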

Semantic Web and Semantic Search
As you have read, the semantic Web is described by metadata, making it easier for a broad
range of applications to identify and utilize data. One of the barriers to creating a semantic Web
based on metadata, however, is the tagging process. Who will tag all the data currently on the
Web? How can we be sure that such data will be tagged correctly? Will people purposely tag
data incorrectly to gain some kind of advantage in the same way that black hat SEO tactics are
used to mislead search engines?

Semantic search engines can be programmed to take advantage of metadata tags,
but their usefulness would be very limited if that were the only way they could understand
Web content.

Metadata tags, therefore, are just one approach used by semantic search engines to under-
stand the meaning of online content. In addition to metadata tags, semantic search engines
use a variety of other strategies to find meaning:

• Natural language processing
• Contextual cues
• Synonyms
• Word variations
• Concept matching
• Specialized queries
• Artificial Intelligence

Semantic search will seek to understand the context or intent of users looking for
information in an effort to increase the relevance and accuracy of results (DiSilvestro, 2013).
For instance, if a search engine understood the proper context of a search query containing the
word “Disneyworld,” it would know if the user was

• planning a vacation, or
• looking for a job at the theme park, or
• interested in the history of Disney World.

Semantic Search Features and Benefits So what can semantic search engines do that
is so much better than search engines that work solely on keyword matching? Grimes (2010)
provides a list of practical search features based on semantic search technology.

Related searches/queries The engine suggests alternative search queries that may pro-
duce information related to the original query. Search engines may also ask you, “Did you
mean: [search term]?” if it detects a misspelling.

Reference results The search engine suggests reference material related to the query,
such as a dictionary definition, Wikipedia pages, maps, reviews, or stock quotes.

Semantically annotated results Returned pages contain highlighting of search terms,
but also related words or phrases that may not have appeared in the original query. These
can be used in future searches simply by clicking on them.


Full-text similarity search Users can submit a block of text or even a full document to
find similar content.
Search on semantic/syntactic annotations This approach would allow a user to indi-
cate the "syntactic role the term plays—for instance, the part-of-speech (noun, verb, etc.)—
or its semantic meaning, whether it's a company name, location, or event." For instance,
a keyword search on the word "center" would produce too many results. Instead, the query
could tag the term with its semantic role (for example, restricting "center" to organization
names), so that it would only return documents where the word "center" was part of an
organization's name (e.g., Johnson Research Center or Millard Youth Center). Google
currently allows you to do something similar when specifying the kind of file you are
looking for (e.g., filetype:pdf).
Concept search Search engines could return results with related concepts. For instance,
if the original query was “Tarantino films,” documents would be returned that contain the
word “movies” even if not the word “films.”
Ontology-based search Ontologies define the relationships between data. An ontol-
ogy is based on the concept of “triples”: subject, predicate, and object. This would allow
the search engine to answer questions such as “What vegetables are green?” The search
engine would return results about “broccoli,” “spinach,” “peas,” “asparagus,” “Brussels
sprouts,” and so on.
Semantic Web search This approach would take advantage of content tagged with
metadata as previously described in this section. Search results are likely to be more accu-
rate than keyword matching.
Faceted search Faceted search provides a means of refining or filtering results based
on predefined categories called facets. For instance, a search on “colleges” might result
in options to “refine this search by. . .” location, size, degrees offered, private or public,
and so on. Many e-commerce websites provide users with faceted search features, allow-
ing shoppers to filter search results by things like price, average rating, brand name, and
product features.
Clustered search This is similar to a faceted search, but without the predefined cat-
egories. Visit Carrot2.org to better understand this concept. After conducting a search,
click on the “foamtree” option to see ways to refine your search. The refining options are
extracted from the content in pages of the initial search.
Natural language search Natural language search tools attempt to extract words from
questions such as “How many countries are there in Europe?” and create a semantic
representation of the query. Initially, this is what people hoped search engines would
evolve toward, but Grimes wonders if we have become so accustomed to typing just one or
two words into our queries that writing out a whole question may seem like too much work.

You may recognize some of these search enhancements when using popular commercial
search engines like Google or Bing. That is because they have been building semantic tech-
nologies into their systems to improve user experience. You are encouraged to explore other
search engines with semantic search features like DuckDuckGo and SenseBot.
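
To make one of these features concrete, the sketch below shows how faceted search reduces to
filtering a result set against the facet values a shopper selects. It is our own illustration
rather than the implementation of any particular search engine, and the catalog and facet names
are invented.

# Faceted search sketch: filter results by predefined facets (brand, price, rating).
# The catalog and facet definitions are invented for illustration.

catalog = [
    {"name": "Trail Runner X", "brand": "Acme",   "price": 89.0,  "rating": 4.6},
    {"name": "Road Glide 2",   "brand": "Zephyr", "price": 129.0, "rating": 4.1},
    {"name": "City Walker",    "brand": "Acme",   "price": 59.0,  "rating": 3.8},
]

def faceted_search(items, brand=None, max_price=None, min_rating=None):
    """Return the items that satisfy every facet the shopper has selected."""
    results = items
    if brand is not None:
        results = [i for i in results if i["brand"] == brand]
    if max_price is not None:
        results = [i for i in results if i["price"] <= max_price]
    if min_rating is not None:
        results = [i for i in results if i["rating"] >= min_rating]
    return results

# "Refine this search by . . ." brand = Acme and a rating of at least 4.0.
for item in faceted_search(catalog, brand="Acme", min_rating=4.0):
    print(item["name"])   # Trail Runner X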

Semantic Web for Business
What opportunities and challenges does the semantic Web hold for businesses? Perhaps
the most immediate challenge faced by businesses is the need to optimize their websites for
semantic search. Because search engines are responsible for directing so much traffic to business
websites, it will be important that companies take advantage of semantic technologies to
ensure they continue to remain visible to prospective customers who use search engines. While
the details of semantic SEO are beyond the scope of this book, we can illustrate one important
benefit of semantic website optimization. Websites optimized for semantic technology with
metadata produce richer, more attractive listings on SERPs. Google calls these rich snippets
(see Figure 6.8).

Note how detailed the organic search listing in Figure 6.8 is compared to a basic listing.
These enhanced search listings are more visually attractive and produce greater CTRs.
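
One common way to supply such metadata is schema.org markup embedded in a page as JSON-LD.
The sketch below generates a block of this markup in Python; the store name, address, and
rating are invented, and the properties shown are simply typical schema.org fields rather than a
recipe prescribed by this chapter.

# Sketch: generate schema.org JSON-LD markup that search engines can use for rich snippets.
# The business details below are invented for illustration.
import json

grocery_store = {
    "@context": "https://schema.org",
    "@type": "GroceryStore",
    "name": "Example Market",
    "telephone": "+1-212-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Ave",
        "addressLocality": "New York",
        "addressRegion": "NY",
        "postalCode": "10001",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "212",
    },
}

# Embedding this <script> block in the page's HTML exposes the metadata to crawlers.
print('<script type="application/ld+json">')
print(json.dumps(grocery_store, indent=2))
print("</script>")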

Businesses need to stay up to date with advances in semantic search so that they can
continuously optimize their sites to increase traffic from major search engines.

Questions

1. List five different practical ways that semantic technology is enhancing the search experience of users.

2. How do metadata tags facilitate more accurate search results?

3. Briefly describe the three evolutionary stages of the Internet.

4. Define the words “context,” “personalization,” and “vertical search.” Explain how they make for better
information search results.

5. What are three languages developed by the W3C and associated with the semantic Web?

6.5 Recommendation Engines

A lot of times, people don’t know what they want until you show it to them.
—Steve Jobs (quoted in Business Week, May 12, 1998)

FIGURE 6.8 The Google search listing for this New York-based grocery chain is more attractive
because it uses metadata from the business's website. (Google and the Google logo are registered
trademarks of Google, Inc., used with permission.)

Think about the challenge faced by large e-commerce websites like Amazon or Netflix.
Brick-and-mortar retailers can capture people's attention in the store with eye-catching point-
of-purchase displays or suggestive selling by store employees. However, these are not options
for retail websites. They need an effective way of recommending their vast array of products
to customers. Most e-commerce sites provide website search tools based on the technologies
previously discussed in this chapter. Relying on customers to find products through an active
search, however, assumes customers know what they want and how to describe it when form-
ing their search query. For these reasons, many e-commerce sites rely on recommendation
engines (sometimes called recommender systems). Recommendation engines proactively
identify products that have a high probability of being something the consumer might want
to buy. Amazon has long been recognized as having one of the best recommendation engines.
Each time customers log into the site, they are presented with an assortment of products
based on their purchase history, browsing history, product reviews, ratings, and many other
factors. In effect, Amazon customizes its e-commerce site for each individual, leading to
increased sales. Consumers respond to these personalized pages by purchasing products
at much higher rates when compared to banner advertisements and other Web-based pro-
motions. At Amazon, the recommendation engine is credited with generating 35% of sales
(Arora, 2016).

Recommendation Filters
There are three widely used approaches to creating useful recommendations: content-based
filtering, collaborative filtering, and hybrid strategies (Asrar, 2016).

Content-Based Filtering Content-based filtering recommends products based on
the product features of items the customer has interacted with in the past (Figure 6.9). Inter-
actions can include viewing an item, “liking” an item, purchasing an item, saving an item to
a wish list, and so on. In the simplest sense, content-based filtering uses item similarity to
make recommendations. For instance, the Netflix recommendation engine attempts to recom-
mend movies that are similar to movies you have already watched (see IT at Work 6.2). Music-
streaming site Pandora creates its recommendations or playlists based on the Music Genome
Project©, a system that uses approximately 450 different attributes to describe songs. These
detailed systems for describing movies and songs enhance Netflix’s and Pandora’s positions
in highly competitive industries because of their ability to offer superior recommendations to
their customers.

FIGURE 6.9 Content-based filtering produces recommendations based on similarity of product
features: (1) a customer likes an item (e.g., a fruity cocktail umbrella drink), (2) the system
searches the catalog for items with similar features, and (3) those items are recommended
("Based on your rating of . . . you may also like . . .").
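
A stripped-down sketch can illustrate the idea of item similarity behind content-based filtering.
The drink catalog and binary feature vectors below are invented; real systems such as the Music
Genome Project describe items with hundreds of attributes and use far more sophisticated
similarity measures.

# Content-based filtering sketch: recommend items whose feature vectors resemble
# an item the customer liked. Catalog and features are invented for illustration.
from math import sqrt

features = ["fruity", "cocktail", "umbrella", "carbonated"]
catalog = {
    "fruity umbrella drink": [1, 1, 1, 0],
    "tropical slush":        [1, 1, 0, 0],
    "club soda":             [0, 0, 0, 1],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(liked_item, top_n=1):
    """Rank the rest of the catalog by similarity to the item the customer liked."""
    liked_vector = catalog[liked_item]
    scores = {name: cosine(liked_vector, vec)
              for name, vec in catalog.items() if name != liked_item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("fruity umbrella drink"))   # ['tropical slush']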


Collaborative Filtering Collaborative filtering makes recommendations based on a
user’s similarity to other people. For instance, when a customer gives a product a high rating,
he or she may receive recommendations based on the purchases of other people who also
gave the same product a high rating. Sometimes, websites will explain the reason for the rec-
ommendations with the message “Other people who liked this product also bought. . .” Many
collaborative filtering systems use purchase history to identify similarities among customers. In
principle, however, any customer characteristic that improves the quality of recommendations
could be used (see Figure 6.10).
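
The same logic can be sketched with a tiny user-item rating matrix (entirely invented for
illustration): find the customer whose ratings most resemble the target customer's, then suggest
items that neighbor rated highly and the target has not yet tried.

# Collaborative filtering sketch: recommend items liked by customers with similar
# rating histories. All ratings below are invented for illustration.

ratings = {                       # customer -> {item: rating on a 1-5 scale}
    "ana":   {"pink beverage": 5, "salted nuts": 4},
    "ben":   {"pink beverage": 5, "salted nuts": 4, "citrus tart": 5},
    "carol": {"pink beverage": 1, "olive plate": 5},
}

def similarity(a, b):
    """Agreement between two customers on the items they have both rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    diffs = [abs(ratings[a][i] - ratings[b][i]) for i in shared]
    return 1.0 - sum(diffs) / (4.0 * len(shared))   # 1.0 = identical ratings

def recommend(target):
    """Items highly rated by the most similar other customer, not yet seen by the target."""
    neighbor = max((c for c in ratings if c != target),
                   key=lambda c: similarity(target, c))
    return [item for item, score in ratings[neighbor].items()
            if score >= 4 and item not in ratings[target]]

print(recommend("ana"))   # ['citrus tart'] -- ben's ratings most resemble ana's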

In an effort to develop increasingly better recommendation engines, developers are
exploring a number of creative ways to predict what consumers might like based on patterns of
consumer behavior, interests, ratings, reviews, social media contacts and conversations, media
use, financial information, and so on.

FIGURE 6.10 Collaborative filtering bases recommendations on similarity to other customers:
(1) a customer likes an item (e.g., a pink beverage), (2) other customers who like the same item
are identified, and (3) items those customers also like are recommended ("Other customers who
like pink beverage also like . . .").

IT at Work 6.2

Violent Nightmare-Vacation Movies and Other Fun
Movie Genres at Netflix
Alexis Madrigal (2014) reverse-engineered Netflix's list of movie
genres and was surprised to learn the company uses approximately
76,897 different ways to describe movies, creating the potential for
some unusually specific movie recommendations. Christian Brown
(2012) compiled a list of humorous and sometimes disturbing movie
categories, a few of which are listed below:

10. Cerebral Con-Game Thrillers
9. Visually Striking Father–Son Movies
8. Violent Nightmare-Vacation Movies
7. Understated Independent Workplace Movies
6. Feel-Good Opposites-Attract Movies
5. Witty Dysfunctional-Family TV Animated Comedies
4. Period Pieces about Royalty Based on Real Life
3. Campy Mad-Scientist Movies
2. Mind-Bending Foreign Movies
1. More like Arrested Development

The fact that Netflix went to the trouble of creating so many detailed
and descriptive labels suggests that a content-based filtering strategy
is in use in the company's recommendation system.

In addition to content filtering and collaborative filtering, two other approaches to recom-
mendation engines are mentioned in the literature: knowledge-based systems and demographic
systems. Knowledge-based systems use information about a user's needs to recommend
products. This kind of system is useful for developing recommendations for products that
consumers do not shop for very often. For instance, an insurance company may ask a customer
a series of questions about his or her needs, and then use that information to recommend
policy options. Demographic systems base recommendations on demographic factors
corresponding to a potential customer (i.e., age, gender, race, income, etc.). While similarity
to other customers might play a role in developing these recommendations, such systems are
different from collaborative filtering systems that typically rely on information about a person's
behavior (i.e., purchases, product ratings, etc.).

Systems are being developed that leverage big data streams from multiple sources to
refine and enhance the performance of current systems.

Limitations of Recommendation Engines While recommendation engines have
proven valuable and are widely used, there are still challenges that must be overcome. Four
commonly cited limitations are described as follows:

Cold start or new user Making recommendations for a user who has not provided any
information to the system is a challenge since most systems require a starting point or
some minimal amount of information about the user (Adomavicius & Alexander,  2005;
Burke,  2007). Tiroshi and colleagues (2011) have suggested consumers’ existing social
media profiles from sites like Facebook and Twitter could be used in situations where a
website did not have sufficient information of its own to make recommendations.
Sparsity Collaborative systems depend on having information about a critical mass of
users to compare to the target user in order to create reliable or stable recommendations.
This is not always available in situations where products have only been rated by a few
people or when it is not possible to identify a group of people who are similar to a user with
unusual preferences (Burke, 2007).
Limited feature content For content filter systems to work, there must be sufficient
information available about product features and the information must exist in a struc-
tured format so it can be read by computers. Often feature information must be entered
manually, which can be prohibitive in situations where there are many products (Adoma-
vicius & Alexander, 2005).
Overspecialization If systems can only recommend items that are highly similar to a
user profile, then the recommendations may not be useful. For instance, if the recommen-
dation system is too narrowly configured on a website that sells clothing, the user may
only see recommendations for the same clothing item he or she liked, but in different sizes
or colors (Adomavicius & Alexander, 2005).

Hybrid Recommendation Engines Hybrid recommendation engines develop
recommendations based on some combination of the methodologies described above
(content-based filtering, collaboration filtering, knowledge-based and demographic systems).
Hybrid systems are used to increase the quality of recommendations and address shortcomings
of systems that only use a single methodology. Burke (2007) identified various ways that hybrid
recommendation engines combine results from different recommender systems. To illustrate
the potential complexity and variation in hybrid systems, four approaches are listed below:

• Weighted hybrid Results from different recommenders are assigned a weight and
combined numerically to determine a final set of recommendations. Relative weights are
determined by system tests to identify the levels that produce the best recommendations.

• Mixed hybrid Results from different recommenders are presented alongside each other.
• Cascade hybrid Recommenders are assigned a rank or priority. If a tie occurs (with two
products assigned the same recommendation value), results from the lower-ranked sys-
tems are used to break ties from the higher-ranked systems.

• Compound hybrid This approach combines results from two recommender systems
from the same technique category (e.g., two collaborative filters), but uses different algo-
rithms or calculation procedures.
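
As a simple illustration of the weighted hybrid approach, the sketch below blends scores from
two recommenders before ranking. The scores and weights are invented; in practice, the weights
would be tuned through the system tests described above.

# Weighted hybrid sketch: blend scores from two recommenders before ranking.
# The scores and weights below are invented for illustration.

content_scores = {"item_a": 0.90, "item_b": 0.40, "item_c": 0.10}   # content-based filter
collab_scores  = {"item_a": 0.20, "item_b": 0.80, "item_c": 0.70}   # collaborative filter

W_CONTENT, W_COLLAB = 0.6, 0.4   # relative weights, assumed to come from system testing

def weighted_hybrid(top_n=2):
    """Combine the two recommenders' scores numerically and return the top items."""
    combined = {item: W_CONTENT * content_scores[item] + W_COLLAB * collab_scores[item]
                for item in content_scores}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

print(weighted_hybrid())   # ['item_a', 'item_b']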


Recommendation engines are now used by many companies with deep content (e.g., large
product inventory) that might otherwise go undiscovered if the companies depended on cus-
tomers to engage in an active search.

To simplify our description of recommendation engines, most of the examples above
have been based on e-commerce sites recommending products to customers. However,
this technology is used by many different kinds of business organizations, as illustrated
in Table 6.3.

Questions

1. How is a recommendation engine different from a search engine?

2. Besides e-commerce websites that sell products, what are some other ways that recommendation
engines are being used on the Web today?

3. What are some examples of user information required by recommendation engines that use collab-
orative filtering?

4. Before implementing a content-based recommendation engine, what kind of information would
website operators need to collect about their products?

5. What are the four limitations or challenges that recommendation systems sometimes face?

6. What is a recommendation engine called that combines different methodologies to create recom-
mendations? What are three ways these systems combine methodologies?

TABLE 6.3 Examples of Recommendation Engine Applications

Company How It Uses Recommendation Engines. . .
Amazon Recommends products using multiple filtering methods.

Netflix Approximately 75% of what subscribers watch is selected as a result of its
recommendation system.

Pandora This streaming music site creates playlists based on similarity to
initial songs or artists selected by the user.

CNN, Time, Fast Company,
Rolling Stone, NBCNews.com,
Reuters, Us Weekly

These news and entertainment companies all use a recom-
mendation engine (or “content discovery system”) created by
Outbrain.com to suggest additional articles related to the one
site visitors initially viewed.

YouTube YouTube uses a variation of Amazon’s recommendation engine to
suggest additional videos people might like to watch.

Goodreads This social website for readers recommends books based on user
ratings of books they have read.

Samsung Uses recommendation engines built into its “smart TVs” to sug-
gest television programming to viewers.

Facebook and LinkedIn These social networking services use recommendation engines
to suggest people that users may want to connect with.

Apple Helps users find mobile apps they might enjoy.

Microsoft Xbox 360 Suggests new games based on what users have previously shown
an interest in.

Tripadvisor Recommends travel destinations and services based on destina-
tions people have viewed or rated.

Stitch Fix This fashion start-up uses a recommender system in conjunction
with human stylists to select and ship clothing products to
customers before they have viewed or ordered them!


Assuring Your Learning

Key Terms
access control 172
backlinks 180
click-through rates (CTRs) 174
collaborative filtering 190
collection analysis module 169
content-based filtering 189
crawler control module 169
crawler search engines 168
dwell time 180
enterprise search 172
ghost text 181
hybrid recommendation engines 191
hybrid search engines 168
inbound marketing 180
indexer module 169
informational search 173
keyword conversion 184
keywords 169

link spamming 181
metadata 185
meta-search engines 168
navigational search 173
organic search listings 174
page repository 169
PageRank 180
paid search listings 174
pay-per-click (PPC) 174
petabyte 170
quality score 183
query interface 170
recommendation engines 173
resource description framework (RDF) 185
retrieval/ranking module 169
rich snippets 188
search engine 168
search engine marketing (SEM) 173

search engine optimization (SEO) 174
search engine results page (SERP) 166
semantic search engines 168
semantic Web 185
shadow pages 181
showrooming 175
social media optimization 174
SPARQL protocol and RDF query language
(SPARQL) 185
spiders 168
structured data 172
transactional search 173
uniform resource identifiers (URIs) 185
unstructured data 172
vertical search engines 177
Web directories 168
Web ontology language (OWL) 185

Discuss: Critical Thinking Questions

1. Why is it important that businesses maintain a high level of visibil-
ity on SERPs?

2. Why are organic search listings more valuable than paid search
listings for most companies over the long term? Even though organic
search listings are more valuable, what are some reasons that com-
panies should consider using PPC advertising as part of their search
marketing strategies?

3. Why is relevant and frequently updated content a significant factor
for companies concerned about their visibility on popular search en-
gines? Does the quality of content impact organic results, paid results,
or both? Explain.

4. Explain the differences between Web directories, crawler search en-
gines, and hybrid search engines.

5. Why do search engines consider their algorithms for rank ordering
Web page listings on SERPs to be trade secrets? What would be the
consequences of publicizing detailed information about how a search
engine ranks its results?

6. Why do consumer search engines like Google and Bing require vast
amounts of data storage? How have they addressed this need? What
environmental issues are associated with the way large technology
companies operate their data storage facilities?

7. Explain why enterprise search technology is becoming increasingly
important to organizations. Describe how enterprise search applica-
tions are different from consumer search engines in terms of their func-
tionality, purpose, and the special challenges they must overcome.

8. Explain why people are much more likely to view and pay attention to product and service
information in SERPs compared to traditional mass media advertising. What strategies are
businesses adopting to take advantage of this trend?

9. Why is it easier to measure the return-on-investment of re-
sources spent on search engine marketing compared to mass media
advertising?

10. How has the widespread adoption of mobile devices impacted
SEO practices?

11. Identify at least five ways that Google has changed its algo-
rithms in recent years to encourage website developers to do more
than simply list keywords in an attempt to improve their ranking on
search results.

12. The ultimate goal of Google, Bing, and other consumer search en-
gines is to provide users with a positive user experience. What recom-
mendations would you make to a website owner with regard to using
website content to improve the site’s rank on search result listings?

13. Why are “black hat” SEO techniques (see Section 6.2) considered
unethical? Who is harmed by the use of such techniques? What are the
consequences of using these questionable SEO tactics?

14. Explain how search engines determine if websites contain infor-
mation relevant to a user’s search inquiry.

15. Identify and describe the five steps to creating a PPC ad campaign.

16. How does an advertiser’s bid and quality score determine the like-
lihood of PPC ad placement on SERPs? What are the factors that Google
uses to determine an advertiser’s quality score? Why does Google use
the quality score instead of relying solely on the advertiser’s bid?

17. Describe three metrics used by PPC advertisers to evaluate the ef-
fectiveness of their search ad campaign.


18. Identify the three things that SEO practitioners can optimize by
making changes to on-page factors. What three things can SEO prac-
titioners attempt to optimize by making changes to off-page factors?

19. Describe five ways that semantic search engines could enhance
functionality for users. How will businesses benefit from the develop-
ment of semantic search functions?

20. Recommender systems use different approaches to generating recommendations. Explain
the difference between content-based filtering and collaborative filtering. Describe the kind
of information required for each approach to work.

21. What are the alternatives to content-based filtering and collabora-
tive filtering recommender systems? When is it most useful to use these
alternatives?

22. Hybrid recommendation engines utilize two or more filtering
strategies to create recommendations. Describe the four different ap-
proaches to creating a hybrid system.

Explore: Online and Interactive Exercises

1. Select a search query term or phrase based on a class assignment,
a product you plan to purchase, or some area of personal interest. Use
the query at each of the following search engines:

a. Google.com

b. Bing.com

c. Yahoo.com

d. DuckDuckGo.com

For each site, make the following observations:
a. How relevant or useful are the websites listed on the first two
pages of search results?

b. What differences do you observe in terms of how the search
engines list websites on the search results page?

c. Do you see any indication that the search engine is using
semantic technology to generate results (see “Semantic Search
Features and Benefits” in Section 6.4)?

2. Visit the website for fitbit products at fitbit.com and familiarize
yourself with the products and website content.

a. Make a list of nonbranded keywords and phrases (i.e., doesn’t
contain the word “fitbit”) that you would recommend fitbit use to
optimize its pages so they show up in organic search listings.

b. Based on your list of keywords and phrases, make a list
of recommendations for content (i.e., articles, blog posts,
information, etc.) that fitbit should add to its website to increase
the chances that it will show up in organic search results. What
keywords should be emphasized in the content you recommend?

3. Use an existing account, or sign up for an account, at one of the websites listed in Table 6.3.
Make a list of the ways the website recommends its content, goods, or services to you. Based
on your observations, are you able to determine what kind of recommendation system is in
use by the website?

4. Pretend you are going to purchase an expensive item like a large
flat-screen television or a major appliance from a national retailer
like Best Buy or Sears. Using your mobile phone, attempt to find store
locations, product information, and customer reviews. Next, install
one of the popular shopping and price comparison apps listed below
on your phone:

• Red Laser

• Amazon Price Check, or Flow (also by Amazon)

• Barcode Scanner

• Or, find a similar price-checking app at your mobile app store

Now, go shopping (visit the store). While shopping, use your mobile
app to find product reviews and make price comparisons of the products
you find. Briefly describe your mobile shopping experience. How did
the mobile technology help or hinder your shopping experience? What
challenges does mobile technology pose for traditional retailers?

5. Pretend you are an SEO consultant for a local business or not-
for-profit organization. Visit the organization’s website to familiarize
yourself with the brand, mission, products, services, and so on. Next,
make a list of keywords or phrases that you think should be used to
optimize the site for search engines. Rank-order the list based on
how frequently you think the words are used in searches. Finally, go
to google.com/trends/explore and enter your keywords or phrases,
creating a graph that illustrates how often they have been used in
search queries. Based on what you learn, what keywords or phrases
would you recommend the organization use to optimize its site?

Analyze & Decide: Apply IT Concepts to Business Decisions

1. Perform a search engine query using the terms “data center” +
“environmental impact.” Describe the environmental concerns that
large-scale data centers are creating around the globe and steps that
companies are taking to address these concerns. Read about Google’s
efforts at environment.google. In your opinion, is Google making a
satisfactory effort to minimize the negative impact of its business on
the environment? Explain your answer.

2. Review the information in Section  6.1 about the three types of
searches (informational, navigational, and transactional) that people
conduct on search engines. Put yourself in the role of an SEO consultant
for your college or university. Create a set of content and/or keyword
strategies that you would recommend to your institution’s leaders to
increase the chances of appearing on SERPs resulting from prospective
students conducting each kind of search.


3. Review the information about website relevance and credibility in Sec-
tion 6.3. Next, generate a list of strategies or ways that a website owner
might use to improve its ranking on search results pages by optimizing the
site for relevance and credibility. For instance, if one of your factors is “site
traffic,” you might recommend that the website owner post links to the
website on the company Facebook page to increase traffic. Or, you might
recommend the website run a contest that requires people to visit the site
to enter. This would increase traffic during the contest.

4. Traditional brick-and-mortar stores are increasingly frustrated by
competition with online retailers. Online websites often have a cost
advantage because they do not have to maintain physical storefronts
or pay salespeople, and can use more efficient logistical and opera-
tional strategies. This sometimes allows them to offer better prices
to consumers. With the emergence of recommendation engines, they
appear to be gaining another advantage—the ability to suggest prod-
ucts to customers based on their past shopping history and personal
characteristics. Pretend you are a senior manager for a national retail
chain. How could your company make use of recommendation sys-
tems to suggest products to customers shopping in your store? Outline
a creative approach to this problem that identifies the information you
would need to collect, the in-store technology required, and the man-
ner in which you would inform customers about the personalized rec-
ommendations generated by your system.

5. Select a consumer product or service for which there are at least
three popular brand names. For example, you might choose the cat-
egory “cell phone carriers,” which includes Verizon, AT&T, Sprint, and
T-Mobile. On the Google.com/trends page, type the brand names, separated by commas,
into the search field at the top of the screen
(e.g., Verizon, AT&T, Sprint, T-Mobile). The resulting chart will display
the search query volume by brand, an indicator of how much interest
each brand has received over time. Using the Google Trends data, an-
swer the following questions.

Tip: Before answering the questions below, use Google’s search
engine to find articles on “how to interpret Google trends.” This will help
you better understand the Google trends report and make it easier to
answer the following questions.

a. Using the date setting at the top of the Google Trends page, ex-
plore different periods of time. Briefly summarize how interest in
each brand has changed over the last four years.

b. In the Regional Interest section, you can see how interest in
each brand varies by country or city. In which countries and cities
is each brand most popular?

c. In the Related Searches section, you will see a list of topics
and query terms of interest to people who used one of the brand
names in a search. How does the list of related topics change from
one brand name to another? Do the topic and query term lists give
you any insight into what kind of information people may be inter-
ested in relative to each brand?

d. Using a search engine, see if you can find market share data
for the product or industry you researched on Google Trends. If
you find this information, does there seem to be any relationship
between search volume and market share for the brand names
you explored?

Case 6.2
Business Case: Deciding What to Watch—Video
Recommendations at Netflix
Netflix is the undisputed leader of video streaming services, account-
ing for more than half (53%) of U.S. video streaming subscriptions.
Amazon Prime Video (25%) and Hulu (13%) are the company’s largest
competitors. Netflix is also the oldest company in this group, having
originally started as a DVD by mail rental service. Unlike other com-
panies that dominated the DVD rental business, Netflix successfully
made the transition to on-demand video streaming by investing in
new technology and redefining its business model. The service is
now available in 190 countries and claims over 90 million subscrib-
ers globally.

Netflix executives credit the company’s recommendation system for
driving the “Netflix experience” and boosting profitability (Gomez-Uribe
& Hunt, 2015; Raimond & Basilico, 2016). Surprisingly, the origin of the
recommendation system dates back to 2000, when Netflix was still a DVD
rental service. Recommendations during these early days were based
largely on members’ movie ratings. Ratings often reflect how people want
to be perceived as opposed to how they act. For instance, rating data will
tend to overemphasize how much people like documentaries and foreign
language films, whereas behavioral metrics provide more accurate meas-
ures of how subscribers use the service. Today, when Netflix subscribers
use the online service, they see recommendations generated by multiple algorithms that use
descriptive information about the subscriber and
their past viewing behavior (Gomez-Uribe & Hunt, 2015). Netflix claims
that 75% of the activity on the service is a result of the recommendations
it offers subscribers.

Netflix Analytics
Netflix enjoys a significant advantage over traditional television chan-
nels because the company collects information about how subscrib-
ers use the service. Netflix can make marketing and product decisions
based on several behavioral metrics. You might be surprised at the
details Netflix collects:

• The device you use (tablet, Roku, smart TV, etc.)

• Where (zip code) you watch from

• The days and times you watch

• When you pause, rewind, or fast-forward during viewing

• How you search—the words and phrases used, how long you
search, etc.

• Whether or not you watch the credits following a show

• How many episodes of a series you watched

• Whether or not you watch all episodes in a series

• How long it takes you to watch all episodes in a series


• How many hours you spend using the service

• What movies and television shows you watch

• How often you use the service

In addition to making recommendations, Netflix uses the informa-
tion to do the following:

• Identify subscribers who are likely to cancel the service

• Select new movies to add to their catalog

• Decide if a television show should be renewed for another season

• Identify movies and television shows to drop from the catalog

• Determine the days and times to recommend certain movies
or shows

• Determine what to recommend immediately following the view-
ing of another movie or show

• Determine how to describe movies and shows (i.e., long vs. short
descriptions)

Recommendation Algorithms at Netflix

The Netflix home screen can offer up to 40 rows of recommendations
to a subscriber. Each row is generated by a different algorithm
designed to personalize recommendations as well as determine the
order in which movies and shows are listed. Each row is based on a
different theme or rationale for the titles appearing in the row. Netflix
even uses a Page Generation Algorithm to personalize the type of
row-level recommendations and their order when creating the page.
Some examples of the different recommendation rows include the
following:

Genre Rows Several of the rows appearing on the home page are
based on movie or television show genres that Netflix believes the
subscriber will be interested in based on past viewing behavior. Genre
rows are generated by what Netflix calls its Personalized Video Ranker
(PVR). The rows reflect three levels of personalization: (1) the selection
of the genre, (2) the selection of specific titles within the genre, and
(3) the ordering of the titles.

Continue Watching Titles appearing in the Continue Watching
row highlight episodic content that Netflix thinks a subscriber might
want to return to. The Continue Watching ranker evaluates recently
viewed videos for signals that a subscriber intends to resume watching
or is no longer interested in the title. These signals include things like
time since last viewing, point of abandonment (mid-program, end of program), if other titles
have been viewed since, and type of device used.
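
Netflix does not publish the ranker itself, so the following Python sketch is purely illustrative
of how such signals might be combined into a score; every weight, threshold, and parameter name
is an assumption made for this example.

# Illustrative (hypothetical) continue-watching score; not Netflix's actual algorithm.
# The signals mirror those described in the case: recency, point of abandonment,
# whether other titles have been viewed since, and the type of device used.

def continue_watching_score(hours_since_view, abandoned_mid_program,
                            other_titles_viewed_since, on_living_room_device):
    score = 0.0
    score += max(0.0, 1.0 - hours_since_view / 168.0)   # decays over roughly a week
    score += 0.5 if abandoned_mid_program else 0.0      # mid-program exits suggest intent to resume
    score -= 0.3 if other_titles_viewed_since else 0.0  # moving on signals lost interest
    score += 0.2 if on_living_room_device else 0.0      # assumed boost for TV-style devices
    return score

# A show paused mid-episode 12 hours ago on a smart TV, with nothing watched since.
print(round(continue_watching_score(12, True, False, True), 2))   # 1.63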

Because You Watched The Because you watched (BYW) row is
based on the similarity of recommended videos to past videos watched
by the subscriber. The BYW row is determined by the Sims Ranker,
which generates an ordered list of videos, based on similarity, for
every title in the catalog. Various personalization cues are then used
to further refine the subset of videos that actually appear in the row on
the home page.

Top Picks The goal of the Top Picks row is to feature Netflix’s best
guess as to the videos in its catalog that are most likely to be of interest
to the subscriber. The Top Picks algorithm uses cues from the individual
subscriber along with viewing trend information to recommend titles
from among the most popular or top-ranked videos in the catalog.

Netflix believes that its recommendation system plays a
significant role in user satisfaction and customer retention. A team
of workers regularly updates the system with new algorithms and
modifications to existing ones. Their ultimate goal is to generate such
high-quality recommendations that subscribers will rarely have to
search for videos to watch.

Questions
1. You read about four different types of recommendations that Netflix features on its
home page. Think of a new type of recommendation row that Netflix could use and the
kind of information or behavioral metrics that would be needed to generate your
recommendations.

2. Based on the information in this case, would you say that Netflix
primarily uses content-based filtering, collaborative filtering, or
both? Explain your answer.

3. Netflix is expanding globally. When Netflix first enters a market,
the recommendation system can face “cold start” or “sparsity”
problems. Explain why this happens and suggest ways that Netflix
might deal with these challenges.

4. What metrics do you think Netflix could use to identify subscribers
who are likely to cancel the service?

5. Visit Netflix’s Technology Blog http://techblog.netflix.com. Iden-
tify three challenges that the company faces in generating recom-
mendations for its subscribers.

Sources: Compiled from Bulygo (2013b), Alvino and Basilico (2015), Gomez-
Uribe and Hunt (2015), Arora (2016), Cheng (2016), Lubin (2016), Nicklesburg
(2016), Raimond and Basilico (2016).

Case 6.3
Video Case: Power Searching with Google
This video case is a bit different from what you have seen in other chap-
ters. Google has created two easy-to-follow video courses designed to
teach you how to use search engines more effectively: Power Search-
ing and Advanced Power Searching. Each course contains a series of
videos that you can view at your own pace. Following each video, you are shown a set of
activities and small quizzes that you can use to test
your knowledge. Start with the Power Searching course. Once you
have mastered the basic skills discussed in that course, move on to the
Advanced Power Searching course.

Visit Google’s Search Education Online page powersearching-
withgoogle.com. On this page, you will see links for the two self-guided
courses: Power Searching and Advanced Power Searching. Select the Power Searching link and
begin viewing the course videos. After
each video, do the related activities and test your knowledge with any
online quizzes or tests that are provided. After you have completed the
Power Searching course, go back and take the Advanced Power Search-
ing course.

While it may take several days to complete both courses, we encour-
age you to do so. The time you invest in learning these power search

techniques will pay off next time you need to use a search engine for a
class- or work-related research project.

Question
1. Describe two or three search techniques you learned from these tutorial videos that you
think will be particularly helpful.

References
Adomavicius, G. and T. Alexander. "Toward the Next Generation of Recommender Systems:
A Survey of the State-of-the-Art and Possible Extensions." IEEE Transactions on Knowledge
and Data Engineering 17, no. 6, 2005, 734−749.

Alvino, C. and J. Basilico. “Learning a Personalized Homepage.” tech-
blog.netflix.com, April 9, 2015.

Andrews, W. and H. Koehler-Kruener. “Magic Quadrant for Enterprise
Search.” gartner.com, July 16, 2014.

Anthony, S. “Microsoft Now Has One Million Servers−Less Than
Google, But More Than Amazon, Says Ballmer.” extremetech.com,
July 19, 2013.

Arora, S. “Recommendation Engines: How Amazon and Netflix
Are Winning the Personalization Battle.” martechadvisor.com,
June 28, 2016.

Asrar, S. “A Quick Look at Recommendation Engines and How the New
York Times Makes Recommendations.” knightlab.northwestern.edu,
March 28, 2016.

Brown, C. “43 Increasingly Precise Netflix Custom Genre Recommen-
dations.” TheAwl.com, March 16, 2012.

Bulygo, Z. “How Mint Grew to 1.5 Million Users and Sold for $170 Mil-
lion in Just 2 Years.” blog.kissmetrics.com, November, 2013a.

Bulygo, Z. “How Netflix Uses Analytics to Select Movies, Create Con-
tent, and Make Multimillion Dollar Decisions.” blog.kissmetrics.
com, 2013b.

Burke, R. “Hybrid Recommender Systems.” In Brusilovsky, P., A.
Kobsa, and W. Nejdl (eds.), The Adaptive Web, pp. 377−408. Heidel-
berg: Springer-Verlag Berlin, 2007.

Candela, J. “Building Scalable Systems to Understand Content.” code.
facebook.com, February 2, 2017.

Cheng, R. “Netflix Leads a Streaming Video Market That’s Close to
Peaking.” cnet.com, May 25, 2016.

Constine, J. “Facebook Sees 2 Billion Searches per Day, but It’s Attack-
ing Twitter not Google.” techcrunch.com, July 27, 2016.

Delgado, J., L. Renaud, and V. Krishnamurthy. “The New Face of Enter-
prise Search: Bridging Structured and Unstructured Information.”
Information Management Journal 39, no. 6, 2005, 40−46. Business
Source Premier. Online. February 28, 2014.

DiSilvestro, A. “The Difference Between Semantic Search and Seman-
tic Web.” Search Engine Journal, July 10, 2013.

eMarketer. “Yahoo Ad Revenue to Drop Nearly 14% This Year.” eMar-
keter.com, March 23, 2016.

Gallagher, S. “The Great Disk Drive in the Sky: How Web Giants Store
Big—and We Mean Big—Data.” Arstechnica.com, January 26, 2012.

Glanz, J. “Google Details, and Defends, Its Use of Electricity.” The New
York Times, September 8, 2011.

Glanz, J. “The Cloud Factories: Power, Pollution and the Internet.” The
New York Times, September 22, 2012.

Gomez-Uribe, C. and N. Hunt. “The Netflix Recommender System:
Algorithms, Business Value, and Innovation.” ACM Transactions Man-
agement Information Systems 6, 4, Article 13, December 2015.

Google.com. “Algorithms.” Accessed March 24, 2014.
Greene, K. “Mint Introduces Bill Pay, Helping Millions to Never Miss a

Bill.” blog.mint.com, December 13, 2016.
Grehan, M. “How Search Engines Work.” Excerpted from Search

Engine Marketing: The Essential Best Practice Guide. New York: Inci-
sive Media, 2002.

Grifantini, K. “What It Takes to Power Google.” MIT Technology Review,
September 9, 2011.

Grimes, S. “Breakthrough Analysis: Two + Nine Types of Semantic
Search.” InformationWeek.com, January 1, 2010.

Hendler, J. and T. Berners-Lee. “From the Semantic Web to Social
Machines: A Research Challenge for AI on the World Wide Web.” Arti-
ficial Intelligence 174, 2010.

Jacobson, J. “Google: 129 Million Different Books Have Been Pub-
lished.” PCWorld.com, August 6, 2010.

Kraus, J. “The Advanced Guide to Facebook Graph Search.” sitepoint.
com, August 18, 2015.

Lubin, G. “How Netflix Will Someday Know Exactly What You Want
to Watch as Soon as You Turn Your TV On.” Businessinsider.com,
September 21, 2016.

Madrigal, A. “How Netflix Reverse Engineered Hollywood.” The Atlan-
tic, January 2, 2014.

Miller, R. “Google Has Spent $21 Billion on Data Centers.” Datacenter-
knowledge.com, September 17, 2013.

Newman, J. “6 Things You’d Never Guess About Google’s Energy Use.”
Techland.time.com, September 9, 2011.

Nicklesburg, M. “Study: Amazon Video Is Now the Third-largest Stream-
ing Service, Behind Netflix and YouTube.” geekwire.com, June 22, 2016.

Oak, M. “How Does a Search Engine Work?” buzzle.com, June 5, 2008.
Obi-Azubuike, G. “Five SEO Strategies That Will Grow Your Business.”

Linked.com/pulse, February 4, 2016
Osher, M. “Finding Copyright Infringements of Your Artwork on the

Internet.” MarianOsher.com, February 9, 2014.
Prince, K.T. “Mint by the Numbers: Which User Are You?” blog.mint.

com, April 6, 2016.
Prnewswire.com. “Growth of Big Data in Businesses Intensifies Global

Demand for Enterprise Search Solutions, Finds Frost & Sullivan.”
January 24, 2013.

Raimond, Y. and J. Basilico. “Recommending for the World.” techblog.
netflix.com, February 17, 2016.


Schneider, D. “Under the Hood at Google and Facebook.” spectrum.
ieee.org, May 31, 2011.

Sullivan, D. “The Periodic Table of SEO Success Factors: 2015 Edition
Now Released.” SearchEngineLand.com, June 1, 2015.

Sullivan, L. “Report: Companies Will Spend $65 Billion on SEO In
2016.” MediaPost.com, April 21, 2016.

Sukhraj, R. “How Mint Acquired Over 1.5 Million Users Without a Sin-
gle Growth Hack.” Impactbnd.com, June 17, 2015.

Sverdlik, Y. “Here’s How Much Energy All US Data Centers Consume.”
datacenterknowledge.com, June 27, 2016.

Thomas, V. “Six Reasons to Break Out of Your Organization’s Silos.”
mangoapps.com, September 24, 2013.

Tiroshi, A., T. Kuflik, J. Kay, and B. Kummerfeld. “Recommender
Systems and the Social Web.” In Proceedings of the International
Workshop on Augmenting User Models with Real World Experiences to
Enhance Personalization and Adaptation (AUM), Girona, Spain, July
15, 2011. Ardissono, L. and T. Kuflik (eds.), Springer-Verlag Berlin,
Heidelberg, 2012.

Venkatraman, A. “Global Census Shows Data Centre Power Demand
Grew 63% in 2012.” Computerweekly.com, October 8, 2012.

Walker, M. “Data Silos Obstruct Quest for Competitive Advantage.”
datasciencecentral.com, February 11, 2014.


CHAPTER 7

Web 2.0 and Social Technology

LEARNING OUTCOMES

7.1 Understand the key technologies that made Web 2.0
possible, and appreciate the opportunities and challenges
that social media represents for business organizations.

7.2 Describe the features and capabilities of large social
networking services.

7.3 Explain how blogs and microblogs facilitate communication
on a global scale.

7.4 Describe how mashups, RSS technology, and monitoring
tools are valuable to business organizations and
individual users.

7.5 Describe how organizations and groups make use of new Web
2.0 collaboration tools and services.

CHAPTER OUTLINE

Case 7.1 Opening Case: Social Customer Service
Takes Off at KLM

7.1 Web 2.0—The Social Web

7.2 Social Networking Services and Communities

7.3 Engaging Consumers with Blogs and
Microblogs

7.4 Mashups, Social Metrics, and
Monitoring Tools

7.5 Enterprise 2.0: Workplace Collaboration and
Knowledge Sharing

Case 7.2 Business Case: Facebook Helps Songkick
Rock the Ticket Sales Industry

Case 7.3 Business Case: AT&T’s “It Can Wait”
Campaign against Distracted Driving

Introduction
Everyone is talking about social media. Chances are you and your friends connect on social
networking services or other forms of social media frequently. Every day, people discover new
ways to share things with their network of friends through messaging, photographs, videos,
and blogs. The digital-savvy, connected generation or millennials—teens and those in their
early twenties—"get" social media, but might not be able to accurately define it or explain how
companies use social technology to influence brand attitudes and consumer behavior. After you
venture past the big brand names—Facebook, Twitter, YouTube, and LinkedIn—awareness of
social media tools drops off quickly. Most social media use among Millennials is for recreational
or entertainment purposes. There is little understanding of how social media can be used for
marketing, recruiting, research, collaboration on projects, or personal branding.

Facebook has caught the attention of business organizations because the number of peo-
ple who use the site is huge (and continues to grow!). Businesses are also exploring promo-
tional opportunities on sites like Twitter, Pinterest, LinkedIn, and YouTube. Companies get 24/7
advertising, live interaction with customers and prospects, and targeted ads. Organizations are
working feverishly to prompt consumers to engage—to like, tweet, comment, and share their
brand experiences with others. And they are spending a lot of time doing just that. According
to eMarketer (2016), U.S. companies will spend over $72 billion advertising on social networks,
more than they spend on television ($71.29 billion).

In this chapter, you will learn what makes social media social. You will also learn about
social media applications that have both personal and professional uses, and you will learn
how business organizations make use of social media to gain competitive advantages in the
marketplace.

Case 7.1 Opening Case

Social Customer Service Takes Off at KLM
On April 14, 2010, an Icelandic volcano with a difficult name (Eyjafjal-
lajökull) erupted, spewing volcanic ash several kilometers into the
atmosphere and disrupting air travel for 10 million people across
northwestern Europe for days. Like many other airlines in the area,
KLM Royal Dutch Airlines was overwhelmed by stranded passengers
seeking information by phone and e-mail. While some people waited
on hold for hours trying to reach a call center, other passengers turned
to social media to find answers.

Just a year before, KLM had launched an exploratory effort to
figure out how the company could use social media. With a relatively
new Twitter account and Facebook page, the company’s social media
team suddenly found themselves fielding questions from countless
frustrated travelers. The team quickly set up a special social media

room, called in 100 reinforcements from other units in the company,
and began responding around the clock to inquiries coming in on Face-
book and Twitter. This marked the beginning of social customer service
at KLM. Today, KLM employs over 235 social media service agents who
respond each week to 15,000 questions or comments from customers
in 13 different languages (Table 7.1).

Customer Service Is Not an Option
While KLM uses social media to run contests, entertain passengers,
and promote the airline, customer service remains the clear priority for
the social unit. The company believes that in today’s fast paced, com-
petitive environment, customers expect businesses to provide sup-
port services via social media. Companies that fail this test will suffer
consequences.


TABLE 7.1 Opening Case Overview

Company KLM Royal Dutch Airlines
Location Headquartered in Amstelveen, KLM is the national airline of the Netherlands. KLM currently operates passenger and cargo service to 133 destinations in 70 countries around the world.

History Founded in 1919, KLM is the oldest airline in the world still operating under its original name.

Social media Customers can contact KLM on Twitter, Facebook, Facebook Messenger, and LinkedIn. The company
publishes a blog and also maintains a presence on Pinterest, Google+, YouTube, and Instagram.

Customer service KLM is credited with pioneering the use of social media for customer service. The company is
widely recognized within the airline industry for providing excellent customer support.


Be Where Your Customers Are
A company’s social media strategy can’t be effective if they aren’t
using the same social channels as their customers. Over the years,
KLM has expanded their social media coverage based on the platforms
their customers use. In addition to Facebook and Twitter, custom-
ers can now contact KLM social agents from LinkedIn and Facebook
Messenger. Agents can help customers book or change a flight, check-
in, pick a seat, or assist with any problems that might occur. In addi-
tion, KLM publishes a blog and maintains a presence on Pinterest,
Google+, YouTube, and Instagram.

One-Stop Shopping Means Social Revenue
The KLM model of customer service adheres to a one-stop shop
principle. That means if a customer asks about changing their ticket,
the social agent will look up the information and respond with a cus-
tomized answer instead of just sending a link to the company’s general
terms and conditions Web page. The goal is to resolve the customer’s
issue through the social channel used to contact the company. After
answering questions about flight times, pricing, and other details,
KLM social agents provide a direct link to a payment page where cus-
tomers complete their purchase. For many customers, the process is
simply more convenient than purchasing through another channel.

“Move fast, break things, and don’t be afraid to fail.”

Karlijn Vogel-Meijer, Manager Social Media at KLM, explains that the
company understands mistakes are bound to happen from time to
time, especially if you’re moving fast. The social unit has support from
the top of the organization, giving it the freedom to try new things and
innovate. As a result, KLM is considered a leader for the way its social
customer service unit has pioneered the use of social media to support
customers and maintain high levels of positive sentiment. It’s hard to
be innovative if you’re always worried about making mistakes. As Vogel-
Meijer says, “If you’re afraid, you will stall.” One of KLM’s latest inno-
vations is the use of artificial intelligence, a new technology that helps
agents answer many routine questions they receive from customers.

Faster Response Times
As more companies gear up to engage customers through social
media, customer expectations also increase. When customers contact

a company about a problem, they not only expect an answer, they also
want a response quickly. KLM strives to answer each customer within
30 minutes. At the top of KLM’s Twitter page, the company posts the
average response time and updates it every 5 minutes so even if things
are taking a little longer, customers know when to expect a reply. By
managing expectations, KLM minimizes customer frustrations.

Conclusion
KLM Royal Dutch Airlines is recognized around the world as a leader in social
media customer service. Some airlines aren’t prepared for the chal-
lenges associated with social media. For instance, when a disgruntled
customer contacts a call center to make a complaint, only the customer
and company know about the call. When a customer complains on
social media, the world can see the complaint and how well the com-
pany works to resolve the problem. The social media customer service
unit at KLM not only contributes to the company’s high customer satis-
faction and positive sentiment, but also to the bottom line. In 2015, KLM
estimated that each social agent was responsible for approximately
$170,000 in revenue. With 235 social agents, that translates into over
$39 million.

Questions
1. Why does KLM think that Customer Service is the most important application of social media?

2. How does KLM determine which social media platforms to use?

3. Explain the reasoning behind KLM’s “.  .  .  don’t be afraid to fail”
philosophy.

4. KLM’s “one-stop shop” model probably increases the time a social
agent spends responding to a customer’s inquiry. Why does the
company use this approach?

5. Many airlines have yet to embrace the use of social media like
KLM. What challenges do you think other airlines face when mak-
ing the transition to using social media?

Source: Compiled from Baer and Brown (n.d.), ter Haar (2015), Koetsier (2015), Simson (2015), Azfar (2016), Hutchinson (2016), KLM (2016), Talkwalker.com (2017).

7.1 Web 2.0—The Social Web
In your lifetime, there have been dramatic changes in the way people use the Internet. In the
early 1990s, many people did not have regular access to the Internet, and those who did typi-
cally “dialed up” their network from a home or office telephone. Dial-up access meant long
waits as content from Web pages “downloaded” onto the screen. Some users joked that the
letters “www” in a Web address stood for “world wide wait.” E-mail was the primary mechanism
for communicating on the Internet. Online communities were often like public bulletin boards
where all members of the community could read the messages that others posted. Websites
were static, essentially online billboards for the businesses that created them. Online purchas-
ing (e-commerce) was rare and risky because there were few safeguards in place to protect your
credit card information. But all that has changed.

The Constantly Changing Web
Today, most of us access the Internet using wired or wireless broadband technology, consum-
ing bandwidth that was unheard of a few years ago. We expect to be able to stream audio and
video files, and watch feature-length films over wireless connections and mobile devices. We
surf Web pages that constantly change their appearance in response to how we interact with
them. While e-mail is still a common form of communication in business, young people tend
to view it with disdain in favor of tweets, texts, or social networking sites like Twitter and Face-
book. We keep track of our world, interests, and hobbies by reading blogs and online news-
papers, and use a variety of tools and services for sharing them with others. In addition to
consuming content, we add comments or reviews and signal our appreciation for the content
by retweeting or clicking a “Like” button.

Increasingly, Internet users are becoming content creators—they write their own blogs,
post videos on YouTube, share personal experiences on Facebook, and share pictures using
sites like Flickr or Photobucket. E-commerce continues to grow and evolve, in some cases
changing entire industries. E-books are now more popular than print books on sites like
Amazon. More people purchase music from sites like iTunes or use streaming-music sites like
Pandora or Spotify than purchase music on CDs. Sites like Travelocity and Orbitz have almost
completely replaced traditional travel agencies and agents. Many people are more likely to use
sites like eBay and Craigslist to get rid of unwanted household items instead of holding garage
sales or placing classified ads in a local paper. One of the biggest changes in online retail is the
use of social features by e-commerce sites. Most online retailers make use of customer reviews,
customer ratings, and information sharing on social networks.

While there are many exciting examples of companies that have embraced the potential of
Web 2.0 technologies and the emerging social culture that characterizes our modern online
experience, many businesses, agencies, and individuals have been slow to understand the chal-
lenges and opportunities created by the social Web. Smart managers are constantly evaluating
how changes in social media and related technologies affect their business and industry. Busi-
nesses and business professionals must devote time and resources to consistently monitoring
technological innovation and related changes in consumer behavior in order to remain relevant,
taking advantage of potential opportunities to create competitive advantages when they arise.

Invention of the World Wide Web
The World Wide Web (WWW) was invented by Tim Berners-Lee and launched in 1991. Its use
outside of scientific and academic circles was uncommon until the mid-1990s. Web access from
homes was mostly via telephone lines, slow 56-kbps (kilobits per second) dial-up modems, and
paid subscription network services such as CompuServe and America Online (AOL). Websites
were primitive static designs that served as online billboards or postcards. You can view
archived websites using the Wayback Machine. During that time, e-mail was viewed as a
sophisticated communications tool that most people accessed at work or on college campuses,
but not from home.

As the above description suggests, communication was primarily unidirectional. There
were no easy-to-use conduits for widespread social interaction. The average user was the
target or recipient of communications, not a creator.

A Platform for Services and Social Interaction
Now the Web is a platform for all kinds of activity—shopping, entertainment, news, education,
research, and business processes like logistics and electronic funds transfer (EFT). Homes maintain
broadband wireless networks to connect multiple users simultaneously to the Internet from
computers, tablets, video game systems, and video-streaming devices like the Roku box. In
addition to the aforementioned activities, new technologies gave rise to websites with features
and services that make it easy for people to interact with one another. As a result, these services
collectively are referred to as social media. While the applications that are labeled as Web 2.0 may
simply be an extension of earlier advances, it is the change in user behavior that matters most to
business organizations around the world. The new technologies dramatically increase the ability
of people to interact with businesses and each other, sharing and finding information, and forming
relationships. This perspective explains why Web 2.0 is often called the social Web (Table 7.2).

Web 2.0 a term used to
describe a phase of World Wide
Web evolution characterized
by dynamic Web pages, social
media, mashup applications,
broadband connectivity, and user-
generated content.

World Wide Web (WWW) a
network of documents on the
Internet, called Web pages,
constructed with HTML markup
language that supports links
to other documents and media
(e.g., graphics, video, audio, etc.).

Broadband refers to wide
bandwidth technologies
that create fast, high-volume
connections to the Internet and
World Wide Web.

Social media a collection of
Web applications based on Web
2.0 technology and culture that
allows people to connect and
collaborate with others by creating
and sharing digital content.


Emergence of Social Applications, Networks, and Services
Starting in 2000, a series of developments in the technology and business environment occurred
that set the stage (infrastructure) for Web 2.0.

1. Broad bandwidth (broadband) Internet access became faster and more widely avail-
able due to large-scale adoption of broadband technology. Website load times shrank
from a minute to instantaneous. Huge bandwidth is required to support byte-intensive
music downloads and streaming video and movie services. As residential broadband con-
nections became commonplace and public broadband connections increased in coffee
shops, malls, college campuses, and other community centers, people began to rely on
applications that required fast, high-volume data connections. These broadband connec-
tions increased the overall attractiveness and accessibility of the Internet—laying the foun-
dation for interactivity and the social Web.

2. Sustainable business models After the dot-com bust of the early 2000s, when many badly
conceived Internet businesses failed, a new breed of business emerged. These businesses
had realistic revenue models. Companies like Amazon, Google, eBay, and others began to
demonstrate that it was possible to create e-commerce and consumer service sites that
could generate revenue and become not only self-sustaining, but also profitable.

3. New Web programming technologies New Web programming languages and technologies
were developed that made it possible for programmers to create dynamic and feature-rich
websites. In some cases, these new features and website capabilities created new business
opportunities, which in turn led to increased demand for Web access. Increased Web usage
then led to larger potential markets for businesses with successful revenue models. The busi-
nesses frequently reinvested earnings into expanding their technological capabilities in an
effort to attract even more customers. This cycle of enhanced technological features leading
to greater value for the consumer/Web user and then to more people using the Web continues
today. Some of these new Web technologies are described in more detail in Tech Note 7.1.

4. Application programming interfaces (API) and software development kits (SDK) One
of the big differences between Web 1.0 and Web 2.0 is the extent to which business
organizations are willing to share information (data) with other organizations and
developers who are creating new programs or services. For instance, Google Maps might
allow a restaurant review website like Yelp.com to use its mapping application to create a
feature on the Yelp site showing restaurant locations on a map. Historically, businesses have
been highly protective of their intellectual property and were generally unwilling to share it
with other companies. However, many companies that emerged during the Web 2.0 era have
come to recognize the benefits of certain types of sharing or collaboration with others. For
instance, when Google makes part of its mapping program available to others, it increases
the number of people using the Google product and expands its share of the marketplace.
In turn, Yelp frequently makes some of its data available to companies developing new
applications. For instance, Trulia is a real estate company that helps people find new homes.

TABLE 7.2 Web 1.0 versus Web 2.0

Web 1.0—The Early Web | Web 2.0—The Social Web

Static pages, HTML | Dynamic pages, XML, and Java

Author-controlled content | User-controlled content

Computers | Computers, cell phones, televisions, PDAs, game systems, car dashboards

Users view content | Users create content

Individual users | User communities

Marketing goal: influence | Marketing goal: relationships

Data: single source | Data: multiple sources, for example, mashups


They use information from Yelp.com to help their customers learn about businesses, res-
taurants, grocery stores, and other amenities in the neighborhood they are considering,
without having to leave the Trulia website. Businesses have learned that when done right,
sharing information often creates synergies that benefit all involved. From a technology
standpoint, two programming tools make this data sharing possible: APIs and SDKs.
An API is a set of commands and programming standards used by developers to write appli-
cations that can communicate with other applications. In other words, it aids developers in
determining how their applications can pass data back and forth with some other application.
An SDK is a bit more complex than an API. An SDK is a collection of software tools used by
developers for writing applications that run on a specific device or platform. For instance,
a Facebook SDK helps third-party developers write programs that will run on Facebook.
Businesses that share data with other companies usually write the APIs and SDKs that
define the rules and restrictions for information sharing. In that way, they retain control
over who uses their data and how it is used. If a developer simply needs to share data with
another website, they will likely use an API created by the other website. If they are creating
an application that will actually run on the other website, they will most likely use an SDK
developed by the other website. Together, APIs and SDKs have fundamentally changed
the degree to which businesses share their information resulting in a vastly improved and
more useful World Wide Web.
While APIs and SDKs can be either proprietary (the user pays a fee) or open source, most
popular APIs are open source, which means that anyone can use them for free, although
other conditions may be placed on their use. Visit ProgrammableWeb.com for a listing of
popular APIs.
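
To make the role of an API a little more concrete, the sketch below shows how a developer might request data from another organization's web service and display it on their own site. It is a minimal, hedged illustration in TypeScript; the endpoint, API key, parameters, and response fields are hypothetical placeholders, not any real vendor's published interface.

```typescript
// Hypothetical example: pulling nearby-restaurant data from a partner's API.
// The endpoint, API key, and response shape are illustrative assumptions,
// not any real vendor's published interface.
interface Restaurant {
  name: string;
  rating: number;
  address: string;
}

async function fetchNearbyRestaurants(lat: number, lng: number): Promise<Restaurant[]> {
  const url = new URL("https://api.example-reviews.com/v1/restaurants");
  url.searchParams.set("lat", String(lat));
  url.searchParams.set("lng", String(lng));

  const response = await fetch(url.toString(), {
    headers: { Authorization: "Bearer YOUR_API_KEY" }, // most commercial APIs require a key
  });
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return (await response.json()) as Restaurant[];
}

// Usage: show partner data on our own page without the visitor leaving our site.
fetchNearbyRestaurants(40.7128, -74.006).then((places) =>
  places.forEach((p) => console.log(`${p.name} (rating ${p.rating}) - ${p.address}`))
);
```

The point of the sketch is the data-sharing pattern itself: one company publishes an interface, another pulls that data into its own pages, and both benefit from the wider exposure.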

Tech Note 7.1

AJAX Technologies and APIs
AJAX, short for asynchronous JavaScript and XML,
refers to a group of technologies and programming lan-
guages that make it possible for Web pages to respond to users’
actions without requiring the entire page to reload. AJAX makes it
possible for Web developers to create small apps that run on a page
instead of a server. This capability makes programs run much faster,
eliminating a key source of frustration with the early Web. Another
important programming development is the API, which acts as a

software gateway programmers can use to pass data back and forth
between two or more applications, platforms, or websites (see IT at
Work 7.1). With AJAX and APIs, website programmers can import
data from other sources to create new functions and features that
we have come to associate with social media applications (see the
discussion of mashups later in this chapter).

AJAX technologies include JavaScript, extensible markup
language (XML), document object model (DOM), hypertext
markup language (HTML), XMLHttpRequest, and cascading
style sheets (CSS), all of which are defined in Table 7.3.

TABLE 7.3 AJAX Technologies for Web 2.0

Hypertext markup language (HTML): The predominant language for Web pages; it is used, along with CSS, to
describe how things will appear on a Web page.

Cascading style sheets (CSS): A language used to enhance the appearance of Web pages written in a
markup language.

Document object model (DOM): A programming API for documents. Programmers use it to manipulate (e.g.,
build, add, modify, delete, etc.) HTML documents.

Extensible markup language (XML): A set of rules and guidelines for describing data that can be used by other
programming languages. It makes it possible for data to be shared across the Web.

JavaScript: An object-oriented language used to create apps and functionality on websites. Examples of JavaScript
apps include pop-up windows, validation of Webform inputs, and images that change when a cursor passes over them.

XMLHttpRequest: A JavaScript object that serves as an API used by programs to retrieve data or resources from
a URL without requiring a page load. It plays an important role in providing programmers with the ability to create
dynamic and interactive Web pages and applications.

Sources: van Kesteren et al. (2014), techterms.com (2014), W3C (2015), Grigorik (2017) .
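
To illustrate the AJAX pattern summarized in Tech Note 7.1, here is a minimal browser-side sketch in TypeScript. The endpoint and the element id are made-up assumptions used only to show how XMLHttpRequest updates one part of a page without a full reload.

```typescript
// Minimal AJAX sketch (browser code): refresh one part of a page without reloading it.
// The endpoint "/api/latest-comments" and the element id "comments" are assumptions.
function refreshComments(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/latest-comments", true); // true = send the request asynchronously
  xhr.onload = () => {
    if (xhr.status === 200) {
      const comments: string[] = JSON.parse(xhr.responseText);
      const target = document.getElementById("comments");
      if (target) {
        // Only this element changes; the rest of the page stays exactly as it was.
        target.innerHTML = comments.map((c) => `<p>${c}</p>`).join("");
      }
    }
  };
  xhr.send();
}

// Poll every 30 seconds so the page stays current, as dynamic Web 2.0 pages do.
setInterval(refreshComments, 30_000);
```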


Why Managers Should Understand Web Technology
You might ask yourself why business managers who are not directly involved in managing an
organization’s website should be concerned about the underlying technology of Web 2.0 and
social media. The answer is that these technologies determine website features and capabil-
ities. In other words, they determine what is possible on the Web. Understanding how Web
technology is evolving helps managers identify strategic opportunities and threats as well as
the ways in which a company might develop sustainable competitive advantages in the mar-
ketplace. Therefore, it is important to monitor the ongoing development of APIs, Web develop-
ment languages, and other technologies that affect the functioning of the Web.

APIs For instance, APIs associated with Facebook determine the nature of apps that can
be written to interact with core Facebook features. Major changes to the Facebook APIs
are often rolled out to much fanfare because they define opportunities for developing new
ways for users to create and share content on Facebook and across the Web, as described
in IT at Work 7.1.

At Facebook’s annual developer conferences in 2010 and 2011, founder Mark Zuckerberg made
announcements about changes in Facebook APIs that would extend the social networking
giant’s presence across the Web through the use of social plug-ins, which are listed in Table 7.4.
See the discussion of Open Graph in Section 7.2.

IT at Work 7.1

Myntra Leverages Facebook APIs and SDKs
for Success in Mobile Fashion Sales
Myntra is India’s largest fashion e-commerce company, serving
millions of customers and featuring over 2,000 of the world’s top
fashion brands. The company generates sales of over 200,000 items
from its mobile app on any given day. Myntra is recognized as being
the world’s first mobile-only e-commerce platform and reportedly
sold $500 million in gross merchandise volume in FY2015–16. Using
Facebook’s Open Graph API and SDK, the company was able to
install features that let customers easily post information to their
Facebook pages without leaving the Myntra app. This makes it pos-
sible for the company to leverage the social network of each cus-
tomer to increase brand awareness and interest in the marketplace.

Using Facebook’s SDK, the company implemented Facebook Login
for its app as well as developed programs to access customer
insight data and a range of analytics about the performance of their
Facebook ads, conversion channels, and the success of various cus-
tomer retention strategies. As a result of these integrations with
Facebook, Myntra experienced significant growth and credits Face-
book for as much as 25% of its sales revenue. In addition, Myntra
improved the effectiveness of ad targeting and reduced advertising
costs after learning that customers who use Facebook Login to
access the e-commerce app were 32% more likely to convert (make
a purchase) than other customers.

Sources: “Myntra – Best of Fashion” at developers.facebook.com.

TABLE 7.4 Facebook Social Plug-Ins Used Across the Web

Plug-in What It Does. . .
Like button Shares pages from a website back to a user’s Facebook profile with a single click.

Send button Allows users to send content from a website to their Facebook friends.

Comments This plug-in allows users to comment on a Web page’s content using their Face-
book profile and shows the activity to the user’s friends in a newsfeed.

Embedded Places content from any public Facebook post on to a website post or blog.

Facepile This feature displays the profile photos of the people who have connected with a
Facebook page or app.

Login button Shows profile pictures of the user’s friends who have already signed up for a
Website in addition to a login button.

Source: Facebook (2014).


Plug-ins Plug-ins are buttons or features on non-Facebook sites that interact with Face-
book in some way. For instance, CNN.com might include a Recommend button on all its
news articles. When a Facebook user presses the button, a link to the story is automatically
created on the user’s Facebook page. You don’t have to be a Web programmer to follow
and understand public announcements about API updates from Facebook, Google Maps,
YouTube, Twitter, and other popular social media platforms. Using the monitoring tools
discussed later in this chapter, you can stay informed about these changes and begin to
assess how they will impact you as an individual, website developer, or business manager.
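
As a small, hedged illustration of how a plug-in-style "share" feature can hand content from a non-Facebook page back to Facebook, the TypeScript sketch below builds a link to Facebook's public share dialog. The sharer URL and its "u" parameter are widely used today but can change, so treat them as assumptions to verify against Facebook's current developer documentation.

```typescript
// Hedged sketch: build a "share to Facebook" link for an article page.
// The sharer URL and its "u" parameter are widely used but could change,
// so verify them against Facebook's current developer documentation.
function buildFacebookShareLink(articleUrl: string): string {
  const shareUrl = new URL("https://www.facebook.com/sharer/sharer.php");
  shareUrl.searchParams.set("u", articleUrl); // "u" = the page being shared
  return shareUrl.toString();
}

// Attach this link to a button so readers can share the story with their friends.
console.log(buildFacebookShareLink("https://www.example-news.com/story/123"));
```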

Communicating on the Web
Collectively, social media apps have shifted the locus of control for mass communications from
large organizations to one shared with individual users. Now people as well as organizations
control both the message and the medium. Instead of an organization broadcasting a single
message to a mass audience, a massive number of conversations take place among any num-
ber of people and organizations.

No one has complete control over the message or the medium, yet everyone can play a
part. The challenge for businesses today is to change mindsets and develop strategies that
take advantage of social media. Instead of a focus on developing sophisticated ways of getting
their message heard, companies must now develop sophisticated strategies for listening and
responding to what their consumers are saying.

Because of the relatively low cost and ease of use, social media is a powerful force for
democratization; the network structure enables communication and collaboration on a mas-
sive scale. Figure 7.1 shows the emergence of mass social media. The figure compares tradi-
tional and social media and illustrates the new tools of social media, for example, blogs and
video blogs (vlogs), as being in the consumer’s control. With traditional media, content is
tightly controlled and brand messages are “pushed” out to users, often in the form of an ad
interruption. With social media, users are frequently attracted or “pulled” to content that is
interesting to them and they have greater freedom to decide if, when, and how they want to
interact with such content.

Notice that traditional media content goes from the technology to the people, whereas in
social media, people create and control the content.

[Figure 7.1 contrasts traditional media (print, radio, television, movies, media outlets), which sit under institutional control and are socially consumed, with social media (the blogosphere and blogs, videos, podcasts, tweets, forums, wikis, and Enterprise 2.0 platforms), which sit under consumer control and benefit from network effects; comments, instant messages, feedback, ratings, and reviews shift users from observing to publishing and contributing.]

FIGURE 7.1 The emergence and rise of mass social media.


Social Media Applications and Services
Early descriptions of Web 2.0 would often identify the applications listed in Table 7.5 as typical
of social media. You will read more about each of these applications later in the chapter.

Few applications fit neatly into these categories anymore because of feature convergence.
For instance, Facebook started as a social networking service (SNS), but now has features that
span almost all of the categories in Table 7.5. It is a sharing site used by many to distribute photos.
It is increasingly common for people to tag or label photos with the names of people in the picture,
making it easy to find and display photos of individuals that have been saved in multiple locations
on Facebook. Users can maintain blogs on their Facebook page and Facebook hosts thousands
of apps that pull data from sources outside of the social network, making it a huge mashup app.

Likewise, YouTube started as a sharing site, making it easy for people to share video clips
with others. However, YouTube now contains many features that make it difficult to distin-
guish from an SNS. The same is true of Flickr, a photo-sharing site that has really become a
community platform for people interested in photography.

While some original social media applications are still present on the Web today, thou-
sands of newer applications have sprung up and continue to blur the lines of the original social
media application categories.

Social Media Is More than Facebook, YouTube, and Twitter
Many people think that social media is limited to a few iconic companies or brand names: Face-
book, Twitter, YouTube, and LinkedIn. While those companies have certainly capitalized on the
new technology and tend to dominate their respective market niches, social media is a term
describing a range of technologies that are used across the Internet and are part of most web-
sites you use today.

While you may be familiar with using social media for recreational purposes or com-
municating with friends and family, businesses use social technologies for a wide variety of
other benefits:

• Collaboration
• Communication and engagement with customers (marketing)
• Image and reputation management (public relations)
• Communication and engagement with employees and partners (management)
• Talent acquisition and recruiting (human resources)
• Research and knowledge management
• Productivity and information utilities
• Fund raising

Social networking service
(SNS) an online platform or
website that allows subscribers
to interact and form communities
or networks based on real-life
relationships, shared interests,
activities and so on.

TABLE 7.5 Web 2.0 Applications

Application Description
Social networking service Online communities

Blogs Online journals

Mashups/widgets/RSS Web applications that pull data from various sources and display on
another page to create new functionality

Social bookmarking/tags An application for tagging or labeling online content for later retrieval

Wikis A collaborative application that allows multiple people to create and
edit online content

Sharing sites Websites that make it easy for users to upload and share digital
content like photos, videos, or music


The following section lists some of the key elements of social media that distinguish it from
other types of media.

Elements of Social Media: What Makes It Different? In order to understand
what makes the modern Web so different from its earlier incarnation, it is helpful to understand
the differentiating features and benefits made possible by XML, JavaScript, APIs, and related
technologies.

User-generated content (UGC) In contrast to traditional media—TV, radio, and
magazines—social media makes it possible for users to create and share their own content.
Using social technologies, people share photographs, music, and video with the world.
They express themselves using the written word in stories, articles, and opinion pieces
that they publish on their own websites or other platforms. They rate products and write
reviews. Many individuals and groups have become Internet celebrities as a result of the
shows they created for YouTube. And because of YouTube’s revenue-sharing policy, those
that attract the largest audiences earn millions of dollars.
Content control Most content creation and sharing is done without editorial review. As
a result, users decide for themselves what they want to create and share. Social technolo-
gies have shifted control of online content to a broad base of users. It is users who deter-
mine what content “goes viral” or becomes highly popular through sharing, not advertising
agencies or companies with large advertising budgets.
Conversation With the advent of social media, a paradigm shift occurred in marketing
communications from a broadcast (one-way) model to a conversation (two-way) model.
Dialogue takes place in the form of one-to-one, one-to-many, and many-to-one formats.
Social media websites contain features that allow people to talk back in a variety of ways.
Community (common values, culture) Many social media technologies ultimately
result in the creation of communities. Like their offline counterparts, these online commu-
nities are made up of people who share a bond of common interests, values, norms, and
even sanctions. Some communities are highly structured, whereas others may be more
fluid and informal. As businesses learn to communicate on Web 2.0, some will attempt
to create communities made up of consumers who have a strong interest in a particular
brand. Social networking services lend themselves to this type of strategy, but brand com-
munities can be developed around blogs, wikis, sharing sites, and many other types of
social media applications.
Categorization by users (tagging) Newer Web technologies have begun to allow users
to decide for themselves how to categorize and label information they find online. This has
created the potential for powerful forms of collaboration and information sharing as well
as alternative forms of information search (see the discussion of social bookmarking later
in the chapter).
Real people (profiles, usernames, and the human voice vs. the corporate “we”)
Social media technologies allow people to express their individuality through the crea-
tion of online identities. In traditional media, communication and expression come from
celebrities or corporate spokespersons. Web 2.0 provides people with the tools to create
personal brands that characterize their personal, professional, or creative identity.
Connections (followers, friends, members, etc.) There are many ways to establish
additional levels of connection and reflect some level of a relationship. You can become
someone’s friend on Facebook. Follow someone on Twitter. Subscribe to a person’s blog.
Perhaps just as important, these connections can be severed when one party wants to end
the relationship.
Constant updating (real time, dynamic) Unlike the static Web of the 1990s, social tech-
nologies reflect our constantly evolving relationships, opinions, political views, religious
beliefs, and values. The social Web is a constant stream of communications that never
turns off and can sometimes be overwhelming. Popular examples of this characteristic
include Twitter, Facebook Live, or Snapchat.


Content separated from form Data from one source can be used or exported to other
platforms. This allows users to organize and display content in ways they find most helpful.
For instance, with a really simple syndication (RSS) aggregator, users pull content from a
number of sources into a single location, making it easier to follow news stories and blog
posts from multiple sites. Someone writing about local restaurants can pull content from
food critics, customer comments, and map location information from a variety of sources
and aggregate this information into a single site, making it easier for users to get a com-
plete picture of a restaurant without having to surf around to different sites (a brief sketch of this kind of feed aggregation follows this list).
Equipment independence Increasingly, people access the Web and social media from
a variety of computers and mobile devices, including laptops, tablets, smartphones, video
game systems, DVD players, and televisions. In the near future, you might access the Web
from such home appliances as your refrigerator or even a kitchen countertop. (Check out
the amazing new technology featured on videos by Corning Glass. Go to YouTube and
search for “A Day Made of Glass” using the YouTube search engine.)
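
As referenced in the "Content separated from form" item above, here is a minimal sketch of feed aggregation in TypeScript. The feed URLs are placeholders, and a real browser deployment would likely need a CORS-friendly proxy; the point is simply that content can be pulled from multiple sources and redisplayed in one place.

```typescript
// Hedged sketch: a tiny feed aggregator that pulls headlines from several RSS feeds
// into one combined list. The feed URLs are placeholders, and browser security rules
// (CORS) may require a proxy in practice.
const feeds = [
  "https://example-food-critic.com/rss.xml",
  "https://example-local-news.com/feed",
];

async function aggregateHeadlines(): Promise<string[]> {
  const headlines: string[] = [];
  for (const feedUrl of feeds) {
    const xmlText = await (await fetch(feedUrl)).text();
    const doc = new DOMParser().parseFromString(xmlText, "application/xml");
    doc.querySelectorAll("item > title").forEach((title) => {
      if (title.textContent) headlines.push(title.textContent.trim());
    });
  }
  return headlines; // one list, no matter where each story originally came from
}

aggregateHeadlines().then((titles) => titles.forEach((t) => console.log(t)));
```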

With Web 2.0, Markets are Conversations
As you have read, the availability of Web 2.0 applications is changing not only how people
behave but also the way they think about things. This new way of thinking is captured in a pro-
vocative list of 95 statements called the Cluetrain Manifesto (Levine et al., 2000). Perhaps the
fundamental principle of the Manifesto is described by its first thesis: Markets are conversations.
Other excerpts from the Manifesto are listed in Table 7.6. Over time, successful companies will
learn to engage customers in conversations as an alternative to the unidirectional or broadcast
method of communication. While the Cluetrain Manifesto seemed idealistic, impractical, and
revolutionary when it was first written in 2000, we are starting to see increasing examples of
individuals and companies turning those principles into action.

While many companies still struggle with the concept of conversation, Forrester researchers
Charlene Li and Josh Bernoff (2008) describe a number of companies that recognize the power
of what they call the groundswell, “a spontaneous movement of people using online tools to
connect, take charge of their own experience and get what they need—information, support,
ideas, products, and bargaining power—from each other”. Li and Bernoff identify five key stra-
tegic priorities that companies should focus on to leverage the groundswell:

1. Listening Monitoring what your customers say on social media. By listening to what
customers say to your company and what they say to each other, organizations can gain
valuable insights.

2. Talking While listening is perhaps the most important priority, businesses still need to
develop their message and communicate to their target audience(s).

3. Energizing Using a variety of tactics, companies can create and maintain relationships
with brand advocates who will support and promote the brand to their friends and follow-
ers on the Web. Energizing brand advocates is analogous to generating word-of-mouth
communications in traditional marketing.

TABLE 7.6 Excerpts from the Cluetrain Manifesto

Select Cluetrain Theses

• “Markets are conversations.”
• These conversations enable powerful forms of social organization and knowledge exchange.
• People have figured out they obtain better information and support from one another than from vendors. So much for corporate rhetoric about adding value to commoditized products.
• Companies should realize their markets are often laughing. At them.

Source: Levine et al. (2000).


4. Supporting Using social media to deliver effective and convenient customer service is
one way to support your customers. Some businesses create communities where custom-
ers can help each other with product-related issues and questions.

5. Embracing Many companies are utilizing social media to solicit new product ideas
and suggestions for improving customer satisfaction from current customers. Managers
are often surprised to learn that customers have great ideas for how the company can
do better.

These groundswell strategies identify the most significant activities that companies should
focus on with regard to using social media.

In the rest of this chapter, we describe a variety of social media applications that are
growing in popularity. We highlight some of the most attractive features, and encourage you
to explore them firsthand. Most are free, so they are easy to try. You are also encouraged to
stay on top of new trends and applications by following online sources like Mashable, Social
Media Today, and Social Media Examiner. The only way to truly understand the social media
environment is to immerse yourself in it, experiencing it directly. We think it is both fascinating
and fun, and hope you will too.

Questions

1. How has Web 2.0 changed the behavior of Internet users?

2. What are the basic tools or applications that characterize Web 2.0?

3. Why is Web 2.0 referred to as the social Web?

4. What are some of the benefits or advantages that Web developers gain from using AJAX technologies?

5. What are some of the most important messages for business organizations in the Cluetrain Manifesto?

6. What is feature convergence? Give some examples of this trend with regard to social media apps.

7.2 Social Networking Services
and Communities
Online or virtual communities parallel physical communities, such as neighborhoods, clubs,
and associations, except they are not bound by political or geographic boundaries. These
communities offer several ways for members to interact, collaborate, and trade. Virtual or
online communities have been around for a long time and predate the World Wide Web. The
Usenet provided the initial platform for online communities by making it possible for users to
exchange messages on various topics in public newsgroups, which are similar in many ways to
online bulletin board systems. While the Usenet is technically not part of the Internet, much of
its content can be accessed from Internet sites like Google Groups or subscription-based news
services like Giganews and Astraweb.

Online communities can take many forms. For instance, some people view the blogosphere
(all the blogs on the Web) as a community. YouTube is a community of people who post, view,
and comment on videos. Epinions is a community of people who share their experiences
and opinions about products and companies. Flickr, Photobucket, and similar sites are
photo-sharing communities. Wikipedia is a community of people who create, edit, and main-
tain an online knowledge base. Twitter is a community, or perhaps several communities, of
people who frequently exchange short, 140-character messages with one another about a
variety of topics. Obviously, social networking sites like Facebook and LinkedIn are commu-
nities and have seen tremendous growth in recent years. People today spend a significant
portion of their time on social networks (see Figure 7.2). For better or worse, social media has
changed the way we interact with others, how we communicate with companies and brands,
how we learn about local and international events, and how we define relationships, reputa-
tion, privacy, group affiliations and status.

Social network analysis (SNA) is the mapping and measuring of relationships and flows
between people, groups, organizations, computers, or other information- or knowledge-
processing entities. The nodes in the network are the people and the groups, whereas the links
show relationships or flows between the nodes. SNA provides a visual and a mathematical
analysis of relationships. In its corporate communications, Facebook has begun using the
term social graph to refer to the global social network reflecting how we are all connected to
one another through relationships. Berners-Lee (2007) extended this concept even further
when he coined the term giant global graph. This concept is intended to illustrate the
connections between people and/or documents and pages online. Connecting all points on
the giant global graph is the ultimate goal for creators of the semantic Web, which you read
about in Chapter 6.
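
To show what the nodes and links of social network analysis look like in practice, here is a minimal sketch in TypeScript that computes one of the simplest SNA measures, the number of direct connections (degree) for each person in a tiny, made-up network. It illustrates the general idea only, not the method of any particular SNA tool.

```typescript
// Minimal SNA sketch: count direct connections (degree) for each person in a tiny,
// made-up network. Names and links are purely illustrative.
const links: [string, string][] = [
  ["Ana", "Ben"],
  ["Ana", "Chris"],
  ["Ben", "Chris"],
  ["Chris", "Dana"],
];

function degreeCentrality(edges: [string, string][]): Map<string, number> {
  const degree = new Map<string, number>();
  for (const [a, b] of edges) {
    degree.set(a, (degree.get(a) ?? 0) + 1);
    degree.set(b, (degree.get(b) ?? 0) + 1);
  }
  return degree;
}

// Chris has the most links (3), making Chris the most "central" node in this small graph.
for (const [person, count] of degreeCentrality(links)) {
  console.log(`${person}: ${count} connection(s)`);
}
```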

Online communities have received increasing attention from the business community.
Online communities can be used as a platform for the following:

• Selling goods and services
• Promoting products to prospective customers; for example, advertising
• Prospecting for customers
• Building relationships with customers and prospective customers
• Identifying customer perceptions by “listening” to conversations
• Soliciting ideas for new products and services from customers
• Providing support services to customers by answering questions, providing information, and so on
• Encouraging customers to share their positive perceptions with others; for example, via word of mouth

Semantic Web an extension of
the World Wide Web that utilizes
a variety of conventions and
technologies that allow machines
to understand the meaning of
Web content.

[Figure 7.2 is a bar chart of average weekly minutes spent on social media, by platform (Facebook, Snapchat, Instagram, Pinterest, Tumblr, Twitter, Google+, LinkedIn, and Vine) and by age group (18 to 34 versus 35 and older). Facebook leads both groups by a wide margin, at roughly 1,039 minutes per week for the younger group and 900 for the older group.]

FIGURE 7.2 Data collected in 2016 illustrate that people spend more time on Facebook
than any other social networking site. One of the newest social platforms, Snapchat, is
already in second place across age groups, although younger people spend almost three
times longer on the service than older people. (Adapted from comScore, 2016.)


• Gathering information about competitors and marketplace perceptions of competitors
• Identifying and interacting with prospective suppliers, partners, and collaborators (See the discussion of Enterprise 2.0 in the next section.)

The Power of the Crowd
In recent years, several companies have created online communities for the purpose of
identifying market opportunities through crowdsourcing. Crowdsourcing is a model of
problem solving, production and idea generation that marshals the collective talents of a
large group of people. Using Web 2.0 tools, companies solicit, refine, and evaluate ideas for
new products and services based on input from their customers. Business organizations that
have implemented this approach include Fiat, Sara Lee, BMW, Kraft, Procter & Gamble, and
Starbucks. See Table 7.7 for a list of other examples.

Crowdfunding
More recently, businesses and entrepreneurs have turned to the crowdsourcing model to raise
money for business start-ups or projects. A number of crowdfunding sites have become
popular in recent years, including GoFundMe and Kickstarter. Each crowdfunding site is
governed by different rules that establish the kinds of projects or organizations that can use
them and the types of crowdfunding allowed on the site (Table 7.8).

TABLE 7.7 Examples of Crowdsourcing Websites

Category Crowdsource Websites

R&D crowdsourcing

InnoCentive—Challenge Driven Innovation
Yet2—Innovation and IP Marketplace
NineSigma—Technology Problem Solving
Hypios—Problem Solving for Advanced Technology

Crowdsourcing for
marketing, design,
and ideas

Brand Tags—Brand Identification from the Crowd
Guerra Creativa—Logos and Designs
LeadVine—Leads and Referrals
Challenge.gov—Solutions to Government Problems

Crowdsourced product
ideas

Procter & Gamble—Crowdsource Product Ideas for P&G
Quirky—Community Sourced Product Ideas
CafePress—Buy, Sell, Create Your Product

Crowdsourcing HR &
freelance work

Amazon Mechanical Turk—“A Marketplace for Work”
Clickworker—Cloud-based global workforce
Topcoder—Crowd Coding

Crowdfunding websites

ArtistShare—New Artist Projects
Kickstarter—Large, general crowdfunding site
GoFundMe—For personal fundraisers
Crowdrise—Funding for Inspiring Social Causes

Peer-2-peer websites

Wikipedia—Online Encyclopedia Produced by the People
Quora—Answers from Experts, Amateurs and Insiders, Voted
Up or Down
Yahoo Answers—Another P2P Question & Answer Site
Diigo—Crowdsourced Web Bookmarks, Tags, and More

Adapted from Board of Innovation Crowdsourcing Examples. See this page for additional examples.


Crowdfunding sites typically collect a percentage of the money raised, but even this can vary, so it is important to
read the terms of service (TOS) agreement carefully before selecting a site to raise money on.
See Crowdfunding.com for a list of the most popular sites.

Social Networking Services
Social networking services represent a special type of virtual community and are now the dom-
inant form of online community. With social networking, individual users maintain an identity
through their profile and can be selective about which members of the larger community they
choose to interact with. Over time, users build their network by adding contacts or friends.
On some social networks, organizations create an identity by establishing discussion forums,
group pages, or some other presence.

The number of SNSs has grown tremendously in recent years. It is expected that the social
networking sector will segment and consolidate in the future just like other industries. Among
general-purpose SNS platforms, Facebook is the clear leader with over 2 billion active users.
Facebook’s dramatic growth over the past decade has been unparalleled in the social media
world (Table 7.9).

Terms of service (TOS)
agreement a formal listing of
the policies, liability limits, fees,
user rights and responsibilities
associated with using a particular
service. Users are typically
required to acknowledge they
have read, understand, and agree
to the TOS before they are allowed to use the service.

TABLE 7.8 Types of Crowdfunding

Donations Often used by charities and political campaigns. Contributors do not receive anything
tangible in exchange for their donation, just the knowledge that they are supporting
a cause they like or believe in. (In some cases, contributors may be eligible for a tax
write-off.)

Rewards Contributors receive some kind of “perk” or thank-you gift. Often, it is something
related to the project. For instance, people who contribute to a filmmaker’s project
may receive a copy of the finished work on DVD.

Credit Contributors essentially make microloans to fund projects and expect to be repaid
with interest.

Equity Contributors make “micro investments” and receive a proportional ownership stake
in the company. It is likely that regulatory agencies that oversee equities markets in
the United States and other countries will establish rules governing or even restrict-
ing this type of crowdfunding.

Royalties Contributors receive a percentage of the sales revenue generated by a project. For
instance, people who contribute to a musician’s recording project might receive roy-
alties from the sale of the artist’s music.

Sources: Outlaw (2013), Wikipedia (2014).

TABLE 7.9 Facebook Statistics

2 billion monthly active users as of July 5, 2017

1.28 billion daily active users on average for March 31, 2017

1.74 billion mobile monthly active users as of December 31, 2016

1.15 billion mobile daily active users on average for December 2016

Approximately 85.8% of daily active users are outside the United States and Canada

Facebook owns several other companies including Instagram (mobile photo-sharing), WhatsApp
(mobile messenger app), Oculus (virtual reality), Moves (activity log), and Masquerade (selfie filters)

Over 2 billion people use one of Facebook’s mobile messenger apps every month—Facebook
Messenger or WhatsApp. That’s in addition to the billions of active Facebook.com users

Source: Facebook, Inc.


Facebook Dominates Social Networking
Facebook was launched in 2004 by a former Harvard student, Mark Zuckerberg. Facebook
features include Profile, News Feed, Messenger, Groups, Events, Photo and Video Sharing,
Search and Pages (for individuals, groups, and organizations to create public profiles). Apart
from these basic applications, users can add any of the millions of Facebook apps that have
been developed by others. Today over 90% of users access the site from mobile devices
instead of desktop computers and over 50% only access the site from a mobile device (see
Figure 7.3).

Facebook pioneered the Newsfeed feature, a constantly updated stream of status updates
and postings from a user’s friends. Today the Newsfeed also contains sponsored posts (adver-
tisements) as part of Facebook’s growing advertising program. In late 2011, Facebook intro-
duced another major revision to its site called Timeline. The Timeline app is designed to show
the chronological progression of key events in a person’s life as illustrated by his or her Facebook
status updates, photos, songs listened to, bad haircuts, as well as changes in occupations, loca-
tions, relationships, and the like. The Timeline feature effectively curates all the content users
share on the networking service. When it was initially launched, many users were surprised by what they felt was a radical interface change and were uncomfortable with how easy it became for others to access old, long-forgotten posts and status updates. Facebook responded with privacy features that gave users greater control over who could view their content.

While SNS sites share some common features, they are not all alike. As the category
matures, sites are differentiating themselves in a variety of ways:

• Target age group
• Geographic location of users
• Language
• Area of interest, for example, music, photography, gaming, travel
• Social versus professional networking (see IT at Work 7.2)
• Interface, for example, profile page, microblog, virtual world, emphasis on graphic versus text content

IT at Work 7.2

Recruiters Use Professional Networking Sites
Susan Heathfield, a human resources (HR) expert at About.com,
maintains that it is no longer sufficient to post job openings on
monster.com, Careerbuilder.com, and Craigslist.com. Job postings
on these large sites often generate hundreds of applications from
unqualified candidates. This can be overwhelming for recruiters
and very inefficient. Instead, many have turned to professional
networking sites like LinkedIn. Heathfield identified a number of
specific ways that businesses recruiters use LinkedIn to increase
their effectiveness:

• Identify potential candidates among their existing network of
professionals.

• Ask their network to identify or recommend candidates for
a position.

• Evaluate potential employees based on references and refer-
rals from their existing network.

• Actively search for relevant keywords or qualifications in the
profiles of LinkedIn users.

• Ask current employees to search among their LinkedIn net-
works for potential candidates.

• Post job openings on LinkedIn.

• Request introductions to potential candidates through their
existing network of professionals.

• Use Inmail (the internal LinkedIn e-mail system) to contact
potentially qualified individuals.

It is clear that recruiters have come to embrace LinkedIn as
an effective and cost–efficient way of generating qualified candi-
dates. As LinkedIn’s global presence grows, this will provide an
important benefit to companies who need to fill positions inter-
nationally.

IT at Work Questions
1. Why have monster.com, Careerbuilder.com, and Craigslist.com lost their effectiveness?

2. Why have HR departments turned to professional networking sites like LinkedIn?

3. Why is it so essential for career-minded workers to build a professional social network? What can this network do for you?

Source: Heathfield (2012).



When Zuckerberg created Facebook, he had very strong social ambitions aimed at helping
people connect to others on the Web. Facebook was initially an online social space for college
and high school students. It started by connecting students to all others at the same school.
In 2006 Facebook expanded to anyone 13 years or older with a valid e-mail address. The lack
of privacy controls was among the biggest reasons why some business people resisted joining
Facebook during its early years.

In 2008 Facebook introduced controls that allowed users to set different access levels to
information for various groups of people in their network; for example, family, friends from
school, friends from work, and so on. For instance, close friends might see your mobile phone
number, music favorites, e-mail address, and so forth, while other friends might see only the
basic information. Facebook is sometimes criticized for its approach to user privacy, high-
lighting an ongoing tension between the corporate goals of Facebook, which depends on a high
level of access to user data, and the desire of individual users to control access to their personal
information. See IT at Work 7.3 for additional information about social media privacy issues.

Facebook has expanded to the rest of the world with the help of its foreign-language
members: Engineers first collected thousands of English words and phrases throughout the site
and invited members to translate those bits of text into another language. Members then rated
translations until a consensus was reached. The Spanish version was created by about 1,500
volunteers in less than a month. The German version was created by 2,000 volunteers in less than
2 weeks. In early March 2008, Facebook invited French members to help out. They completed the
translations in a few days. Facebook exists in over 100 different languages and approximately
85% of users reside outside of the United States and Canada. In May 2012, Facebook went public
with its initial public offering (IPO), selling company shares on the NASDAQ stock exchange. It
raised over $16 billion, making it the third largest ever IPO in U.S. history. While founder Mark
Zuckerberg sold some 30 million shares during the offering (for $1.15 billion), he continues to
own approximately 15% of the company. His net worth of over $56.7 billion places him among
the 10 wealthiest people on the planet.

The Open Graph Initiative A primary reason that Facebook expands is the network
effect: More users mean more value. In April 2010, Zuckerberg announced Facebook’s new
initiative called Open Graph. The goal was to connect all the different relationships that exist
on the Internet by linking websites to Facebook. Programmers at external websites were
encouraged to include a Facebook “Like” button on their websites. That way, when a Facebook
member visits the website, they can click “Like” and their relationship with that website will be
reflected back on their Facebook page for friends to see.
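
To make the idea concrete, the sketch below models "Like" relationships as edges in a simple graph. The people and site names are hypothetical, and the code is only an illustration of the concept described above, not Facebook's actual implementation.

```python
# A minimal sketch (hypothetical names): modeling Open Graph-style "Like"
# relationships as edges between people and external websites.
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        # adjacency list: node -> set of connected nodes
        self.edges = defaultdict(set)

    def add_friend(self, person_a, person_b):
        # friendships are mutual edges
        self.edges[person_a].add(person_b)
        self.edges[person_b].add(person_a)

    def like(self, person, website):
        # clicking "Like" links a person to an external site in the graph
        self.edges[person].add(website)

    def likes_visible_to_friends(self, person):
        # friends can see which external sites this person has liked
        return {node for node in self.edges[person] if node.startswith("site:")}

graph = SocialGraph()
graph.add_friend("alice", "bob")
graph.like("alice", "site:example-news.com")    # Alice clicks "Like" on a news site
print(graph.likes_visible_to_friends("alice"))  # {'site:example-news.com'}
```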

Network effect from the field
of economics, the network effect
explains how the perceived value
of a product or service is affected
by the number of people using the
product or service.

© incamerastock/Alamy

FIGURE 7.3 People are increasingly using mobile devices
like smartphones and tablets to access Facebook and other
social media sites.


Social Logins Facebook also encourages other websites to allow people to use their Face-
book username and password to sign in or create accounts. For instance, if you are a Facebook
member and you visit ESPN (a sports news site) or Yelp (a local directory service), you can
sign into the sites using your Facebook username and password. Facebook will then share your
profile information with those sites. A number of services compete for social logins including
Google+, Twitter, Yahoo, and LinkedIn. Toward the end of 2015, Facebook had the largest share
of social logins (62%) followed by Google+ with 24% (Peterson, 2016).
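
Under the hood, social logins typically follow an OAuth-style authorization-code exchange. The sketch below outlines that flow in Python; the endpoint URLs, credentials, and profile fields are placeholders rather than any provider's real API.

```python
# Minimal sketch of a social-login (OAuth 2.0 authorization-code) exchange.
# The endpoints and credentials below are hypothetical placeholders; a real
# integration would use the provider's documented URLs and a registered app.
import requests

PROVIDER_TOKEN_URL = "https://provider.example.com/oauth/token"   # placeholder
PROVIDER_PROFILE_URL = "https://provider.example.com/me"          # placeholder
CLIENT_ID = "your-app-id"
CLIENT_SECRET = "your-app-secret"

def sign_in_with_provider(auth_code: str, redirect_uri: str) -> dict:
    """Exchange the code returned to the site for a token, then fetch the profile."""
    token_resp = requests.post(PROVIDER_TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": redirect_uri,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    access_token = token_resp.json()["access_token"]

    # The provider shares the basic profile fields the user agreed to expose.
    profile = requests.get(
        PROVIDER_PROFILE_URL,
        headers={"Authorization": f"Bearer {access_token}"},
    ).json()
    return profile  # e.g., {"id": ..., "name": ..., "email": ...}
```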

Google Takes on Facebook with G+
Launched in June 2011, Google+, or G+, was the search engine giant’s attempt to capture a
share of the social networking market. Determining how well Google+ has performed takes a
bit of calculation. Officially, there are over 2.5 billion G+ user accounts. But that figure is mis-
leading because everyone that signs up for Google’s popular e-mail service (Gmail) automati-
cally gets a Google+ account. More realistic estimates of activity and users on Google+ suggest
that the service probably has between 4 and 6 million users who engage, interact, and post publicly (Gallagher, 2015). Having failed to meet early expectations for a big Facebook ver-
sus Google+ rivalry, one might wonder why the company continues to maintain the social
platform. Some speculate that Google wants to maintain its position in social logins where it
holds second place. Others have suggested that while few people actually use Google+, there
is value in all those billions of profiles. The profiles, combined with the data you generate using

IT at Work 7.3

Addressing Social Media Privacy Concerns
Privacy rights are too easily abused. Governments and industry
associations are trying to control these abuses through legisla-
tion and professional standards, but they frequently fail to provide
adequate protection. One of the most effective deterrents is fear of
backlash from abuses that become public and cause outrage. So it
is important to identify privacy issues that pertain to social media
and specifically SNSs. Examples of privacy violations include the
following:

• Posting pictures of people on social networking sites without
their permission

• Tricking people into disclosing credit or bank account
information or investing in “work at home” scams

• Sharing user information with advertisers without users’
knowledge or consent

• Disclosing an employer’s proprietary information or trade
secrets on social networking sites

• Posting information on social networking sites that could com-
promise people’s safety or make them targets for blackmail

Taking Control of Your Privacy
The most important thing that users can do to protect themselves
is to understand that they are responsible for protecting their
own information. The basic solution is common sense. Unfortu-
nately, most social networking sites create the illusion of privacy
and control. This sometimes can lull even the most vigilant users
into making mistakes. Sites like Facebook make us feel like our
information is only going to be seen by those we have allowed to

become part of our network. Wrong. Listed below are common-
sense guidelines:

• Do not post private data. Nothing, absolutely nothing you put
on a social networking site is private. You should avoid posting
personal information including full birth date, home address,
phone number, and the like. This information can be used for
identity theft.

• Be smart about who you allow to become part of your net-
work. It is not uncommon for teenagers to “friend” hundreds
of individuals on their Facebook accounts. With this many con-
tacts, there is no way to protect profile or other information.

• Do not rely on current privacy policies or terms of service
(TOS) agreements. Social networking sites change their
privacy policies regularly. Many have accused Facebook of
doing this specifically to wear down user vigilance with regard
to maintaining desired privacy settings. Regularly review
your social network service privacy policies explained in the
TOS. Set your privacy settings at the level offering maximum
protection—operating as if you have no privacy whatsoever.

• Minimize your use of applications, games, and third-party
programs on social networking sites until you have carefully
investigated them. They can expose you to malicious pro-
grams or viruses. Do not automatically click on links that look
as if they were sent to you by members of your network.

IT at Work Questions
1. Which of these guidelines is the easiest to follow? Which is the toughest? Explain why.
2. Why is it recommended that you not post private data on a social network, even on those with privacy settings?


other Google products, helps the company better understand your interests as well as how to
effectively place ads in front of you as you surf the Web. Finally, the company does announce
updates, improvements, and changes to Google+ from time to time. Given Google’s tradition of
regularly evolving products, perhaps the social network will emerge as something useful down
the line. For now, Google hasn’t given any indication that it is ready to close the doors on its
social network (Pierce, 2015).

Be in the Now with Snapchat
Compared to social networks like LinkedIn (2003), Facebook (2004), YouTube (2005), and
Twitter (2006), Snapchat, founded in 2011, is a relative newcomer to the social media big
leagues. But that hasn’t kept Snapchat from quickly becoming one of the most popular
social platforms, second only to Facebook in terms of where people spend their time (see
Figure 7.2). As Facebook’s appeal among teens and young adults declines, the mobile-only
Snapchat service has become the hot new social platform for an age group that some experts
call digital natives or The App Generation because they were born after digital technologies
became ubiquitous. Snapchat’s core feature set can be described as a fun messaging app
that emphasizes communication through pictures and videos instead of the text-based mes-
sages people have sent to each other for years. Snapchat's rapid growth during its relatively short life has been nothing short of spectacular and mirrors, in a way, its most distinctive feature compared to other social platforms: Snapchat picture and video messages self-destruct within 10 seconds after being viewed. When founder and CEO Evan Spiegel first pitched the concept to classmates at Stanford University, they frowned on the disappearing-picture idea, claiming nobody would use an app that couldn't save messages. On most other
social platforms, it seems the goal is to curate or build a collection of photos, posts, and mes-
sages. Facebook even goes so far as to remind users of updates and pictures posted years
ago, encouraging people to reshare the memories with others in their network. One of the
most popular features on Twitter is the “retweet,” or the act of forwarding an interesting tweet
from someone to other people in your network. That’s not allowed on Snapchat—no saving,
no forwarding, no looking back through silly pictures that special someone sent you when
your relationship was, well, less complicated. According to some of its most rabid fans, Snap-
chat is all about the “now.” Most likely you’ve heard stories about people who spend hours
cultivating their personal brand by sharing carefully retouched photographs of themselves,
regularly posting witty status updates, and telling stories of doing “absolutely amazing
things, all the time, with tons of friends.” Snapchat seems to rebel against the rehearsed and
unnaturally choreographed public images that people sometimes become obsessed with
creating on other sites. Instead, Snapchat encourages users to have fun and be a little crazy
while using the service. In the emerging Snapchat culture, spontaneous silliness is the norm.
Some speculate that young people enjoy the app because it allows them to ignore all those
warnings from parents, teachers, and future employers about posting incriminating images
on social media. According to Snapchat, any consequences are likely to be short-lived, dis-
appearing within 10 seconds. But while Snapchat’s growing fan base might be enjoying the
thrill of adding doggy noses and ears to selfies they just took with their friends at a party,
senior leadership at the company is busy managing a host of serious issues. In 2012, Snap-
chat’s daily user base of 10 million people was sharing about 20 million images a day. Fast
forward to 2017, and the daily user base has grown to more than 160 million people sharing
2.5 billion snaps a day (snaps is a metric that counts both photographs and videos). With that
kind of growth, the company must be continuously expanding its computing infrastructure,
which, in turn, means arranging for venture capital funding and ultimately launching an IPO
on March 2, 2017. Following its first day of trading, the app’s parent company, Snap, Inc.,
had a public market valuation of over $28 billion. While the company reported revenue of
$404.5 million in 2016, like many start-up companies it still hasn’t made a profit. Almost 98%
of its revenue comes from advertising. While users have been quick to join the new social
network, advertisers are still figuring out how Snapchat integrates with their overall promo-
tion strategy. In addition, older social platforms obviously have advertising and promotional


programs that advertisers already understand. As well, they have years of behavioral
data and consumer insights that help advertisers accurately target prospective custom-
ers. In conclusion, while Snapchat is a hit with consumers, the business challenges facing
the leadership team as they manage the company’s growth, financing, infrastructure, and
branding are monumental. Hopefully, the company’s lively ghost logo will prove to represent
the playful spirit in all of us and not the omen of a company that lived fast, but died young.

And Now for Something Different: Second Life
Second Life is a social network service unlike most others. What makes it unique is that it uses
a 3D virtual world interface in which users, called Residents, are represented by avatars, or
cyber bodies that they create (Figure 7.4).

Developed by Linden Research in 2003, Second Life lets residents communicate with others
in the virtual world through chat or voice communications. Residents can create and trade things
they make in Second Life, including virtual clothes, art, vehicles, houses, and other architectural
structures. They can also earn money by providing services such as instruction in a foreign lan-
guage or serving as a DJ in a virtual club. This has led to the evolution of a Second Life economy
with its own currency, the Linden dollar (L$). While most of the economic activity remains in the
Second Life world, there have been news reports of a few entrepreneurs who made consider-
able sums of real money. The most common businesses were operated by programmers and art-
ists with the skills to make virtual objects that less talented residents were willing to pay real
money for.

Between 2006 and 2008, there was a big spike in interest on the part of businesses that
witnessed great potential in using Second Life. For example, IBM used it as a location for meet-
ings, training, and recruitment. Musicians performed concerts to crowds in virtual amphithe-
aters, and to smaller audiences in virtual nightclubs and bars. American Apparel was the first
major retailer to set up shop in Second Life. Starwood Hotels used Second Life as a relatively
low-cost market research experiment in which avatars visit Starwood’s virtual Aloft hotel. The
endeavor created publicity for the company, and feedback on the design of the hotel was
solicited from visiting avatars. This information was used in the creation of the first real-world
Aloft hotel, which opened in 2008 (Carr,  2007). Starwood subsequently donated its Second
Life property to a not-for-profit educational organization. Fashion and clothing manufacturers
like Reebok, American Apparel, Adidas, and others used Second Life as a place to feature new
clothing designs, setting up virtual stores where Second Life citizens could purchase digital
clothing for their avatars. The hope was that awareness of fashion products on Second Life
would spur interest and eventual purchase of real-world products. But efforts by these and

Avatars icons, figures, or visual representations of people in computer games, simulations, virtual worlds, or online discussion forums.

STR/Reuters Images

FIGURE 7.4 Second Life residents participate in a virtual world
beauty contest sponsored by cosmetics manufacturer L’Oreal.


other businesses, like 1-800-flowers, to get Second Life citizens to purchase real-world prod-
ucts through the virtual community proved disappointing. Many businesses that were quick
to become part of the early excitement around Second Life eventually left the virtual world
community.

The consensus seems to be that we’re still not ready for virtual world applications.
Based on past experience, significant technological, social, cultural, and financial hurdles have
to be overcome before virtual worlds like Second Life develop the kind of mass appeal that
innovators predicted back in 2008. That said, there are signs that the virtual world concept may
be entering a new phase of innovation. Second Life recently rolled out an upgraded virtual
reality (VR) space called “Sansar” that takes advantage of new technology like the Oculus Rift,
a VR headset that creates an immersive experience for users. Linden Labs, the company that operates Second Life, believes the new VR technology will solve many of the in-world functionality problems that frustrated new users in the early days. This time, however, Linden Labs will have some competition. Philip Rosedale, the founder of Linden and Second Life, has left that company and started a new “social VR” venture called High Fidelity. Rosedale’s new company
is also creating virtual world applications that make use of VR technology. He envisions an
open-source platform where users can build their own virtual worlds connected to the worlds
created by others, a type of virtual world social network.

Of course, given the history of virtual worlds up to now, it might be easy to dismiss claims
and predictions about the potential of VR and virtual worlds if it weren’t for the backing of
companies like Facebook, which invested close to $2 billion to acquire Oculus. Zuckerberg said
that he believes the acquisition is a long-term bet on the future of social networking. Only time
will tell if these new technologies represent the future of social networking (see Hay,  2015;
Johnson, 2015; Kushner, 2017; Metz, 2017).

Private Social Networks
The ultimate niche community is the private SNS. Private SNSs use social technology to create
a community restricted to members selected by the SNSs’ owner. Private SNSs allow a greater
degree of control over the network. Companies can easily monitor activity on their own SNS
platforms and track conversations taking place about their brands and products. However,
managing a private SNS requires considerably more time, attention, and resources than main-
taining a presence on a general SNS. Organizations need to understand up-front that they are
making a substantial commitment with this strategy.

Most colleges and universities have Facebook pages. In addition, many institutions have
set up private SNSs to engage students even before they have started school there. Students
typically gain access to these private SNSs when they are admitted to the institution. On the
system, they can interact with admissions counselors, current students, and other admitted
students. Interactions that occur on these networks set the stage for relationships and engage-
ment that are simply not possible with e-mail and phone calls.

In 2008 Mercedes-Benz created two private SNSs designed to increase engagement with
current and potential customers. The Mercedes Advisor network is for current Mercedes-Benz
owners. GenerationBenz.com is a private network for prospective Mercedes owners. Member-
ship in the network is limited to those who fit Mercedes’ profile for younger luxury car buyers.
Both of these communities provide the company with an opportunity to engage their target
audiences directly. Members participate in market surveys and polls, provide feedback on
prospective ad campaigns and product features, and participate in discussion groups with
company managers. This provides valuable feedback to Mercedes as well as creates strong
advocates for the company’s luxury car brand.

While engaging customers on a private SNS can be time-consuming and potentially require
significant staffing resources, the technological challenges associated with setting up a private
SNS are relatively small. A number of companies offer a combination of free and subscription-
based pricing for individuals or organizations wishing to create a private social network. Basic
SNS sites can be set up fairly quickly for free. Search on “private social network services” for the
latest information.


Future of Social Networking Systems
Social networking services are perhaps the most feature-rich applications of Web 2.0. It is
expected that growth and innovation in this sector will continue as individual users and busi-
ness organizations discover its power for building networks and relationships. We expect that
Facebook will continue to dominate the field, but that smaller SNSs will stake out strong posi-
tions in niche markets using traditional market segmentation strategies that focus on the needs
of specific geographic, cultural, age, or special interest segments. Finally, with the advent of VR
technologies, more powerful computers, and large-capacity Web server installations, virtual
worlds may become an exciting new platform for all types of social interaction in the future.

Questions

1. What are the major differences between SNSs and older online communities?

2. What is the basic difference between the social graph and Berners-Lee’s concept of the Giant
Global Graph?

3. Explain Facebook’s Open Graph initiative and how it plans to expand its influence across the
World Wide Web.

4. What are some potential ways that business organizations can take advantage of Second Life’s
unique virtual world interface?

5. Why would a business want to create a private SNS? What are some of the challenges associated with
doing this?

7.3 Engaging Consumers with Blogs
and Microblogs
One of the problems with traditional media, like newspapers and magazines, is that editors and
publishers decide what you should read. Often their decisions are based on what the masses will
buy at the newsstands. Space is limited and barriers to getting published are high. News services
frequently fail to devote sufficient space to complex issues or viewpoints that might challenge the
financial or business interests of the publication’s owners or advertisers. But with social media,
anyone can write a column or article and publish it online for the world to read (see IT at Work 7.4).

Of course, this creates another potential problem: clutter. Blogging is so easy that even
people who do not have much to say can publish their thoughts, opinions, and ideas. Readers
need to be prepared to look at online content with a skeptical eye and find ways to judge the
credibility of the material they find on social media.

In their simplest form, blogs are websites where people regularly post content. Some
personal blogs are simply online diaries or journals where people share their thoughts, reflec-
tions, or an account of their life. Other blogs are more sophisticated and professional in format,
resembling online newspapers or magazines. Because blogging technology has become so
commonplace, you may not always realize you are reading a blog when accessing online
content. Many organizations have integrated one of the blogging platforms discussed later
with their website. Blogging tools make it easy for organizations to provide website visitors
with frequently updated content on pages with titles such as “What’s New,” “Company News,”
or “Product Updates.” As a result, you may be a frequent blog reader without realizing it!

Blogs contain content in a variety of digital formats including text, photographs, video, and
music. People who create and maintain blogs are referred to as bloggers.

What Is the Purpose of a Blog?
Many professionals now blog as a way to establish their reputation and promote their busi-
ness interests, or because they enjoy writing and sharing their viewpoints with others.


Corporate bloggers use the medium to tell stories about their brands and connect with
customers.

On the surface, blogging appears to be a broadcast (one-to-many) communication tool.
However, it can also be an effective tool for interactive dialogue. Many blogs utilize comment
features, allowing readers to respond to blog posts, interacting with the blogger and other
readers. Successful bloggers tend to comment on and link to other blogs in their posts, in effect
maintaining a dialogue or conversation with other bloggers. These connections between blogs
create what some refer to as the blogosphere, or a network of blogs. IT at Work 7.5 lists a
number of ways that organizations use blogs for marketing.

IT at Work 7.4

How to Create a Blog
Setting up a blog is relatively easy. Making the effort to regularly
write and post content that others will find interesting is more chal-
lenging. The following steps outline the process of setting up a blog.

1. Create a plan. Successful blogging requires a certain degree of
organization and discipline. You can address this part of the
project by developing a plan at the outset. The plan should
answer questions like these:

a. What are you going to blog about? What will be the focus or
topic of your blog?

b. Identify your target audience. For whom are you writing?

c. How often do you intend to update your blog? Some blog-
gers post new material daily, some weekly, and some just a
few times a month. As a general rule, readers are more likely
to follow blogs that are updated regularly. Avoid sporadic
updates or only blogging when you feel like it. Successful
bloggers frequently set up a publication schedule outlining
topics and posting dates to keep themselves on track.

d. Who else is blogging about the same topic? Identify blog-
gers you can interact with through your posts and com-
ments on their blogs.

2. Determine if you will self-host your blog by purchasing a hosting plan and domain name (URL), or if your blog will use a free blogging service. Free services allow you to get up and running
quickly and do not require any long-term commitments. This
provides an easy, low-risk way to get started. While this might
be the most convenient approach, you do not actually own your
blog or the content you post there because it is on a domain
owned by someone else. Your domain name in these situations
is usually in the form of “myblogname.blogspot.com,” which
can appear less professional to some readers. Purchasing a
hosting plan and domain name, however, is the better long-term
strategy since it creates a unique identity for your blog.

3. Select a blogging platform (see the section below). This is the
software that will provide the look and feel of your site and
give you myriad features you can employ to build a successful
blog. Standard features in most blog platforms include a com-
ment section, RSS buttons so readers can subscribe to your
blog, and share buttons so readers can post links to your blog
on other social media sites (e.g., Twitter, Facebook, Digg, etc.).

4. Set up your blog. Once you’ve set up your hosting and platform
arrangements, you will need to create the aesthetic design for
your site. Most platforms make this easy with a multitude of
template options that you can further customize to give your
blog a unique look.

5. Get started. Now comes the challenging part, writing your
posts and regularly updating your blog to attract readers. You
can read blogs about blogging to get great tips and advice.

IT at Work 7.5

How Marketers Use Blogs and Microblogs
Blogs and microblogs provide individuals and organizations with a
means to accomplish a variety of communications objectives. Mar-
keters use blogs and microblogs to

• Develop relationships with independent bloggers, encour-
aging them to write positive stories about the brand, product,
and company

• Engage members of the blogging community, via corporate
blogs, by providing helpful and interesting information

• Tell the company’s “story,” position a product, create brand
identity, and differentiate from the competition

• Engage customers and readers by soliciting comments and
feedback about information provided in blog posts

• Drive traffic to the company website by using Twitter to
announce recent updates to the company blog

• Inform current or prospective customers about positive blog
posts featuring your product or company that were written by
independent, third-party bloggers

• Encourage repeat visits to the company website through reg-
ular updates or new posts to the blog

• Have a celebrity or influential expert send a tweet with a
promotional message about your brand using Twitter’s new
advertising program


Blogging and Public Relations
Some bloggers have become highly successful and have developed a large audience for
their material. Many people approach blogging like a business and consider themselves
“publishers,” with the goal of generating enough readers or subscribers that they can make
money from advertisers and ad agencies who will pay to display their ads on an individual
blogger’s site. Earlier in this chapter, you read about a set of groundswell social media
strategies described by Charlene Li and Josh Bernoff. One of those strategies, Energizing,
is accomplished when a business identifies a blogger whose audience matches its target
market and persuades that person to write about the company’s product. This is similar
to a public relations manager sending a press release to a journalist, hoping he or she
will write a news story about the company in the local paper. When a highly credible and
influential blogger writes a positive story about your company, it can have a very positive
impact on your brand’s image. Bloggers can also have a negative impact if they write unfa-
vorable posts about the company or its products. As a result, public relations professionals
are learning how to identify and form positive relationships with influential bloggers with
the goal of generating favorable coverage of the company and its products. Frequently,
this will involve doing things like providing the blogger with information in advance of it
being released to the public, providing access to company executives for interviews, send-
ing the blogger samples of the company’s product so that he or she can write about it from
firsthand experience, and so on. For some companies, particularly those in the technology
industry, building relationships with influential bloggers has become an important public
relations strategy.

Reading and Subscribing to Blogs
The best way to gain an understanding of the blogging phenomenon is to simply start reading
blogs. You can use search engines like Google or Yahoo to find blogs on all kinds of topics. Most
blogs make it easy to subscribe using an RSS reader (see Section 7.4 later on in this chapter).
Reading blogs is a great way to stay current on rapidly evolving topics related to technology
and business.

Blogging Platforms
Selecting a blogging platform is an important decision when setting up a blog. Installing a
platform when you are creating a blog is relatively easy. Converting to a new blogging platform
after using another one for a while is not. Two of the most popular platforms are WordPress
followed by Google’s Blogger platform. Other blog platforms include TypePad, Movable Type,
and Tumblr. The Tumblr platform is significantly different from traditional platforms in that it
emphasizes easy posting of photos and light copy. As such, it is considered to be a microblog-
ging platform and is discussed later in this chapter.

When choosing between WordPress and Blogger, consider that WordPress is a feature-rich platform that offers greater control over site appearance (Figure 7.5). Blogger is simpler and easier to
use, making it a more desirable choice for beginning bloggers who want to get up and running
without becoming bogged down in technological issues. Blogger’s affiliation with Google might
also be attractive because of the potential for integration with other Google services. For in-
stance, Blogger comes with a built-in analytics program that appears to share many similarities
with Google Analytics, a stand-alone Web traffic-monitoring tool.

For now, begin by reading blogs about social media, information technology, and other
topics that are of personal interest. Note how these blogs vary in terms of style, length, and
appearance. Identify the features they offer readers for commenting on and sharing content.
After you get a feel for how people blog, try setting up your own blog using Google’s free
platform and hosting service at www.google.com/blogger.

Blogging platform a software
application used to create, edit,
and add features to a blog.
WordPress and Blogger are two
of the most popular blogging
platforms.


© NetPhotos/Alamy

FIGURE 7.5 WordPress is one of the leading platforms for
online blogs.

© 2020WEB/Alamy

FIGURE 7.6 Twitter is a microblogging
SNS that limits users to messages of 140
characters or less.

Microblogs
You may be a microblogger and not even know it! Microblogging is a way of sharing content
with people by the regular, often frequent posting of short messages. Although people don’t
usually call it microblogging, perhaps the most common form of this social media activity
occurs when you update your status message on Facebook. More often, however, the term is
used to describe popular microblogging services like Twitter and Tumblr.

Most microblog content consists of text-based messages, although there appears to be an
increase in people who are microblogging photos and video on Twitter and Tumblr. Tumblr has
increased in popularity recently among younger Internet users because of its multimedia capa-
bilities and ease of use.

Twitter
Twitter has grown in popularity over the last few years, becoming one of the world’s largest
communication platforms. According to Twitter, approximately 500 million messages, or tweets,
are sent each day by over 310 million active monthly users (see Figure 7.6). People frequently
attach descriptive keywords or hashtags, designated by the # sign, to their tweets to make
them easier for others to find (e.g., #news, #politics, #fail).
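
Because hashtags follow a simple convention (a word prefixed with the # sign), they are easy to extract and count programmatically. The snippet below is a small illustration using invented tweets.

```python
# Sketch: extracting hashtags (keywords prefixed with "#") from tweet text.
import re
from collections import Counter

HASHTAG_PATTERN = re.compile(r"#(\w+)")

sample_tweets = [                       # hypothetical tweets
    "Great match tonight! #sports #news",
    "Server outage again... #fail",
    "Election coverage all day #news #politics",
]

tag_counts = Counter(
    tag.lower()
    for tweet in sample_tweets
    for tag in HASHTAG_PATTERN.findall(tweet)
)
print(tag_counts.most_common(3))  # e.g., [('news', 2), ('sports', 1), ('fail', 1)]
```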

Microblog a blog that consists
of frequent, but very brief posts
containing text, pictures, or
videos. Twitter is perhaps the
most well-known example of a
microblog.

Tweet a brief 140-character
message or post broadcast on
Twitter, a microblogging service.


Twitter has played a significant role in both global and domestic events (Lee, 2013). In
countries where the media is largely dominated by government control, Twitter has proven
to be a valuable tool for activists engaged in organizing protests, debating political view-
points, and broadcasting real-time information about significant events that might otherwise
be ignored by the mainstream media. Twitter has become a primary channel for real-time
updates on events and issues in politics, entertainment, social causes, and sports. In 2016,
Twitter kept people informed about the #RIO2016 Olympics, #Brexit—Britain’s separation
from the European Union, the #BlackLivesMatter movement, news about the popular televi-
sion show #GameofThrones, and, of course, the U.S. presidential #Election2016 where Twitter
was used heavily by both candidates. For better or worse, President Donald Trump continues
to use Twitter as a primary communications tool since winning the election. Because of Twit-
ter’s reach, most federal and state political leaders now use Twitter as a regular channel for
communicating with their followers. For the same reason, most advocacy groups engage in
what some call hashtag activism, using the service to maintain awareness levels about their
cause as well as influence people’s beliefs and attitudes on key issues. Twitter has even begun
to influence investment decisions made on Wall Street. Financial research analysts have cre-
ated algorithms that use the volume and sentiment of Twitter traffic to predict the future stock
value of a company.
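
A toy version of that volume-and-sentiment idea is sketched below; the word lists and tweets are invented, and real trading models are far more sophisticated.

```python
# Toy sketch of the volume-and-sentiment idea described above.
# Word lists and tweets are invented; production models are far more complex.
POSITIVE = {"love", "great", "strong", "beat", "up"}
NEGATIVE = {"hate", "weak", "miss", "down", "fail"}

def sentiment_score(tweets):
    """Return (volume, net sentiment) for a batch of tweets about one company."""
    net = 0
    for tweet in tweets:
        words = set(tweet.lower().split())
        net += len(words & POSITIVE) - len(words & NEGATIVE)
    return len(tweets), net

volume, net = sentiment_score([
    "Love the new product line, earnings should beat estimates",
    "Support has been weak lately, not impressed",
])
print(volume, net)  # prints: 2 1  (mildly positive chatter)
```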

Twitter is attractive to individuals, groups, and organizations because it provides a direct
link to the public, bypassing traditional mass media, which often acts as an information gate-
keeper. Ironically, Twitter frequently influences what we see on traditional media. Journalists
regularly use Twitter to broadcast breaking news stories. “Hashtag journalists” increasingly
monitor Twitter to identify newsworthy events being tweeted (reported) by eyewitnesses and
to gauge the public’s interest in an event or issue by monitoring trending topics. Using Twit-
ter to monitor public sentiment as well as influence public opinion has become an important
skill for public relations professionals working for business and not-for-profit organizations.
Organizations can no longer afford to ignore the conversations that take place on Twitter
about their brands, products, and executives. Furthermore, public relations professionals
must understand how to actively participate in these conversations or risk appearing aloof
and out-of-touch.

Twitter is often used by consumers to complain about frustrations they are having with
a company or its products. In response, some companies have adopted Twitter as a cus-
tomer service channel, along with e-mail and telephone call centers (see Opening Case 6.1).
When customer service representatives find people complaining about their brand or prod-
uct, they can use the service to empathize with the customer’s frustration and offer solutions
for resolving the problem. Because conversations on Twitter are public, other customers can
watch or “listen in” on interactions between an unhappy customer and a customer service rep-
resentative and judge how effective the company is at solving problems. This can be a benefit
or liability for organizations depending on how adept they are at communicating and resolving
customer problems on Twitter.

Just a few years ago, many businesses appeared to be somewhat confused about how to
incorporate the microblogging service into their communications strategy. That has changed.
Over 65% of companies now use Twitter for marketing communications and are increas-
ingly expanding their reach by encouraging employees to share relevant messages with their
personal social networks. In addition to organic (unpaid) tweets, companies are spending
close to $3 billion a year on promoted tweets, or paid ads sent out over the network. Twitter is
viewed by many companies as a good way to reach people on mobile devices.

Like other social media tools, the best way to gain an understanding of Twitter is to use it.
The official Twitter interface is simple and efficient, but a large segment of the Twittersphere
uses third-party apps that have been developed to enhance the site’s functionality and user
experience. Some are considered essential tools in the life of the power Twitter user:

• TweetDeck is an advanced, split-screen app that allows users to view messages stream-
ing from followers, people being followed, and people the user might wish to follow. It
also makes it easy to quickly reply to incoming tweets, increasing the frequency of Twitter

Twittersphere the universe
of people who use Twitter, a
microblogging service.


conversations. The TweetDeck interface makes it easy to participate in Twitter forums or
online discussion groups similar to what takes place in a chat room. In 2011, this application was acquired by Twitter itself and continues to be a popular interface for people accessing the service from a computer.

• Dlvr.it uses RSS technology to automatically share content published on a blog to Twitter and other social platforms.

• Twitterholic is a service that ranks users by the number of followers, friends, and updates.

Many users believe Twitter is best suited for mobile devices like smartphones or tablets,
which enable users to post spontaneous messages and updates regardless of their location.
There are literally hundreds of third-party Twitter apps for computers and mobile devices, with
more being written every day. You can find the most popular mobile apps by using an Internet
search engine or searching your phone’s app store.

How Do People and Businesses Use Twitter? Think of Twitter as a social net-
work where the dominant focus is on status updates. People tweet messages to their followers
that they think will be of interest. While some businesses still struggle with ways to use Twitter
effectively, many have adopted strategies that engage consumers, enhance brand image, and
improve revenue. Examples include the following:

• Celebrities use Twitter to update loyal fans about their day-to-day activities.

• Social media experts like Jeremiah Owyang and Brian Solis use Twitter to share links to online material that people in their profession will find interesting.

• Companies use Twitter to update customers about new products and special offers.

• Mobile food service trucks in large cities use Twitter to update customers about their current and future locations.

• Businesses can advertise on Twitter using a variety of ad types including promoted tweets, promoted accounts, and promoted trends ads.

• News services like CNN and Mashable use Twitter as a “headline news” stream, sending links to news stories on their websites.

• Coupon and shopping services use Twitter to send daily deals and special offers to their followers.

• People use Twitter to send status updates to their friends, keeping others informed about their activities, sharing stories and links to online material they find interesting.

• Politicians use Twitter to communicate with their constituents, often linking to stories or material on their website that they believe will help support their positions on issues.

• Businesses can use Twitter to increase traffic to their website by tweeting links to information on the website that they think people will find interesting.

When users receive a tweet from someone they follow, they can “Reply” to the message, or
retweet it by forwarding the message to everyone in their network. In this way, users engage in
a dialogue of sorts with people they are connected to on the service. Tweets that are retweeted
among many different users can go viral.
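
The fan-out effect of retweeting can be pictured as a message spreading through a follower graph. The sketch below simulates that spread; the follower network and retweet behavior are hypothetical.

```python
# Sketch: how a single retweeted message can fan out through follower networks.
# The follower graph and retweet behavior below are hypothetical.
from collections import deque

followers = {                       # who follows whom: account -> its followers
    "newsdesk": ["alice", "bob"],
    "alice": ["carol", "dave"],
    "bob": ["erin"],
    "carol": [], "dave": [], "erin": [],
}

def spread(origin, will_retweet):
    """Breadth-first spread: every follower sees it; retweeters pass it on."""
    seen, queue = set(), deque([origin])
    while queue:
        account = queue.popleft()
        for follower in followers.get(account, []):
            if follower not in seen:
                seen.add(follower)
                if follower in will_retweet:    # this follower retweets it
                    queue.append(follower)
    return seen

print(spread("newsdesk", will_retweet={"alice"}))  # {'alice', 'bob', 'carol', 'dave'}
```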

Tumblr Blogs
Tumblr is often described as a microblogging service because it makes the posting of mul-
timedia content easy for users and allows them to update their blogs frequently. However,
Tumblr blogs can include just as much text as a regular blog, although most who use the ser-
vice emphasize photographs and video as the primary content. This emphasis on multimedia
makes the Tumblr blogs more visually compelling. Tumblr is particularly popular among those
who are blogging about things like fashion, entertainment, and the arts.


Questions

1. What is the difference between a blog and a microblog?

2. What is a blogging platform?

3. Why do marketers use blogs and microblogs?

4. What makes Twitter a more attractive communication channel than traditional media for many indi-
viduals and organizations?

5. How is Tumblr different from other types of blogging platforms?

7.4 Mashups, Social Metrics, and
Monitoring Tools
A mashup is a Web application that combines information from two or more sources and pre-
sents this information in a way that creates some new benefit or service. Using AJAX technolo-
gies and APIs, websites and applications can pull information from a variety of sources.

One of the most common examples of a consumer mashup that you are likely to
encounter involves the integration of map data (from companies such as Google or MapQuest)
with information like store names, locations, phone numbers, and consumer reviews from
other websites.

By combining this information in a single location or application, users enjoy a powerful
and visually compelling service. ProgrammableWeb maintains a helpful directory of mashups.

Enterprise mashups combine data from internal business sources (e.g., sales records,
customer information, etc.) and/or information from external sources for enhanced usefulness
and productivity. For instance, a bank may utilize an enterprise mashup to display a mortgage
application from its own records, the property location on a Google map, and information from
county government property tax records.
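
A mashup like the bank example above can be sketched as a single function that enriches an internal record with results from external services. In the illustration below, the data and the lookup functions are hypothetical stand-ins for real APIs.

```python
# Sketch of an enterprise-style mashup: one view built from several sources.
# All data below is hypothetical; real mashups would call the source APIs.
internal_mortgage_records = {                       # internal business data
    "APP-1042": {"applicant": "J. Smith", "address": "12 Elm St", "amount": 250_000},
}

def geocode(address):                               # stand-in for a mapping API
    return {"lat": 36.21, "lng": -81.68}

def property_tax_lookup(address):                   # stand-in for county records
    return {"assessed_value": 310_000, "last_paid": "2016"}

def mortgage_dashboard(app_id):
    """Combine internal and external data into one enriched view."""
    record = internal_mortgage_records[app_id]
    return {
        **record,
        "map_location": geocode(record["address"]),
        "tax_info": property_tax_lookup(record["address"]),
    }

print(mortgage_dashboard("APP-1042"))
```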

What Makes a Mashup Social
To begin with, many of the most popular APIs used in mashup apps are from social media sites.
That means the data involved in the mashup are likely to be user-generated social information.
The other reason mashups are considered social media is that they represent the power to
separate content from form—allowing Web developers (and sometimes users) greater control
over how information is displayed and used on the Web.

Mashups also represent a change in philosophy for content creators. Traditionally, a
business that created content operated a closed system where it maintained almost complete
control over the “product.” On the social Web, content creators enjoy greater distribution by
allowing others access to their digital information through an API. For instance, the Google
brand name appears on thousands of websites due to its openly available mapping API. If Twitter did not have an open API, awareness and use of the service would be far less than it is
today because users would have to go to the Twitter site in order to use it. By giving up some
control, these content creators enjoy wider distribution and market penetration.

While the ability to create powerful mashup applications has not yet reached most
individual Web users, it has decentralized control over how content is displayed and used,
which is a key principle of Web 2.0 social technologies (Section 7.1). We anticipate that it will
not be long before someone develops a technology that will make it easier for the average user
to create his or her own custom mashup applications.

Mashup technology also represents a tremendous opportunity for new Web-based busi-
nesses with limited start-up capital. For example, the online directory and business review
service, Yelp.com, uses the Google Maps API to make it easier for users to locate restaurants


and bars in their area. In turn, Yelp has an API that allows sites like Zillow.com to use Yelp’s
information when displaying information about homes and neighborhoods to prospective real
estate customers.

RSS Technology
Another technology that extends control of Web content beyond the creator is really simple
syndication (RSS) (Figure 7.7). Traditionally, users had to visit multiple sites in order to view
content at each location. This is potentially time-consuming and difficult for users who are
interested in following several sources. RSS technology allows users to subscribe to multiple
sources (e.g., blogs, news headlines, social media feeds, videos, and podcasts) and have the
content displayed in a single application, called an “RSS reader” or “RSS aggregator.” In effect,
users can create a customized news and information site by personalizing how they want infor-
mation from their news sources organized and displayed. Popular RSS aggregators include
Feedly, Digg Reader, and The Old Reader. Many other free or freemium aggregators are avail-
able with a variety of features.
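
Conceptually, an aggregator simply pulls entries from every subscribed feed into one list. The sketch below does this with the widely used Python feedparser library; the feed URLs are placeholders.

```python
# Sketch of a tiny RSS aggregator using the feedparser library (pip install feedparser).
# The feed URLs are placeholders; substitute blogs or news feeds you follow.
import feedparser

FEED_URLS = [
    "https://example-blog.com/feed",
    "https://example-news.com/rss",
]

def collect_headlines(urls):
    """Pull recent entries from every subscribed feed into one combined list."""
    headlines = []
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            headlines.append({
                "source": feed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
            })
    return headlines

for item in collect_headlines(FEED_URLS):
    print(f"[{item['source']}] {item['title']} - {item['link']}")
```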

© AKP Photos/Alamy

FIGURE 7.7 Many blogs use the RSS logo to identify the
subscription link that readers use to import the blog’s content to
their RSS aggregator.

Social Monitoring Services
A fast-growing sector in the social technology field involves social monitoring services. Moni-
toring applications allow users to track conversations taking place on social media sites. The
initial impetus for the growth of monitoring tools was the need for business organizations to
better understand what people were saying about their brands, products, and executives (the
“listening” part of the Groundswell Strategy model discussed in Section 7.1). Monitoring ser-
vices can be used to identify industry experts, commentators, and opinion leaders who post
regularly to social media sites. Once identified, public relations professionals can build rela-
tionships with these individuals and encourage them to become brand advocates who regu-
larly portray the brand or company positively in their online writing and social media posts. See
IT at Work 7.6.
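
A very simple form of this "listening" step can be sketched as counting brand mentions per author and flagging repeat posters as possible advocates, as below; the posts and brand name are invented.

```python
# Toy sketch of social "listening": count brand mentions per author and flag
# people who post about the brand repeatedly as potential advocates.
from collections import Counter

posts = [                                   # hypothetical social media posts
    {"author": "techfan42", "text": "Loving my new AcmePhone camera"},
    {"author": "techfan42", "text": "AcmePhone battery tips, thread below"},
    {"author": "randomuser", "text": "Lunch was great today"},
    {"author": "gadget_guru", "text": "Hands-on review of the AcmePhone"},
]

BRAND = "acmephone"

mention_counts = Counter(
    post["author"] for post in posts if BRAND in post["text"].lower()
)

# Authors who mention the brand more than once are candidates for outreach.
advocates = [author for author, count in mention_counts.items() if count > 1]
print(mention_counts)   # Counter({'techfan42': 2, 'gadget_guru': 1})
print(advocates)        # ['techfan42']
```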

In the next section, we describe two categories of social monitoring tools: subscription-
based services and free monitoring services.

Subscription Monitoring Services The most comprehensive social media moni-
toring tools require the user to pay a subscription or licensing fee. These tools not only monitor
the social media environment for mentions of your brand or company name but also provide
analytics and tools for measuring trends in the amount of conversation occurring, the tone or
sentiment (e.g., positive, negative, neutral) of the conversation, and other important aspects


IT at Work 7.6

Businesses Monitor Social Activity. . .

• To identify brand advocates—people who repeatedly discuss
a particular topic

• To find experts talking about technical or business topics

• To assess reputation or sentiment in the online community
about a brand, person, or issue

• To understand customers by listening—identifying topics of
interest to the online community

• To track trends in the volume or nature of online conversations

• To assess the relationship between marketing actions (e.g.,
product launch) and online conversations

• To identify potential problems with your brand’s reputation
before things get out of control

of the social interaction