CHAPTER 22
Software Configuration Management

QUICK LOOK
What is it? When you build computer software, change happens. And because it happens, you need to manage it effectively. Software configuration management (SCM), also called change management, is a set of activities designed to manage change.
Who does it? Everyone involved in the software process is involved with change management to some extent, but specialized support positions are sometimes created to manage the SCM process.
Why is it important? If you don’t control change, it controls you. And that’s never good. It’s very easy for a stream of uncontrolled changes to turn a well-run software project into chaos. As a consequence, software quality suffers and delivery is delayed.
What are the steps? Because many work products are produced when software is built, each must be uniquely identified. Once this is accomplished, mechanisms for version and change control can be established.
What is the work product? A software configuration management plan defines the project strategy for change management. Changes result in updated software products that must be retested and documented, without breaking the project schedule or the production versions of the software products.
How do I ensure that I’ve done it right? When every work product can be accounted for, traced, controlled, tracked, and analyzed; when everyone who needs to know about a change has been informed—you’ve done it right.

KEY CONCEPTS
baselines 441
change control 448
change management, mobility and agile 453
change management process 447
configuration audit 452
configuration management, elements of 440
configuration objects 441
content management 455
continuous integration 446
identification 452
integration and publishing 455
repository 453
SCM process 448
software configuration items 438
status reporting 452
version control 445
Change is inevitable when computer software is built and can lead to confusion when you and other members of a software team are working on a project. Confusion arises when changes are not analyzed before they are made, recorded before they are implemented, reported to those with a need to know, or controlled in a manner that will improve quality and reduce error. Babich [Bab86] suggests an approach that will minimize confusion, improve productivity, and reduce the number of mistakes when he writes: “Configuration management is the art of identifying, organizing, and controlling modifications to the software being built by a programming team. The goal is to maximize productivity by minimizing mistakes.”
438 PART THREE QUALITY AND SECURITY
Software configuration management (SCM) is an umbrella activity that is applied
throughout the software process. Typical SCM work flow is shown in Figure 22.1.
Because change can occur at any time, SCM activities are developed to (1) identify
change, (2) control change, (3) ensure that change is being properly implemented, and
(4) report changes to others who may have an interest.
It is important to make a clear distinction between software support and software
configuration management. Support (Chapter 27) is a set of software engineering
activities that occur after software has been delivered to the customer and put into
operation. Software configuration management is a set of tracking and control activ
ities that are initiated when a software engineering project begins and terminates only
when the software is taken out of operation.
A primary goal of software engineering is to improve the ease with which changes
can be accommodated and reduce the amount of effort expended when changes must be
made. In this chapter, we discuss the specific activities that enable you to manage change.
22.1 Software Configuration Management
The output of the software process is information that may be divided into three broad
categories: (1) computer programs (both source level and executable forms), (2) work
products that describe the computer programs (targeted at various stakeholders), and
(3) data or content (contained within the program or external to it). In Web design or
game development, managing changes to the multimedia content items can be more
demanding than managing the changes to the software or documentation. The items
that comprise all information produced as part of the software process are collectively
called a software configuration.
As software engineering work progresses, a hierarchy of software configuration items
(SCIs)—a named element of information that can be as small as a single UML diagram
or as large as the complete design document—is created. If each SCI simply led to other
SCIs, little confusion would result. Unfortunately, another variable enters the process—
change. Change may occur at any time, for any reason. In fact, the first law of system engineering [Ber80] states: “No matter where you are in the system life cycle, the system will change, and the desire to change it will persist throughout the life cycle.”

Figure 22.1 Software configuration management work flow (identify change, analyze implementation, control change, report change, publish/deploy)
What is the origin of these changes? The answer to this question is as varied as
the changes themselves. However, there are four fundamental sources of change:
∙ New business or market conditions dictate changes in product requirements or
business rules.
∙ New stakeholder needs demand modification of data produced by information
systems, functionality delivered by products, or services delivered by a
computer-based system.
∙ Reorganization or business growth or downsizing causes changes in project
priorities or software engineering team structure.
∙ Budgetary or scheduling constraints cause a redefinition of the system or product.
Software configuration management is a set of activities that have been developed
to manage change throughout the life cycle of computer software. SCM can be viewed
as a software quality assurance activity that is applied throughout the software process.
In the sections that follow, we describe major SCM tasks and important concepts that
help us to manage change.
22.1.1 An SCM Scenario
This section is extracted from [Dar01].1
A typical configuration management (CM) operational scenario involves several stakeholders: a project manager who is in charge of a software group, a configuration manager
who is in charge of the CM procedures and policies, the software engineers who are
responsible for developing and maintaining the software product, and the customer who
uses the product. In the scenario, assume that the product is a small one involving about
15,000 lines of code being developed by an agile team with four developers. (Note that
other scenarios of smaller or larger teams are possible, but, in essence, there are generic
issues that each of these projects face concerning CM.)
At the operational level, the scenario involves various roles and tasks. For the project
manager or team leader, the goal is to ensure that the product is developed within a
certain time frame. Hence, the manager monitors the progress of development and
recognizes and reacts to problems. This is done by generating and analyzing reports
about the status of the software system and by performing reviews on the system.
The goals of the configuration manager (who on a small team may be the project
manager) are to ensure that procedures and policies for creating, changing, and testing
of code are followed, as well as to make information about the project accessible. To
implement techniques for maintaining control over code changes, this manager introduces
mechanisms for making official requests for changes, for evaluating proposed changes with the development team, and for ensuring that the changes are acceptable to the product
owner. Also, the manager collects statistics about components in the software system,
such as information determining which components in the system are problematic.
1 Special permission to reproduce “Spectrum of Functionality in CM Systems” by Susan Dart
[Dar01], © 2001 by Carnegie Mellon University is granted by the Software Engineering Institute.
For the software engineers, the goal is to work effectively. There must be a mechanism to ensure that simultaneous changes to the same component are properly tracked, managed, and executed. This means engineers do not unnecessarily interfere with each other in the creation and testing of code and in the production of supporting work products. But, at the same time, they try to communicate and coordinate efficiently. Specifically, engineers use tools that help build a consistent software product. They communicate and coordinate by notifying one another about tasks required
and tasks completed. Changes are propagated across each other’s work by merging
files. Mechanisms exist to ensure that, for components that undergo simultaneous
changes, there is some way of resolving conflicts and merging changes. A history is
kept of the evolution of all components of the system along with a log with reasons
for changes and a record of what actually changed. The engineers have their own
workspace for creating, changing, testing, and integrating code. At a certain point, the
code is made into a baseline from which further development continues and from
which variants for other target machines are made.
The customer uses the product. Because the product is under CM control, the
customer follows formal procedures for requesting changes and for indicating bugs in
the product.
Ideally, a CM system used in this scenario should support all these roles and tasks;
that is, the roles determine the functionality required of a CM system. The project
manager sees CM as an auditing mechanism; the configuration manager sees it as a
controlling, tracking, and policymaking mechanism; the software engineer sees it as
a changing, building, and access control mechanism; and the customer sees it as a
quality assurance mechanism.
22.1.2 Elements of a Configuration Management System
In her comprehensive white paper on software configuration management, Susan Dart
[Dar01] identifies four important elements that should exist when a configuration
management system is developed:
∙ Component elements. A set of tools coupled within a file management system (e.g., a database) that enables access to and management of each software configuration item.
∙ Process elements. A collection of procedures and tasks that define an effective approach to change management (and related activities) for all constituencies involved in the management, engineering, and use of computer software.
∙ Construction elements. A set of tools that automate the construction of software by ensuring that the proper set of validated components (i.e., the correct versions) has been assembled.
∙ Human elements. A set of tools and process features (encompassing other CM elements) used by the software team to implement effective SCM.
These elements (to be discussed in more detail in later sections) are not mutually
exclusive. For example, component elements work in conjunction with construction
elements as the software process evolves. Process elements guide many human
activities that are related to SCM and might therefore be considered human elements
as well.
22.1.3 Baselines
Change is a fact of life in software development. Customers want to modify requirements. Developers want to modify the technical approach. Managers want to modify the project strategy. Why all this modification? The answer is really quite simple. As time passes, all constituencies know more (about what they need, which approach would be best, and how to get it done and still make money). Most software changes are justified, so there’s no point in complaining about them. Rather, be certain that you have mechanisms in place to handle them.

A baseline is a software configuration management concept that helps you to control change without seriously impeding justifiable change. The IEEE [IEE17] defines a baseline as:

A specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures.
Before a software configuration item becomes a baseline, change may be made quickly
and informally. However, once a baseline is established, changes can be made, but a
specific, formal procedure must be applied to evaluate and verify each change.
In the context of software engineering, a baseline is a milestone in the development of software. A baseline is marked by the delivery of one or more software configuration items that have been approved as a consequence of a technical review (Chapter 16). For example, the elements of a design model have been documented and reviewed. Errors are found and corrected. Once all parts of the model have been reviewed, corrected, and then approved, the design model becomes a baseline. Further changes to the program architecture (documented in the design model) can be made only after each has been evaluated and approved. Although baselines can be defined at any level of detail, the most common software baselines are shown in Figure 22.2.
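The rule just described, informal change before a baseline and formal change control after it, can be sketched in a few lines of code. The class and names below are illustrative, not part of any real SCM tool:

```python
# Sketch of the baselining rule: before an SCI is baselined, changes are
# quick and informal; afterward, every change requires an approved
# engineering change order (ECO). All names here are hypothetical.

class BaselineViolation(Exception):
    """Raised when a baselined SCI is changed without an approved ECO."""

class SCI:
    def __init__(self, name, content):
        self.name = name
        self.content = content
        self.baselined = False

    def baseline(self):
        """Mark this SCI as formally reviewed and approved."""
        self.baselined = True

    def change(self, new_content, approved_eco=None):
        # Informal change is allowed only until the SCI becomes a baseline.
        if self.baselined and approved_eco is None:
            raise BaselineViolation(
                f"{self.name} is baselined; an approved ECO is required")
        self.content = new_content

design = SCI("DesignModel", "v0 draft")
design.change("v1 draft")                          # informal change: allowed
design.baseline()
try:
    design.change("v2 draft")                      # no ECO: rejected
except BaselineViolation:
    pass
design.change("v2 draft", approved_eco="ECO-042")  # formal change: allowed
```

The guard in `change()` is the whole idea: the same object accepts two kinds of modification, and baselining simply switches which kind is legal.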
The progression of events that lead to a baseline is also illustrated in Figure 22.2. Software engineering tasks produce one or more SCIs. After SCIs are reviewed and approved, they are placed in a project database (also called a project library or software repository and discussed in Section 22.2). Be sure that the project database is maintained in a centralized, controlled location. When a member of a software engineering team wants to make a modification to a baselined SCI, it is copied from the project database into the engineer’s private workspace. However, this extracted SCI can be modified only if SCM controls (discussed later in this chapter) are followed. The arrows in Figure 22.2 illustrate the modification path for a baselined SCI.
22.1.4 Software Configuration Items
We have already defined a software configuration item as information that is created
as part of the software engineering process. In the extreme, an SCI could be consid
ered to be a single section of a large specification or one test case in a large suite of
tests. More realistically, an SCI is all or part of a work product (e.g., a document, an
entire suite of test cases, a named program component, a multimedia content asset,
or a software tool).
In reality, SCIs are organized to form configuration objects that may be cataloged in the project database with a single name. A configuration object has a name and attributes and is “connected” to other objects by relationships. Referring to
Figure 22.3, the configuration objects DesignSpecification, DataModel, ComponentN, SourceCode, and TestSpecification are each defined separately. However, each of the objects is related to the others as shown by the arrows. A curved arrow indicates a compositional relation. That is, DataModel and ComponentN are part of the object DesignSpecification. A double-headed straight arrow indicates an interrelationship. If a change were made to the SourceCode object, the interrelationships enable you to determine what other objects (and SCIs) might be affected.2
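These relationships can be modeled as a small graph, and traversing the interrelationship edges answers the question “what else might be affected by this change?” The API below is a hypothetical sketch whose object names mirror Figure 22.3:

```python
# Sketch of configuration objects linked by compositional relations and
# interrelationships. Traversing interrelationships yields a candidate
# impact set for a changed object. Illustrative API, not a real tool's.

from collections import defaultdict

class ConfigurationGraph:
    def __init__(self):
        self.part_of = defaultdict(set)   # whole -> its parts (compositional)
        self.related = defaultdict(set)   # interrelationships (bidirectional)

    def add_part(self, whole, part):
        self.part_of[whole].add(part)

    def add_interrelationship(self, a, b):
        self.related[a].add(b)
        self.related[b].add(a)

    def impacted_by(self, changed):
        """Objects reachable from `changed` via interrelationships."""
        seen, stack = {changed}, [changed]
        while stack:
            obj = stack.pop()
            for other in self.related[obj]:
                if other not in seen:
                    seen.add(other)
                    stack.append(other)
        return seen - {changed}

g = ConfigurationGraph()
g.add_part("DesignSpecification", "DataModel")
g.add_part("DesignSpecification", "ComponentN")
g.add_interrelationship("ComponentN", "SourceCode")
g.add_interrelationship("SourceCode", "TestSpecification")

print(sorted(g.impacted_by("SourceCode")))
# prints ['ComponentN', 'TestSpecification']
```

A change to SourceCode thus flags ComponentN and TestSpecification for review, which is exactly the determination the paragraph above describes.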
22.1.5 Management of Dependencies and Changes
We introduced the concept of traceability and the use of traceability matrices in Section 7.2.6. The traceability matrix is one way to document dependencies among requirements, architectural decisions (Section 10.5), and defect causes (Section 17.6). These dependencies need to be considered when determining the impact of a proposed change and when guiding the selection of test cases that should be used for regression testing (Section 20.3). de Sousa and Redmiles write that viewing dependency management as impact management3 helps developers to focus on how changes made affect their work [Sou08].
Figure 22.2 Baselined SCIs and the project database. Software engineering tasks produce SCIs, which pass through technical reviews to become approved SCIs; approved SCIs are stored in the project database, extracted under SCM controls when modification is needed, and returned as modified SCIs. The most common baselines are the system specification, software requirements, design specification, source code, test plans/procedures/data, and the operational system.
2 These relationships are defined within the database. The structure of the database (repository) is discussed in greater detail in Section 22.2.
3 Impact management is discussed further in Section 22.5.2.
Figure 22.3 Configuration objects: DesignSpecification (data design, architectural design, module design, interface design), DataModel, ComponentN (interface description, algorithm description, PDL), SourceCode, and TestSpecification (test plan, test procedure, test cases)
Impact analysis focuses on organizational behavior as well as individual actions. Impact management involves two complementary aspects: (1) ensuring that software developers employ strategies to minimize the impact of their colleagues’ actions on their own work, and (2) encouraging software developers to use practices that minimize the impact of their own work on that of their colleagues. It is important to note that when a developer tries to minimize the impact of her work on others, she is also reducing the work others need to do to minimize the impact of her work on theirs [Sou08].
It is important to maintain software work products to ensure that developers are
aware of the dependencies among the SCIs. Developers must establish discipline when
checking items in and out of the SCM repository and when making approved changes,
as discussed in Section 22.2.
22.2 The SCM Repository
The SCM repository is the set of mechanisms and data structures that allow a software team to manage change in an effective manner. It provides the obvious functions of a modern database management system by ensuring data integrity, sharing, and integration. In addition, the SCM repository provides a hub for the integration of software tools, is central to the flow of the software process, and can enforce uniform structure and format for software engineering work products.

To achieve these capabilities, the repository is defined in terms of a meta-model. The meta-model determines how information is stored in the repository, how data can be accessed by tools and viewed by software engineers, how well data security and integrity can be maintained, and how easily the existing model can be extended to accommodate new needs.
22.2.1 General Features and Content
The features and content of the repository are best understood by looking at it from two perspectives: what is to be stored in the repository and what specific services are provided by the repository. A detailed breakdown of types of representations, documents, and other work products that are stored in the repository is presented in Figure 22.4.

A robust repository provides two different classes of services: (1) the same types of services that might be expected from any sophisticated database management system and (2) services that are specific to the software engineering environment.

A repository that serves a software engineering team should also (1) integrate with or directly support process management functions, (2) support specific rules that govern the SCM function and the data maintained within the repository, (3) provide an interface to other software engineering tools, and (4) accommodate storage of sophisticated data objects (e.g., text, graphics, video, audio).
Figure 22.4 Content of the repository: business content (business rules, business functions, organization structure, information architecture); model content (use cases; the analysis model with scenario-based, flow-oriented, class-based, and behavioral diagrams; the design model with architectural, interface, and component-level diagrams; technical metrics); construction content (source code, object code, system build instructions); V&V content (test cases, test scripts, test results, quality metrics); project management content (project estimates, project schedule, SCM requirements, change requests, change reports, SQA requirements, project reports/audit reports, project metrics); and documents (project plan, SCM/SQA plan, system spec, requirements spec, design documents, test plan and procedure, support documents, user manual)

22.2.2 SCM Features
To support SCM, the repository must be capable of maintaining SCIs related to many different versions of the software. More important, it must provide the mechanisms for assembling these SCIs into a version-specific configuration. The repository tool set needs to provide support for the following features.
Versioning. As a project progresses, many versions (Section 22.5.2) of individual work products will be created. The repository must be able to save all these versions to enable effective management of product releases and to permit developers to go back to previous versions during testing and debugging.

The repository must be able to control a wide variety of object types, including text, graphics, bit maps, complex documents, and unique objects such as screen and report definitions, object files, test data, and results. A mature repository tracks versions of objects with arbitrary levels of granularity; for example, a single data definition or a cluster of modules can be tracked.
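The versioning feature can be sketched as a store that retains every saved version of a work product and lets a developer check out any one of them. Real repositories typically store deltas rather than full snapshots; the snapshot approach below is a simplification for illustration, and all names are hypothetical:

```python
# Minimal sketch of versioning: every saved version of a work product is
# retained so developers can go back to any previous version. Full
# snapshots are kept here for clarity; real tools usually store deltas.

class VersionStore:
    def __init__(self):
        self._versions = {}          # object name -> list of snapshots

    def save(self, name, content):
        """Record a new version; returns its 1-based version number."""
        history = self._versions.setdefault(name, [])
        history.append(content)
        return len(history)

    def checkout(self, name, version=None):
        """Return the latest version, or a specific earlier one."""
        history = self._versions[name]
        return history[-1] if version is None else history[version - 1]

store = VersionStore()
store.save("user_manual.md", "draft 1")
store.save("user_manual.md", "draft 2")
assert store.checkout("user_manual.md") == "draft 2"
assert store.checkout("user_manual.md", version=1) == "draft 1"
```

The same store works at any granularity the repository supports: the `name` key could identify a single data definition or a whole cluster of modules.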
Dependency Tracking and Change Management. The repository manages a wide variety of relationships among the data elements stored in it. These include relationships between enterprise entities and processes, among the parts of an application design, between design components and the enterprise information architecture, between design elements and deliverables, and so on. Some of these relationships are merely associations, and some are dependencies or mandatory relationships.

The ability to keep track of all these relationships is crucial to the integrity of the information stored in the repository and to the generation of deliverables based on it, and it is one of the most important contributions of the repository concept to the improvement of the software development process. For example, if a UML class diagram is modified, the repository can detect whether related classes, interface descriptions, and code components also require modification and can bring affected SCIs to the developer’s attention.
Requirements Tracing. This special function depends on link management and provides the ability to track all the design and construction components and deliverables that result from a specific requirements specification (forward tracing). In addition, it provides the ability to identify which requirement generated any given work product (backward tracing).
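Forward and backward tracing are inverses of the same set of links, which a short sketch makes concrete. The requirement and work-product names below are invented for illustration:

```python
# Sketch of requirements tracing: forward tracing maps a requirement to
# the work products derived from it; backward tracing inverts the same
# links. Hypothetical names throughout.

from collections import defaultdict

class TraceabilityMatrix:
    def __init__(self):
        self._forward = defaultdict(set)   # requirement -> work products
        self._backward = {}                # work product -> requirement

    def link(self, requirement, work_product):
        self._forward[requirement].add(work_product)
        self._backward[work_product] = requirement

    def forward_trace(self, requirement):
        """All work products that result from this requirement."""
        return sorted(self._forward[requirement])

    def backward_trace(self, work_product):
        """The requirement that generated this work product."""
        return self._backward[work_product]

tm = TraceabilityMatrix()
tm.link("REQ-7", "design/login_component")
tm.link("REQ-7", "test/login_suite")
assert tm.forward_trace("REQ-7") == ["design/login_component",
                                     "test/login_suite"]
assert tm.backward_trace("test/login_suite") == "REQ-7"
```

Because both directions are maintained from a single `link()` call, the two views can never drift out of sync, which is the practical value of link management.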
Configuration Management. A configuration management facility keeps track of a
series of configurations representing specific project milestones or production releases.
Audit Trails. An audit trail establishes additional information about when, why, and by whom changes are made. Information about the source of changes can be entered as attributes of specific objects in the repository. A repository trigger mechanism is helpful for prompting the developer or the tool that is being used to initiate entry of audit information (such as the reason for a change) whenever a design element is modified.
22.3 Version Control Systems
Version control combines procedures and tools to manage different versions of configuration objects that are created during the software process. A version control system implements or is directly integrated with four major capabilities: (1) a project database (repository) that stores all relevant configuration objects, (2) a version management capability that stores all versions of a configuration object (or enables any version to be constructed using differences from past versions), and (3) a make facility that enables you to collect all relevant configuration objects and construct a specific version of the software. In addition, version control and change control systems often implement (4) an issues tracking (also called bug tracking) capability that enables the team to record and track the status of all outstanding issues associated with each configuration object.
A number of version control systems establish a change set—a collection of all
changes (to some baseline configuration) that are required to create a specific version
of the software. Dart [Dar91] notes that a change set “captures all changes to all files
in the configuration along with the reason for changes and details of who made the
changes and when.”
A number of named change sets can be identified for an application or system. This enables you to construct a version of the software by specifying the change sets (by name) that must be applied to the baseline configuration. To accomplish this, a system modeling approach is applied. The system model contains: (1) a template that includes a component hierarchy and a “build order” for the components that describes how the system must be constructed, (2) construction rules, and (3) verification rules.4
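The change-set idea can be sketched as named collections of file-level changes applied, in order, to a baseline configuration. The dataclass and function below are illustrative, not a real tool’s API:

```python
# Sketch of change sets: each named set records file-level changes plus
# who made them and why, and a version is built by applying selected
# change sets to the baseline configuration. Hypothetical names.

from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    name: str
    author: str
    reason: str
    changes: dict = field(default_factory=dict)   # file -> new content

def build_version(baseline, change_sets):
    """Apply named change sets, in order, to a copy of the baseline."""
    config = dict(baseline)                       # baseline stays untouched
    for cs in change_sets:
        config.update(cs.changes)
    return config

baseline = {"main.c": "v1", "util.c": "v1"}
fix_null = ChangeSet("fix-null-deref", "amy", "crash on empty input",
                     {"util.c": "v2"})
add_login = ChangeSet("add-login", "raj", "new stakeholder need",
                      {"main.c": "v2", "login.c": "v1"})

release = build_version(baseline, [fix_null, add_login])
assert release == {"main.c": "v2", "util.c": "v2", "login.c": "v1"}
```

Note that the baseline dictionary is never mutated; specifying a different list of change sets to `build_version` yields a different version of the software from the same baseline, which is exactly what naming change sets buys you.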
A number of different automated approaches to version control have been proposed
over the years.5 The primary difference in approaches is the sophistication of the
attributes that are used to construct specific versions and variants of a system and the
mechanics of the process for construction.
22.4 Continuous Integration
Best practices for SCM include: (1) keep the number of code variants small, (2) test early and often, (3) integrate early and often, and (4) use tools to automate testing, building, and code integration. Continuous integration (CI) is important to agile developers following the DevOps workflow (Section 3.5.3). CI also adds value to SCM by ensuring that each change is promptly integrated into the project source code, compiled, and tested automatically. CI offers development teams several concrete advantages [Mol12]:
Accelerated feedback. Notifying developers immediately when integration fails allows fixes to be made while the number of performed changes is small.

Increased quality. Building and integrating software whenever necessary provides confidence in the quality of the developed product.
4 It is also possible to query the system model to assess how a change in one component will impact other components.
5 GitHub (https://github.com/), Perforce (https://www.perforce.com/), and Apache Subversion, also known as SVN (http://subversion.apache.org/), are popular version control systems.
6 Puppet (https://puppet.com/), Jenkins (https://jenkins.io/), and Hudson (http://hudsonci.org/) are examples of CI tools. TravisCI (https://travisci.org/) is a CI tool designed to sync with projects residing on GitHub.
Reduced risk. Integrating components early avoids risking a long integration phase because design failures are discovered and fixed early.

Improved reporting. Providing additional information (e.g., code analysis metrics) allows for more accurate configuration status accounting.
CI is becoming a key technology as software organizations begin their shift to more
agile software development processes. CI is best done using specialized tools.6 CI
allows project managers, quality assurance managers, and software engineers to
improve software quality by reducing the likelihood of defects escaping outside the
development team. Early defect capture always reduces the development costs by
allowing cheaper fixes earlier in the software project time line.
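The CI behavior described in this section, build and test every integrated change and notify developers immediately on failure, can be sketched as a single integration step. The `build` and `run_tests` callables below are stand-ins for real compile and test-suite runs, and all names are hypothetical:

```python
# Sketch of one CI cycle: each integrated change triggers an automated
# build and test run, and the team is notified immediately on failure.
# The callables are placeholders for real build/test tooling.

def integrate(change, build, run_tests, notify):
    """Integrate one change; return True if the build and tests pass."""
    if not build(change):
        notify(f"build failed for {change}")
        return False
    failures = run_tests(change)
    if failures:
        notify(f"{len(failures)} test failure(s) for {change}")
        return False
    return True

messages = []
ok = integrate(
    "commit 1a2b3c",
    build=lambda change: True,                 # compiles cleanly
    run_tests=lambda change: ["test_login"],   # one failing test
    notify=messages.append,
)
assert ok is False
assert messages == ["1 test failure(s) for commit 1a2b3c"]
```

Because the failing change is caught at integration time, the number of recent changes to inspect is small, which is the accelerated-feedback advantage listed above.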
22.5 The Change Management Process
The software change management process defines a series of tasks that have four primary objectives: (1) to identify all items that collectively define the software configuration, (2) to manage changes to one or more of these items, (3) to facilitate the construction of different versions of an application, and (4) to ensure that software quality is maintained as the configuration evolves over time.
A process that achieves these objectives need not be bureaucratic and ponderous,
but it must be characterized in a manner that enables a software team to develop
answers to a set of complex questions:

∙ How does a software team identify the discrete elements of a software configuration?
∙ How does an organization manage the many existing versions of a program (and its documentation) in a manner that will enable change to be accommodated efficiently?
∙ How does an organization control changes before and after software is released to a customer?
∙ How does an organization assess the impact of change and manage the impact effectively?
∙ Who has responsibility for approving and ranking requested changes?
∙ How can we ensure that changes have been made properly?
∙ What mechanism is used to apprise others of changes that are made?
These questions lead to the definition of five SCM tasks—identification, version
control, change control, configuration auditing, and reporting—illustrated in
Figure 22.5.
Figure 22.5 Layers of the SCM process. SCM tasks form concentric layers around the SCIs: identification, change control, version control, configuration auditing, and reporting. SCIs flow outward through the layers to become part of the software, version Vm.n.
Referring to the figure, SCM tasks can be viewed as concentric layers. SCIs flow
outward through these layers throughout their useful life, ultimately becoming part of
the software configuration of one or more versions of an application or system. As
an SCI moves through a layer, the actions implied by each SCM task may or may not
be applicable. For example, when a new SCI is created, it must be identified. However,
if no changes are requested for the SCI, the change control layer does not apply. The
SCI is assigned to a specific version of the software (version control mechanisms
come into play). A record of the SCI (its name, creation date, version designation,
etc.) is maintained for configuration auditing purposes and reported to those with a
need to know. In the sections that follow, we examine each of these SCM process
layers in more detail.
22.5.1 Change Control
For a large software project, uncontrolled change rapidly leads to chaos. For such
projects, change control combines human procedures and automated tools to provide
a mechanism for the control of change. The change control process is illustrated
schematically in Figure 22.6. A change request is submitted and evaluated to assess
technical merit, potential side effects, overall impact on other configuration objects
and system functions, and the projected cost of the change. The results of the evaluation
are presented as a change report, which is used by a change control authority
(CCA)—a person or group that makes a final decision on the status and priority of
the change. An engineering change order (ECO) is generated for each approved
change. The ECO describes the change to be made, the constraints that must be
respected, and the criteria for review and audit.
CHAPTER 22 SOFTWARE CONFIGURATION MANAGEMENT 449
[Figure 22.6  The change control process — flowchart: a change request from the user is evaluated by the developer; the request is either denied (and the user informed) or queued for action. For queued requests: assign individuals to configuration objects (COs); check out the COs (items); make the change; review (audit) the change; check in the changed COs; establish a baseline for testing; perform QA and testing; promote changes for the next release; rebuild the new software version; review/audit all COs; include changes in the new version; distribute the new version.]
The object(s) to be changed can be placed in a directory that is controlled solely
by the software engineer making the change. A version control system (see the CVS
sidebar) updates the original file once the change has been made. As an alternative,
the object(s) to be changed can be “checked out” of the project database (repository),
the change is made, and appropriate SQA activities are applied. The object(s)
is (are) then “checked in” to the database, and appropriate version control mechanisms
(Section 22.3) are used to create the next version of the software.
These version control mechanisms, integrated within the change control process,
implement two important elements of change management—access control and synchronization
control. Access control governs which software engineers have the
authority to access and modify a particular configuration object. Synchronization
control helps to ensure that parallel changes, performed by two different people, don’t
overwrite one another.
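Synchronization control is often realized as an exclusive check-out (pessimistic locking). The toy repository below sketches both controls under that assumption; note that modern tools such as Git instead favor optimistic, merge-based synchronization:

```python
class Repository:
    """Toy repository enforcing access control and synchronization control."""

    def __init__(self, authorized):
        self.authorized = set(authorized)  # access control: who may modify
        self.locks = {}                    # object name -> engineer holding it

    def check_out(self, obj, engineer):
        if engineer not in self.authorized:
            raise PermissionError(f"{engineer} may not modify {obj}")
        if self.locks.get(obj) not in (None, engineer):
            # synchronization control: an exclusive lock prevents two people
            # from silently overwriting each other's parallel changes
            raise RuntimeError(f"{obj} is locked by {self.locks[obj]}")
        self.locks[obj] = engineer

    def check_in(self, obj, engineer):
        if self.locks.get(obj) != engineer:
            raise RuntimeError("check in only objects you checked out")
        del self.locks[obj]  # lock released; a new version would be recorded

repo = Repository(authorized=["vinod", "jamie"])
repo.check_out("ui_module.c", "vinod")
```

The pessimistic model is simple but can serialize work; merge-based tools trade the lock for a conflict-resolution step at check-in.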
You may feel uncomfortable with the level of bureaucracy implied by the change
control process description shown in Figure 22.6. This feeling is not uncommon.
Without proper safeguards, change control can retard progress and create unnecessary
red tape. Most software developers who have change control mechanisms (unfortunately,
many have none) have created a number of layers of control to help avoid the
problems alluded to here.
Prior to an SCI becoming a baseline, only informal change control need be applied.
The developer of the configuration object (SCI) in question may make whatever
changes are justified by project and technical requirements (as long as changes do not
affect broader system requirements that lie outside the developer’s scope of work).
Once the object has undergone technical review and has been approved, a baseline
can be created.7 Once an SCI becomes a baseline, project-level change control is
implemented. Now, to make a change, the developer must gain approval from the
project manager (if the change is “local”) or from the CCA if the change affects other
SCIs. In some cases, the developer dispenses with the formal generation of change
requests, change reports, and ECOs. However, assessment of each change is conducted
and all changes are tracked and reviewed.
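The three escalating layers might be summarized as a simple lookup from an SCI's status to the control that applies (an illustrative mapping, not a prescribed API):

```python
def required_control(sci_status, change_is_local=True):
    """Map an SCI's life-cycle status to the change control it triggers."""
    if sci_status == "in_development":      # before the SCI is baselined
        return "informal: developer's discretion"
    if sci_status == "baselined":           # project-level change control
        return ("project manager approval" if change_is_local
                else "CCA approval")
    if sci_status == "released":            # after release to customers
        return "formal change control (Figure 22.6)"
    raise ValueError(f"unknown status: {sci_status}")
```

The local/non-local distinction mirrors the text: a local change to a baselined SCI needs only the project manager, while one that affects other SCIs escalates to the CCA.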
When the software product is released to customers, formal change control is
instituted. The formal change control procedure has been outlined in Figure 22.6.
The change control authority plays an active role in the second and third layers of
control. Depending on the size and character of a software project, the CCA may be
composed of one person—the project manager—or a number of people (e.g., representatives
from software, hardware, database engineering, support, marketing). The
role of the CCA is to take a global view, that is, to assess the impact of change beyond
the SCI in question. How will the change affect hardware? How will the change affect
performance? How will the change modify customers’ perception of the product? How
will the change affect product quality and reliability? These and many other questions
are addressed by the CCA.
7 A baseline can be created for other reasons as well. For example, when “daily builds” are
created, all components checked in by a given time become the baseline for the next day’s
work.
SafeHome: SCM Issues
The scene: Doug Miller’s office
as the SafeHome software proj-
ect begins.
The players: Doug Miller, manager of the
SafeHome software engineering team, and
Vinod Raman, Jamie Lazar, and other members
of the product software engineering team.
The conversation:
Doug: I know it’s early, but we’ve got to talk
about change management.
Vinod (laughing): Hardly. Marketing
called this morning with a few “second
thoughts.” Nothing major, but it’s just the
beginning.
Jamie: We’ve been pretty informal about
change management on past projects.
Doug: I know, but this is bigger and more
visible, and as I recall . . .
Vinod (nodding): We got killed by uncontrolled
changes on the home lighting control
project . . . remember the delays that . . .
Doug (frowning): A nightmare that I’d prefer
not to relive.
Jamie: So what do we do?
Doug: As I see it, three things. First we have
to develop—or borrow—a change control
process.
Jamie: You mean how people request
changes?
Vinod: Yeah, but also how we evaluate the
change, decide when to do it (if that’s what we
decide), and how we keep records of what’s
affected by the change.
Doug: Second, we’ve got to get a really
good SCM tool for change and version
control.
Jamie: We can build a database for all of our
work products.
Vinod: They’re called SCIs in this context,
and most good tools provide some support
for that.
Doug: That’s a good start, now we have to . . .
Jamie: Uh, Doug, you said there were three
things . . .
Doug (smiling): Third—we’ve all got to
commit to follow the change management
process and use the tools—no matter what,
okay?
22.5.2 Impact Management
A web of software work product interdependencies must be considered every time a
change is made. Impact management encompasses the work required to properly
understand these interdependencies and control their effects on other SCIs (and the
people who are responsible for them).
Impact management is accomplished with three actions [Sou08]. First, an impact
network identifies the members of a software team (and other stakeholders) who might
affect or be affected by changes that are made to the software. A clear definition of
the software architecture (Chapter 10) assists greatly in the creation of an impact
network. Next, forward impact management assesses the impact of your own changes
on the members of the impact network and then informs members of the impact of
those changes. Finally, backward impact management examines changes that are made
by other team members and their impact on your work and incorporates mechanisms
to mitigate the impact.
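To make the three actions concrete, an impact network can be modeled as a dependency graph: forward impact follows edges out of a changed SCI, while backward impact follows edges into the SCIs you own. The representation and names below are assumptions for illustration; [Sou08] does not prescribe one:

```python
# Hypothetical impact network: each SCI maps to the SCIs that depend on it,
# so edges point in the direction that impact propagates.
depends_on_me = {
    "db_schema": ["query_layer", "report_gen"],
    "query_layer": ["report_gen"],
    "report_gen": [],
}
owners = {"db_schema": "vinod", "query_layer": "jamie", "report_gen": "shakira"}

def forward_impact(changed_sci):
    """Forward impact management: everyone downstream who must be informed."""
    seen, stack = set(), list(depends_on_me.get(changed_sci, []))
    while stack:
        sci = stack.pop()
        if sci not in seen:
            seen.add(sci)
            stack.extend(depends_on_me.get(sci, []))
    return {owners[sci] for sci in seen}

def backward_impact(my_sci):
    """Backward impact management: SCIs whose changes ripple into my work."""
    return {sci for sci, deps in depends_on_me.items() if my_sci in deps}
```

A well-documented software architecture (Chapter 10) is what makes such a graph possible to construct in the first place.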
22.5.3 Configuration Audit
Identification, version control, and change control help you to maintain order in what
would otherwise be a chaotic and fluid situation. However, even the most successful
control mechanisms track a change only until an ECO is generated. How can a software
team ensure that the change has been properly implemented? The answer is
twofold: (1) technical reviews and (2) the software configuration audit.
The technical review (Chapter 16) focuses on the technical correctness of the configuration
object that has been modified. The reviewers assess the SCI to determine
consistency with other SCIs, omissions, or potential side effects. A technical review
should be conducted for all but the most trivial changes.
A software configuration audit complements the technical review by assessing a
configuration object for characteristics that are generally not considered during review.
The audit asks and answers the following questions:
1. Has the change specified in the ECO been made? Have any additional modifications
been incorporated?
2. Has a technical review been conducted to assess technical correctness?
3. Has the software process been followed, and have software engineering standards
been properly applied?
4. Has the change been “highlighted” in the SCI? Have the change date and
change author been specified? Do the attributes of the configuration object
reflect the change?
5. Have SCM procedures for noting the change, recording it, and reporting it
been followed?
6. Have all related SCIs been properly updated?
In some cases, the audit questions are asked as part of a technical review. However,
when SCM is a formal activity, the configuration audit is conducted separately by the
quality assurance group. Such formal configuration audits also ensure that the correct
SCIs (by version) have been incorporated into a specific build and that all documentation
is up to date and consistent with the version that has been built.
22.5.4 Status Reporting
Configuration status reporting (sometimes called status accounting) is an SCM task
that answers the following questions: (1) What happened? (2) Who did it? (3) When
did it happen? (4) What else will be affected?
The flow of information for configuration status reporting (CSR) is illustrated in
Figure 22.6. At the very least, develop a “need to know” list for every configuration
object and keep it up to date. When a change is made, be sure that everyone on the
list is notified. Each time an SCI is assigned new or updated identification, a CSR
entry is made. Each time a change is approved by the CCA (i.e., an ECO is issued),
a CSR entry is made. Each time a configuration audit is conducted, the results are
reported as part of the CSR task. Output from CSR may be placed in an online database
or website, so that software developers or support staff can access change information
by keyword category. In addition, a CSR report is generated on a regular basis
and is intended to keep management and practitioners apprised of important changes.
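A minimal CSR mechanism, a need-to-know list plus a timestamped entry per event, might look like the sketch below (illustrative only; tool-supported SCM generates such entries automatically):

```python
import datetime

# Hypothetical need-to-know lists, one per configuration object.
need_to_know = {"login_module": ["vinod", "doug"], "ui_layout": ["jamie"]}
csr_log = []

def record_csr(sci, event, author):
    """Append a CSR entry and return the people who must be notified."""
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "what": event,   # What happened?
        "who": author,   # Who did it?
        "sci": sci,      # What else will be affected? -> need-to-know list
    }
    csr_log.append(entry)
    return need_to_know.get(sci, [])

notify = record_csr("login_module", "ECO-42 approved by CCA", "doug")
```

Each entry answers the four CSR questions, and the returned list drives the "everyone on the list is notified" step.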
22.6  Mobility and Agile Change Management
Earlier in this book, we discussed the special nature of WebApps and MobileApps and
the specialized methods8 that are required to build them. Game developers face similar
challenges, as do all agile development teams. Among the many characteristics that differentiate
these applications from traditional software is the ubiquitous nature of change.
Mobile developers and game developers often use an iterative, incremental process
model that applies many principles derived from agile software development
(Chapter 4). Using this approach, an engineering team often develops an increment
in a very short time period using a customer-driven approach. Subsequent increments
add additional content and functionality, and each is likely to implement changes that
lead to enhanced content, better usability, improved aesthetics, better navigation,
enhanced performance, and stronger security. Therefore, in the agile world of app and
game development, change is viewed somewhat differently.
If you’re a member of a software team that builds apps or games, you must embrace
change. And yet, a typical agile team eschews all things that appear to be process-heavy,
bureaucratic, and formal. Software configuration management is often viewed
(albeit incorrectly) to have these characteristics. This seeming contradiction is remedied
not by rejecting SCM principles, practices, and tools, but rather by molding them
to meet the special needs of mobile projects.
22.6.1 e-Change Control
The work flow associated with change control for conventional software (Section 22.5.1)
is generally too ponderous for WebApp and mobile software development. It is
unlikely that the change request, change report, and engineering change order sequence
can be achieved in an agile manner that is acceptable for many game and app development
projects. How then do we manage a continuous stream of changes requested
for content and functionality?
To implement effective change management within the “code and go” philosophy
that continues to dominate much of game and mobile development, the conventional
change control process can be modified. Each change should be categorized into one
of four classes:
Class 1. A content or function change that corrects an error or enhances local
content or functionality.
Class 2. A content or function change that has an impact on other content
objects or functional components.
Class 3. A content or function change that has a broad impact across an app
(e.g., major extension of functionality, significant enhancement or reduction in content,
major required changes in navigation).
Class 4. A major design change (e.g., a change in interface design or navigation
approach) that will be immediately noticeable to one or more categories of user.
Once the requested change has been categorized, it can be processed according to the
algorithm shown in Figure 22.7 for WebApps but is equally applicable for apps and games.
8 See [Pre08] for a comprehensive discussion of Web engineering methods.
[Figure 22.7  Managing changes for WebApps — flowchart: the requested change is first classified. A class 1 change gets a brief written description and proceeds directly; a class 2 change requires acquiring related objects and assessing the impact of the change, with further evaluation if changes are required in related objects; a class 3 change adds a brief written description transmitted to all team members for review; a class 4 change adds a description transmitted to all stakeholders for review, with further evaluation where required. Once a change is OK to make, the object(s) to be changed are checked out, the changes are made (design, construct, test), the changed object(s) are checked in, and the result is published to the WebApp.]
Referring to the figure, class 1 and 2 changes are treated informally and are handled
in an agile manner. For a class 1 change, you would evaluate the impact of the change,
but no external review or documentation is required. As the change is made, standard
check-in and check-out procedures are enforced by configuration repository tools. For
class 2 changes, you should review the impact of the change on related objects (or
ask other developers responsible for those objects to do so). If the change can be made
without requiring significant changes to other objects, modification occurs without
additional review or documentation. If substantive changes are required, further
evaluation and planning are necessary.
Class 3 and 4 changes are also treated in an agile manner, but some descriptive
documentation and more formal review procedures are required. A change description—
describing the change and providing a brief assessment of the impact of the change—is
developed for class 3 changes. The description is distributed to all members of the team
who review it to better assess its impact. A change description is also developed for
class 4 changes, but in this case all stakeholders conduct the review.
22.6.2 Content Management
Content management is related to configuration management in the sense that a content
management system (CMS) establishes a process (supported by appropriate tools)
that acquires existing content (from a broad array of app and/or game configuration
objects), structures it in a way that enables it to be presented to an end user, and then
provides it to the client-side environment for display.
The most common use of a content management system occurs when a dynamic
application is built. Apps and games create screen displays “on the fly.” That is,
the user typically performs an action that the software responds to by changing the
information displayed on the screen. The user action may cause the app to query
a server-side database; it then formats the information accordingly and presents it
to the user.
For example, a music store (e.g., Apple iTunes) provides hundreds of thousands of
tracks for sale. When a user requests a music track, a database is queried and a variety
of information about the artist, the CD (e.g., its cover image or graphics), the
musical content, and sample audio are all downloaded and configured into a standard
content template. The resultant page is built on the server side and passed to the client
side for examination by the end user. A generic representation for WebApps is
shown in Figure 22.8.
22.6.3 Integration and Publishing
Content management systems are useful for composing Web services to create
context-aware MobileApps and updating game-level scenes at run time, as well as
building dynamic Web pages. In the most general sense, a CMS “configures” content
for the end user by invoking three integrated subsystems: a collection subsystem, a
management subsystem, and a publishing subsystem [Boi04].
The Collection Subsystem. Content is derived from data and information that must
be created or acquired by a content developer. The collection subsystem encompasses
all actions required to create and/or acquire content, and the technical functions that
are necessary to (1) convert content into a form that can be represented by a markup
[Figure 22.8  Content management system — server side: configuration objects and a database feed the content management system, which uses templates to generate HTML code + scripts that are delivered to the client-side browser.]
language (e.g., HTML, XML), and (2) organize content into screens that can be
displayed efficiently on the client side.
Content creation and acquisition (often called authoring or level design for
games) commonly occurs in parallel with other development activities and is often
conducted by nontechnical content developers. This activity combines elements of
creativity and research and is supported by tools that enable the content author to
characterize content in a manner that can be standardized for use within the app
or game.
Once content exists, it must be converted to conform to the requirements of a CMS.
This implies stripping raw content of any unnecessary information (e.g., redundant
graphical representations), formatting the content to conform to the requirements of
the CMS, and mapping the results into an information structure that will enable it to
be managed and published.
The Management Subsystem. Once content exists, it must be stored in a repository,
cataloged for subsequent acquisition and use, and labeled to define (1) current status
(e.g., is the content object complete or in development?), (2) the appropriate version
of the content object, and (3) related content objects. Configuration management is
performed within this subsystem. Therefore, the management subsystem implements
a repository that encompasses the following elements:
∙ Content database. The information structure that has been established to
store all content objects.
∙ Database capabilities. Functions that enable the CMS to search for specific
content objects (or categories of objects), store and retrieve objects, and
manage the file structure that has been established for the content.
∙ Configuration management functions. The functional elements and associated
workflow that support content object identification, version control,
change management, change auditing, and reporting.
In addition to these elements, the management subsystem implements an administration
function that encompasses the metadata and rules that control the overall
structure of the content and the manner in which it is supported.
The Publishing Subsystem. Content must be extracted from the repository, converted
to a form that is amenable to publication, and formatted so that it can be
transmitted to client-side screen displays. The publishing subsystem accomplishes
these tasks using a series of templates. Each template is a function that builds a
publication using one of three different components [Boi04]:
∙ Static elements. Text, graphics, media, and scripts that require no further
processing are transmitted directly to the client side.
∙ Publication services. Function calls to specific retrieval and formatting
services that personalize content (using predefined rules), perform data
conversion, and build appropriate navigation links.
∙ External services. Access to external corporate information infrastructure
such as enterprise data or “backroom” applications.
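Template-driven publication can be sketched with Python's standard string.Template: static elements stay literal in the template, while placeholders stand in for publication-service retrieval calls. The catalog lookup and field names below are hypothetical:

```python
from string import Template

# Static elements (markup needing no further processing) plus placeholders
# that stand in for publication-service calls.
page_template = Template(
    "<html><body>"
    "<h1>$title</h1>"
    "<p>Artist: $artist</p>"
    "<p>Price: $price</p>"
    "</body></html>"
)

def retrieve_track(track_id):
    """Stand-in for a server-side database query (a publication service)."""
    catalog = {
        "t1": {"title": "Blue Train", "artist": "John Coltrane",
               "price": "9.99 USD"},
    }
    return catalog[track_id]

# The server side builds the page and passes it to the client side.
page = page_template.substitute(retrieve_track("t1"))
```

A production CMS layers personalization rules, data conversion, and navigation-link building on top of this same fill-in-the-template idea.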
A content management system that encompasses each of these subsystems is
applicable for major Web and mobile projects. However, the basic philosophy and
functionality associated with a CMS are applicable to all dynamic applications.
22.6.4 Version Control
As apps and games evolve through a series of increments, a number of different versions
may exist at the same time. One version (the current operational app) is available
via the Internet for end users; another version (the next app increment) may be in the
final stages of testing prior to deployment; a third version is in development and
represents a major update in content, interface aesthetics, and functionality. Configuration
objects must be clearly defined so that each can be associated with the appropriate
version. Without some type of control, developers and content creators may end
up overwriting each other’s changes.
It’s likely that you’ve experienced a similar situation. To avoid it, a version control
process is required.
1. A central repository for the app or game project should be established.
The repository will hold current versions of all configuration objects (content,
functional components, and others).
2. Each developer creates his or her own working folder. The folder contains those
objects that are being created or changed at any given time.
3. The clocks on all developer workstations should be synchronized. This is
done to avoid overwriting conflicts when two developers make updates that
are very close to one another in time.
4. As new configuration objects are developed or existing objects are
changed, they are imported into the central repository. The version control
tool will manage all check-in and check-out functions from the working folders
of each developer. The tool should also provide automatic email updates
to all interested parties when changes to the repository are made.
5. As objects are imported or exported from the repository, an automatic,
time-stamped log message is made. This provides useful information for
auditing and can become part of an effective reporting scheme.
The version control tool maintains different versions of the app and can revert to
an older version if required.
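Steps 1 through 5 above can be sketched as a toy central repository (a deliberate simplification; real version control tools such as Git or Subversion provide these functions at scale):

```python
import datetime

class CentralRepository:
    """Toy repository with check-in/check-out and a timestamped log."""

    def __init__(self, subscribers):
        self.versions = {}            # object name -> versions, latest last
        self.log = []                 # timestamped import/export records
        self.subscribers = subscribers

    def _record(self, action, obj):
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.log.append((stamp, action, obj))
        # Step 4: automatic notification to all interested parties.
        return [f"notify {s}: {action} {obj}" for s in self.subscribers]

    def export(self, obj):
        """Check out the current version into a developer's working folder."""
        self._record("export", obj)
        return self.versions[obj][-1]

    def import_(self, obj, content):
        """Check a new or changed object back in as the next version."""
        self.versions.setdefault(obj, []).append(content)
        return self._record("import", obj)

    def revert(self, obj):
        """Step back to the previous version if required."""
        self.versions[obj].pop()
        self._record("revert", obj)

repo = CentralRepository(subscribers=["doug"])
repo.import_("index.html", "v1")
repo.import_("index.html", "v2")
```

Every import and export lands in the log, which is exactly what the auditing and reporting functions of Section 22.6.5 review.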
22.6.5 Auditing and Reporting
In the interest of agility, the auditing and reporting functions are deemphasized during
the development of games or apps.9 However, they are not eliminated altogether. All
objects that are checked into or out of the repository are recorded in a log that can
be reviewed at any point in time. A complete log report can be created so that all
members of the team have a chronology of changes over a defined period of time. In
addition, an automated email notification (addressed to those developers and stakeholders
who have interest) can be sent every time an object is checked in or out of
the repository.
22.7  Summary
Software configuration management is an umbrella activity that is applied throughout
the software process. SCM identifies, controls, audits, and reports modifications that
invariably occur while software is being developed and after it has been released to
a customer. All work products created as part of software engineering become part
of a software configuration. The configuration is organized in a manner that enables
orderly control of change.
The software configuration is composed of a set of interrelated objects, also called
software configuration items, that are produced as a result of some software engineering
activity. In addition to software engineering work products, the development environment
that is used to create software can also be placed under configuration control.
All SCIs are stored within a repository that implements a set of mechanisms and data
9 This is beginning to change. There is an increasing emphasis on SCM as one element of
application security [Fug14]. By providing a mechanism for tracking and reporting every
change made to every application object, a change management tool can provide valuable
protection against malicious changes.
structures to ensure data integrity, provide integration support for other software tools,
support information sharing among all members of the software team, and implement
functions in support of version and change control.
Once a configuration object has been developed and reviewed, it becomes a baseline.
Changes to a baselined object result in the creation of a new version of that
object. The evolution of a program can be tracked by examining the revision history
of all configuration objects. Version control is the set of procedures and tools for
managing the use of these objects.
Change control is a procedural activity that ensures quality and consistency as
changes are made to a configuration object. The change control process begins with
a change request, leads to a decision to make or reject the request for change, and
culminates with a controlled update of the SCI that is to be changed.
The configuration audit is an SQA activity that helps to ensure that quality is
maintained as changes are made. Status reporting provides information about each
change to those with a need to know.
Configuration management for apps or games is similar in most respects to SCM
for conventional software. However, each of the core SCM tasks should be streamlined
to make it as lean as possible, and special provisions for content management must
be implemented.
Problems and Points to Ponder
22.1. Why is the first law of system engineering true? Provide specific examples for each of
the four fundamental reasons for change.
22.2. What are the four elements that exist when an effective SCM system is implemented?
Discuss each briefly.
22.3. Assume that you’re the manager of a small project. What baselines would you define for
the project, and how would you control them?
22.4. Design a project database (repository) system that would enable a software engineer to
store, cross-reference, trace, update, change, and so forth all important software configuration
items. How would the database handle different versions of the same program? Would source
code be handled differently than documentation? How will two developers be precluded from
making different changes to the same SCI at the same time?
22.5. Research an existing SCM tool, and describe how it implements control for versions,
variants, and configuration objects in general.
22.6. Research an existing SCM tool, and describe how it implements the mechanics of version
control. Alternatively, read two or three papers on SCM and describe the different data
structures and referencing mechanisms that are used for version control.
22.7. Develop a checklist for use during configuration audits.
22.8. What is the difference between an SCM audit and a technical review? Can their function
be folded into one review? What are the pros and cons?
22.9. Briefly describe the differences between SCM for conventional software and SCM for
WebApps or MobileApps.
22.10. Describe the value of continuous integration tools to agile software developers.
CHAPTER 23
Software Metrics and Analytics
What is it? Software process and project metrics are quantitative measures that enable you
to gain insight into the efficacy of the software process and the projects that are conducted
using the process as a framework. Product metrics help software engineers gain insight
into the design and construction of the software they build.
Who does it? Software metrics are analyzed and assessed by software managers. Software
engineers use product metrics to help them build higher-quality software.
Why is it important? If you don’t measure, judgment can only be based on subjective
evaluation. You need objective criteria to help guide the design of data, architecture,
interfaces, and components. If you measure, trends (either good or bad) can be spotted,
better estimates can be made, and true improvement can be accomplished over time.
What are the steps? Derive the process, project, and product measures and metrics
that you intend to use. Collect the metrics and then analyze them against historical
data. Use the analysis results to gain insight into the process, project, and product.
What is the work product? A set of software metrics that provides insight into the process
and understanding of the project.
How do I ensure that I’ve done it right? Define only a few metrics, and then use them
to gain insight into the quality of a software process, project, and product. Apply a consistent,
yet simple measurement scheme that is never to be used to assess, reward, or punish
individual performance.
Key Concepts: data science; defect removal efficiency (DRE); goal; indicator; measure; measurement; metrics (arguments for, attributes of, design metrics, establishing a program, function-oriented, LOC-based metrics, private and public, process, productivity, project, size-oriented, software quality, source code, testing); software analytics
A key element of any engineering process is measurement. Measurement can be
applied to the software process with the intent of improving it on a continuous
basis. Measurement can be used throughout a software project to assist in estimation,
quality control, productivity assessment, and project control. You can use
measures to better understand the attributes of the models that you create and to
assess the quality of the engineered products or systems that you build.
CHAPTER 23 SOFTWARE METRICS AND ANALYTICS 461
Measurement can be used by software engineers to help assess the quality of work
products and to assist in tactical decision making as a project proceeds. But unlike
other engineering disciplines, software engineering is not grounded in the basic quantitative
laws of physics. Direct measures, such as voltage, mass, velocity, or temperature,
are uncommon in the software world. Because software measures and metrics
are often indirect, they are open to debate.
Within the context of the software process and the projects that are conducted using
the process, a software team is concerned primarily with productivity and quality
metrics—measures of software development “output” as a function of effort and time
applied and measures of the “fitness for use” of the work products that are produced.
For planning and estimating purposes, our interest is historical. What was software
development productivity on past projects? What was the quality of the software that
was produced? How can past productivity and quality data be extrapolated to the
present? How can it help us plan and estimate more accurately?
Measurement is a management and technical tool. If conducted properly, it provides
you with insight. And as a result, it assists the project manager and the software team
in making decisions that will lead to a successful project.
In this chapter, we present measures that can be used to assess the quality of the
product as it is being engineered. We also present measures that can be used to help
manage software projects. These measures provide you with a real-time indication of
the effectiveness of your software processes (analysis, design, testing) and the overall
quality of the software as it is being built.
23.1 Software Measurement
Data science1 is concerned with measurement, machine learning, and prediction of
future events based on these measures. Measurement assigns numbers or symbols to
attributes of entities in the real world. To accomplish this, a measurement model
encompassing a consistent set of rules is required. Although the theory of measure-
ment (e.g., [Kyb84]) and its application to computer software (e.g., [Zus97]) are top-
ics that are beyond the scope of this book, it is worthwhile to establish a
fundamental framework and a set of basic principles that guide the definition of
metrics for software development.
23.1.1 Measures, Metrics, and Indicators
Although the terms measure, measurement, and metrics are often used interchange-
ably, it is important to note the subtle differences between them. When a single data
point has been collected (e.g., the number of errors uncovered within a single software
component), a measure has been established. Measurement occurs as the result of the
collection of one or more data points (e.g., a number of component reviews and unit
tests are investigated to collect measures of the number of errors for each). A software
metric relates the individual measures in some way (e.g., the average number of errors
found per review or the average number of errors found per unit test).
1 Appendix 2 in the book contains an introduction to data science geared toward software
engineers.
462 PART THREE QUALITY AND SECURITY
A software engineer collects measures and develops metrics so that indicators will
be obtained. An indicator is a metric or combination of metrics that provides insight
into the software process, a software project, or the product itself.
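The measure → metric → indicator progression can be sketched in a few lines. The error counts and the baseline below are invented for illustration:

```python
# Measures: errors uncovered in each of five component reviews (single data points).
errors_per_review = [4, 2, 7, 3, 4]

# Metric: relates the individual measures (average errors found per review).
avg_errors_per_review = sum(errors_per_review) / len(errors_per_review)

# Indicator: a metric interpreted against a baseline to yield insight.
BASELINE = 3.0  # assumed historical average for comparable components
needs_attention = avg_errors_per_review > BASELINE

print(avg_errors_per_review)  # 4.0
print(needs_attention)        # True
```

The indicator only becomes meaningful once a baseline (here, assumed historical data) gives the metric context.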
23.1.2 Attributes of Effective Software Metrics
Hundreds of metrics have been proposed for computer software, but not all provide
practical support to the software engineer. Some demand measurement that is too
complex; others are so esoteric that few real-world professionals have any hope of
understanding them; still others violate the basic intuitive notions of what high-quality
software really is. Experience indicates that a metric will be used only if it is intuitive
and easy to compute. If dozens of “counts” have to be made, and complex computa-
tions are required, it is unlikely that the metric will be widely adopted.
Ejiogu [Eji91] defines a set of attributes that should be encompassed by effective
software metrics. It should be relatively easy to learn how to derive the metric, and
its computation should not demand inordinate effort or time. The metric should sat-
isfy the engineer’s intuitive notions about the product attribute under consideration
(e.g., a metric that measures module cohesion should increase in value as the level
of cohesion increases). The metric should always yield results that are unambiguous.
The mathematical computation of the metric should use measures that do not lead to
bizarre combinations of units. For example, multiplying people on the project teams
by programming language variables in the program results in a suspicious mix of
units that are not intuitively persuasive. Metrics should be based on the requirements
model, the design model, or the structure of the program itself. They should not be
dependent on the vagaries of programming language syntax or semantics. Finally,
the metric should provide you with information that can lead to a higher-quality
end product.
23.2 Software Analytics
There is some confusion about the differences between software metrics and software
analytics. Software metrics are used to gauge the quality or performance of a product
or process. Key performance indicators (KPIs) are metrics that are used to track
performance and trigger remedial actions when their values fall in a predetermined
range. But how do you know that metrics are meaningful in the first place?
Software analytics is the systematic computational analysis of software engineering
data or statistics to provide managers and software engineers with meaningful insights
and empower their teams to make better decisions [Bus12]. It is important that
the insights provide timely, actionable advice to developers. For example, knowing
the number of defects in a software product today is not as important as knowing that
the number of defects is 5 percent higher than last month. Analytics can help developers
predict the number of defects to expect, where to test for them, and how much time
it will take to fix them. This allows managers and developers to create incremental
schedules that use these predictions to determine expected completion times. The use
of automated tools capable of processing large, dynamic data sets of engineering
metrics and measures [Men13] is required to provide real-time insight into large
project and product data sets.
Buse and Zimmermann [Bus12] suggest that analytics can help developers make
decisions regarding:
∙ Targeted testing. To help focus regression testing and integration testing
resources
∙ Targeted refactoring. To help make strategic decisions on how to avoid large
technical debt costs
∙ Release planning. To help ensure that market needs as well as technical fea-
tures in a software product are taken into account
∙ Understanding customers. To help developers get actionable information on
product use by customers in the field during product engineering
∙ Judging stability. To help managers and developers monitor the state of the
evolving prototype and anticipate future maintenance needs
∙ Targeting inspection. To help teams determine the value of individual
inspection activities, their frequency, and their scope
The statistical techniques (data mining, machine learning, statistical modeling)
required to do software analytics work are beyond the scope of this book. Some of these
techniques are discussed briefly in Appendix 2. We will focus on the use of software
metrics in the remainder of this chapter.
23.3 Product Metrics
Over the past four decades, many researchers have attempted to develop a single
metric that provides a comprehensive measure of software complexity. Fenton [Fen94]
characterizes this research as a search for “the impossible holy grail.” Although doz-
ens of complexity measures have been proposed [Zus90], each takes a somewhat
different view of what complexity is and what attributes of a system lead to complex-
ity. By analogy, consider a metric for evaluating an attractive car. Some observers
might emphasize body design; others might consider mechanical characteristics; still
others might tout cost, or performance, or the use of alternative fuels, or the ability
to recycle when the car is junked. Because any one of these characteristics may be at
odds with others, it is difficult to derive a single value for “attractiveness.” The same
problem occurs with computer software.
Yet there is a need to measure and control software complexity. And if a single
value of this quality metric is difficult to derive, it should be possible to develop
measures of different internal program attributes (e.g., effective modularity, functional
independence, and other attributes discussed in Chapter 9). These measures and the
metrics derived from them can be used as independent indicators of the quality of
requirements and design models. But here again, problems arise. Fenton [Fen94] notes
this when he states: “The danger of attempting to find measures which characterize
so many different attributes is that inevitably the measures have to satisfy conflicting
aims. This is counter to the representational theory of measurement.” Although
Fenton’s statement is correct, many people argue that product measurement conducted
during the early stages of the software process provides software engineers with a
consistent and objective mechanism for assessing quality.2
2 Although criticism of specific metrics is common in the literature, many critiques focus on
esoteric issues and miss the primary objective of metrics in the real world: to help software
engineers establish a systematic and objective way to gain insight into their work and to
improve product quality as a result.
Debating Product Metrics
The scene: Vinod’s cubicle.
The players: Vinod, Jamie, and Ed, members
of the SafeHome software engineering team
who are continuing work of component-level
design and test-case design.
The conversation:
Vinod: Doug [Doug Miller, software engineer-
ing manager] told me that we should all use
product metrics, but he was kind of vague. He
also said that he wouldn’t push the matter . . .
that using them was up to us.
Jamie: That’s good, ’cause there’s no way I
have time to start measuring stuff. We’re fight-
ing to maintain the schedule as it is.
Ed: I agree with Jamie. We’re up against it,
here . . . no time.
Vinod: Yeah, I know, but there’s probably
some merit to using them.
Jamie: I’m not arguing that, Vinod, it’s a time
thing . . . and I for one don’t have any to spare.
Vinod: But what if measuring saves you time?
Ed: Wrong, it takes time and like Jamie said . . .
Vinod: No, wait . . . what if it saves us time?
Jamie: How?
Vinod: Rework . . . that’s how. If a measure we
use helps us to avoid one major or even mod-
erate problem, and that saves us from having
to rework a part of the system, we save time.
No?
Ed: It’s possible, I suppose, but can you guar-
antee that some product metric will help us
find a problem?
Vinod: Can you guarantee that it won’t?
Jamie: So what are you proposing?
Vinod: I think we should select a few design
metrics, probably class-oriented, and use them
as part of our review process for every compo-
nent we develop.
Ed: I’m not real familiar with class-oriented
metrics.
Vinod: I’ll spend some time checking them out
and make a recommendation . . . okay with
you guys?
(Ed and Jamie nod without much enthusiasm.)
SafeHome
23.3.1 Metrics for the Requirements Model
Technical work in software engineering begins with the creation of the requirements
model. It is at this stage that requirements are derived and a foundation for design is
established. Therefore, product metrics that provide insight into the quality of the
analysis model are desirable.
Although relatively few analysis and specification metrics have appeared in the
literature, it is possible to adapt metrics (e.g., use case points or function points)
that are often used for project estimation (Section 25.6) and apply them in this
context. These estimation metrics examine the requirements model with the intent
of predicting the “size” of the resultant system. Size is sometimes (but not always)
an indicator of design complexity and is almost always an indicator of increased
coding, integration, and testing effort. By measuring characteristics of the require-
ments model, it is possible to gain quantitative insight into its specificity and
completeness.
Conventional Software. Davis and his colleagues [Dav93] propose a list of charac-
teristics that can be used to assess the quality of the requirements model and the
corresponding requirements specification: specificity (lack of ambiguity), complete-
ness, correctness, understandability, verifiability, internal and external consistency,
achievability, concision, traceability, modifiability, precision, and reusability. In addi-
tion, the authors note that high-quality specifications are electronically stored; execut-
able or at least interpretable; annotated by relative importance; and stable, versioned,
organized, cross-referenced, and specified at the right level of detail.
Although many of these characteristics appear to be qualitative in nature, each can
be represented using one or more metrics [Dav93]. For example, we assume that there
are nr requirements in a specification, such that
nr = nf + nnf
where nf is the number of functional requirements and nnf is the number of nonfunc-
tional (e.g., performance) requirements.
To determine the specificity (lack of ambiguity) of requirements, Davis and col-
leagues suggest a metric that is based on the consistency of the reviewers’ interpreta-
tion of each requirement:
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had identical inter-
pretations. The closer the value of Q1 is to 1, the lower is the ambiguity of the specifica-
tion. Other characteristics are computed in a similar manner.
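A minimal sketch of the specificity metric Q1 = nui / nr. A requirement counts as unambiguous here when every reviewer recorded the same interpretation of it; the requirement IDs and interpretation strings are invented:

```python
# Each requirement maps to the interpretations recorded by three reviewers.
interpretations = {
    "REQ-1": ["store workout", "store workout", "store workout"],
    "REQ-2": ["sync daily", "sync hourly", "sync daily"],   # reviewers disagree
    "REQ-3": ["alert user", "alert user", "alert user"],
}

n_r = len(interpretations)
# n_ui: requirements for which all reviewers had identical interpretations.
n_ui = sum(1 for reviews in interpretations.values() if len(set(reviews)) == 1)

q1 = n_ui / n_r
print(q1)  # 2 unambiguous requirements out of 3
```

With two of three requirements interpreted identically, Q1 ≈ 0.67; values closer to 1 indicate a less ambiguous specification.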
Mobile Software. The objective of all mobile projects is to deliver a combination
of content and functionality to the end user. Measures and metrics used for traditional
software engineering projects are difficult to translate directly to MobileApps. Yet, it
is possible to develop measures that can be determined during the requirements gath-
ering activities that can serve as the basis for creating MobileApp metrics. Among
the measures that can be collected are the following:
Number of static screen displays. These pages represent low relative com-
plexity and generally require less effort to construct than dynamic pages. This
measure provides an indication of the overall size of the application and the
effort required to develop it.
Number of dynamic screen displays. These pages represent higher relative
complexity and require more effort to construct than static pages. This mea-
sure provides an indication of the overall size of the application and the effort
required to develop it.
Number of persistent data objects. As the number of persistent data
objects (e.g., a database or data file) grows, the complexity of the MobileApp
also grows and the effort to implement it increases proportionally.
Number of external systems interfaced. As the requirement for interfacing
grows, system complexity and development effort also increase.
Number of static content objects. These objects represent low relative
complexity and generally require less effort to construct than dynamic pages.
Number of dynamic content objects. These objects represent higher rela-
tive complexity and require more effort to construct than static pages.
Number of executable functions. As the number of executable functions (e.g.,
a script or applet) increases, modeling and construction effort also increase.
For example, with these measures, you can define a metric that reflects the degree of
end-user customization that is required for the MobileApp and correlate it to the effort
expended on the project and/or the errors uncovered as reviews and testing are
conducted. To accomplish this, you define
Nsp = number of static screen displays
Ndp = number of dynamic screen displays
Then,
Customization index, C = Ndp / (Ndp + Nsp)
The value of C ranges from 0 to 1. As C grows larger, the level of app customization
becomes a significant technical issue.
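The customization index is a one-line computation; the screen counts below are invented for illustration:

```python
# C = Ndp / (Ndp + Nsp), using hypothetical screen-display counts.
n_static = 12   # Nsp: static screen displays
n_dynamic = 18  # Ndp: dynamic screen displays

c = n_dynamic / (n_dynamic + n_static)
print(round(c, 2))  # 0.6 -- customization is becoming a significant issue
```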
Similar metrics can be computed and correlated with project measures such as
effort expended, errors and defects uncovered, and models or documentation pages
produced. If the values of these metrics are stored in a database with project measures
(after a number of projects have been completed), the relationships between the app
requirement measures and project measures will provide indicators that can aid in
project assessment tasks.
23.3.2 Design Metrics for Conventional Software
It is inconceivable that the design of a new aircraft, a new computer chip, or a new
office building would be conducted without defining design measures, determining
metrics for various aspects of design quality, and using them as indicators to guide
the manner in which the design evolves. And yet, the design of complex software-
based systems often proceeds with virtually no measurement. The irony of this is that
design metrics for software are available, but the vast majority of software engineers
continue to be unaware of their existence.
Architectural design metrics focus on characteristics of the program architecture
(Chapter 10) with an emphasis on the architectural structure and the effectiveness of
modules or components within the architecture. These metrics are “black box” in the
sense that they do not require any knowledge of the inner workings of a particular
software component. Metrics can provide insight into structural data and system com-
plexity associated with architectural design.
Card and Glass [Car90] define three software design complexity measures: struc-
tural complexity, data complexity, and system complexity.
For hierarchical architectures (e.g., call-and-return architectures), structural com-
plexity of a module i is defined in the following manner:
S(i) = fout(i)²
where fout(i) is the fan-out3 of module i.
Data complexity provides an indication of the complexity in the internal interface
for a module i and is defined as
D(i) = v(i) / [fout(i) + 1]
where v(i) is the number of input and output variables that are passed to and from
module i.
Finally, system complexity is defined as the sum of structural and data complexity,
specified as
C(i) = S(i) + D(i)
As each of these complexity values increases, the overall architectural complexity of
the system also increases. This leads to a greater likelihood that integration and test-
ing effort will also increase.
Fenton [Fen91] suggests a number of simple morphology (i.e., shape) metrics that
enable different program architectures to be compared using a set of straightforward
dimensions. Referring to the call-and-return architecture in Figure 23.1, the following
metrics can be defined:
Size = n + a
where n is the number of nodes and a is the number of arcs. For the architecture
shown in Figure 23.1,
Size = 17 + 18 = 35
Depth = longest path from the root (top) node to a leaf node. For the architecture
shown in Figure 23.1, depth = 4.
Width = maximum number of nodes at any one level of the architecture. For
the architecture shown in Figure 23.1, width = 6.
The arc-to-node ratio, r = a/n, measures the connectivity density of the architecture
and may provide a simple indication of the coupling of the architecture. For the
architecture shown in Figure 23.1, r = 18/17 = 1.06.
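Given a call-and-return architecture represented as a mapping from each node to the nodes it invokes, the morphology metrics fall out of simple traversals. The small tree below is invented (it is not the one in Figure 23.1):

```python
from collections import Counter

# Hypothetical call-and-return architecture: node -> directly invoked nodes.
arch = {
    "a": ["b", "c", "d"],
    "b": ["e"],
    "c": [],
    "d": ["f", "g"],
    "e": [],
    "f": [],
    "g": ["h"],
    "h": [],
}

n = len(arch)                                  # number of nodes
a = sum(len(kids) for kids in arch.values())   # number of arcs

def depth(node: str) -> int:
    """Longest path (in nodes) from this node down to a leaf."""
    kids = arch[node]
    return 1 + (max(depth(k) for k in kids) if kids else 0)

def level_counts(node: str, lvl: int, acc: Counter) -> None:
    """Count how many nodes sit at each level of the tree."""
    acc[lvl] += 1
    for k in arch[node]:
        level_counts(k, lvl + 1, acc)

acc = Counter()
level_counts("a", 1, acc)

print(n + a)             # Size = 8 + 7 = 15
print(depth("a"))        # Depth = 4 (a -> d -> g -> h)
print(max(acc.values())) # Width = 3 (level 2 holds b, c, d)
print(a / n)             # arc-to-node ratio r = 0.875
```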
The U.S. Air Force Systems Command [USA87] has developed a number of soft-
ware quality indicators that are based on measurable design characteristics of a com-
puter program. Using concepts similar to those proposed in IEEE Std. 982.1-2005
[IEE05], the Air Force uses information obtained from data and architectural design
to derive a design structure quality index (DSQI) that ranges from 0 to 1 (see [USA87]
and [Cha89] for details).
3 Fan-out is defined as the number of modules immediately subordinate to module i, that is,
the number of modules that are directly invoked by module i.
23.3.3 Design Metrics for Object-Oriented Software
There is much about object-oriented design that is subjective—an experienced designer
“knows” how to characterize an OO system so that it will effectively implement cus-
tomer requirements. But, as an OO design model grows in size and complexity, a
more objective view of the characteristics of the design can benefit both the experi-
enced designer (who gains additional insight) and the novice (who obtains an indica-
tion of quality that would otherwise be unavailable).
In a detailed treatment of software metrics for OO systems, Whitmire [Whi97]
describes nine distinct and measurable characteristics of an OO design. Size is
defined by taking a static count of OO entities such as classes or operations, cou-
pled with the depth of an inheritance tree. Complexity is defined in terms of struc-
tural characteristics by examining how classes of an OO design are interrelated to
one another. Coupling is measured by counting physical connections between ele-
ments of the OO design (e.g., the number of collaborations between classes or the
number of messages passed between objects). Sufficiency is “the degree to which
an abstraction [class] possesses the features required of it . . .” [Whi97]. Complete-
ness determines whether a class delivers the set of properties that fully reflect the
needs of the problem domain. Cohesion is determined by examining whether all
operations work together to achieve a single, well-defined purpose. Primitiveness
is the degree to which an operation is atomic—that is, the operation cannot be
constructed out of a sequence of other operations contained within a class. Similar-
ity determines the degree to which two or more classes are similar in terms of their
structure, function, behavior, or purpose. Volatility measures the likelihood that a
change will occur.
Figure 23.1 Morphology metrics (a call-and-return architecture of 17 nodes and 18
arcs, annotated to show its width and depth)
In reality, product metrics for OO systems can be applied not only to the design
model, but also to the requirements model. In the remainder of this section, we discuss
metrics that provide an indication of quality at the OO class level and the operation
level. In addition, metrics applicable for project management and testing are also
explored.
Chidamber and Kemerer (CK) have proposed one of the most widely referenced
sets of OO software metrics [Chi94].4 Often referred to as the CK metrics suite, the
authors have proposed six class-based design metrics for OO systems.5
Weighted Methods per Class (WMC). Assume that n methods of complexity c1,
c2, . . . , cn are defined for a class C. The specific complexity metric that is chosen
(e.g., cyclomatic complexity) should be normalized so that nominal complexity for a
method takes on a value of 1.0.
WMC = Σci
for i = 1 to n. The number of methods and their complexity are reasonable indicators
of the amount of effort required to implement and test a class. In addition, the larger
the number of methods, the more complex is the inheritance tree (all subclasses inherit
the methods of their parents). Finally, as the number of methods grows for a given
class, it is likely to become more and more application specific, thereby limiting
potential reuse. For all of these reasons, WMC should be kept as low as is reasonable.
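WMC reduces to a sum once per-method complexities are normalized. The class, method names, and complexity values below are invented:

```python
# WMC = sum of c_i over a class's methods, where each c_i is a complexity
# value (e.g., cyclomatic complexity) normalized so a nominal method is 1.0.
method_complexity = {
    "start_workout": 1.0,
    "pause_workout": 1.0,
    "compute_pace": 2.5,   # branch-heavy method
    "sync_to_cloud": 3.0,  # error handling adds decision points
}

wmc = sum(method_complexity.values())
print(wmc)  # 7.5 -- higher WMC suggests more implementation and test effort
```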
Depth of the Inheritance Tree (DIT). This metric is “the maximum length from
the node to the root of the tree” [Chi94]. Referring to Figure 23.2, the value of DIT
for the class hierarchy shown is 4. As DIT grows, it is likely that lower-level classes
will inherit many methods. This leads to potential difficulties when attempting to
predict the behavior of a class. A deep class hierarchy (DIT is large) also leads to
greater design complexity. On the positive side, large DIT values imply that many
methods may be reused.
Number of Children (NOC). The subclasses that are immediately subordinate to a
class in the class hierarchy are termed its children. Referring to Figure 23.2, class C2
has three children—subclasses C21, C22, and C23. As the number of children grows,
reuse increases, but also, as NOC increases, the abstraction represented by the parent
class can be diluted if some of the children are not appropriate members of the parent
class. As NOC increases, the amount of testing (required to exercise each child in its
operational context) will also increase.
Coupling Between Object Classes (CBO). The CRC model (Chapter 8) may be used
to determine the value for CBO. In essence, CBO is the number of collaborations
listed for a class on its CRC index card.6 As CBO increases, it is likely that the
4 An alternative suite of OO metrics has been proposed by Harrison, Counsell, and Nithi
[Har98b]. Interested readers are urged to examine their work.
5 Chidamber and Kemerer use the term methods rather than operations. Their usage of the
term is reflected in this section.
6 If CRC index cards are developed manually, completeness and consistency must be assessed
before CBO can be determined reliably.
reusability of a class will decrease. High values of CBO also complicate modifications
and the testing that ensues when modifications are made. In general, the CBO values
for each class should be kept as low as is reasonable. This is consistent with the
general guideline to reduce coupling in conventional software.
Response for a Class (RFC). The response set of a class is “a set of methods that
can potentially be executed in response to a message received by an object of that
class” [Chi94]. RFC is the number of methods in the response set. As RFC increases,
the effort required for testing also increases because the test sequence (Chapter 20)
grows. It also follows that, as RFC increases, the overall design complexity of the
class increases.
Lack of Cohesion in Methods (LCOM). Each method within a class C accesses
one or more attributes (also called instance variables). LCOM is the number of meth-
ods that access one or more of the same attributes.7 If no methods access the same
attributes, then LCOM = 0. To illustrate the case where LCOM ≠ 0, consider a class
with six methods. Four of the methods have one or more attributes in common (i.e.,
they access common attributes). Therefore, LCOM = 4. If LCOM is high, methods
may be coupled to one another via attributes. This increases the complexity of the
class design. Although there are cases in which a high value for LCOM is justifiable,
it is desirable to keep cohesion high, that is, keep LCOM low.8
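The simplified LCOM described above can be sketched as a count of the methods that share at least one attribute with some other method of the same class. The method and attribute names are invented; the shape mirrors the six-method example in the text:

```python
# Attributes accessed by each method of a hypothetical class.
accesses = {
    "m1": {"heart_rate"},
    "m2": {"heart_rate", "pace"},
    "m3": {"pace"},
    "m4": {"heart_rate", "distance"},
    "m5": {"user_prefs"},  # shares no attribute with any other method
    "m6": {"audio_cues"},  # shares no attribute with any other method
}

# Count methods whose attribute set intersects some other method's set.
lcom = sum(
    1
    for m, attrs in accesses.items()
    if any(attrs & other for name, other in accesses.items() if name != m)
)
print(lcom)  # 4 of the 6 methods access attributes also used elsewhere
```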
Figure 23.2 A class hierarchy (root class C with subclasses C1 and C2; C11 beneath
C1; C21, C22, and C23 beneath C2; and C211 beneath C21)
7 The formal definition is a bit more complex. See [Chi94] for details.
8 The LCOM metric provides useful insight in some situations, but it can be misleading in
others. For example, keeping coupling encapsulated within a class increases the cohesion
of the system as a whole. Therefore, in at least one important sense, higher LCOM actually
suggests that a class may have higher cohesion, not lower.
23.3.4 User Interface Design Metrics
Although there is significant literature on the design of human-computer interfaces
(Chapter 12), relatively little information has been published on metrics that would
provide insight into the quality and usability of the interface. Although UI metrics
may be useful in some cases, the final arbiter should be user input based on GUI
prototypes. Nielsen and Levy [Nie94] report that “one has a reasonably large chance
of success if one chooses between interface [designs] based solely on users’ opinions.
Users’ average task performance and their subjective satisfaction with a GUI are
highly correlated.”
In the paragraphs that follow, we present a representative sampling of design met-
rics that may have application for websites, browser-based applications, and mobile
applications. Many of these metrics are applicable to all user interfaces. It is important
to note, however, that many of these metrics have not as yet been validated and should
be used judiciously.
Applying CK Metrics
The scene: Vinod’s cubicle.
The players: Vinod, Jamie, Shakira, and Ed,
members of the SafeHome software engineer-
ing team who are continuing to work on
component-level design and test-case design.
The conversation:
Vinod: Did you guys get a chance to read
the description of the CK metrics suite I sent
you on Wednesday and make those
measurements?
Shakira: Wasn’t too complicated. I went back
to my UML class and sequence diagrams, like
you suggested, and got rough counts for DIT,
RFC, and LCOM. I couldn’t find the CRC model,
so I didn’t count CBO.
Jamie (smiling): You couldn’t find the CRC
model because I had it.
Shakira: That’s what I love about this team,
superb communication.
Vinod: I did my counts . . . did you guys
develop numbers for the CK metrics?
(Jamie and Ed nod in the affirmative.)
Jamie: Since I had the CRC cards, I took a
look at CBO, and it looked pretty uniform
across most of the classes. There was one
exception, which I noted.
Ed: There are a few classes where RFC is
pretty high, compared with the averages . . .
maybe we should take a look at simplifying
them.
Jamie: Maybe yes, maybe no. I’m still
concerned about time, and I don’t want to fix
stuff that isn’t really broken.
Vinod: I agree with that. Maybe we should
look for classes that have bad numbers in at
least two or more of the CK metrics. Kind of
two strikes and you’re modified.
Shakira (looking over Ed’s list of classes
with high RFC): Look, see this class. It’s
got a high LCOM as well as a high RFC.
Two strikes?
Vinod: Yeah I think so . . . it’ll be difficult to
implement because of complexity and difficult
to test for the same reason. Probably worth
designing two separate classes to achieve the
same behavior.
Jamie: You think modifying it’ll save us time?
Vinod: Over the long haul, yes.
SafeHome
Interface Metrics. For WebApps, the following interface measures can be considered:
Suggested Metric Description
Layout appropriateness The relative position of entities within the interface
Layout complexity Number of distinct regions9 defined for an interface
Layout region complexity Average number of distinct links per region
Recognition complexity Average number of distinct items the user must look at before
making a navigation or data input decision
Recognition time Average time (in seconds) that it takes a user to select the
appropriate action for a given task
Typing effort Average number of keystrokes required for a specific function
Mouse pick effort Average number of mouse picks per function
Selection complexity Average number of links that can be selected per page
Content acquisition time Average number of words of text per Web page
Memory load Average number of distinct data items that the user must
remember to achieve a specific objective
Aesthetic (Graphic Design) Metrics. By its nature, aesthetic design relies on quali-
tative judgment and is not generally amenable to measurement and metrics. However,
Ivory and her colleagues [Ivo01] propose a set of measures that may be useful in
assessing the impact of aesthetic design:
Suggested Metric Description
Word count Total number of words that appear on a page
Body text percentage Percentage of words that are body versus display text
(e.g., headers)
Emphasized body text percentage Portion of body text that is emphasized (e.g., bold, capitalized)
Text positioning count Changes in text position from flush left
Text cluster count Text areas highlighted with color, bordered regions, rules,
or lists
Link count Total links on a page
Page size Total bytes for the page as well as elements, graphics, and
style sheets
Graphic percentage Percentage of page bytes that are for graphics
Graphics count Total graphics on a page (not including graphics specified in
scripts, applets, and objects)
Color count Total colors employed
Font count Total fonts employed (i.e., face + size + bold + italic)
9 A distinct region is an area within the layout display that accomplishes some specific set of
related functions (e.g., a menu bar, a static graphical display, a content area, an animated
display).
Content Metrics. Metrics in this category focus on content complexity and on
clusters of content objects that are organized into pages [Men01].
Suggested metrics:
Page wait: Average time required for a page to download at different connection speeds
Page complexity: Average number of different types of media used on page, not including text
Graphic complexity: Average number of graphics media per page
Audio complexity: Average number of audio media per page
Video complexity: Average number of video media per page
Animation complexity: Average number of animations per page
Scanned image complexity: Average number of scanned images per page
Navigation Metrics. Metrics in this category address the complexity of the naviga-
tional flow [Men01]. In general, they are applicable only for static Web applications,
which don’t include dynamically generated links and pages.
Suggested metrics:
Page-linking complexity: Number of links per page
Connectivity: Total number of internal links, not including dynamically generated links
Connectivity density: Connectivity divided by page count
Using a subset of the metrics suggested, it may be possible to derive empirical
relations that allow a WebApp development team to assess technical quality and pre-
dict effort based on projected estimates of complexity. Further work remains to be
accomplished in this area.
23.3.5 Metrics for Source Code
Halstead’s theory of “software science” [Hal77] proposed the first analytical “laws”
for computer software.10 Halstead assigned quantitative laws to the development of
computer software, using a set of primitive measures that may be derived after code
is generated or estimated once design is complete. The measures are:
n1 = number of distinct operators that appear in a program
n2 = number of distinct operands that appear in a program
N1 = total number of operator occurrences
N2 = total number of operand occurrences
10 It should be noted that Halstead’s “laws” have generated substantial controversy, and many
believe that the underlying theory has flaws. However, experimental verification for selected
programming languages has been performed (e.g., [Fel89]).
474 PART THREE QUALITY AND SECURITY
Halstead uses these primitive measures to develop expressions for the overall program
length, potential minimum volume for an algorithm, the actual volume (number of bits
required to specify a program), the program level (a measure of software complexity),
the language level (a constant for a given language), and other features such as develop-
ment effort, development time, and even the projected number of faults in the software.
Halstead shows that length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
and program volume may be defined as
V = N log2 (n1 + n2)
It should be noted that V will vary with programming language and represents the
volume of information (in bits) required to specify a program.
Theoretically, a minimum volume must exist for a particular algorithm. Halstead
defines a volume ratio L as the ratio of volume of the most compact form of a program
to the volume of the actual program. In actuality, L must always be less than 1. In
terms of primitive measures, the volume ratio may be expressed as
L = (2/n1) x (n2/N2)
Halstead’s work is amenable to experimental verification, and a large body of research
has been conducted to investigate software science. A discussion of this work is beyond
the scope of this book. For further information, see [Zus90], [Fen91], and [Zus97].
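As a worked sketch of the Halstead formulas above (the operator and operand counts used here are hypothetical, not drawn from any real program), estimated length, volume, and the volume ratio can be computed directly from the four primitive measures:

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Compute basic Halstead measures from the four primitive counts.

    n1, n2: distinct operators and operands; N1, N2: total occurrences.
    """
    est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length N
    volume = (N1 + N2) * math.log2(n1 + n2)               # volume V, in bits
    volume_ratio = (2 / n1) * (n2 / N2)                   # volume ratio L (< 1)
    return est_length, volume, volume_ratio

# Hypothetical counts for a small module
est_N, V, L = halstead_metrics(n1=10, n2=20, N1=30, N2=40)
```

Note that V depends on the programming language used, so volumes should only be compared between programs written in the same language.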
23.4 Metrics for Testing
Testing metrics fall into two broad categories: (1) metrics that attempt to predict the
likely number of tests required at various testing levels, and (2) metrics that focus on
test coverage for a given component. The majority of metrics proposed for testing
focus on the process of testing, not the technical characteristics of the tests themselves.
In general, testers must rely on analysis, design, and code metrics to guide them in
the design and execution of test cases.
Architectural design metrics provide information on the ease or difficulty associated
with integration testing and the need for specialized testing software (e.g., stubs and
drivers). Cyclomatic complexity (a component-level design metric) lies at the core of
basis path testing, a test-case design method presented in Chapter 19. In addition, cyc-
lomatic complexity can be used to target modules as candidates for extensive unit testing.
Modules with high cyclomatic complexity are more likely to be error prone than modules
whose cyclomatic complexity is lower. For this reason, you should expend above-average
effort to uncover errors in such modules before they are integrated in a system.
Testing effort can be estimated using metrics derived from Halstead measures
(Section 23.3.5). Using the definitions for program volume V and program level PL,
Halstead effort e can be computed as
PL = 1 / [(n1/2)(N2/n2)]
e = V / PL
The percentage of overall testing effort to be allocated to a module k can be estimated
using the following relationship:
Percentage of testing effort (k) = e(k) / Σ e(i)
where e(k) is computed for module k and the summation in the denominator is the
sum of Halstead effort across all modules of the system.
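The allocation scheme above can be sketched as follows (the module names and primitive counts are hypothetical, and V is computed from the definitions in Section 23.3.5):

```python
import math

def halstead_effort(n1, n2, N1, N2):
    """Halstead effort e = V / PL, where PL = 1 / ((n1/2)(N2/n2))."""
    V = (N1 + N2) * math.log2(n1 + n2)   # program volume
    PL = 1 / ((n1 / 2) * (N2 / n2))      # program level
    return V / PL

# Hypothetical primitive counts (n1, n2, N1, N2) for three modules
modules = {
    "parser":    (12, 25, 40, 60),
    "scheduler": (10, 18, 30, 45),
    "reporting": (8, 15, 22, 30),
}
efforts = {name: halstead_effort(*counts) for name, counts in modules.items()}
total = sum(efforts.values())
# Fraction of overall testing effort suggested for each module
allocation = {name: e / total for name, e in efforts.items()}
```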
OO testing can be quite complex. Metrics can assist you in targeting testing
resources at threads, scenarios, and packages of classes that are “suspect” based on
measured characteristics. The OO design metrics noted in Section 23.3.3 provide an
indication of design quality. They also provide a general indication of the amount of
testing effort required to exercise an OO system.
Binder [Bin94b] suggests a broad array of design metrics that have a direct influ-
ence on the “testability” of an OO system. The metrics consider aspects of encapsu-
lation and inheritance.
Lack of cohesion in methods (LCOM).11 The higher the value of LCOM,
the more states must be tested to ensure that methods do not generate side
effects.
Percent public and protected (PAP). Public attributes are inherited from
other classes and therefore are visible to those classes. Protected attributes are
accessible to methods in subclasses. This metric indicates the percentage of
class attributes that are public or protected. High values for PAP increase the
likelihood of side effects among classes because public and protected attri-
butes lead to high potential for coupling.12 Tests must be designed to ensure
that such side effects are uncovered.
Public access to data members (PAD). This metric indicates the number
of classes (or methods) that can access another class’s attributes, a violation
of encapsulation. High values for PAD lead to the potential for side effects
among classes. Tests must be designed to ensure that such side effects are
uncovered.
Number of root classes (NOR). This metric is a count of the distinct class
hierarchies that are described in the design model. Test suites for each root
class and the corresponding class hierarchy must be developed. As NOR
increases, testing effort also increases.
Fan-in (FIN). When used in the OO context, fan-in in the inheritance hier-
archy is an indication of multiple inheritance. FIN > 1 indicates that a class
inherits its attributes and operations from more than one root class. FIN > 1
should be avoided when possible.
Number of children (NOC) and depth of the inheritance tree (DIT).13
As we mentioned in Chapter 18, superclass methods will have to be retested
for each subclass.
11 See Section 23.3.3 for a description of LCOM.
12 Some people promote designs with none of the attributes being public or protected, that is,
PAP = 0. This implies that all attributes must be accessed in other classes via methods.
13 See Section 23.3.3 for a description of NOC and DIT.
23.5 Metrics for Maintenance
All the software metrics introduced in this chapter can be used for the development
of new software and the maintenance of existing software. However, metrics designed
explicitly for maintenance activities have been proposed.
IEEE Std. 982.1-2005 [IEE05] suggests a software maturity index (SMI) that pro-
vides an indication of the stability of a software product (based on changes that occur
for each release of the product). The following information is determined:
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the
current release
The software maturity index is computed in the following manner:
SMI = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a
metric for planning software maintenance activities. The mean time to produce a
release of a software product can be correlated with SMI, and empirical models for
maintenance effort can be developed.
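A minimal sketch of the SMI computation follows; the release counts are hypothetical, chosen so that SMI rises toward 1.0 as the product stabilizes:

```python
def software_maturity_index(mt, fc, fa, fd):
    """SMI = (MT - (Fa + Fc + Fd)) / MT.

    mt: modules in the current release; fc: modules changed;
    fa: modules added; fd: modules deleted since the preceding release.
    """
    return (mt - (fa + fc + fd)) / mt

# Hypothetical history of three releases (mt, fc, fa, fd)
releases = [(850, 90, 40, 12), (900, 60, 25, 8), (940, 30, 10, 3)]
smis = [software_maturity_index(*r) for r in releases]
```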
23.6 Process and Project Metrics
Process metrics are collected across all projects and over long periods of time. Their
intent is to provide a set of process indicators that lead to long-term software process
improvement (Chapter 28). Project metrics enable a software project manager to
(1) assess the status of an ongoing project, (2) track potential risks, (3) uncover prob-
lem areas before they go “critical,” (4) adjust work flow or tasks, and (5) evaluate the
project team’s ability to control quality of software work products.
Measures that are collected by a project team and converted into metrics for use
during a project can also be transmitted to those with responsibility for software
process improvement. For this reason, many of the same metrics are used in both
the process and project domains.
Unlike software process metrics that are used for strategic purposes, software proj-
ect measures are tactical. That is, project metrics and the indicators derived from them
are used by a project manager and a software team to adapt project work flow and
technical activities.
The only rational way to improve any process is to measure specific attributes of
the process, develop a set of meaningful metrics based on these attributes, and then
use the metrics to provide indicators that will lead to a strategy for improvement
(Chapter 28). But before we discuss software metrics and their impact on software
process improvement, it is important to note that process is only one of a number of
“controllable factors in improving software quality and organizational performance”
[Pau94].
Referring to Figure 23.3, process sits at the center of a triangle connecting three
factors that have a profound influence on software quality and organizational perfor-
mance. The skill and motivation of people have been shown [Boe81] to be the most
influential factors in quality and performance. The complexity of the product can have
a substantial impact on quality and team performance. The technology (i.e., the soft-
ware engineering methods and tools) that populates the process also has an impact.
In addition, the process triangle exists within a circle of environmental conditions
that include the development environment (e.g., integrated software tools), business
conditions (e.g., deadlines, business rules), and customer characteristics (e.g., ease of
communication and collaboration).
You can only measure the efficacy of a software process indirectly. That is, you
derive a set of metrics based on the outcomes that can be derived from the process.
Outcomes include measures of errors uncovered before release of the software, defects
delivered to and reported by end users, work products delivered (productivity), human
effort expended, calendar time used, schedule conformance, and other measures. You
can also derive process metrics by measuring the characteristics of specific software
engineering tasks. For example, you might measure the effort and time spent perform-
ing the umbrella activities and the generic software engineering activities described
in Chapter 1.
The first application of project metrics on most software projects occurs during
estimation. Metrics collected from past projects are used as a basis from which
effort and time estimates are made for current software work. As a project proceeds,
Figure 23.3 Determinants for software quality and organizational effectiveness: process sits at the center of a triangle connecting people, product, and technology, surrounded by the development environment, business conditions, and customer characteristics.
measures of effort and calendar time expended are compared to original estimates
(and the project schedule). The project manager uses these data to monitor and control
progress.
As technical work commences, other project metrics begin to have significance.
Production rates represented in terms of models created, review hours, function
points, and delivered source lines are measured. In addition, errors uncovered during
each software engineering task are tracked. As the software evolves from require-
ments into design, technical metrics are collected to assess design quality and to
provide indicators that will influence the approach taken to code generation and
testing.
The intent of project metrics is twofold. First, these metrics are used to minimize
the development schedule by making the adjustments necessary to avoid delays and
mitigate potential problems and risks. Second, project metrics are used to assess prod-
uct quality on an ongoing basis and, when necessary, modify the technical approach
to improve quality.
As quality improves, defects are minimized, and as the defect count goes down,
the amount of rework required during the project is also reduced. This leads to a
reduction in overall project cost.
Software process metrics can provide significant benefits as an organization works
to improve its overall level of process maturity. However, like all metrics, these can
be misused, creating more problems than they solve. Grady [Gra92] suggests a “soft-
ware metrics etiquette” that is appropriate for both managers and practitioners as they
institute a process metrics program:
∙ Use common sense and organizational sensitivity when interpreting metrics
data.
∙ Provide regular feedback to the individuals and teams who collect measures
and metrics.
∙ Don’t use metrics to appraise individuals.
∙ Work with practitioners and teams to set clear goals and metrics that will be
used to achieve them.
∙ Never use metrics to threaten individuals or teams.
∙ Metrics data that indicate a problem area should not be considered “negative.”
These data are merely an indicator for process improvement.
∙ Don’t obsess on a single metric to the exclusion of other important metrics.
As an organization becomes more comfortable with the collection and use of process
metrics, the derivation of simple indicators gives way to a more rigorous approach
called statistical software process improvement (SSPI). In essence, SSPI uses software
failure analysis to collect information about all errors and defects14 encountered as an
application, system, or product is developed and used.
14 In this book, an error is defined as some flaw in a software engineering work product that
is uncovered before the software is delivered to the end user. A defect is a flaw that is
uncovered after delivery to the end user. It should be noted that others do not make this
distinction.
23.7 Software Measurement
Measurements in the physical world can be categorized in two ways: direct measures
(e.g., the length of a bolt) and indirect measures (e.g., the “quality” of bolts produced,
measured by counting rejects). Software metrics can be categorized similarly.
Direct measures of the software process include cost and effort applied. Direct
measures of the product include lines of code (LOC) produced, execution speed,
memory size, and defects reported over some set period of time. Indirect measures of
the product include functionality, quality, complexity, efficiency, reliability, maintain-
ability, and many other “–abilities” that are discussed in Chapter 15.
SafeHome: Establishing a Metrics Approach
The scene: Doug Miller’s office
as the SafeHome software
project is about to begin.
The players: Doug Miller, manager of the
SafeHome software engineering team, and
Vinod Raman and Jamie Lazar, members of
the product software engineering team.
The conversation:
Doug: Before we start work on this project, I’d
like you guys to define and collect a set of simple
metrics. To start, you’ll have to define your goals.
Vinod (frowning): We’ve never done that
before, and . . .
Jamie (interrupting): And based on the time
line management has been talking about, we’ll
never have the time. What good are metrics
anyway?
Doug (raising his hand to stop the on-
slaught): Slow down and take a breath, guys.
The fact that we’ve never done it before is all
the more reason to start now, and the metrics
work I’m talking about shouldn’t take much
time at all . . . in fact, it just might save us time.
Vinod: How?
Doug: Look, we’re going to be doing a lot
more in-house software engineering as our
products get more intelligent, become context
aware, mobile, all that . . . and we need to
understand the process we use to build
software . . . and improve it so we can build
software better. The only way to do that is to
measure.
Jamie: But we’re under time pressure, Doug.
I’m not in favor of more paper pushing . . . we
need the time to do our work, not collect data.
Doug (calmly): Jamie, an engineer’s work in-
volves collecting data, evaluating it, and using
the results to improve the product and the
process. Am I wrong?
Jamie: No, but . . .
Doug: What if we hold the number of
measures we collect to no more than five
or six and focus on quality?
Vinod: No one can argue against high
quality . . .
Jamie: True . . . but, I don’t know. I still think
this isn’t necessary.
Doug: I’m going to ask you to humor me on
this one. How much do you guys know about
software metrics?
Jamie (looking at Vinod): Not much.
Doug: Here are some Web refs . . . spend a
few hours getting up to speed.
Jamie (smiling): I thought you said this
wouldn’t take any time.
Doug: Time you spend learning is never
wasted . . . go do it, and then we’ll establish
some goals, ask a few questions, and define
the metrics we need to collect.
The cost and effort required to build software, the number of lines of code pro-
duced, and other direct measures are relatively easy to collect, as long as specific
conventions for measurement are established in advance. However, the quality and
functionality of software or its efficiency or maintainability are more difficult to assess
and can be measured only indirectly.
We have partitioned the software metrics domain into process, project, and product
metrics and noted that product metrics that are private to an individual are often
combined to develop project metrics that are public to a software team. Project met-
rics are then consolidated to create process metrics that are public to the software
organization as a whole. But how does an organization combine metrics that come
from different individuals or projects?
To illustrate, consider a simple example. Individuals on two different project teams
record and categorize all errors that they find during the software process. Individual
measures are then combined to develop team measures. Team A found 342 errors
during the software process prior to release. Team B found 184 errors. All other things
being equal, which team is more effective in uncovering errors throughout the pro-
cess? Because you do not know the size or complexity of the projects, you cannot
answer this question. However, if the measures are normalized, it is possible to create
software metrics that enable comparison to broader organizational averages.
Size-oriented software metrics are derived by normalizing quality and/or productiv-
ity measures by considering the size of the software that has been produced. If a
software organization maintains simple records, a table of size-oriented measures,
such as the one shown in Figure 23.4, can be created. The table lists each software
development project that has been completed over the past few years and correspond-
ing measures for that project. Referring to the table entry (Figure 23.4) for project
alpha: 12,100 lines of code were developed with 24 person-months of effort at a cost
of $168,000. It should be noted that the effort and cost recorded in the table represent
all software engineering activities (analysis, design, code, and test), not just coding.
Further information for project alpha indicates that 365 pages of documentation were
developed, 134 errors were recorded before the software was released, and 29 defects
were encountered after release to the customer within the first year of operation. Three
people worked on the development of software for project alpha.
To develop metrics that can be assimilated with similar metrics from other projects,
you can choose lines of code as a normalization value. From the rudimentary data
Figure 23.4 Size-oriented metrics

Project   LOC      Effort   $(000)   Pp. doc.   Errors   Defects   People
alpha     12,100   24       168        365       134      29        3
beta      27,200   62       440      1,224       321      86        5
gamma     20,200   43       314      1,050       256      64        6
contained in the table, a set of simple size-oriented metrics can be developed for each
project:
∙ Errors per KLOC (thousand lines of code)
∙ Defects per KLOC
∙ $ per KLOC
∙ Pages of documentation per KLOC
In addition, other interesting metrics can be computed:
∙ Errors per person-month
∙ KLOC per person-month
∙ $ per page of documentation
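Using the project alpha data from Figure 23.4, the normalized metrics above can be derived mechanically:

```python
def size_oriented_metrics(loc, effort_pm, cost_k, doc_pages, errors, defects):
    """Derive KLOC-normalized metrics from raw size-oriented measures.

    effort_pm is in person-months; cost_k is in thousands of dollars.
    """
    kloc = loc / 1000
    return {
        "errors_per_kloc": errors / kloc,
        "defects_per_kloc": defects / kloc,
        "cost_per_kloc": cost_k * 1000 / kloc,   # dollars per KLOC
        "doc_pages_per_kloc": doc_pages / kloc,
        "errors_per_pm": errors / effort_pm,
        "kloc_per_pm": kloc / effort_pm,
    }

# Project alpha from Figure 23.4
alpha = size_oriented_metrics(loc=12_100, effort_pm=24, cost_k=168,
                              doc_pages=365, errors=134, defects=29)
```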
Size-oriented metrics are not universally accepted as the best way to measure the
software process. Most of the controversy swirls around the use of lines of code as a
key measure. Proponents of the LOC measure claim that LOC is an “artifact” of all
software development projects that can be easily counted, that many existing software
estimation models use LOC or KLOC as a key input, and that a large body of litera-
ture and data predicated on LOC already exists. On the other hand, opponents argue
that LOC measures are programming language dependent, that when productivity is
considered, they penalize well-designed but shorter programs; that they cannot easily
accommodate nonprocedural languages; and that their use in estimation requires a
level of detail that may be difficult to achieve (i.e., the planner must estimate the LOC
to be produced long before analysis and design have been completed).
Similar arguments, pro and con, can be made for function-oriented metrics such
as function points (FP) or use case points (both are discussed in Chapter 25). Function-
oriented software metrics use a measure of the functionality delivered by the applica-
tion as a normalization value. Computation of a function-oriented metric is based on
characteristics of the software’s information domain and complexity.
The function point, like the LOC measure, is controversial. Proponents claim that
FP is programming language–independent, making it ideal for applications using con-
ventional and nonprocedural languages, and that it is based on data that are more
likely to be known early in the evolution of a project, making FP more attractive as
an estimation approach. Opponents claim that the method requires some “sleight of
hand” in that computation is based on subjective rather than objective data, that counts
of the information domain (and other dimensions) can be difficult to collect after the
fact, and that FP has no direct physical meaning—it’s just a number.
Function points and LOC-based metrics have been found to be relatively accurate
predictors of software development effort and cost. However, to use LOC and FP for
estimation (Chapter 25), an historical baseline of information must be established. It
is this historical data that over time will let you judge the value of a particular metric
on future projects.
Size-oriented measures (e.g., LOC) and function-oriented measures are often used
to derive productivity metrics. This invariably leads to a debate about the use of such
data. Should the LOC/person-month (or FP/person-month) of one group be compared
to similar data from another? Should managers appraise the performance of individuals
by using these metrics? The answer to these questions is an emphatic no! The reason
for this response is that many factors influence productivity, making for “apples and
oranges” comparisons that can be easily misinterpreted.
Within the context of process and project metrics, you should be concerned primar-
ily with productivity and quality—measures of software development “output” as a
function of effort and time applied and measures of the “fitness for use” of the work
products that are produced.
For process improvement and project planning purposes, your interest is historical.
What was software development productivity on past projects? What was the quality
of the software that was produced? How can past productivity and quality data be
extrapolated to the present? How can it help us improve the process and plan new
projects more accurately?
23.8 Metrics for Software Quality
The quality of a system, application, or product is only as good as the requirements
that describe the problem, the design that models the solution, the code that leads to
an executable program, and the tests that exercise the software to uncover errors.
Software is a complex entity. Therefore, errors are to be expected as work products
are developed. Process metrics are intended to improve the software process so that
errors are uncovered in the most effective manner.
You can use measurement to assess the quality of the requirements and design
models, the source code, and the test cases that have been created as the software is
engineered. To accomplish this real-time assessment, you apply product metrics to
evaluate the quality of software engineering work products in objective rather than
subjective ways.
A project manager must also evaluate quality as the project progresses. Private
metrics collected by individual software engineers are combined to provide project-
level results. Although many quality measures can be collected, the primary thrust at
the project level is to measure errors and defects. Metrics derived from these measures
provide an indication of the effectiveness of individual and group software quality
assurance and control activities.
Metrics such as work product errors per function point, errors uncovered per review
hour, and errors uncovered per testing hour provide insight into the efficacy of each
of the activities implied by the metric. Error data can also be used to compute the
defect removal efficiency (DRE) for each process framework activity. DRE is dis-
cussed later in this section.
Although there are many measures of software quality, correctness, maintainability,
integrity, and usability provide useful indicators for the project team. Gilb [Gil88]
suggests definitions and measures for each.
Correctness. Correctness is the degree to which the software performs its
required function. Defects (lack of correctness) are those problems reported
by a user of the program after the program has been released for general use.
For quality assessment purposes, defects are counted over a standard period
of time, typically one year. The most common measure for correctness is
defects per KLOC, where a defect is defined as a verified lack of
conformance to requirements.
Maintainability. Maintainability is the ease with which a program can be
corrected if an error is encountered, adapted if its environment changes, or
enhanced if the customer desires a change in requirements. There is no way
to measure maintainability directly; therefore, we must use indirect mea-
sures. A simple time-oriented metric is mean time to change (MTTC), the
time it takes to analyze the change request, design an appropriate modifica-
tion, implement the change, test it, and distribute the change to all users.
Integrity. This attribute measures a system’s ability to withstand attacks
(both accidental and intentional) to its security. To measure integrity, two
additional attributes must be defined: threat and security. Threat is the prob-
ability (which can be estimated or derived from empirical evidence) that an
attack of a specific type will occur within a given time. Security is the prob-
ability (which can be estimated or derived from empirical evidence) that the
attack of a specific type will be repelled. The integrity of a system can then
be defined as:
Integrity = Σ[1 − (threat × (1 − security))]
For example, if threat (the probability that an attack will occur) is 0.25 and
security (the likelihood of repelling an attack) is 0.95, the integrity of the
system is 0.99 (very high). If, on the other hand, the threat probability is
0.50 and the likelihood of repelling an attack is only 0.25, the integrity of
the system is 0.63 (unacceptably low).
Usability. Usability is an attempt to quantify ease of use and can be
measured in terms of the characteristics presented in Chapter 12.
These four factors are only a sampling of those that have been proposed as measures
for software quality.
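The integrity computation can be sketched per attack type (the Σ in the formula sums this term over attack types), using the two threat/security pairs given in the text:

```python
def integrity(threat, security):
    """Integrity term for a single attack type: 1 - threat * (1 - security).

    threat: probability an attack of this type occurs in a given interval;
    security: probability that such an attack is repelled.
    """
    return 1 - threat * (1 - security)

high = integrity(threat=0.25, security=0.95)  # 0.9875, reported as 0.99 (very high)
low = integrity(threat=0.50, security=0.25)   # 0.625, reported as 0.63 (unacceptably low)
```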
A quality metric that provides benefit at both the project and process level is defect
removal efficiency (DRE). In essence, DRE is a measure of the filtering ability of
quality assurance and control actions as they are applied throughout all process frame-
work activities.
When considered for a project as a whole, DRE is defined in the following manner:
DRE = E / (E + D)
where E is the number of errors found before delivery of the software to the end user
and D is the number of defects found after delivery.
The ideal value for DRE is 1. That is, no defects are found in the software. Real-
istically, D will be greater than 0, but the value of DRE can still approach 1. As E
increases (for a given value of D), the overall value of DRE begins to approach 1. In
fact, as E increases, it is likely that the final value of D will decrease (errors are
filtered out before they become defects). If used as a metric that provides an indicator
of the filtering ability of quality control and assurance activities, DRE encourages a
software project team to institute techniques for finding as many errors as possible
before delivery.
DRE can also be used within the project to assess a team’s ability to find errors
before they are passed to the next framework activity or software engineering task.
For example, requirements analysis produces a requirements model that can be
reviewed to find and correct errors. Those errors that are not found during the review
of the requirements model are passed on to design (where they may or may not be
found). When used in this context, we redefine DRE as
DREi = Ei / (Ei + Ei+1)
where Ei is the number of errors found during software engineering action i and Ei+1
is the number of errors found during software engineering action i + 1 that are trace-
able to errors that were not discovered in software engineering action i.
A quality objective for a software team (or an individual software engineer) is to
achieve a DREi that approaches 1. That is, errors should be filtered out before they
are passed on to the next activity or action. If DRE is low as you move through
analysis and design, spend some time improving the way you conduct formal techni-
cal reviews.
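Both forms of DRE can be sketched directly; the error and defect counts below are hypothetical:

```python
def dre_project(errors_before, defects_after):
    """Project-level DRE = E / (E + D): filtering ability before delivery."""
    return errors_before / (errors_before + defects_after)

def dre_activity(errors_found, errors_leaked):
    """Activity-level DRE_i = E_i / (E_i + E_i+1), where E_i+1 counts
    downstream errors traceable to errors missed in activity i."""
    return errors_found / (errors_found + errors_leaked)

# Hypothetical counts: 120 errors found before release, 30 defects after
project_dre = dre_project(errors_before=120, defects_after=30)   # 0.8

# Requirements review found 45 errors; 5 traceable errors slipped to design
analysis_dre = dre_activity(errors_found=45, errors_leaked=5)    # 0.9
```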
SafeHome: A Metrics-Based Quality Approach
The scene: Doug Miller’s office
2 days after initial meeting on
software metrics.
The players: Doug Miller, manager of the
SafeHome software engineering team, and
Vinod Raman and Jamie Lazar, members of the
product software engineering team.
The conversation:
Doug: Did you both have a chance to learn a
little about process and project metrics?
Vinod and Jamie: (Both nod.)
Doug: It’s always a good idea to establish
goals when you adopt any metrics. What are
yours?
Vinod: Our metrics should focus on quality.
In fact, our overall goal is to keep the number
of errors we pass on from one software
engineering activity to the next to an
absolute minimum.
Doug: And be very sure you keep the number
of defects released with the product to as
close to zero as possible.
Vinod (nodding): Of course.
Jamie: I like DRE as a metric, and I think we
can use it for the entire project, but also as we
move from one framework activity to the next.
It’ll encourage us to find errors at each step.
Vinod: I’d also like to collect the number of
hours we spend on reviews.
Jamie: And the overall effort we spend on
each software engineering task.
Doug: You can compute a review-to-
development ratio . . . might be interesting.
Jamie: I’d like to track some use case data
as well. Like the amount of effort required to
develop a use case, the amount of effort
required to build software to implement a use
case, and . . .
Doug (smiling): I thought we were going to
keep this simple.
Vinod: We should, but once you get into this
metrics stuff, there’s a lot of interesting things
to look at.
Doug: I agree, but let’s walk before we run
and stick to our goal. Limit data to be collected
to five or six items, and we’re ready to go.
CHAPTER 23 SOFTWARE METRICS AND ANALYTICS 485
23.9 Establishing Software Metrics Programs
The Software Engineering Institute has developed a comprehensive guidebook
[Par96b] for establishing a “goal-driven” software metrics program. The guidebook
suggests the following steps: (1) identify your business goals, (2) identify what you
want to know or learn, (3) identify your subgoals, (4) identify the entities and attri-
butes related to your subgoals, (5) formalize your measurement goals, (6) identify
quantifiable questions and the related indicators that you will use to help you achieve
your measurement goals, (7) identify the data elements that you will collect to con-
struct the indicators, (8) identify the measures to be used, and make these definitions
operational, (9) identify the actions that you will take to implement the measures, and
(10) prepare a plan for implementing the measures. A detailed discussion of these
steps is best left to the SEI’s guidebook. However, a brief overview of key points is
illustrated by the following example.
Because software supports business functions, differentiates computer-based sys-
tems or products, or acts as a product in itself, goals defined for the business can
almost always be traced downward to specific goals at the software engineering level.
For example, consider the SafeHome product. Working as a team, software engineer-
ing and business managers develop a list of prioritized business goals:
1. Improve our customers’ satisfaction with our products.
2. Make our products easier to use.
3. Reduce the time it takes us to get a new product to market.
4. Make support for our products easier.
5. Improve our overall profitability.
The software organization examines each business goal and asks: “What activities do
we manage, execute, or support and what do we want to improve within these activ-
ities?” To answer these questions, the SEI recommends the creation of an “entity-
question list” in which all things (entities) within the software process that are managed
or influenced by the software organization are noted. Examples of entities include
development resources, work products, source code, test cases, change requests, soft-
ware engineering tasks, and schedules. For each entity listed, software people develop
a set of questions that assess quantitative characteristics of the entity (e.g., size, cost,
time to develop). The questions derived as a consequence of the creation of an entity-
question list lead to the derivation of a set of subgoals that relate directly to the enti-
ties created and the activities performed as part of the software process.
Consider the fourth goal: “Make support for our products easier.” The following
list of questions might be derived for this goal [Par96b]:
∙ Do customer change requests contain the information we require to adequately
evaluate the change and then implement it in a timely manner?
∙ How large is the change request backlog?
∙ Is our response time for fixing bugs acceptable based on customer need?
∙ Is our change control process (Chapter 22) followed?
∙ Are high-priority changes implemented in a timely manner?
Based on these questions, the software organization can derive the following subgoal:
Improve the performance of the change management process. The software process
entities and attributes that are relevant to the subgoal are identified, and the measure-
ment goals associated with them are delineated.
The SEI [Par96b] provides detailed guidance for steps 6 through 10 of its goal-
driven measurement approach. In essence, you refine measurement goals into questions
that are further refined into entities and attributes that are then refined into metrics.
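To make that refinement chain concrete, a goal, its derived questions, and the data elements that answer them might be modeled as a simple structure (all names below are illustrative, not from the SEI guidebook):

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementGoal:
    """One node in a goal-driven metrics program: a business goal refined
    into quantifiable questions, each tied to the data that answers it."""
    goal: str
    questions: list[str] = field(default_factory=list)
    data_elements: list[str] = field(default_factory=list)

program = [
    MeasurementGoal(
        goal="Make support for our products easier",
        questions=[
            "How large is the change request backlog?",
            "Is our response time for fixing bugs acceptable?",
        ],
        data_elements=["backlog size", "elapsed time from report to fix"],
    ),
]
print(len(program[0].questions))  # 2
```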
The vast majority of software development organizations have fewer than 20 soft-
ware people. It is unreasonable, and in most cases unrealistic, to expect that such
organizations will develop comprehensive software metrics programs. However, it is
reasonable to suggest that software organizations15 of all sizes measure and then use
the resultant metrics to help improve their local software process and the quality and
timeliness of the products they produce.
A small organization can begin by focusing not on measurement but rather on
results. The software group is polled to define a single objective that requires improve-
ment. For example, “reduce the time to evaluate and implement change requests.” A
small organization might select the following set of easily collected measures:
∙ Time (hours or days) elapsed from the time a request is made until evaluation
is complete, tqueue.
∙ Effort (person-hours) to perform the evaluation, Weval.
∙ Time (hours or days) elapsed from completion of evaluation to assignment of
change order to personnel, teval.
∙ Effort (person-hours) required to make the change, Wchange.
∙ Time required (hours or days) to make the change, tchange.
∙ Errors uncovered during work to make change, Echange.
∙ Defects uncovered after change is released to the customer base, Dchange.
Once these measures have been collected for a number of change requests, it is
possible to compute the total elapsed time from change request to implementation of
the change and the percentage of elapsed time absorbed by initial queuing, evaluation
and change assignment, and change implementation. Similarly, the percentage of effort
required for evaluation and implementation can be determined. These metrics can be
assessed in the context of quality data, Echange and Dchange. The percentages provide
insight into where the change request process slows down and may lead to process
improvement steps to reduce tqueue, Weval, teval, Wchange, and/or Echange. In addition, the
defect removal efficiency can be computed as
DRE = Echange / (Echange + Dchange)
DRE can be compared to elapsed time and total effort to determine the impact of
quality assurance activities on the time and effort required to make a change.
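A small organization might automate these computations with a short script along the following lines (the field names, function name, and sample values are illustrative):

```python
def change_request_metrics(records):
    """Aggregate the simple change-request measures described above.

    Each record is a dict with keys t_queue, t_eval, t_change (elapsed
    times) and e_change, d_change (errors/defects). Returns the
    percentage of total elapsed time absorbed by each stage, plus DRE.
    """
    total_elapsed = sum(r["t_queue"] + r["t_eval"] + r["t_change"] for r in records)
    e = sum(r["e_change"] for r in records)
    d = sum(r["d_change"] for r in records)
    pct = lambda key: 100.0 * sum(r[key] for r in records) / total_elapsed
    return {
        "queue_pct": pct("t_queue"),
        "eval_to_assignment_pct": pct("t_eval"),
        "implementation_pct": pct("t_change"),
        "dre": e / (e + d) if (e + d) else None,
    }

requests = [
    {"t_queue": 2, "t_eval": 1, "t_change": 5, "e_change": 3, "d_change": 1},
    {"t_queue": 1, "t_eval": 2, "t_change": 4, "e_change": 2, "d_change": 0},
]
m = change_request_metrics(requests)
print(round(m["dre"], 2))  # 0.83
```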
The majority of software developers still do not measure, and sadly, most have
little desire to begin. As we noted previously in this chapter, the problem is cultural.
15 This discussion is equally relevant to software teams that have adopted an agile software
development process (Chapter 3).
Attempting to collect measures where none have been collected in the past often pre-
cipitates resistance. “Why do we need to do this?” asks a harried project manager.
“I don’t see the point,” complains an overworked practitioner. Why is it so important
to measure the process of software engineering and the product (software) that it pro-
duces? The answer is relatively obvious. If you do not measure, there is no real way
of determining whether you are improving. And if you are not improving, you are lost.
23.10 Summary
Measurement enables managers and practitioners to improve the software process;
assist in the planning, tracking, and control of software projects; and assess the qual-
ity of the product (software) that is produced. Measures of specific attributes of the
process, project, and product are used to compute software metrics. These metrics can
be analyzed to provide indicators that guide management and technical actions.
Process metrics enable an organization to take a strategic view by providing insight
into the effectiveness of a software process. Project metrics are tactical. They enable a
project manager to adapt project work flow and technical approach in a real-time manner.
Measurement results in cultural change. Data collection, metrics computation, and
metrics analysis are the three steps that must be implemented to begin a metrics
program. In general, a goal-driven approach helps an organization focus on the right
metrics for its business.
Both size- and function-oriented metrics are used throughout the industry. Size-
oriented metrics use the line of code as a normalizing factor for other measures such
as person-months or defects. Few product metrics have been proposed for direct use
in software testing and maintenance. However, many other product metrics can be
used to guide the testing process and as a mechanism for assessing the maintainability
of a computer program.
Software metrics provide a quantitative way to assess the quality of internal prod-
uct attributes, thereby enabling you to assess quality before the product is built.
Metrics provide the insight necessary to create effective requirements and design
models, solid code, and thorough tests.
Software quality metrics, like productivity metrics, focus on the process, the proj-
ect, and the product. By developing and analyzing a metrics baseline for quality, an
organization can correct those areas of the software process that are the cause of
software defects.
To be useful in a real-world context, a software metric must be simple and comput-
able, persuasive, consistent, and objective. It should be programming language inde-
pendent and provide you with effective feedback.
Problems and Points to Ponder
23.1. Software for System X has 24 individual functional requirements and 14 nonfunctional
requirements. What is the specificity of the requirements? The completeness?
23.2. A major information system has 1140 modules. There are 96 modules that perform con-
trol and coordination functions and 490 modules whose function depends on prior processing.
The system processes approximately 220 data objects that each has an average of three attributes.
There are 140 unique database items and 90 different database segments. Finally, 600 modules
have single entry and exit points. Compute the DSQI for this system.
23.3. A class X has 12 operations. Cyclomatic complexity has been computed for all operations
in the OO system, and the average value of module complexity is 4. For class X, the complex-
ity for operations 1 to 12 is 5, 4, 3, 3, 6, 8, 2, 2, 5, 5, 4, 4, respectively. Compute the weighted
methods per class.
23.4. A legacy system has 940 modules. The latest release required that 90 of these modules
be changed. In addition, 40 new modules were added and 12 old modules were removed.
Compute the software maturity index for the system.
23.5. Why should some software metrics be kept “private”? Provide examples of three metrics
that should be private. Provide examples of three metrics that should be public.
23.6. Team A found 342 errors during the software engineering process prior to release. Team
B found 184 errors. What additional measures would have to be made for projects A and B to
determine which of the teams eliminated errors more efficiently? What metrics would you
propose to help in making the determination? What historical data might be useful?
23.7. A Web engineering team has built an e-commerce WebApp that contains 145 individual
pages. Of these pages, 65 are dynamic; that is, they are internally generated based on end-user
input. What is the customization index for this application?
23.8. A WebApp and its support environment have not been fully fortified against attack. Web
engineers estimate that the likelihood of repelling an attack is only 30 percent. The system does
not contain sensitive or controversial information, so the threat probability is 25 percent. What
is the integrity of the WebApp?
23.9. At the conclusion of a project, it has been determined that 30 errors were found during
the modeling phase and 12 errors were found during the construction phase that were traceable
to errors not discovered in the modeling phase. What is the DRE for these two phases?
23.10. A software team delivers a software increment to end users. The users uncover eight
defects during the first month of use. Prior to delivery, the software team found 242 errors
during formal technical reviews and all testing tasks. What is the overall DRE for the project
after 1 month’s usage?
PART FOUR
Managing Software Projects
In this part of Software Engineering: A Practitioner’s Approach, you’ll
learn the management techniques required to plan, organize, monitor, and
control software projects. These questions are addressed in the chapters
that follow:
∙ How must people, process, and problem be managed during a soft-
ware project?
∙ How can software metrics be used to manage a software project and
the software process?
∙ How does a software team generate reliable estimates of effort, cost,
and project duration?
∙ What techniques can be used to systematically assess the risks that
can have an impact on project success?
∙ How does a software project manager select the set of software engi-
neering work tasks?
∙ How is a project schedule created?
∙ Why are maintenance and support so important for both software
engineering managers and practitioners?
Once these questions are answered, you’ll be better prepared to manage
software projects in a way that will lead to timely delivery of a high-
quality product constrained by the available resources.
CHAPTER 24
Project Management Concepts
What is it? Although many of us (in our darker
moments) take Dilbert’s1 view of “manage-
ment,” it remains a very necessary activity
when computer-based systems and products
are built. Project management involves the
planning, monitoring, and coordinating of peo-
ple, processes, and events that occur as soft-
ware evolves from a preliminary concept to
full operational deployment.
Who does it? Everyone “manages” to some ex-
tent, but the scope of management activities
varies among people involved in a software
project.
Why is it important? Building computer soft-
ware is a complex undertaking, particularly
if it involves many people working over a
relatively long time. That’s why software
projects need to be managed.
What are the steps? Understand the four Ps—
people, product, process, and project. People
must be organized to perform software work
effectively. Product scope and requirements
must be understood. A process that is appro-
priate for the people and the product should
be selected. The project must be planned by
estimating effort and calendar time to accom-
plish work tasks. This is true even for agile
project management.
What is the work product? A project plan is
created and evolves as project activities com-
mence. The plan is a living document that de-
fines the process and tasks to be conducted,
the people who will do the work, and the
mechanisms for assessing risks, controlling
change, and evaluating quality.
How do I ensure that I’ve done it
right? You’re never completely sure that the
project plan is right until the team has deliv-
ered a high-quality product on time and within
budget. However, a team leader does it right
when she encourages software people to
work together as an effective team, focusing
their attention on customer needs and prod-
uct quality.
Key Concepts: agile teams (495), coordination and communication (496), critical practices (502), people (491), problem decomposition (497), product (491), project (492), software scope (497), software team (494), stakeholders (493), team leaders (493), W5HH principle (501)
In the preface to his book on software project management, Meilir Page-Jones
[Pag85] comments on software projects that are not going well, “I’ve watched
in horror as . . . managers futilely struggled through nightmarish projects,
squirmed under impossible deadlines, or delivered systems that outraged their
users and went on to devour huge chunks of maintenance time.”
1 Try searching for the term management on the Dilbert website: http://dilbert.com/.
What Page-Jones describes are symptoms that result from an array of management
and technical problems. However, if a postmortem were to be conducted for every
project, it is very likely that a consistent theme would be encountered: project man-
agement was weak or nonexistent.
In this chapter and Chapters 25 through 27, we’ll present the key concepts that lead to
effective software project management. This chapter considers basic software project man-
agement concepts and principles. Chapter 25 discusses the techniques that are used to
estimate costs and create realistic (but flexible) schedules. Chapter 26 presents the manage-
ment activities that lead to effective risk monitoring, mitigation, and management. Chap-
ter 27 considers product support concerns and discusses the management issues that you’ll
encounter when dealing with maintenance of deployed systems. Finally, Chapter 28 dis-
cusses techniques for studying and improving your team’s software engineering processes.
24.1 The Management Spectrum
Effective software project management focuses on the four Ps: people, product, pro-
cess, and project. The order is not arbitrary. The manager who forgets that software
engineering work is an intensely human endeavor will never have success in project
management. A manager who fails to encourage comprehensive stakeholder commu-
nication early in the evolution of a product risks building an elegant solution for the
wrong problem. The manager who pays little attention to the process runs the risk of
inserting competent technical methods and tools into a vacuum. The manager who
begins work without a solid plan jeopardizes the success of the project. The manager
who is not ready to revise the plan when changes arise is doomed to fail.
24.1.1 The People
The cultivation of motivated, highly skilled software people has been discussed since
the 1960s. In fact, the “people factor” is so important that the Software Engineering
Institute has developed a People Capability Maturity Model (People-CMM), in rec-
ognition of the fact that “every organization needs to continually improve its ability
to attract, develop, motivate, organize, and retain the workforce needed to accomplish
its strategic business objectives” [Cur09].
The people capability maturity model defines the following key practice areas for
software people: staffing, communication and coordination, work environment, per-
formance management, training, compensation, competency analysis and development,
career development, workgroup development, and team/culture development, among
others. Organizations that achieve high levels of People-CMM maturity have a
higher likelihood of implementing effective software project management practices.
24.1.2 The Product
Before a project can be planned, product objectives and scope should be established,
alternative solutions should be considered, and technical and management constraints
should be identified. Without this information, it is impossible to define reasonable
(and accurate) estimates of the cost, an effective assessment of risk, a realistic
breakdown of project tasks, or a manageable project schedule that provides a mean-
ingful indication of progress.
As a software developer, you and other stakeholders must meet to define product
objectives and scope. In many cases, this activity begins as part of the system engi-
neering or business process engineering and continues as the first step in software
requirements engineering (Chapter 7). Objectives identify the overall goals for the
product (from the stakeholders’ points of view) without considering how these goals
will be achieved. These often take the form of user stories and formal use cases. Scope
identifies the primary data, functions, and behaviors that characterize the product, and
more important, attempts to bound these characteristics in a quantitative manner.
Once the product objectives and scope are understood, alternative solutions are
considered. Although very little detail is discussed, the alternatives enable managers
and practitioners to select a “best” approach, given the constraints imposed by deliv-
ery deadlines, budgetary restrictions, personnel availability, technical interfaces, and
myriad other factors.
24.1.3 The Process
A software process (Chapters 2 through 4) provides the framework from which a
comprehensive plan for software development can be established. A small number of
framework activities are applicable to all software projects, regardless of their size or
complexity. Even agile developers follow a change-friendly process (Chapter 3) to
impose some discipline on their software engineering work. A number of task sets—
tasks, milestones, work products, and quality assurance points—enable the framework
activities to be adapted to the characteristics of the software project and the require-
ments of the project team. Finally, umbrella activities—such as software quality assur-
ance, software configuration management, and measurement—overlay the process
model. Umbrella activities are independent of any one framework activity and occur
throughout the process.
24.1.4 The Project
We conduct planned and controlled software projects for one primary reason—it is
the only known way to manage complexity. And yet, software teams still struggle. In
a study of 250 large software projects between 1998 and 2004, Capers Jones [Jon04]
found that “about 25 were deemed successful in that they achieved their schedule,
cost, and quality objectives. About 50 had delays or overruns below 35 percent, while
about 175 experienced major delays and overruns, or were terminated without
completion.” Although the success rate for present-day software projects may have
improved somewhat, our project failure rate remains much higher than it should be.2
To avoid project failure, a software project manager and the software engineers
who build the product must avoid a set of common warning signs, understand the
critical success factors that lead to good project management, and develop a
commonsense approach for planning, monitoring, and controlling the project [Gha14].
Each of these issues is discussed in Section 24.5 and in the chapters that follow.
2 Given these statistics, it’s reasonable to ask how the impact of computers continues to grow
exponentially. Part of the answer, we think, is that a substantial number of these “failed” proj-
ects are ill conceived in the first place. Customers lose interest quickly (because what they’ve
requested wasn’t really as important as they first thought), and the projects are cancelled.
24.2 People
People build computer software, and projects succeed because well-trained, motivated
people get things done. All of us, from senior engineering vice presidents to the
lowliest practitioner, often take people for granted. Managers argue that people are
primary, but their actions sometimes belie their words. In this section, we examine
the stakeholders who participate in the software process and the manner in which they
are organized to perform effective software engineering.
24.2.1 The Stakeholders
The software process (and every software project) is populated by stakeholders who
can be categorized into one of five constituencies:
1. Senior managers (product owners) who define the business issues that often
have a significant influence on the project.
2. Project (technical) managers (Scrum masters or team leads) who must plan,
motivate, organize, and coordinate the practitioners who do software work.
3. Practitioners who deliver the technical skills that are necessary to engineer a
product or application.
4. Customers who specify the requirements for the software to be engineered
and other stakeholders who have a peripheral interest in the outcome.
5. End users who interact with the software once it is released for production use.
Every software project is populated by people who fall within this taxonomy.3 To
be effective, the project team must be organized in a way that maximizes each person’s
skills and abilities. And that’s the job of the team leader.
24.2.2 Team Leaders
Project management is a people-intensive activity, and for this reason, competent
practitioners often make poor team leaders. They simply don’t have the right mix of
people skills. And yet, as Edgemon states: “Unfortunately and all too frequently it
seems, individuals just fall into a project manager role and become accidental project
managers” [Edg95]. Shared leadership often helps teams perform better, but team
leaders often monopolize decision-making authority and fail to provide team members
with the levels of autonomy needed to complete their tasks [Hoe16].
James Kouzes has been writing about effective leadership in various technical areas
for many years. He lists five practices found in exemplary technology leaders [Kou14]:
Model the way. Leaders must practice what they preach. They demonstrate
commitment to the team and project through shared sacrifice (e.g., by being
the last one to go home each night or taking the time to become an expert on
the software application).
Inspire a shared vision. Leaders recognize that they cannot lead without
followers. It is important to motivate team members to tie their personal
3 When WebApps, MobileApps, or games are developed, other nontechnical people may be
involved in content creation.
aspirations to the team goals. This means involving stakeholders early in the
goal-setting process.
Challenge the process. Leaders must take the initiative to look for innova-
tive ways to improve their own work and the work of their teams. Encourage
team members to experiment and take risks by helping them generate fre-
quent small successes while learning from their failures.
Enable others to act. Foster the team’s collaborative abilities by building
trust and facilitating relationships. Increase the team’s sense of competence
through sharing decision making and goal setting.
Encourage the heart. Celebrate the accomplishments of individuals. Build
community (team) spirit by celebrating shared goals and victories, both inside
and outside the team.
Another way of looking at successful project leaders might be to suggest that they
adopt a problem-solving management style. A software project manager should con-
centrate on understanding the problem to be solved, coordinate the flow of ideas from
stakeholders, and let everyone on the team know (by words and, far more important,
by actions) that quality begins with each one of them and that their input and contri-
butions are valued.
24.2.3 The Software Team
There are almost as many human organizational structures for software development
as there are organizations that develop software. For better or worse, organizational
structure cannot be easily modified. Concerns with the practical and political conse-
quences of organizational change are not within the software project manager’s scope
of responsibility. However, the organization of the people directly involved in a new
software project is within the project manager’s purview.
The “best” team structure depends on the management style of your organization,
the number of people who will populate the team and their skill levels, and the over-
all problem difficulty. Mantei [Man81] describes seven project factors that should be
considered when planning the structure of software engineering teams: (1) difficulty
of the problem to be solved, (2) “size” of the resultant program(s) in lines of code or
function points, (3) time that the team will stay together (team lifetime), (4) degree
to which the problem can be modularized, (5) quality and reliability of the system to
be built, (6) rigidity of the delivery date, and (7) degree of sociability (communication)
required for the project.
Regardless of team organization, the objective for every project manager is to help
create a team that exhibits cohesiveness. In their book Peopleware, DeMarco and
Lister [DeM98] look for teams that “jell.” They write:
A jelled team is a group of people so strongly knit that the whole is greater than the
sum of the parts . . .
Once a team begins to jell, the probability of success goes way up. The team can
become unstoppable, a juggernaut for success . . . They don’t need to be managed in the
traditional way, and they certainly don’t need to be motivated. They’ve got momentum.
DeMarco and Lister contend that members of jelled teams are significantly more
productive and more motivated than average. They share a common goal, a common
culture, and in many cases, a “sense of eliteness” that makes them unique.
But not all teams jell. In fact, many teams suffer from what Jackman [Jac98] calls
“team toxicity.” She defines five factors that “foster a potentially toxic team environment”:
(1) a frenzied work atmosphere, (2) high frustration that causes friction among team
members, (3) a “fragmented or poorly coordinated” software process, (4) an unclear def-
inition of roles on the software team, and (5) “continuous and repeated exposure to failure.”
To avoid a frenzied work environment, the project manager should be certain that the
team has access to all information required to do the job and that major goals and objec-
tives, once defined, should not be modified unless absolutely necessary. A software team
can avoid frustration if it is given as much responsibility for decision making as possible.
An inappropriate process (e.g., unnecessary or burdensome work tasks or poorly chosen
work products) can be avoided by understanding the product to be built and the people
doing the work and by allowing the team to select the process model. The team itself
should establish its own mechanisms for accountability (technical reviews4 are an excel-
lent way to accomplish this) and define a series of corrective approaches when a mem-
ber of the team fails to perform. And finally, the key to avoiding an atmosphere of
failure is to establish team-based techniques for feedback and problem solving.
Many software organizations advocate agile software development (Chapter 3) as
an antidote to many of the problems that have plagued software project work. To
review, the agile philosophy encourages customer satisfaction and early incremental
delivery of software; small, highly motivated project teams; informal methods; minimal software engineering work products; and overall development simplicity.
The small, highly motivated project team, also called an agile team, adopts many
of the characteristics of successful software project teams discussed in the preceding
section and avoids many of the toxins that create problems [Hoe16]. However, the
agile philosophy stresses individual (team member) competency coupled with group
collaboration as critical success factors for the team. Cockburn and Highsmith
[Coc01a] note this when they write:
If the people on the project are good enough, they can use almost any process and
accomplish their assignment. If they are not good enough, no process will repair their
inadequacy—“people trump process” is one way to say this. However, lack of user and
executive support can kill a project—“politics trump people.” Inadequate support can
keep even good people from accomplishing the job . . .
To make effective use of the competencies of each team member and to foster effective collaboration throughout a software project, agile teams are self-organizing.
Many agile process models (e.g., Scrum) give the agile team significant autonomy
to make the project management and technical decisions required to get the job done.
Planning is kept to a minimum, and the team is allowed to select its own approach (e.g., process, methods, tools), constrained only by business requirements and organizational standards. As the project proceeds, the team self-organizes to focus individual competency in a way that is most beneficial to the project at a given point in time.
To accomplish this, an agile team might conduct daily team meetings to coordinate and synchronize the work that must be accomplished for that day. Based on information obtained during these meetings, the team adapts its approach in a way that accomplishes an increment of work. As each day passes, continual self-organization and collaboration move the team toward a completed software increment.
4 Technical reviews are discussed in detail in Chapter 16.
24.2.4 Coordination and Communication Issues
There are many reasons that software projects get into trouble. The scale of many development efforts is large, leading to complexity, confusion, and significant difficulties in coordinating team members. Uncertainty is common, resulting in a continuing stream of changes that ratchets the project team. Interoperability has become a key characteristic of many systems. New software must communicate with existing software and conform to predefined constraints imposed by the system or product.
These characteristics of modern software—scale, uncertainty, and interoperability—are facts of life. To deal with them effectively, you must establish effective methods for coordinating the people who do the work. To accomplish this, mechanisms for formal and informal communication among team members and between multiple teams must be established. Formal communication is accomplished through "writing, structured meetings, and other relatively non-interactive and impersonal communication channels" [Kra95]. Informal communication is more personal. Members of a software team share ideas on an ad hoc basis, ask for help as problems arise, and interact with one another on a daily basis.
SafeHome: Team Structure
The scene: Doug Miller’s office
prior to the initiation of the
SafeHome software project.
The players: Doug Miller, manager of the
SafeHome software engineering team, and
Vinod Raman, Jamie Lazar, and other members
of the product software engineering team.
The conversation:
Doug: Have you guys had a chance to look
over the preliminary info on SafeHome that
marketing has prepared?
Vinod (nodding and looking at his teammates):
Yes. But we have a bunch of questions.
Doug: Let’s hold on that for a moment. I’d like
to talk about how we are going to structure the
team, who’s responsible for what . . .
Jamie: I’m really into the agile philosophy,
Doug. I think we should be a self-organizing
team.
Vinod: I agree. Given the tight time line and
some of the uncertainty, and the fact that we’re
all really competent [laughs], that seems like
the right way to go.
Doug: That’s okay with me, but you guys
know the drill.
Jamie (smiling and talking as if she was reciting something): We make tactical decisions, about who does what and when, but it's our responsibility to get product out the door on time.
Vinod: And with quality.
Doug: Exactly. But remember there are constraints. Marketing defines the software increments to be produced—in consultation with us, of course.
Jamie: And?
Doug: And, we’re going to use UML as our
modeling approach.
Vinod: But keep extraneous documentation to
an absolute minimum.
Doug: Who is the liaison with me?
Jamie: We decided that Vinod will be the tech
lead—he’s got the most experience, so Vinod
is your liaison, but feel free to talk to any of us.
Doug (laughing): Don’t worry, I will.
24.3 Product
A software project manager is confronted with a dilemma at the very beginning of a
software project. Quantitative estimates and an organized plan are required, but solid
information is unavailable. A detailed analysis of software requirements would provide
information necessary for estimates, but analysis often takes weeks or even months
to complete. Worse, requirements may be fluid, changing regularly as the project
proceeds. Yet, a plan is needed now!
Like it or not, you must examine the product and the problem it is intended to
solve at the very beginning of the project. At a minimum, the scope of the product
must be established and bounded.
24.3.1 Software Scope
The first software project management activity is the determination of software scope.
Scope is defined by answering the following questions:
Context. How does the software to be built fit into a larger system, product,
or business context, and what constraints are imposed as a result of the context?
Information objectives. What customer-visible data objects are produced as
output from the software? What data objects are required for input?
Function and performance. What function does the software perform to
transform input data into output? Are any special performance characteristics
to be addressed?
Software project scope must be unambiguous and understandable at the management and technical levels. A statement of software scope must be bounded. That is, quantitative data (e.g., number of simultaneous users, size of mailing list, maximum allowable response time) are stated explicitly, constraints and/or limitations (e.g., product cost restricts memory size) are noted, and mitigating factors (e.g., desired algorithms are well understood and available in Java) are described. Even in the most fluid situations, the number of prototypes needs to be considered and the scope of the first prototype needs to be set.
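The bounding questions above can be captured in a simple structured record. The sketch below is illustrative only; the field names and figures are invented, not drawn from any particular project.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeStatement:
    """A bounded statement of software scope: quantitative data,
    constraints, and mitigating factors are stated explicitly."""
    context: str                 # how the software fits a larger system
    inputs: list                 # data objects required as input
    outputs: list                # customer-visible output data objects
    quantitative_bounds: dict = field(default_factory=dict)
    constraints: list = field(default_factory=list)
    mitigating_factors: list = field(default_factory=list)

    def is_bounded(self) -> bool:
        # Treat the scope as bounded only when at least one
        # explicit quantitative limit has been stated.
        return len(self.quantitative_bounds) > 0

# Hypothetical scope for one fitness-app increment
scope = ScopeStatement(
    context="Mobile app that syncs with a wearable fitness device",
    inputs=["heart-rate samples", "step counts"],
    outputs=["daily activity summary", "goal-progress report"],
    quantitative_bounds={"simultaneous_users": 50_000,
                         "max_response_time_s": 2.0},
    constraints=["must run acceptably on low-end phones"],
    mitigating_factors=["charting libraries are well understood"],
)
print(scope.is_bounded())  # True
```

A record like this makes the "bounded" test mechanical: if no quantitative limits have been stated, the scope is not yet bounded.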
24.3.2 Problem Decomposition
Problem decomposition, sometimes called partitioning or problem elaboration, is an
activity that sits at the core of software requirements analysis (Chapters 7 and 8).
During the scoping activity, no attempt is made to fully decompose the problem.
Rather, decomposition is applied in two major areas: (1) the functionality and content
(information) that must be delivered and (2) the process that will be used to deliver it.
This can be accomplished using a list of functions, with use cases, or, for agile work, with user stories.
Human beings tend to apply a divide-and-conquer strategy when they are confronted with a complex problem. Stated simply, a complex problem is partitioned into smaller problems that are more manageable. This is the strategy that applies as project planning begins. Software functions, described in the statement of scope, are evaluated and refined to provide more detail prior to the beginning of estimation (Chapter 25). Because both cost and schedule estimates are functionally oriented, some degree of decomposition is often useful. Similarly, major content or data objects are decomposed into their constituent parts, providing a reasonable understanding of the information to be produced by the software.
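As a small illustration of this divide-and-conquer step, one scoped function can be elaborated into user-story-level sub-problems before estimation begins. The stories below are hypothetical examples, not requirements from the text.

```python
# Hypothetical decomposition of two scoped functions into
# user-story-level sub-problems (divide and conquer).
decomposition = {
    "Set personal goals": [
        "As a user, I can create a weekly step-count goal",
        "As a user, I can edit or delete an existing goal",
        "As a user, I am notified when a goal is reached",
    ],
    "Store data on cloud": [
        "As a user, my activity data is backed up automatically",
        "As a user, I can restore my data on a new phone",
    ],
}

# Estimation (Chapter 25) is applied to the refined units
# (stories), not to the coarse top-level functions.
units_to_estimate = sum(len(stories) for stories in decomposition.values())
print(units_to_estimate)  # 5
```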
24.4 Process
The framework activities (Chapter 1) that characterize the software process are applicable to all software projects. The problem is to select the process model that is appropriate for the software to be engineered by your project team. The recommended process model in Chapter 4 may be a good starting point for many project teams to consider.
Your team must decide which process model is most appropriate for (1) the cus-
tomers who have requested the product and the people who will do the work, (2) the
characteristics of the product itself, and (3) the project environment in which the
software team works. When a process model has been selected, the team then defines
a preliminary project plan based on the set of process framework activities. Once the
preliminary plan is established, process decomposition begins. That is, a complete
plan, reflecting the work tasks required to populate the framework activities, must be
created. We explore these activities briefly in the sections that follow and present a
more detailed view in Chapter 25.
24.4.1 Melding the Product and the Process
Project planning begins with the melding of the product and the process. Each function to be engineered by your team must pass through the set of framework activities that have been defined for your software organization. The process framework establishes a skeleton for project planning. It is adapted by allocating a task set that is appropriate to the project. Assume that the organization has adopted the generic framework activities—communication, planning, modeling, construction, and deployment—discussed in Chapter 1.
The team members who work on a product function will apply each of the framework activities to it. In essence, a matrix similar to the one shown in Figure 24.1 is created. Each major product function (the figure lists functions for the fitness app software discussed in Chapter 2) or user story is listed in the left-hand column. Framework activities are listed in the top row. Software engineering work tasks (for each framework activity) would be entered in the following row.5 The job of the project manager (and other team members) is to estimate resource requirements for each matrix cell, start and end dates for the tasks associated with each cell, and work products to be produced as a consequence of each task. These activities are considered in Chapter 25.
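A minimal sketch of this melding matrix, using a few of the fitness-app functions as rows. The cell fields (tasks, effort, work products) are assumptions chosen for illustration, not a prescribed schema.

```python
# Generic framework activities (Chapter 1) as columns.
FRAMEWORK_ACTIVITIES = ["communication", "planning", "modeling",
                        "construction", "deployment"]
# A few product functions from the fitness-app example as rows.
PRODUCT_FUNCTIONS = ["Sync phone to device", "Set personal goals",
                     "Store data on cloud", "Integrate social media"]

# Each matrix cell will hold work tasks, an effort estimate,
# and the work products produced by those tasks.
matrix = {func: {act: {"tasks": [], "effort_days": None,
                       "work_products": []}
                 for act in FRAMEWORK_ACTIVITIES}
          for func in PRODUCT_FUNCTIONS}

# The planner populates cells as planning proceeds, e.g.:
matrix["Set personal goals"]["communication"]["tasks"].append(
    "Meet with stakeholders to clarify goal types")

print(len(PRODUCT_FUNCTIONS) * len(FRAMEWORK_ACTIVITIES))  # 20
```

Every function crossed with every activity yields one planning cell; the project manager's estimating job in Chapter 25 is to fill these cells in.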
24.4.2 Process Decomposition
A software team should have a significant degree of flexibility in choosing the software process model that is best for the project and the software engineering tasks that populate the process model once it is chosen. A relatively small project that is similar to past efforts might be best accomplished using a single sprint approach.
5 It should be noted that work tasks must be adapted to the specific needs of the project based on a number of adaptation criteria.
deadline is so tight that full functionality cannot reasonably be delivered, an incremental
strategy might be best. Similarly, projects with other characteristics (e.g., uncertain
requirements, breakthrough technology, difficult customers, or potential for significant
component reuse) will lead to the selection of other process models.6
Once the process model has been chosen, the process framework is adapted to it.
In every case, the generic process framework discussed earlier can be used. It will
work for linear models, for iterative and incremental models, for evolutionary models,
and even for concurrent or component assembly models. The process framework is
invariant and serves as the basis for all work performed by a software organization.
But actual work tasks do vary. Process decomposition commences when the project manager asks, "How do we accomplish this framework activity?" For example, a small, relatively simple project might require the following work tasks for the communication activity:
1. Develop a list of clarification issues.
2. Meet with stakeholders to address clarification issues.
3. Jointly develop a statement of scope by listing the user stories.
4. Review the statement of scope with all concerned, and determine the
importance of each user story to the stakeholders.
5. Modify the statement of scope as required.
Figure 24.1 Melding the problem and the process. Each product function is listed in the left-hand column: sync phone to device, display data on phone UI, set personal goals, store data on cloud, allow user to modify phone UI, integrate social media, and set goals with friends. The common process framework activities run across the top row: communication, planning, modeling, construction, and deployment. Software engineering tasks are entered in the cells beneath each activity.
6 Recall that project characteristics also have a strong bearing on the structure of the software
team (Section 24.2.3).
These events might occur over a period of less than 48 hours. They represent a process
decomposition that is appropriate for the small, relatively simple project.
Now, consider a more complex project, which has a broader scope and more significant business impact. Such a project might require the following work tasks for the communication activity:
1. Review the customer request.
2. Plan and schedule a formal, facilitated meeting with all stakeholders.
3. Conduct research to specify the proposed solution and existing approaches.
4. Prepare a “working document” and an agenda for the formal meeting.
5. Conduct the meeting.
6. Jointly develop mini-specs that reflect data, function, and behavioral features
of the software. This is often done by developing use cases that describe the
software from the user’s point of view.
7. Review each mini-spec or use case for correctness, consistency, and lack of
ambiguity.
8. Assemble the mini-specs into a scoping document.
9. Review the collection of use cases with all concerned, and determine their
relative importance to all stakeholders.
10. Modify the scoping document or use cases as required.
Both projects perform the framework activity that we call communication, but the
first project team performs half as many software engineering work tasks as the
second.
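The contrast between the two task sets can be made concrete with a small selection function. The selection rule here is a simplification invented for illustration; real adaptation criteria are broader.

```python
# Condensed versions of the two communication task sets above.
SIMPLE_COMMUNICATION = [
    "Develop a list of clarification issues",
    "Meet with stakeholders to address clarification issues",
    "Jointly develop a statement of scope from user stories",
    "Review scope and rank each user story's importance",
    "Modify the statement of scope as required",
]
COMPLEX_COMMUNICATION = [
    "Review the customer request",
    "Plan and schedule a facilitated stakeholder meeting",
    "Research the proposed solution and existing approaches",
    "Prepare a working document and meeting agenda",
    "Conduct the meeting",
    "Jointly develop mini-specs / use cases",
    "Review each mini-spec or use case",
    "Assemble the mini-specs into a scoping document",
    "Review use cases and rank their importance",
    "Modify the scoping document or use cases as required",
]

def communication_tasks(broad_scope: bool, high_business_impact: bool) -> list:
    """Pick the task set that matches the project profile
    (a deliberately simplified adaptation rule)."""
    if broad_scope or high_business_impact:
        return COMPLEX_COMMUNICATION
    return SIMPLE_COMMUNICATION

# The simple project performs half as many work tasks.
print(len(communication_tasks(False, False)),
      len(communication_tasks(True, True)))  # 5 10
```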
24.5 Project
To manage a successful software project, you have to understand what can go wrong
so that problems can be avoided. In an excellent paper on software projects, John Reel
[Ree99] defines signs that indicate that an information systems project is in jeopardy.
In some cases, software people don’t understand their customer’s needs. This leads to
a project with a poorly defined scope. In other projects, changes are managed poorly.
Sometimes the chosen technology changes or business needs shift and management
sponsorship is lost. Management can set unrealistic deadlines or end users can be
resistant to the new system. There are cases in which the project team simply does
not have the requisite skills. And finally, there are developers who never seem to learn
from their mistakes.
Jaded industry professionals often refer to the "90–90 rule" when discussing particularly difficult software projects: The first 90 percent of a system absorbs 90 percent of the allotted effort and time. The last 10 percent takes another 90 percent of the allotted effort and time [Zah94]. The seeds that lead to the 90–90 rule are contained in the signs noted in the preceding paragraph.
But enough negativity! What are the characteristics of successful software projects?
Ghazi [Gha14] and her colleagues note several characteristics that are present in
successful software projects and also found in most well-designed process models.
1. Clear and well-understood requirements accepted by all stakeholders
2. Active and continuous participation of users throughout the development process
3. A project manager with required leadership skills who is able to share project
vision with the team
4. A project plan and schedule developed with stakeholder participation to
achieve user goals
5. Skilled and engaged team members
6. Development team members with compatible personalities who enjoy working
in a collaborative environment
7. Realistic schedule and budget estimates which are monitored and maintained
8. Customer needs that are understood and satisfied
9. Team members who experience a high degree of job satisfaction
10. A working product that reflects desired scope and quality
24.6 The W5HH Principle
In an excellent paper on software process and projects, Barry Boehm [Boe96] states:
“[Y]ou need an organizing principle that scales down to provide simple [project] plans
for simple projects.” Boehm suggests an approach that addresses project objectives,
milestones and schedules, responsibilities, management and technical approaches, and
required resources. He calls it the W5HH Principle, after a series of questions that
lead to a definition of key project characteristics and the resultant project plan:
Why is the system being developed? All stakeholders should assess the validity
of business reasons for the software work. Does the business purpose justify the
expenditure of people, time, and money?
What will be done? The task set required for the project is defined.
When will it be done? The team establishes a project schedule by identifying
when project tasks are to be conducted and when milestones are to be reached.
Who is responsible for a function? The role and responsibility of each member
of the software team is defined.
Where are they located organizationally? Not all roles and responsibilities
reside within software practitioners. The customer, users, and other stakeholders
also have responsibilities.
How will the job be done technically and managerially? Once product scope
is established, a management and technical strategy for the project must be defined.
How much of each resource is needed? The answer to this question is derived
by developing estimates (Chapter 25) based on answers to earlier questions.
Boehm’s W5HH principle is applicable regardless of the size or complexity of a
software project. The questions noted provide you and your team with an excellent
planning outline.
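The W5HH questions lend themselves to a simple planning checklist. The sketch below is one possible encoding; the answer text is a hypothetical placeholder, not part of Boehm's formulation.

```python
# Boehm's W5HH questions as an ordered planning checklist.
W5HH_QUESTIONS = {
    "why": "Why is the system being developed?",
    "what": "What will be done?",
    "when": "When will it be done?",
    "who": "Who is responsible for a function?",
    "where": "Where are they located organizationally?",
    "how": "How will the job be done technically and managerially?",
    "how_much": "How much of each resource is needed?",
}

def plan_outline(answers: dict) -> list:
    """Pair each question with its answer, flagging gaps."""
    return [(question, answers.get(key, "UNANSWERED"))
            for key, question in W5HH_QUESTIONS.items()]

# A plan with only one question answered so far (hypothetical).
outline = plan_outline({"why": "Retain users of the fitness app"})
unanswered = sum(1 for _, answer in outline if answer == "UNANSWERED")
print(unanswered)  # 6
```

Because the principle scales down, the same seven-question outline can serve a small project with one-line answers or a large one with full sections.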
24.7 Critical Practices
The Airlie Council7 has developed a list of “critical software practices for
performance-based management.” These practices are “consistently used by, and
considered critical by, highly successful software projects and organizations whose
‘bottom line’ performance is consistently much better than industry averages” [Air99].
These practices are still applicable to modern performance-based management of all
software projects [All14].
Critical practices8 include: metric-based project management (Chapter 23), empirical cost and schedule estimation (Chapter 25), earned value tracking (Chapter 25), defect tracking against quality targets (Chapters 19 through 21), and people-oriented management (Chapter 24). Each of these critical practices is addressed throughout Part Four of this book.
24.8 Summary
Software project management is an umbrella activity within software engineering. It
begins before any technical activity is initiated and continues throughout the modeling,
construction, and deployment of computer software.
Four Ps have a substantial influence on software project management—people, product, process, and project. People must be organized into effective teams, motivated to do high-quality software work, and coordinated to achieve effective communication. Product requirements must be communicated from customer to developer, partitioned (decomposed) into their constituent parts, and positioned for work by the software team. The process must be adapted to the people and the problem. A common process framework is selected, an appropriate software engineering paradigm is applied, and a set of work tasks is chosen to get the job done. Finally, the project must be organized in a manner that enables the software team to succeed.
The pivotal element in all software projects is people. Software engineers can be
organized in a number of different team structures that range from traditional control
hierarchies to “open paradigm” teams. A variety of coordination and communication
techniques can be applied to support the work of the team. In general, technical
reviews and informal person-to-person communication have the most value for
practitioners.
The project management activity encompasses measurement and metrics, estimation and scheduling, risk analysis, tracking, and control. Each of these topics is considered in the chapters that follow.
7 The Airlie Council was comprised of a team of software engineering experts chartered by
the U.S. Department of Defense to help develop guidelines for best practices in software
project management and software engineering.
8 Only those critical practices associated with “project integrity” are noted here.
Problems and Points to Ponder
24.1. Based on information contained in this chapter and your own experience, develop
“10 commandments” for empowering software engineers. That is, make a list of 10 guidelines
that will lead to software people who work to their full potential.
24.2. The Software Engineering Institute’s People Capability Maturity Model (People-CMM)
takes an organized look at “key practice areas” (KPAs) that cultivate good software people.
Your instructor will assign you one KPA for analysis and summary.
24.3. Describe three real-life situations in which the customer and the end user are the same.
Describe three situations in which they are different.
24.4. The decisions made by senior management can have a significant impact on the effectiveness of a software engineering team. Provide five examples to illustrate that this is true.
24.5. You have been appointed a project manager within an information systems organization.
Your job is to build an application that is quite similar to others your team has built, although
this one is larger and more complex. Requirements have been thoroughly documented by the
customer. What team structure would you choose and why? What software process model(s)
would you choose and why?
24.6. You have been appointed a project manager for a small software products company. Your
job is to build a breakthrough product that combines virtual reality hardware with state-of-the-art software. Because competition for the home entertainment market is intense, there is
significant pressure to get the job done. What team structure would you choose and why? What
software process model(s) would you choose and why?
24.7. You have been appointed a project manager for a major software products company. Your
job is to manage the development of the next-generation version of its widely used mobile
fitness app. Because competition is intense, tight deadlines have been established and announced.
What team structure would you choose and why? What software process model(s) would you
choose and why?
24.8. You have been appointed a software project manager for a company that services the
genetic engineering world. Your job is to manage the development of a new software product
that will accelerate the pace of gene typing. The work is R&D oriented, but the goal is to
produce a product within the next year. What team structure would you choose and why? What
software process model(s) would you choose and why?
24.9. You have been asked to develop a small application that analyzes each course offered by
a university and reports the average grade obtained in the course (for a given term). Write a
statement of scope that bounds this problem.
24.10. What, in your opinion, is the most important aspect of people management for a software
project?
CHAPTER 25
Creating a Viable Software Plan
Quick Look
What is it? Software project planning encompasses five major activities—estimation, scheduling, risk analysis, quality management planning, and change management planning.
Who does it? Software project managers and other members of the software team.
Why is it important? You need to assess the tasks to perform and the time line for the work to be conducted. Many software engineering tasks must occur in parallel, and the result of work performed during one task may have a profound effect on work to be conducted in another task. These interdependencies are very difficult to understand without creating a schedule.
What are the steps? Software engineering activities and tasks are refined to accommodate the functions and constraints imposed by project scope. The problem is decomposed, and estimation, risk analysis, and scheduling occur.
What is the work product? An adaptable plan containing a simple table delineating the tasks to be performed, the functions to be implemented, and the cost, effort, and time involved for each is generated. A project schedule is also created based on this information.
How do I ensure that I've done it right? That's hard, because you won't really know until the project has been completed. However, if you use a systematic planning approach, you can feel confident that you've given it your best shot.
Key Concepts: agile development; critical path; effort; estimation (agile projects, decomposition techniques, empirical models, FP-based, problem-based, process-based, reconciliation, techniques, use case points); people and effort; principles; project planning; resources; software scope; software sizing; task network; time-boxing; time-line charts; tracking; work breakdown
Software project management begins with a set of activities that are collectively called project planning. Before the project can begin, the software team estimates the work to be done, the resources that will be required, and the time that will elapse from start to finish. Once these activities are accomplished, the software team should establish a project schedule that defines software engineering tasks and milestones, identifies who is responsible for conducting each task, and specifies the intertask dependencies that may have a strong bearing on progress.
There was once a bright-eyed young engineer who was chosen to develop some
code for an automated manufacturing application. The reason for his selection was
simple. He was the only person in his group who knew the ins and outs of the
manufacturing controller, but at the time he knew nothing about software engineering
and even less about project scheduling and tracking.
His boss informed the young engineer that the project had to be completed in
2 months. He considered his approach and began writing code. After 2 weeks, the
boss called him into his office and asked how things were going.
“Really great,” said the young engineer with youthful enthusiasm. “This is much
simpler than I thought. I’m probably close to 75 percent finished.”
The boss smiled and encouraged the young engineer to keep up the good work.
They planned to meet again in a week’s time.
A week later the boss called the engineer into his office and asked, “Where are we?”
“Everything’s going well,” said the youngster, “but I’ve run into a few small snags.
I’ll get them ironed out and be back on track soon.”
“How does the deadline look?” the boss asked.
“No problem,” said the engineer. “I’m close to 90 percent complete.”
If you’ve been working in the software world for more than a few years, you can
finish the story. It’ll come as no surprise that the young engineer1 stayed 90 percent
complete for the entire project duration and finished (with the help of others) only
1 month late.
This story has been repeated hundreds of thousands of times by software developers
during the past five decades. The big question is why.
25.1 Comments on Estimation
Planning requires you to make an initial commitment, even though it’s likely that this
“commitment” will be proven wrong. Whenever estimates are made, you have to look
into the future and accept some degree of uncertainty as a matter of course.
Estimating is as much art as it is science, and it should not be conducted in a haphazard manner. Because estimation lays a foundation for all other project planning actions, and project planning provides the road map for successful software engineering, we would be ill-advised to embark without it.
Estimation of resources, cost, and schedule for software development requires experience, access to good historical information (e.g., process and product metrics), and the courage to commit to quantitative predictions when qualitative information is all that exists. Estimation carries inherent risk,2 and this risk leads to uncertainty. Project complexity, project size, and the degree of structural uncertainty all affect the reliability of estimates.
Project complexity has a strong effect on the uncertainty inherent in planning.
Complexity, however, is a relative measure that is affected by familiarity with past
efforts. The first-time developer of a sophisticated e-commerce application might
1 In case you were wondering, this story is autobiographical (RSP).
2 Systematic techniques for risk analysis are presented in Chapter 26.
consider it to be exceedingly complex. However, a Web engineering team developing
its tenth e-commerce WebApp would consider such work run of the mill. A number
of quantitative software complexity measures have been proposed [Zus97], but they
are rarely used in real-world projects. However, other, more subjective assessments of
complexity (e.g., function point complexity adjustment factors described in
Section 25.6) can be established early in the planning process.
Project size is another important factor that can affect the accuracy and efficacy
of estimates. As size increases, the interdependency among various elements of the
software grows rapidly.3 Problem decomposition, an important approach to estimating,
becomes more difficult because the refinement of problem elements may still be
formidable. To paraphrase Murphy’s law: “What can go wrong will go wrong”—and
if there are more things that can fail, more things will fail.
The degree of structural uncertainty also has an effect on estimation risk. In this
context, structure refers to the degree to which requirements have been solidified, the
ease with which functions can be compartmentalized, and the hierarchical nature of
the information that must be processed.
The availability of historical information has a strong influence on estimation risk.
By looking back, you can emulate things that worked and improve areas where prob-
lems arose. When comprehensive software metrics (Chapter 23) are available for past
projects, estimates can be made with greater assurance, schedules can be established
to avoid past difficulties, and overall risk is reduced.
If project scope is poorly understood or project requirements are subject to change,
uncertainty and estimation risk become dangerously high. As a planner, you and the
customer should recognize that variability in software requirements means instability
in cost and schedule.
However, you should not become obsessive about estimation. Modern software
engineering approaches (e.g., evolutionary process models) take an iterative view of
development. In such approaches, it is possible to revisit estimates (as more informa-
tion is known) and revise them when stakeholders make changes to requirements or
schedules.
25.2 The Project Planning Process
The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule. In addition,
estimates should attempt to define best-case and worst-case scenarios so that project
outcomes can be bounded. Although there is an inherent degree of uncertainty, the
software team embarks on a plan that has been established as a consequence of a
project planning task set. Therefore, the plan must be adapted and updated as the
project proceeds. In the following sections, each of the activities associated with the
software project planning task set is discussed.
3 Size often increases due to “scope creep” that occurs when problem requirements change.
Increases in project size can have a geometric impact on project cost and schedule (Michael
Mah, personal communication).
CHAPTER 25 CREATING A VIABLE SOFTWARE PLAN 507
25.3 Software Scope and Feasibility
Software scope describes the functions and features that are to be delivered to end
users; the data that are input and output; the “content” that is presented to users as a
consequence of using the software; and the performance, constraints, interfaces, and
reliability that bound the system. Scope can be defined by developing a set of use
cases4 in conjunction with the end users.
Functions described in the use cases are evaluated and in some cases refined to
provide more detail prior to the beginning of estimation. Because both cost and sched-
ule estimates are functionally oriented, some degree of decomposition is often useful.
Performance considerations often constrain processing and response-time requirements.
Once scope has been identified (with the concurrence of the customer), it is rea-
sonable to ask: “Can we build software to meet this scope? Is the project feasible?”
All too often, software engineers rush past these questions (or are pushed past them
by impatient managers or other stakeholders), only to become mired in a project that
is doomed from the onset. You must try to determine if the system can be created
using available technology, dollars, time, and other resources. Project feasibility is
important, but a consideration of business need is even more important. It does no
good to build a high-tech system or product that no one wants.
25.4 Resources
Once scope is defined, you must estimate the resources required to build software that
will implement the set of use cases that describe software features and functions.
Figure 25.1 depicts the three major categories of software engineering resources—people,
reusable software components, and the development environment (hardware and software
tools). Each resource is specified with four characteristics: description of the resource, a
statement of availability, time when the resource will be required, and duration of time
that the resource will be applied. The last two characteristics can be viewed as a time
window. Availability of the resource for a specified window must be established at the
earliest practical time.

Task Set for Project Planning
1. Establish project scope.
2. Determine feasibility.
3. Analyze risks (Chapter 26).
4. Define required resources.
   a. Determine required human resources.
   b. Define reusable software resources.
   c. Identify environmental resources.
5. Estimate cost and effort.
   a. Decompose the problem.
   b. Develop two or more estimates using size, function points, process tasks, or use cases.
   c. Reconcile the estimates.
6. Develop an initial project schedule (Section 25.11).
   a. Establish a meaningful task set.
   b. Define a task network.
   c. Use scheduling tools to develop a time-line chart.
   d. Define schedule tracking mechanisms.
7. Repeat steps 1 to 6 to create a detailed schedule for each prototype as the scope of each prototype is defined.

4 Use cases have been discussed in detail throughout Part Two of this book. A use case is a scenario-based description of the user's interaction with the software from the user's point of view.
25.4.1 Human Resources
The planner begins by evaluating software scope and selecting the skills required
to complete development. Both organizational position (e.g., manager, senior soft-
ware engineer) and specialty (e.g., telecommunications, database, e-commerce) are
specified. For relatively small projects (a few person-months), a single individual
may perform all software engineering tasks, consulting with specialists as required.
For larger projects, the software team may be geographically dispersed across a
number of different locations. Hence, the location of each human resource is
specified.
The number of people required for a software project can be determined only after
an estimate of development effort (e.g., person-months) is made. Techniques for esti-
mating effort are discussed later in this chapter.
Figure 25.1 Project resources
∙ People: skills, number, location
∙ Reusable software: COTS components, full-experience components, past-experience components, new components
∙ Environment: hardware, software tools, network resources
25.4.2 Reusable Software Resources
Component-based software engineering (CBSE)5 emphasizes reusability—that is, the
creation and reuse of software building blocks. Such building blocks, often called
components, must be cataloged for easy reference, standardized for easy application,
and validated for easy integration.
Ironically, reusable software components are often neglected during planning, only
to become a paramount concern during the development phase of the software process.
It is better to specify software resource requirements early. In this way technical
evaluation of the alternatives can be conducted and timely acquisition can occur. It is
also important to consider whether it would be less costly to buy an existing software
product (assuming it satisfies all stakeholder needs) than to build a custom software
product from scratch.
25.4.3 Environmental Resources
The environment that supports a software project, often called the software engineer-
ing environment (SEE), incorporates hardware and software. Hardware provides a
platform that supports the tools (software) required to produce the work products that
are an outcome of good software engineering practice.6 Because most software orga-
nizations have multiple constituencies that require access to the SEE, you must pre-
scribe the time window required for hardware and software and verify that these
resources will be available.
When a computer-based system (incorporating specialized hardware and software)
is to be engineered, the software team may require access to hardware elements being
developed by other engineering teams. For example, software for a robotic device used
within a manufacturing cell may require a specific robot (e.g., a robotic welder) as
part of the validation test step; a software project for advanced page layout may need
a high-speed digital printing system at some point during development. Each hardware
element must be specified as part of planning.
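The four characteristics used to specify each resource (description, availability, time when it is required, and duration of use) lend themselves to a simple record structure. The sketch below is purely illustrative; the class and field names are not part of the chapter's notation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Resource:
    """One project resource: a person, reusable component, or SEE element."""
    description: str    # description of the resource
    available: bool     # statement of availability
    needed_from: date   # time when the resource will be required
    duration_days: int  # duration of time the resource will be applied

    def window_end(self) -> date:
        """End of the time window over which the resource is applied."""
        return self.needed_from + timedelta(days=self.duration_days)

# Example: a robotic welder needed for the validation test step.
welder = Resource("Robotic welder (validation test)", True, date(2025, 9, 1), 30)
```

Recording the time window explicitly makes it possible to verify, early in planning, that each resource will actually be available when it is needed.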
25.5 Data Analytics and Software Project Estimation
Software cost and effort estimation will never be an exact science. Too many
variables—human, technical, environmental, political—can affect the ultimate cost of
software and effort applied to develop it. However, software project estimation can be
transformed from a black art to a series of systematic steps that provide estimates
with acceptable risk. To achieve reliable cost and effort estimates, a number of options
arise:
1. Delay estimation until late in the project (obviously, we can achieve 100 percent
accurate estimates after the project is complete!).
5 CBSE was considered briefly in Chapter 11.
6 Other hardware—the target environment—is the computer on which the software will
execute when it has been released to the end user.
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and
effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
Unfortunately, the first option, however attractive, is not practical. Cost estimates must
be provided up front. However, you should recognize that the longer you wait, the
more you know, and the more you know, the less likely you are to make serious errors
in your estimates.
The second option can work reasonably well, if the current project is quite
similar to past efforts and other project influences (e.g., the customer, business
conditions, the software engineering environment, deadlines) are roughly equiva-
lent. Unfortunately, past experience has not always been a good indicator of future
results.
The remaining options are viable approaches to software project estimation. Ideally,
the techniques noted for each option should be applied in tandem; each used as a
cross-check for the other. Decomposition techniques take a divide-and-conquer
approach to software project estimation. By decomposing a project into major func-
tions and related software engineering activities, cost and effort estimation can be
performed in a stepwise fashion.
An empirical estimation model for computer software uses formulas derived from
existing project data to predict effort as a function of things like LOC or FP.7 Values
for LOC or FP are estimated using the approach described in Sections 25.6.3 and
25.6.4. But instead of using the tables described in those sections, the resultant values
for LOC or FP are plugged into the estimation model [Whi15].
A typical empirical estimation model is derived using regression analysis on data
collected from past software projects. The overall structure of such models takes the
form [Mat94]
E = A + B × (ev)^C    (25.1)
where A, B, and C are empirically derived constants, E is effort in person-months,
and ev is the estimation variable (either LOC or FP). In addition to the relationship
noted in Equation (25.1), the majority of estimation models have some form of proj-
ect adjustment component that enables E to be adjusted by other project characteris-
tics (e.g., problem complexity, staff experience, development environment).
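In code, Equation (25.1) is a one-line function. The constants below are illustrative only (they follow the classic COCOMO-81 "organic mode" form E = 2.4 × KLOC^1.05, i.e., A = 0, B = 2.4, C = 1.05); in practice the constants must come from regression on your own historical data.

```python
def empirical_effort(ev: float, a: float, b: float, c: float) -> float:
    """Equation (25.1): E = A + B * (ev ** C), effort in person-months.

    ev is the estimation variable (LOC or FP); A, B, and C are
    empirically derived constants.
    """
    return a + b * ev ** c

# Illustrative only: COCOMO-81 "organic mode" constants, applied to the
# 33.2 KLOC size estimated for the CAD example later in this chapter.
effort = empirical_effort(33.2, a=0.0, b=2.4, c=1.05)   # ~95 person-months
```

Note that this uncalibrated result (about 95 person-months) is far from the roughly 54 person-months the chapter later derives by decomposition, which is precisely why any model must be calibrated to local conditions before use.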
Empirical estimation models can be used to complement decomposition techniques
and offer a potentially valuable estimation approach in their own right. A model is
based on experience (historical data) and takes the form
d = f(v_i)
where d is one of a number of estimated values (e.g., effort, cost, project duration)
and v_i are selected independent parameters (e.g., estimated lines of code). The empirical
data that support most software estimation models are derived from a limited
7 An empirical model using use cases as the independent variable is suggested in Section
25.6.6. However, relatively few have appeared in the literature to date.
sample of projects.8 For this reason, no estimation model is appropriate for all classes
of software and in all development environments.
Ideally, any estimation model should be calibrated to reflect local conditions. The
model should be tested by applying data collected from completed projects, plugging
the data into the model, and then comparing actual to predicted results. If agreement
is poor, the model must be tuned and retested before it can be used.
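Calibration can be as simple as a least-squares fit. The sketch below assumes A = 0, so Equation (25.1) becomes log-linear (log E = log B + C log ev), and fits B and C to a handful of made-up historical projects; real calibration would use your own completed-project data.

```python
import math

# Hypothetical historical data: (size in KLOC, actual effort in person-months).
history = [(12.0, 30.0), (20.0, 55.0), (33.0, 95.0), (50.0, 155.0)]

# With A = 0, E = B * ev**C is linear in log space; fit by least squares.
xs = [math.log(ev) for ev, _ in history]
ys = [math.log(e) for _, e in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n
C = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
B = math.exp(my - C * mx)

def predict(ev_kloc: float) -> float:
    """Predicted effort (person-months) from the calibrated model."""
    return B * ev_kloc ** C

# Test the model: compare actual to predicted for each completed project.
relative_errors = [abs(predict(ev) - e) / e for ev, e in history]
```

If the relative errors are large, the model must be tuned (different constants, an added adjustment factor) and retested before it is applied to a new project.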
Each of the software estimation methods is only as good as the historical data used
to seed the estimate. If no historical data exist, the estimates rest on a very shaky
foundation. Therefore, you should use the results obtained from such models judi-
ciously. In Chapter 23, we examined the characteristics of some of the software met-
rics or data analytics that provide the basis for historical estimation data. Software
data analytics concepts are discussed briefly in Appendix 2 of this book.
25.6 Decomposition and Estimation Techniques
Software project estimation is a form of problem solving, and in most cases, the problem
to be solved (i.e., developing a cost and effort estimate for a software project) is too
complex to be considered in one piece. For this reason, you should decompose the prob-
lem, recharacterizing it as a set of smaller (and hopefully, more manageable) problems.
In Chapter 24, the decomposition approach was discussed from two different points
of view: decomposition of the problem and decomposition of the process. Estimation
uses one or both forms of partitioning. But before an estimate can be made, you must
understand the scope of the software to be built and generate an estimate of its “size.”
25.6.1 Software Sizing
The accuracy of a software project estimate is predicated on a number of things: (1) the
degree to which you have properly estimated the size of the product to be built, (2) the
ability to translate the size estimate into human effort, calendar time, and dollars (a func-
tion of the availability of reliable software metrics from past projects), (3) the degree to
which the project plan reflects the abilities of the software team, and (4) the stability of
product requirements and the environment that supports the software engineering effort.
Because a project estimate is only as good as the estimate of the size of the work
to be accomplished, software sizing represents your first major challenge as a planner.
In the context of project planning, size refers to a quantifiable outcome of the software
project. If a direct approach is taken, size can be measured in lines of code (LOC).
If an indirect approach is chosen, size is represented as function points (FP). Size can
be estimated by considering the type of project and its application domain, the func-
tionality delivered (i.e., the number of function points), the number of components
(or use cases) to be delivered, and the degree to which a set of existing components
must be modified for the new system.
8 As an example, the COCOMO (Constructive Cost Model) was originally developed in 1981,
with updated versions, COCOMO II and COCOMO III, released in later years. A presentation
on the genesis of the COCOMO model can be downloaded from: http://www.psmsc.com/
UG2016/Presentations/p10-Clark-COCOMO%20III%20Presentation%20v1 .
25.6.2 Problem-Based Estimation
In Chapter 23, lines of code and function points were described as measures from
which productivity metrics can be computed. LOC and FP data are used in two ways
during software project estimation: (1) as estimation variables to “size” each element
of the software and (2) as baseline metrics collected from past projects and used in
conjunction with estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques. Yet both have a number
of characteristics in common. You begin with a bounded statement of software scope
and from this statement attempt to decompose the statement of scope into problem
functions that can each be estimated individually. LOC or FP (the estimation variable)
is then estimated for each function. Alternatively, you may choose another component
for sizing, such as classes or objects, changes, or business processes affected.
Baseline productivity metrics (e.g., LOC/pm or FP/pm)9 are then applied to the
appropriate estimation variable, and cost or effort for the function is derived. Function
estimates are combined to produce an overall estimate for the entire project. When
collecting productivity metrics for projects, be sure to establish a taxonomy of project
types. This will enable you to compute domain-specific averages, making estimation
more accurate. Many modern applications reside on a network or are part of a client-
server architecture. Therefore, be sure that your estimates include the effort required
to develop “infrastructure” software.
25.6.3 An Example of LOC-Based Estimation
As an example of an LOC estimation technique, we consider a software package to
be developed for a computer-aided design application for mechanical components. The
software is to execute on a notebook computer. A preliminary statement of software
scope can be developed:
The mechanical CAD software will accept two- and three-dimensional geometric data
from a designer. The designer will interact and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design. All
geometric data and other supporting information will be maintained in a CAD database.
Design analysis modules will be developed to produce the required output, which will
be displayed on a variety of devices. The software will be designed to control and inter-
act with peripheral devices that include a touchpad, scanner, laser printer, and large-bed
digital plotter.
This statement of scope is preliminary—it is not bounded. Every sentence would have
to be expanded to provide concrete detail and quantitative bounding. For example,
before estimation can begin, the planner must determine what “characteristics of good
human/machine interface design” means or what the size and sophistication of the
“CAD database” are to be.
For our purposes, assume that further refinement has occurred and that the major
software functions listed in Figure 25.2 are identified. Following the decomposition
technique for LOC, an estimation table (Figure 25.2) is developed. A range of LOC
estimates is developed for each function. For example, the range of LOC estimates for
the 3D geometric analysis function is optimistic, 4600 LOC; most likely, 6900 LOC;
9 The acronym pm means person-month of effort.
and pessimistic, 8600 LOC. Applying the three-point (expected value) estimate of
Equation (25.3), the expected value for the 3D geometric analysis function is 6800 LOC.
Other estimates are derived in a similar
fashion. By summing vertically in the estimated LOC column, an estimate of 33200
lines of code is established for the CAD system.
A review of historical data indicates that the organizational average productivity
for systems of this type is 620 LOC/pm. Based on a burdened labor rate of $8,000 per
month, the cost per line of code is approximately $13. Based on the LOC estimate
and the historical productivity data, the total estimated project cost is $431,000 and
the estimated effort is 54 person-months.10 Do not succumb to the temptation to use
this result as your project estimate. You should derive another result using a different
approach.
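The arithmetic behind this example is easy to script. The sketch below reproduces the chapter's numbers; the three-point (expected value) formula anticipates Equation (25.3), and the helper names are invented for illustration.

```python
def expected_value(opt: float, likely: float, pess: float) -> float:
    """Three-point (beta) estimate: (opt + 4*likely + pess) / 6."""
    return (opt + 4 * likely + pess) / 6

# Range quoted in the text for the 3D geometric analysis function:
loc_3dga = expected_value(4600, 6900, 8600)        # 6800.0

# Expected LOC per function (Figure 25.2):
estimated_loc = {"UICF": 2300, "2DGA": 5300, "3DGA": 6800, "DBM": 3350,
                 "CGDF": 4950, "PCF": 2100, "DAM": 8400}
total_loc = sum(estimated_loc.values())            # 33200

productivity = 620      # LOC per person-month (organizational average)
labor_rate = 8000       # burdened cost in $ per person-month

effort_pm = total_loc / productivity               # ~53.5, reported as 54 pm
cost = effort_pm * labor_rate                      # ~$428,000; the chapter reports
                                                   # ~$431,000 after rounding $/LOC to $13
```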
Figure 25.2 Estimation table for the LOC method

Function                                        Estimated LOC
User interface and control facilities (UICF)             2300
Two-dimensional geometric analysis (2DGA)                5300
Three-dimensional geometric analysis (3DGA)              6800
Database management (DBM)                                3350
Computer graphics display facilities (CGDF)              4950
Peripheral control function (PCF)                        2100
Design analysis modules (DAM)                            8400
Estimated lines of code                                 33200
10 Estimates are rounded to the nearest $1,000 and person-month. Further precision is unnec-
essary and unrealistic, given the limitations of estimation accuracy.
SafeHome
Estimating

The scene: Doug Miller's office as project planning begins.

The players: Doug Miller, manager of the SafeHome software engineering team, and Vinod Raman, Jamie Lazar, and other members of the product software engineering team.

The conversation:

Doug: We need to develop an effort estimate for the project, and then we've got to define a micro schedule for the first increment and a macro schedule for the remaining increments.

Vinod (nodding): Okay, but we haven't defined any increments yet.

Doug: True, but that's why we need to estimate.

Jamie (frowning): You want to know how long it's going to take us?

Doug: Here's what I need. First, we need to functionally decompose the SafeHome software . . . at a high level . . . then we've got to estimate the number of lines of code that each function will take . . . then . . .

Jamie: Whoa! How are we supposed to do that?

Vinod: I've done it on past projects. You begin with use cases, determine the functionality required to implement each, then guesstimate the LOC count for each piece of the function. The best approach is to have everyone do it independently and then compare results.

Doug: Or you can do a functional decomposition for the entire project.

Jamie: But that'll take forever and we've got to get started.

Vinod: No . . . it can be done in a few hours . . . this morning, in fact.

Doug: I agree . . . we can't expect exactitude, just a ballpark idea of what the size of SafeHome will be.

Jamie: I think we should just estimate effort . . . that's all.

Doug: We'll do that too. Then use both estimates as a cross-check.

Vinod: Let's go do it . . .

25.6.4 An Example of FP-Based Estimation
Decomposition for FP-based estimation focuses on information domain values rather
than software functions. Referring to Table 25.1, you would estimate inputs, outputs,
inquiries, files, and external interfaces for the CAD software. To compute the count
total needed in the FP equation:

FP_estimated = count total × [0.65 + 0.01 × Σ(Fi)]

For the purposes of this estimate, the complexity weighting factor is assumed to
be average. Table 25.1 presents the results of this estimate, and the FP count total
is 320.

To compute a value for Σ(Fi), each of the 14 complexity weighting factors listed
in Table 25.2 is scored with a value between 0 (not important) and 5 (very important).
The sum of these ratings for the complexity factors, Σ(Fi), is 52, so the value of
the adjustment factor is 1.17:

[0.65 + 0.01 × Σ(Fi)] = 0.65 + 0.52 = 1.17

Finally, the estimated number of FP is derived:

FP_estimated = count total × [0.65 + 0.01 × Σ(Fi)] = 320 × 1.17 ≈ 375
Table 25.1 Estimating information domain values

Information domain value             Opt.   Likely   Pess.   Est. count   Weight   FP count
Number of external inputs              20       24      30           24        4         96
Number of external outputs             12       14      22           14        5         70
Number of external inquiries           16       20      28           20        5        100
Number of internal logical files        4        4       5            4       10         40
Number of external interface files      2        2       3            2        7         14
Count total                                                                             320
The organizational average productivity for systems of this type is 6.5 FP/pm.
Based on a burdened labor rate of $8,000 per month, the cost per FP is approximately
$1,230. Based on the FP estimate and the historical productivity data, the total esti-
mated project cost is $461,000 and the estimated effort is 58 person-months.
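The same FP arithmetic, scripted (counts and weights from Table 25.1, Σ(Fi) from Table 25.2; the variable names are mine):

```python
# (information domain value, estimated count, weight) from Table 25.1.
domain_values = [("external inputs", 24, 4),
                 ("external outputs", 14, 5),
                 ("external inquiries", 20, 5),
                 ("internal logical files", 4, 10),
                 ("external interface files", 2, 7)]
count_total = sum(count * weight for _, count, weight in domain_values)   # 320

sum_fi = 52                                # sum of the 14 ratings in Table 25.2
adjustment = 0.65 + 0.01 * sum_fi          # 1.17
fp_estimated = count_total * adjustment    # 374.4, rounded to 375 in the text

productivity = 6.5     # FP per person-month (organizational average)
labor_rate = 8000      # $ per person-month
effort_pm = 375 / productivity             # ~58 person-months
cost = effort_pm * labor_rate              # ~$461,000
```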
25.6.5 An Example of Process-Based Estimation
The most common technique for estimating a project is to base the estimate on the
process that will be used. That is, the process is decomposed into a relatively small
set of activities, actions, and tasks and the effort required to accomplish each is
estimated.
Like the problem-based techniques, process-based estimation begins with a delinea-
tion of software functions obtained from the project scope. A series of framework
activities must be performed for each function. Functions and related framework
activities11 may be represented as part of a table similar to the one presented in
Figure 25.3.
Once problem functions and process activities are melded, you estimate the effort
(e.g., person-months) that will be required to accomplish each software process activ-
ity for each software function. These data constitute the central matrix of the table in
Figure 25.3. Average labor rates (i.e., cost/unit effort) are then applied to the effort
estimated for each process activity.
Table 25.2 Complexity weighting factors

Complexity factor                           Value
Backup and recovery                             4
Data communications                             2
Distributed processing                          0
Performance critical                            4
Existing operating environment                  3
Online data entry                               4
Input transaction over multiple screens         5
Master files updated online                     3
Information domain values complex               5
Internal processing complex                     5
Code designed for reuse                         4
Conversion/installation in design               3
Multiple installations                          5
Application designed for change                 5
Sum Σ(Fi)                                      52
11 The framework activities chosen for this project differ somewhat from the generic activities
discussed in Chapter 2. They are customer communication (CC), planning, risk analysis,
engineering, and construction/release.
To illustrate the use of process-based estimation, we again consider the CAD soft-
ware introduced in Section 25.6.3. The system configuration and all software functions
remain unchanged and are indicated by project scope.
Referring to the completed process-based table shown in Figure 25.3, estimates of
effort (in person-months) for each software engineering activity are provided for each
CAD software function (abbreviated for brevity). The engineering and construction
release activities are subdivided into the major software engineering tasks shown.
Gross estimates of effort are provided for customer communication, planning, and risk
analysis. These are noted in the total row at the bottom of the table. Horizontal and
vertical totals provide an indication of estimated effort required for analysis, design,
code, and test. It should be noted that approximately 53 percent of all effort is
expended on front-end engineering tasks (requirements analysis and design), indicat-
ing the relative importance of this work.
Based on an average burdened labor rate of $8,000 per month, the total estimated
project cost is $368,000 and the estimated effort is 46 person-months. If desired, labor
rates could be associated with each framework activity or software engineering task
and computed separately.
Figure 25.3 Process-based estimation table (effort in person-months)

Function    Analysis   Design   Code    Test    CE     Totals
UICF            0.50     2.00   0.50    2.00    n/a      5.00
2DGA            0.25     2.00   0.50    1.50    n/a      4.25
3DGA            0.50     3.00   0.75    1.50    n/a      5.75
CGDF            0.50     3.00   1.00    1.50    n/a      6.00
DBM             0.50     4.00   1.00    3.00    n/a      8.50
PCF             0.75     4.00   0.60    2.00    n/a      7.35
DAM             0.50     2.50   0.40    5.00    n/a      8.40
Totals          3.50    20.50   4.50   16.50    n/a     46.00
% effort          8%      45%    10%     36%

Customer communication (CC), planning, and risk analysis are estimated grossly at 0.25 person-month each (about 1 percent of effort apiece) and are included in the 46.00 person-month total.
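The totals in Figure 25.3 can be checked mechanically. The layout of the printed figure makes the row assignment somewhat ambiguous, so the per-function rows below are matched to the printed function totals; treat them as a plausible reading rather than a definitive transcription.

```python
# Effort (person-months) per function: (analysis, design, code, test).
effort_matrix = {"UICF": (0.50, 2.00, 0.50, 2.00),
                 "2DGA": (0.25, 2.00, 0.50, 1.50),
                 "3DGA": (0.50, 3.00, 0.75, 1.50),
                 "CGDF": (0.50, 3.00, 1.00, 1.50),
                 "DBM":  (0.50, 4.00, 1.00, 3.00),
                 "PCF":  (0.75, 4.00, 0.60, 2.00),
                 "DAM":  (0.50, 2.50, 0.40, 5.00)}
overhead = 3 * 0.25     # customer communication, planning, risk analysis

total_effort = sum(sum(row) for row in effort_matrix.values()) + overhead  # 46.0
total_cost = total_effort * 8000                                           # $368,000

# Share of effort on front-end engineering (analysis + design):
front_end = sum(row[0] + row[1] for row in effort_matrix.values())
front_end_pct = front_end / total_effort    # ~0.52, "approximately 53%" in the text
```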
25.6.6 An Example of Estimation Using Use Case Points
As we have noted throughout Part Two of this book, use cases provide a software
team with insight into software scope and requirements. Once use cases have been
developed, they can be used to estimate the projected “size” of a software project.
Use cases do not address the complexity of the functions and features they describe,
and they can describe complex behavior (e.g., interactions) that involves many
functions and features. Even with these constraints, it is possible to compute
use case points (UCPs) in a manner that is analogous to the computation of function
points (Section 25.6.4).
Cohn [Coh05] indicates that the computation of use case points must take the
following characteristics into account:
∙ The number and complexity of the use cases in the system.
∙ The number and complexity of the actors on the system.
∙ Various nonfunctional requirements (such as portability, performance,
maintainability) that are not written as use cases.
∙ The environment in which the project will be developed (e.g., the
programming language, the software team’s motivation).
To begin, each use case is assessed to determine its relative complexity. A simple use
case indicates a simple user interface, a single database, and three or fewer transac-
tions and five or fewer class implementations. An average use case indicates a more
complex UI, two or three databases, and four to seven transactions with 5 to 10 classes.
Finally, a complex use case implies a complex UI with multiple databases, using eight
or more transactions and 11 or more classes. Each use case is assessed using these
criteria and the count of each type is weighted by a factor of 5, 10, and 15, respec-
tively. A total unadjusted use case weight (UUCW) is the sum of all weighted counts
[Nun11].
Next, each actor is assessed. Simple actors are automatons (another system, a
machine or device) that communicate through an API. Average actors are automatons
that communicate through a protocol or a data store, and complex actors are humans
who communicate through a GUI or other human interface. Each actor is assessed
using these criteria, and the count of each type is weighted by a factor of 1, 2, and 3,
respectively. The total unadjusted actor weight (UAW) is the sum of all weighted
counts.
These unadjusted values are modified by considering technical complexity factors
(TCFs) and environment complexity factors (ECFs). Thirteen factors contribute to an
assessment of the final TCF, and eight factors contribute to the computation of the
final ECF [Coh05]. Once these values have been determined, the final UCP value is
computed in the following manner:
UCP = (UUCW + UAW) × TCF × ECF (25.2)
The CAD software introduced in Section 25.6.3 is composed of three subsystem
groups: user interface subsystem (includes UICF), engineering subsystem group
(includes the 2DGA, 3DGA, and DAM subsystems), and infrastructure subsystem
group (includes CGDF and PCF subsystems). Sixteen complex use cases describe the
user interface subsystem. The engineering subsystem group is described by 14 average
use cases and 8 simple use cases. And the infrastructure subsystem is described with
10 simple use cases. Therefore,
UUCW = (16 use cases × 15) + [(14 use cases × 10)
+ (8 use cases × 5)] + (10 use cases × 5) = 470
Analysis of the use cases indicates that there are 8 simple actors, 12 average actors,
and 4 complex actors. Therefore,
UAW = (8 actors × 1) + (12 actors × 2) + (4 actors × 3) = 44
After evaluation of the technology and the environment,
TCF = 1.04
ECF = 0.96
Using Equation (25.2),
UCP = (470 + 44) × 1.04 × 0.96 = 513
Using past project data as a guide, the development group has produced 85 LOC per
UCP. Therefore, an estimate of the overall size of the CAD project is 43600 LOC.
Similar computations can be made for applied effort or project duration.
Using 620 LOC/pm as the average productivity for systems of this type and a
burdened labor rate of $8,000 per month, the cost per line of code is approximately $13.
Based on the use case estimate and the historical productivity data, the total estimated
project cost is $552,000 and the estimated effort is about 70 person-months.
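A sketch of the UCP computation for the CAD example, using the weights given in the text (5/10/15 for use cases, 1/2/3 for actors); the variable names are mine.

```python
# (count, weight) pairs. Use case weights: simple 5, average 10, complex 15.
use_cases = {"simple": (8 + 10, 5),   # engineering (8) + infrastructure (10)
             "average": (14, 10),
             "complex": (16, 15)}     # user interface subsystem
uucw = sum(count * w for count, w in use_cases.values())   # 470

# Actor weights: simple 1, average 2, complex 3.
actors = {"simple": (8, 1), "average": (12, 2), "complex": (4, 3)}
uaw = sum(count * w for count, w in actors.values())       # 44

tcf, ecf = 1.04, 0.96          # technical / environment complexity factors
ucp = (uucw + uaw) * tcf * ecf                 # Equation (25.2): ~513

size_loc = ucp * 85            # 85 LOC per UCP (historical); ~43,600 LOC
effort_pm = size_loc / 620     # 620 LOC/pm, giving roughly 70 person-months
```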
25.6.7 Reconciling Estimates
Any estimation technique, no matter how sophisticated, must be checked by comput-
ing at least one other estimate using a different approach. If you have created two or
three estimates independently, you now have two or three estimates for cost and effort
that need to be compared and reconciled. If both sets of estimates show reasonable
agreement, there is good reason to believe that the estimates are reliable. If, on the
other hand, the results of these decomposition techniques show little agreement, fur-
ther investigation and analysis must be conducted.
When your estimates are far apart, you need to reevaluate the information used to
make the estimates. Widely divergent estimates can often be traced to one of two
causes: (1) the scope of the project is not adequately understood or has been misin-
terpreted by the planner, or (2) productivity data used for problem-based estimation
techniques is inappropriate for the application or has been misapplied. You should
determine the cause of divergence and then recompute these estimates.
The estimation techniques discussed in the preceding sections resulted in multiple
estimates that should be reconciled to produce a single estimate of effort, project
duration, or cost. The total estimated effort for the CAD software (Section 25.6.3)
ranges from a low of 46 person-months (derived using a process-based estimation
approach) to a high of 68 person-months (derived with use case estimation). The
simple average of all four estimates is 56 person-months. But is this the best approach
when the high and low estimates are 22 person-months apart?
CHAPTER 25 CREATING A VIABLE SOFTWARE PLAN 519
One approach may be to compute a weighted average, based on calling a high
estimate a pessimistic estimate, a low estimate an optimistic estimate, and an in-
between value a most likely value. A three-point or expected value can then be com-
puted. The expected value for the estimation variable (size) S can be computed as a
weighted average of the optimistic (sopt), most likely (sm), and pessimistic (spess) esti-
mates. For example,
S = (sopt + 4sm + spess) / 6 (25.3)
gives heaviest credence to the “most likely” estimate and follows a beta probability
distribution. We assume that there is a very small probability the actual size result
will fall outside the optimistic or pessimistic values.
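A minimal sketch of the three-point computation in Equation (25.3), using the CAD section's low and high effort estimates (46 and 68 person-months) as the optimistic and pessimistic bounds and treating the 56 person-month average as the most likely value:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Expected value per Equation (25.3): a beta-weighted average
    that gives the most-likely estimate four times the weight of
    the optimistic and pessimistic values."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# CAD example: 46 and 68 person-months as the optimistic and
# pessimistic bounds, 56 as the most likely value.
expected = three_point_estimate(46, 56, 68)
print(round(expected, 1))  # 56.3 person-months
```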
Once the expected value for the estimation variable has been determined, historical
productivity data should be examined. Do our estimates seem correct? The only rea-
sonable answer to this question is, we can’t be sure. Even then, common sense and
experience must prevail.
25.6.8 Estimation for Agile Development
Because the requirements for an agile project (Chapter 3) are defined by a set of user
stories, it is possible to develop an estimation approach that is informal, reasonably
disciplined, and meaningful within the context of project planning for each software
increment. Estimation for agile projects uses a decomposition approach that encom-
passes the following steps:
1. Each user story (the equivalent of a mini use case created at the very start
of a project by end users or other stakeholders) is considered separately for
estimation purposes.
2. The user story is decomposed into the set of software engineering tasks that
will be required to develop it.
3a. Each task is estimated separately. Note: Estimation can be based on historical
data, an empirical model, or “experience” (e.g., using a technique like
planning poker, Section 7.2.3).
3b. Alternatively, the “volume” of the user story can be estimated in LOC, FP, or
some other volume-oriented measure (e.g., use case count).
4a. Estimates for each task are summed to create an estimate for the user story.
4b. Alternatively, the volume estimate for the user story is translated into effort
using historical data.
5. The effort estimates for all user stories that are to be implemented for a
given software increment are summed to develop the effort estimate for the
increment.
Because the project duration required for the development of a software increment is
quite short (typically 3 to 6 weeks), this estimation approach serves two purposes:
(1) to be certain that the number of scenarios to be included in the increment conforms
to the available resources, and (2) to establish a basis for allocating effort as the incre-
ment is developed.
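The steps above can be sketched as follows; the stories, tasks, and person-day figures are purely illustrative:

```python
# Agile estimation sketch: each user story is decomposed into
# software engineering tasks, each task is estimated (here, in
# person-days based on "experience"), and the story and increment
# totals are simple sums (steps 2, 3a, 4a, and 5 above).

stories = {
    "display sensor status": {"design UI": 1.5, "code UI": 2.0, "unit test": 1.0},
    "arm/disarm system":     {"design logic": 1.0, "code logic": 2.5, "unit test": 1.5},
}

def story_effort(tasks):
    """Step 4a: sum the task estimates to get the story estimate."""
    return sum(tasks.values())

# Step 5: sum the story estimates for the increment.
increment_effort = sum(story_effort(t) for t in stories.values())
print(increment_effort)  # 9.5 person-days for this increment
```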
25.7 Project Scheduling
Software project scheduling is an activity that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering tasks.
It is important to note, however, that the schedule evolves over time. During early
stages of project planning, a macroscopic schedule is developed. This type of sched-
ule identifies all major process framework activities and the product functions to
which they are applied. As the project gets under way, each entry on the macroscopic
schedule is refined into a detailed schedule. Here, specific software actions and tasks
(required to accomplish an activity) are identified and scheduled.
Although there are many reasons why software is delivered late, most can be traced
to one or more of the following root causes:
∙ An unrealistic deadline established by someone outside the software team and
forced on the managers and practitioners within the group.
∙ Changing customer requirements that are not reflected in schedule changes.
∙ An honest underestimate of the amount of effort and/or the number of
resources that will be required to do the job.
∙ Predictable and/or unpredictable risks that were not considered when the
project commenced.
∙ Technical difficulties that could not have been foreseen in advance.
∙ Human difficulties that could not have been foreseen in advance.
∙ Miscommunication among project staff that results in delays.
∙ A failure by project management to recognize that the project is falling
behind schedule and a lack of action to correct the problem.
Aggressive (read “unrealistic”) deadlines are an unpleasant fact in the software busi-
ness. Sometimes such deadlines are demanded for reasons that are legitimate, from
the point of view of the person who sets the deadline. But common sense says that
legitimacy must also be perceived by the people doing the work.
The estimation methods discussed in this chapter and the scheduling techniques
described in this section are often implemented under the constraint of a defined
deadline. If best estimates indicate that the deadline is unrealistic, a competent project
manager should inform management and all stakeholders of her findings and suggest
alternatives to mitigate the damage of missing the deadline.
The reality of a technical project (whether it involves building a virtual world for
a video game or developing an operating system) is that hundreds of small tasks must
occur to accomplish a larger goal. Some of these tasks lie outside the mainstream and
may be completed without worry about the impact on the project completion date.
Other tasks lie on the critical path. If these “critical” tasks fall behind schedule, the
completion date of the entire project is put into jeopardy.
As a project manager, your objective is to define all project tasks, build a network
that depicts their interdependencies, identify the tasks that are critical within the
network, and then track their progress to ensure that delay is recognized “one day at
a time.” To accomplish this, you must have a schedule that has been defined at a
degree of resolution that allows progress to be monitored and the project to be
controlled. The work required to build a schedule and track progress should not be
performed manually. There are many excellent scheduling tools, and a good manager
uses them.
25.7.1 Basic Principles
Scheduling for software engineering projects can be viewed from two rather different
perspectives. In the first, an end date for release of a computer-based system has
already (and irrevocably) been established. The software organization is constrained
to distribute effort within the prescribed time frame. The second view of software
scheduling assumes that rough chronological bounds have been discussed but that the
end date is set by the software engineering organization. Effort is distributed to make
best use of resources, and an end date is defined after careful analysis of the software.
Unfortunately, the first situation is encountered far more frequently than the second.
Like all other areas of software engineering, a number of basic principles guide
software project scheduling:
Compartmentalization. The project must be compartmentalized into a number
of manageable activities and tasks. To accomplish compartmentalization, both the
product and the process are decomposed.
Interdependency. The interdependency of each compartmentalized activity or
task must be determined. Some tasks must occur in sequence, while others can
occur in parallel. Some activities cannot commence until the work product produced
by another is available. Other activities can occur independently.
Time allocation. Each task to be scheduled must be allocated some number of
work units (e.g., person-days of effort). In addition, each task must be assigned a
start date and a completion date that are a function of the interdependencies and
whether work will be conducted on a full-time or part-time basis.
Effort validation. Every project has a defined number of people on the software
team. As time allocation occurs, you must ensure that no more than the allocated
number of people has been scheduled at any given time. For example, consider a
project that has three assigned software engineers (e.g., three person-days are avail-
able per day of assigned effort).12 On a given day, seven concurrent tasks must be
accomplished. Each task requires 0.50 person-days of effort. More effort has been
allocated than there are people to do the work.
Defined responsibilities. Every task that is scheduled should be assigned to a
specific team member.
Defined outcomes. Every task that is scheduled should have a defined outcome. For
software projects, the outcome is normally a work product (e.g., the design of a compo-
nent) or a part of a work product. Work products are often combined in deliverables.
Defined milestones. Every task or group of tasks should be associated with a
project milestone. A milestone is accomplished when one or more work products
has been reviewed for quality (Chapter 15) and has been approved.
Each of these principles is applied as the project schedule evolves.
12 In reality, less than 3 person-days of effort are available because of unrelated meetings,
sickness, vacation, and a variety of other reasons. For our purposes, however, we assume
100 percent availability.
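The effort-validation principle can be sketched as a simple guard, using the example of three engineers and seven concurrent half-day tasks; the function name is illustrative:

```python
# Effort-validation sketch: flag any day on which the scheduled
# effort exceeds the people available (the chapter's example:
# three engineers, seven concurrent tasks of 0.50 person-days each).

def validate_effort(tasks_per_day, effort_per_task, people):
    """Return True if the scheduled effort fits the available staff."""
    scheduled = tasks_per_day * effort_per_task
    return scheduled <= people

print(validate_effort(tasks_per_day=7, effort_per_task=0.5, people=3))
# False: 3.5 person-days scheduled, only 3 available
```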
25.7.2 The Relationship Between People and Effort
There is a common myth that is still believed by many managers who are responsible
for software development work: “If we fall behind schedule, we can always add more
programmers and catch up later in the project.” Unfortunately, adding people late in
a project often has a disruptive effect on the project, causing schedules to slip even
further. The people who are added must learn the system, and the people who teach
them are the same people who were doing the work. While teaching, no work is done,
and the project falls further behind.
In addition to the time it takes to learn the system, more people increase the num-
ber of communication paths and the complexity of communication throughout a proj-
ect. Although communication is absolutely essential to successful software development,
every new communication path requires additional effort and therefore additional time.
If you must add people to a late project, be sure that you’ve assigned them work that
is highly compartmentalized.
Over the years, empirical data and theoretical analysis have demonstrated that proj-
ect schedules are elastic. That is, it is possible to compress a desired project comple-
tion date (by adding additional resources) to some extent. It is also possible to extend
a completion date (by reducing the number of resources).
The Putnam-Norden-Rayleigh (PNR) curve13 provides an indication of the relation-
ship between effort applied and delivery time for a software project. A version of the
curve, representing project effort as a function of delivery time, is shown in Figure 25.4.
The curve indicates a minimum value to that represents the least-cost delivery time
(i.e., the delivery time that will result in the least effort expended). As we move to
the left of to (i.e., as we try to accelerate delivery), the curve rises nonlinearly.
As an example, we assume that a project team has estimated a level of effort Ed
will be required to achieve a nominal delivery time td that is optimal in terms of
13 Original research can be found in [Nor70] and [Put78].
Figure 25.4 The relationship between effort and delivery time. [The figure plots effort cost against development time: Ea = m(td^4/ta^4), where Ea is effort in person-months, td is the nominal delivery time for the schedule, to is the optimal development time (in terms of cost), and ta is the actual delivery time desired. The curve reaches its minimum Eo at to, and the region to the left of Tmin = 0.75td is marked the "impossible region."]
schedule and available resources. Although it is possible to accelerate delivery, the
curve rises very sharply to the left of td. In fact, the PNR curve indicates the project
delivery time cannot be compressed much beyond 0.75td. If we attempt further com-
pression, the project moves into “the impossible region” and risk of failure becomes
very high. The PNR curve also indicates that the lowest-cost delivery option is to = 2td.
The implication here is that delaying project delivery can reduce costs significantly.
Of course, this must be weighed against the business cost associated with the delay.
The software equation [Put92] introduced is derived from the PNR curve and
demonstrates the highly nonlinear relationship between chronological time to complete
a project and human effort applied to the project. The number of delivered lines of
code (source statements), L, is related to effort and development time by the equation:
L = P × E^(1/3) × t^(4/3) (25.4)
where E is development effort in person-months, P is a productivity parameter that
reflects a variety of factors that leads to high-quality software engineering work (typ-
ical values for P range between 2000 and 12000), and t is the project duration in
calendar months.
Rearranging this software equation, we can arrive at an expression for development
effort E:
E = L^3 / (P^3 t^4) (25.5)
where E is the effort expended (in person-years) over the entire life cycle for software
development and maintenance and t is the development time in years. The equation
for development effort can be related to development cost by the inclusion of a bur-
dened labor rate factor ($/person-year).
This leads to some interesting results. As a project deadline becomes tighter and
tighter, you reach a point at which the work cannot be completed on schedule, regardless
of the number of people doing the work. Face reality and define a new delivery date.
Consider also a complex, real-time software project estimated at 33,000 LOC and
12 person-years of effort. If eight people are assigned to the project team, the project
can be completed in approximately 1.3 years. If, however, we extend the end date to
1.75 years, the highly nonlinear nature of the model described in Equation (25.5)
yields:
E = L^3 / (P^3 t^4) ≈ 3.8 person-years
This implies that, by extending the end date by 6 months, we can reduce the number
of people from eight to four! The validity of such results is open to debate, but the
implication is clear: Benefit can be gained by using fewer people over a somewhat
longer time span to accomplish the same objective.
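These figures can be reproduced with a short sketch of Equation (25.5). Note that the productivity parameter P is not stated for this example; P = 10,000 is an assumed value chosen here because it approximately reproduces the chapter's results:

```python
# Software-equation sketch (Equation 25.5): E = L^3 / (P^3 * t^4).
# P is not given for the chapter's example; P = 10,000 is an assumed
# value that approximately reproduces its figures.

def development_effort(loc, productivity, years):
    """Effort in person-years per the software equation."""
    return loc ** 3 / (productivity ** 3 * years ** 4)

L, P = 33_000, 10_000
print(round(development_effort(L, P, 1.75), 1))  # 3.8 person-years
print(round(development_effort(L, P, 1.3), 1))
# ~12.6: the compressed schedule demands far more effort
```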
25.8 Defining a Project Task Set
Regardless of the process model that is chosen, the work that a software team performs
is achieved through a set of tasks that enable you to define, develop, and ultimately
support computer software. No single task set is appropriate for all projects. The set
of tasks that would be appropriate for a large, complex system would likely be per-
ceived as overkill for a small, relatively simple software product. Therefore, an effec-
tive software process should define a collection of task sets, each designed to meet
the needs of different types of projects.
As we noted in Chapter 2, a task set is a collection of software engineering work
tasks, milestones, work products, and quality assurance filters that must be accom-
plished to complete a particular project. The task set must provide enough discipline
to achieve high software quality. But, at the same time, it must not burden the project
team with unnecessary work.
To develop a project schedule, a task set must be distributed on the project time
line. The task set will vary depending upon the project type and the degree of rigor
with which the software team decides to do its work. Many factors influence the task
set to be chosen. These include [Pre05]: size of the project, number of potential users,
mission criticality, application longevity, stability of requirements, ease of customer/
developer communication, maturity of applicable technology, performance constraints,
embedded and nonembedded characteristics, project staff, and reengineering factors.
When taken in combination, these factors provide an indication of the degree of rigor
with which the software process should be applied.
25.8.1 A Task Set Example
A concept development project is initiated when the potential for some new technol-
ogy must be explored. There is no certainty that the technology will be applicable,
but a customer (e.g., marketing) believes that potential benefit exists. Concept devel-
opment projects are approached by applying the following task set:
1.1 Concept scoping determines the overall scope of the project.
1.2 Preliminary concept planning establishes the organization’s ability to under-
take the work implied by the project scope.
1.3 Technology risk assessment evaluates the risk associated with the technology
to be implemented as part of the project scope.
1.4 Proof of concept demonstrates the viability of a new technology in the soft-
ware context.
1.5 Concept implementation implements the concept representation in a manner
that can be reviewed by a customer and is used for “marketing” purposes
when a concept must be sold to other customers or management.
1.6 Customer reaction to the concept solicits feedback on a new technology con-
cept and targets specific customer applications.
A quick scan of these tasks should yield few surprises. In fact, the software engi-
neering flow for concept development projects (and for all other types of projects as
well) is little more than common sense.
25.8.2 Refinement of Major Tasks
The major tasks (i.e., software engineering actions) described in the preceding section
may be used to define a macroscopic schedule for a project. However, the macroscopic
schedule must be refined to create a detailed project schedule. Refinement begins by
taking each major task and decomposing it into a set of subtasks (with related work
products and milestones).
As an example of task decomposition, consider Task 1.1, Concept Scoping. Task
refinement can be accomplished using an outline format, but in this book, a process
design language approach is used to illustrate the flow of the concept scoping activity:
Task definition: Task 1.1 Concept Scoping
1.1.1 Identify need, benefits and potential customers;
1.1.2 Define desired output/control and input events that drive the
application;
Begin Task 1.1.2
1.1.2.1 TR: Review written description of need14
1.1.2.2 Derive a list of customer visible outputs/inputs
1.1.2.3 TR: Review outputs/inputs with customer and revise as
required;
endtask Task 1.1.2
1.1.3 Define the functionality/behavior for each major function;
Begin Task 1.1.3
1.1.3.1 TR: Review output and input data objects derived in task 1.1.2;
1.1.3.2 Derive a model of functions/behaviors;
1.1.3.3 TR: Review functions/behaviors with customer and revise as
required;
endtask Task 1.1.3
1.1.4 Isolate those elements of the technology to be implemented in software;
1.1.5 Research availability of existing software;
1.1.6 Define technical feasibility;
1.1.7 Make quick estimate of size;
1.1.8 Create a Scope Definition;
endtask definition: Task 1.1
The tasks and subtasks noted in the process design language refinement form the
basis for a detailed schedule for the concept scoping activity.
25.9 Defining a Task Network
Individual tasks and subtasks have interdependencies based on their sequence. In
addition, when more than one person is involved in a software engineering project, it
is likely that development activities and tasks will be performed in parallel. When this
occurs, concurrent tasks must be coordinated so that they will be complete when later
tasks require their work product(s).
A task network, also called an activity network, is a graphic representation of the
task flow for a project. The task network is a useful mechanism for depicting intertask
dependencies and determining the critical path. It is sometimes used as the mechanism
through which task sequence and dependencies are input to an automated project
scheduling tool. In its simplest form (used when creating a macroscopic schedule),
the task network depicts major software engineering tasks. Figure 25.5 shows a sche-
matic task network for a concept development project.
14 TR indicates that a technical review (Chapter 16) is to be conducted.
The concurrent nature of software engineering activities leads to a number of
important scheduling requirements. Because parallel tasks occur asynchronously,
you should determine intertask dependencies to ensure continuous progress toward
completion. In addition, you should be aware of those tasks that lie on the critical
path. That is, tasks that must be completed on schedule if the project as a whole is
to be completed on schedule. These issues are discussed in more detail later in this
chapter.
It is important to note that the task network shown in Figure 25.5 is macroscopic.
In a detailed task network (a precursor to a detailed schedule), each activity shown
in the figure would be expanded. For example, Task 1.1 would be expanded to show
all tasks detailed in the refinement of Task 1.1 shown in Section 25.8.2.
25.10 Scheduling
Scheduling of a software project does not differ greatly from scheduling of any mul-
titask engineering effort. Therefore, generalized project scheduling tools and tech-
niques can be applied with little modification for software projects [Fer14].
Interdependencies among tasks may be defined using a task network. Tasks, some-
times called the project work breakdown structure (WBS), are defined for the product
as a whole or for individual functions.
Project scheduling tools allow you to (1) determine the critical path—the chain of
tasks that determines the duration of the project, (2) establish “most likely” time
estimates for individual tasks by applying statistical models, and (3) calculate “bound-
ary times” that define a time “window” for a particular task [Ker17].
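A critical-path calculation of this kind can be sketched with a forward pass over the task network; the tasks, durations, and dependencies below are illustrative, loosely based on the concept-development task set:

```python
# Critical-path sketch: a forward pass over a task network (a DAG)
# computes each task's earliest finish time; the chain of tasks that
# determines the project duration is the critical path. The tasks,
# durations, and dependencies here are illustrative.

from functools import lru_cache

durations = {  # task -> duration in days
    "1.1 scoping": 5, "1.2 planning": 3, "1.3 risk assessment": 2,
    "1.4 proof of concept": 10, "1.5 implementation": 8,
}
predecessors = {
    "1.1 scoping": [],
    "1.2 planning": ["1.1 scoping"],
    "1.3 risk assessment": ["1.2 planning"],
    "1.4 proof of concept": ["1.2 planning"],
    "1.5 implementation": ["1.3 risk assessment", "1.4 proof of concept"],
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish = latest predecessor finish + own duration."""
    start = max((earliest_finish(p) for p in predecessors[task]), default=0)
    return start + durations[task]

project_duration = max(earliest_finish(t) for t in durations)
print(project_duration)  # 26: 1.1 -> 1.2 -> 1.4 -> 1.5 is the critical path
```

Tasks such as 1.3 that finish before the critical chain have slack; delaying them (within that slack) does not move the delivery date, while any delay on the critical path does.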
25.10.1 Time-Line Charts
When creating a software project schedule, you begin with a set of tasks (the work
breakdown structure). If automated tools are used, the work breakdown is input as a
Figure 25.5 A task network for concept development. [The figure shows Task 1.1 (concept scoping) flowing into Task 1.2 (concept planning), with subsequent tasks performed in parallel for three different concept functions (e.g., Task 1.5a, concept implementation) and then integrated.]
task network or task outline. Effort, duration, and start date are then input for each
task. In addition, tasks may be assigned to specific individuals.
As a consequence of this input, a time-line chart, also called a Gantt chart, is
generated. A time-line chart can be developed for the entire project. Alternatively,
separate charts can be developed for each project function or for each individual work-
ing on the project [Toc18].
Figure 25.6 illustrates the format of a time-line chart. It depicts a part of a software
project schedule that emphasizes the concept scoping task for a word-processing
(WP) software product. All project tasks (for concept scoping) are listed in the left-
hand column. The horizontal bars indicate the duration of each task. When multiple
bars occur at the same time on the calendar, task concurrency is implied. The dia-
monds indicate milestones.
Once the information necessary for the generation of a time-line chart has been
input, the majority of software project scheduling tools produce project tables—a
tabular listing of all project tasks, their planned and actual start and end dates, and a
variety of related information (Figure 25.7). Used in conjunction with the time-line
chart, project tables enable you to track progress.
Figure 25.6 An example time-line chart. [The chart lists the concept-scoping work tasks (I.1.1 Identify needs and benefits through I.1.8 Create a scope definition) in the left-hand column, with horizontal bars showing each task's duration across Weeks 1 through 5 and diamonds marking milestones such as "Product statement defined," "OCI defined," and "Scope document complete."]
25.10.2 Tracking the Schedule
If it has been properly developed, the project schedule becomes a road map that
defines the tasks and milestones to be tracked and controlled as the project proceeds.
Tracking can be accomplished in a number of different ways:
∙ Conducting periodic project status meetings in which each team member
reports progress and problems.
∙ Evaluating the results of all reviews conducted throughout the software
engineering process.
∙ Determining whether formal project milestones (the diamonds shown in
Figure 25.6) have been accomplished by the scheduled date.
∙ Comparing the actual start date to the planned start date for each project task
listed in the project table (Figure 25.7).
∙ Meeting informally with practitioners to obtain their subjective assessment of
progress to date and problems on the horizon.
∙ Tracking the project velocity, which is a way of seeing how quickly the devel-
opment team is clearing the user story backlog (Section 3.5).
In reality, all these tracking techniques are used by experienced project managers.
A software project manager employs control to administer project resources, cope
with problems, and direct project staff. If things are going well (i.e., the project is on
schedule and within budget, reviews indicate that real progress is being made and
milestones are being reached), control is light. But when problems occur, you must
exercise control to reconcile them as quickly as possible. After a problem has been
diagnosed, additional resources may be focused on the problem area: staff may be
redeployed or the project schedule can be redefined.
When faced with severe deadline pressure, experienced project managers some-
times use a project scheduling and control technique called time-boxing [Jal04]. The
time-boxing strategy recognizes that the complete product may not be deliverable by
the predefined deadline.
Figure 25.7 An example project table. [The table lists the same work tasks as Figure 25.6 with columns for planned start, actual start, planned completion, actual completion, assigned person, effort allocated, and notes; entries such as "wk1, d1" record week and day, and a note warns that scoping will require more effort/time than planned.]
The tasks associated with each increment are then time-boxed. This means that the
schedule for each task is adjusted by working backward from the delivery date for the
increment. A “box” is put around each task. When a task hits the boundary of its time
box (plus or minus 10 percent), work stops and the next task begins.
Time-boxing is often associated with agile incremental process models (Chapter 4),
and a schedule is derived for each incremental delivery. These tasks become part of
the increment schedule and are allocated over the increment development schedule.
They can be input to scheduling software (e.g., Microsoft Project) and used for track-
ing and control.
The initial reaction to the time-boxing approach is often negative: “If the work isn’t
finished, how can we proceed?” The answer lies in the way work is accomplished.
By the time the time-box boundary is encountered, it is likely that 90 percent of the
task has been completed.15 The remaining 10 percent, although important, can (1) be
delayed until the next increment or (2) be completed later if required. Rather than
becoming “stuck” on a task, the project proceeds toward the delivery date.
SafeHome: Tracking the Schedule
The scene: Doug Miller’s office
prior to the initiation of the
SafeHome software project.
The players: Doug Miller, manager of the
SafeHome software engineering team, and
Vinod Raman, Jamie Lazar, and other members
of the product software engineering team.
The conversation:
Doug (glancing at a PowerPoint slide): The
schedule for the first SafeHome increment
seems reasonable, but we’re going to have
trouble tracking progress.
Vinod (a concerned look on his face): Why?
We have tasks scheduled on a daily basis,
plenty of work products, and we’ve been sure
that we’re not overallocating resources.
Doug: All good, but how do we know when
the requirements model for the first increment
is complete?
Jamie: Things are iterative, so that’s difficult.
Doug: I understand that, but . . . well, for in-
stance, take “analysis classes defined.” You
indicated that as a milestone.
Vinod: We have.
Doug: Who makes that determination?
Jamie (aggravated): They’re done when
they’re done.
Doug: That’s not good enough, Jamie. We
have to schedule TRs [technical reviews,
Chapter 16], and you haven’t done that. The
successful completion of a review on the anal-
ysis model, for instance, is a reasonable mile-
stone. Understand?
Jamie (frowning): Okay, back to the drawing
board.
Doug: It shouldn’t take more than an hour to
make the corrections . . . everyone else can
get started now.
15 A cynic might recall the saying: “The first 90 percent of the system takes 90 percent of the
time; the remaining 10 percent of the system takes 90 percent of the time.”
25.11 Summary
A software project planner must estimate three things before a project begins: how
long it will take, how much effort will be required, and how many people will be
involved. In addition, the planner must predict the resources (hardware and software)
that will be required and the risk involved.
The statement of scope helps the planner to develop estimates using one or more
techniques that fall into two broad categories: decomposition and empirical modeling.
Decomposition techniques require a delineation of major software functions, followed
by estimates of either (1) the number of LOC, (2) selected values within the informa-
tion domain, (3) the number of use cases, (4) the number of person-months required
to implement each function, or (5) the number of person-months required for each
software engineering activity. Empirical techniques use empirically derived expres-
sions for effort and time to predict these project quantities. Automated tools can be
used to implement a specific empirical model.
Accurate project estimates generally use at least two of the three techniques just
noted. By comparing and reconciling estimates developed using different techniques,
the planner is more likely to derive an accurate estimate. Software project estimation
can never be an exact science, but a combination of good historical data and system-
atic techniques can improve estimation accuracy.
Scheduling is the culmination of a planning activity that is a primary component
of software project management. When combined with estimation methods and risk
analysis, scheduling establishes a road map for the project manager.
Scheduling begins with process decomposition. The characteristics of the project
are used to adapt an appropriate task set for the work to be done. A task network
depicts each engineering task, its dependency on other tasks, and its projected duration.
The task network is used to compute the critical path, a time-line chart, and a variety
of project information. Using the schedule as a guide, you can track and control each
step in the software process.
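The critical-path computation mentioned above reduces to a longest-path traversal of the task network (a directed acyclic graph). The following is a minimal sketch, not production scheduling software; the network and durations are invented for illustration.

```python
# Minimal sketch: computing the critical path of a task network.
# Each task maps to (duration, [predecessor tasks]).

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (length, path)."""
    finish = {}     # earliest finish time per task
    best_pred = {}  # predecessor on the longest chain

    def ef(name):
        if name in finish:
            return finish[name]
        duration, preds = tasks[name]
        start = 0
        for p in preds:
            if ef(p) > start:
                start = ef(p)
                best_pred[name] = p
        finish[name] = start + duration
        return finish[name]

    end = max(tasks, key=ef)          # task with the latest finish
    path = [end]                      # walk back along the chain
    while path[-1] in best_pred:
        path.append(best_pred[path[-1]])
    return finish[end], list(reversed(path))

network = {
    "scoping":   (2, []),
    "analysis":  (5, ["scoping"]),
    "design":    (6, ["analysis"]),
    "prototype": (3, ["scoping"]),
    "coding":    (8, ["design", "prototype"]),
}
length, path = critical_path(network)
print(length, path)  # 21 ['scoping', 'analysis', 'design', 'coding']
```

Tasks on the critical path have zero slack; a slip in any of them slips the delivery date, which is why they deserve the closest tracking.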
Problems and Points to Ponder
25.1. Assume that you are the project manager for a company that builds software for house-
hold robots. You have been contracted to build the software for a robot that mows the lawn for
a homeowner. Write a statement of scope that describes the software. Be sure your statement
of scope is bounded. If you’re unfamiliar with robots, do a bit of research before you begin
writing. Also, state your assumptions about the hardware that will be required. Alternate:
Replace the lawn-mowing robot with another problem that is of interest to you.
25.2. Do a functional decomposition of the robot software you described in Problem 25.1.
Estimate the size of each function in LOC. Assuming that your organization produces
450 LOC/pm with a burdened labor rate of $7,000 per person-month, estimate the effort
and cost required to build the software using the LOC-based estimation technique described
in this chapter.
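The arithmetic behind the LOC-based estimate in Problem 25.2 can be worked in a few lines. The function decomposition and LOC sizes below are invented placeholders; substitute your own decomposition of the lawn-mowing robot software. Only the productivity and labor rate come from the problem statement.

```python
# LOC-based estimation sketch for Problem 25.2.
# Function sizes are hypothetical; the rates are given in the problem.

functions_loc = {
    "user interface": 2300,
    "path planning": 4200,
    "obstacle sensing": 3100,
    "motor control": 1800,
}

PRODUCTIVITY = 450   # LOC per person-month (given)
LABOR_RATE = 7000    # dollars per person-month (given)

total_loc = sum(functions_loc.values())
effort_pm = total_loc / PRODUCTIVITY   # person-months
cost = effort_pm * LABOR_RATE

print(f"{total_loc} LOC, {effort_pm:.1f} pm, ${cost:,.0f}")
```

With these placeholder sizes the estimate comes to roughly 25 person-months; the structure of the calculation, not the particular numbers, is what carries over to your own decomposition.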
25.3. Develop a spreadsheet model that implements one or more of the estimation techniques
described in this chapter. Alternatively, acquire one or more online models for estimation from
Web-based sources.
25.4. It seems odd that cost and schedule estimates are developed during software project
planning—before detailed software requirements analysis or design has been conducted. Why
do you think this is done? Are there circumstances when it should not be done?
25.5. What is the difference between a macroscopic schedule and a detailed schedule? Is it
possible to manage a project if only a macroscopic schedule is developed? Why?
25.6. The relationship between people and time is highly nonlinear. Using Putnam’s software
equation (described in Section 25.8.2), develop a table that relates number of people to project
duration for a software project requiring 50,000 LOC and 15 person-years of effort (the
productivity parameter is 5000 and B = 0.37). Assume that the software must be delivered in
24 months plus or minus 12 months.
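The table requested in Problem 25.6 can be generated directly, assuming the form of Putnam's software equation presented in this text, E = [LOC × B^(1/3) / P]^3 / t^4, with E in person-years and t in years. The loop bounds reflect the 24 ± 12 month delivery window given in the problem.

```python
# Sketch for Problem 25.6, assuming Putnam's software equation in the
# form E = [LOC * B**(1/3) / P]**3 / t**4 (E in person-years, t in years).

LOC, P, B = 50_000, 5_000, 0.37

def effort_person_years(t_years):
    return (LOC * B ** (1 / 3) / P) ** 3 / t_years ** 4

# Delivery window: 24 months plus or minus 12 months.
for months in range(12, 37, 6):
    t = months / 12
    e = effort_person_years(t)
    people = e / t  # average staffing over the project
    print(f"{months:2d} months: {e:6.1f} person-years, ~{people:5.1f} people")
```

The t^4 term in the denominator is what makes the people–time relationship so sharply nonlinear: compressing the schedule from 36 to 12 months multiplies the required effort by a factor of 81.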
25.7. Assume that a university has contracted you to develop an online course registration
system (OLCRS). First, act as the customer (if you’re a student, that should be easy) and
specify the characteristics of a good system. (Alternatively, your instructor will provide you
with a set of preliminary requirements for the system.) Using the estimation methods discussed
in this chapter, develop an effort and duration estimate for OLCRS. Suggest how you would:
a. Define parallel work activities during the OLCRS project.
b. Distribute effort throughout the project.
c. Establish milestones for the project.
25.8. Select an appropriate task set for the OLCRS project.
25.9. Define a task network for OLCRS described in Problem 25.8, or alternatively, for another
software project that interests you. Be sure to show tasks and milestones and to attach effort
and duration estimates to each task. If possible, use an automated scheduling tool to perform
this work.
25.10. Using a scheduling tool (if available) or paper and pencil (if necessary), develop a
time-line chart for the OLCRS project.