Posted: June 6th, 2022


The following question needs to be completed in two hours. I could always use you for future work, a lot of future work, if this assignment is done correctly.
ISEC 620 Homework 2
Think about a software development project that has been conducted for a hospital. The product being developed is a frontend web portal and backend software that processes the patient data residing in the hospital database. The name of the frontend web portal is MyHealth. Patients will see their test results, diagnosis reports, prescriptions, and past and upcoming reservation information in the MyHealth portal. They will also have the opportunity to chat with their doctors. Answer the questions below based on this information.
Question 1
What kind of technologies/methods should be used to ensure that the patients’ privacy will be guaranteed?
Question 2
Review the top 10 web application security risks. Select 2 of them and explain the potential impact of each on privacy.
Question 3
Describe the security practices that fall into the “Requirements” phase of the SDLC. Explain how these practices project into the upcoming phases of the SDLC.
Question 4 – Weekly Learning and Reflection
In two to three paragraphs of prose (i.e., sentences, not bullet lists) using APA style citations if needed, summarize and interact with the content that was covered this week in class. In your summary, you should highlight the major topics, theories, practices, and knowledge that were covered. Your summary should also interact with the material through personal observations, reflections, and applications to the field of study. In particular, highlight what surprised, enlightened, or otherwise engaged you. Make sure to include at least one thing that you’re still confused about or ask a question about the content or the field. In other words, you should think and write critically not just about what was presented but also what you have learned through the session. Questions asked here will be summarized and answered anonymously in the next class.
In this chapter you will

• Learn basic terminology associated with software requirements

• Examine functional requirements used to implement security in systems

• Examine use cases as they apply to the requirements of a system

• Learn to build abuse cases to examine security properties of a system

• Examine operational requirements used to implement security in systems

Requirements are the blueprint by which software is designed, built, and tested. As one of the foundational elements of the software development lifecycle (SDLC), this portion of the process must be managed properly. Requirements set the expectations for what is being built and how it is
expected to operate. Developing and understanding the requirements early in the SDLC is
important, for if one has to go back and add new requirements later in the process, it can cause
significant issues, including rework.

Functional Requirements
Functional requirements describe how the software is expected to function. They begin as business
requirements and can come from several different places. The line of business that is going to use the
software has some business functionality it wishes to achieve with the new software. These business
requirements are translated into functional requirements. The IT operations group may have standard
requirements, such as deployment platform requirements, database requirements, Disaster
Recovery/Business Continuity Planning (DR/BCP) requirements, infrastructure requirements, and more.
The organization may have its own coding requirements in terms of good programming and
maintainability standards. Security may have its own set of requirements. In the end, all of these
business requirements must be translated into functional requirements that can be followed by
designers, coders, testers, and more to ensure they are met as part of the SDLC process.

Role and User Definitions
Role and user definitions are the statements of who will be using what functionality of the software. At a
high level, these will be in generic form, such as which groups of users are allowed to use the system.
Subsequent refinements will detail specifics, such as which users are allowed which functionality as part
of their job. The detailed listing of which users are involved in a system forms part of the use-case
definition. In computer science terms, users are referred to as subjects. This term is important for
understanding the subject-object-activity matrix presented later in this section.

Objects are items that users (subjects) interact with in the operation of a system. An object can be a file,
a database record, a system, or a program element. Anything that can be accessed is an object. One
method of controlling access is through the use of access control lists assigned to objects. As with
subjects, objects form an important part of the subject-object-activity matrix. Specifically defining the
objects and their function in a system is an important part of the SDLC. This ensures all members of the
development team can properly use a common set of objects and control the interactions appropriately.
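To make the idea concrete, here is a minimal sketch (in Python, with illustrative names not taken from the text) of an access control list attached to an object, where any activity not explicitly granted to a subject is denied:

```python
# Hypothetical sketch: an access control list (ACL) attached to an object.
# The class and subject names are illustrative assumptions.

class MedicalRecord:
    def __init__(self, record_id):
        self.record_id = record_id
        # The ACL maps a subject (user) to the activities permitted on this object.
        self.acl = {}

    def grant(self, subject, activities):
        self.acl[subject] = set(activities)

    def is_allowed(self, subject, activity):
        # Anything not explicitly granted is denied.
        return activity in self.acl.get(subject, set())

record = MedicalRecord("rec-001")
record.grant("dr_smith", {"read", "update"})
record.grant("patient_jones", {"read"})

print(record.is_allowed("patient_jones", "read"))    # True
print(record.is_allowed("patient_jones", "update"))  # False
```

Attaching the ACL to the object (rather than to the subject) mirrors the text: the object defines what may be done to it.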

Activities or actions are the permitted events that a subject can perform on an associated object. The
specific set of activities is defined by the object. A database record can be created, read, updated, or
deleted. A file can be accessed, modified, deleted, etc. For each object in the system, all possible
activities/actions should be defined and documented. Undocumented functionality has been the
downfall of many a system when a user found an activity that was not considered during design and
construction, but still occurred, allowing functionality outside of the design parameters.

Subject-Object-Activity Matrix
Subjects represent who, objects represent what, and activities or actions represent the how of the
subject-object-activity relationship. Understanding the activities that are permitted or denied in each
subject-object combination is an important requirements exercise. To assist designers and developers in
correctly defining these relationships, a matrix referred to as the subject-object-activity matrix is
employed. For each subject, all of the objects are listed, along with the activities for each object. For
each combination, the security requirement of the state is then defined. This results in a master list of
allowable actions and another master list of denied actions. These lists are useful in creating appropriate
use and misuse cases, respectively. The subject-object-activity matrix is a tool that permits concise
communication about allowed system interactions.
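The matrix can be expressed as a simple data structure. The following sketch uses hypothetical subjects and objects, and shows how the master lists of allowed and denied actions described above can both be derived from one definition:

```python
# Illustrative subject-object-activity matrix as a mapping from
# (subject, object) pairs to explicitly allowed activities.
# Anything not listed is treated as denied. All names are assumptions.

SOA_MATRIX = {
    ("clerk", "customer_record"): {"read"},
    ("manager", "customer_record"): {"read", "update"},
    ("admin", "audit_log"): {"read"},
}

ALL_ACTIVITIES = ("create", "read", "update", "delete")

def is_permitted(subject, obj, activity):
    return activity in SOA_MATRIX.get((subject, obj), set())

def denied_actions():
    """Derive the master list of denied actions, useful for misuse cases."""
    denied = []
    for (subject, obj), allowed in SOA_MATRIX.items():
        for activity in ALL_ACTIVITIES:
            if activity not in allowed:
                denied.append((subject, obj, activity))
    return denied

print(is_permitted("manager", "customer_record", "update"))  # True
print(is_permitted("clerk", "customer_record", "delete"))    # False
```

The denied-actions list feeds misuse cases; the matrix itself feeds use cases, exactly as the text describes.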

Use Cases
Use cases are a powerful technique for determining functional requirements in developer-friendly
terms. A use case is a specific example of an intended behavior of the system. Defining use cases allows
a mechanism by which the intended behavior (functional requirement) of a system can be defined for
both developers and testers. Use cases are not intended for all subject-object interactions, as the
documentation requirement would exceed the utility. Use cases are not a substitute for documenting
the specific requirements. Where use cases are helpful is in describing complex, confusing, or
ambiguous situations associated with user interactions with the system. This facilitates the correct
design of both the software and the test apparatus to cover what would otherwise be incomplete due to
poorly articulated requirements.

EXAM TIP Use cases are constructed of actors representing users and intended system behaviors, with
the relationships between them depicted graphically.

Use-case modeling shows the intended system behavior (activity) for actors (users). This combination is
referred to as a use case, and is typically presented in a graphical format. Users are depicted as stick
figures, and the intended system functions as ellipses. Use-case modeling requires the identification of
the appropriate actors, whether person, role, or process (nonhuman system), as well as the desired
system functions. The graphical nature enables the construction of complex business processes in a
simple-to-understand form. When sequences of actions are important, another diagram can be added
to explain this. Figure 7-1 illustrates a use-case model for a portion of an online account system.

FIGURE 7-1 Use-case diagram

Abuse Cases (Inside and Outside Adversaries)
Misuse or abuse cases can be considered a form of use case illustrating specifically prohibited actions.
Although one could argue that anything not specifically allowed should be denied, making misuse
cases redundant, they still serve a valuable role in communicating requirements to
developers and testers. Figure 7-2 illustrates a series of misuse cases associated with the online account
management system.

FIGURE 7-2 Misuse-case diagram

In this diagram, the actor is now labeled as unauthorized. This is different from the previous
authenticated user, as this misuse actor may indeed be authenticated. The misuse actor could be
another customer, or an internal worker with some form of access required to manage the system.
Through brainstorming exercises, the development team has discovered the possibility for someone
with significant privilege—that is, a system administrator—to have the ability to create a new payee on
an account. This would enable them to put themselves or a proxy into the automatic bill-pay system.
This would not be an authorized transaction, and to mitigate such activity, an out-of-band
mechanism (that is, emailing the user for permission) makes it significantly more difficult for this activity
to be carried out, as the misuser must now also have email access to the other user’s information to
approve the new payee. What this misuse case specifically does is draw attention to ensuring that
authenticated but not authorized users do not have the ability to interact in specific ways. As use cases
also drive testing, this misuse case ensures that these issues are also tested as another form of defense.
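A minimal sketch of the out-of-band mitigation described above, assuming a two-step request/approval flow; the function names and the email channel model are purely illustrative:

```python
# Sketch: adding a payee is split into a request step and an approval step
# confirmed through a separate channel (modeled as an emailed token).
# All names and the data structures are illustrative assumptions.

import secrets

pending = {}   # token -> (account, payee) awaiting out-of-band approval
payees = {}    # account -> list of approved payees

def request_add_payee(account, payee, send_email):
    """Anyone (even a privileged insider) can request; nobody can finish alone."""
    token = secrets.token_hex(16)
    pending[token] = (account, payee)
    send_email(account, token)   # out-of-band: goes to the account owner only
    return token

def approve_payee(token):
    """Only the holder of the emailed token can complete the transaction."""
    if token not in pending:
        return False
    account, payee = pending.pop(token)
    payees.setdefault(account, []).append(payee)
    return True

# Simulate the email channel by capturing what the account owner receives.
inbox = []
request_add_payee("alice", "electric-co", lambda acct, tok: inbox.append(tok))
approve_payee(inbox[0])
print(payees)  # {'alice': ['electric-co']}
```

The misuser now needs access to the account owner's email as well, which is precisely the added difficulty the misuse case calls for.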

Misuse cases can present commonly known attack scenarios, and are designed to facilitate
communication among designers, developers, and testers to ensure that potential security holes are
managed in a proactive manner. Misuse cases can examine a system from an attacker’s point of view,
whether the attacker is an inside threat or an outside one. Properly constructed misuse cases can trigger
specific test scenarios to ensure known weaknesses have been recognized and dealt with appropriately
before deployment.

NOTE SAFECode has made significant contributions to the development and distribution of use cases. They
have published a useful document describing “Practical Security Stories and Security Tasks for Agile
Development Environments,” available for use and free download from the SAFECode website.

Sequencing and Timing
In today’s multithreaded, concurrent operating model, it is possible for different systems to attempt to
interact with the same object at the same time. It is also possible for events to occur out of sequence
based on timing differences between different threads of a program. Sequence and timing issues such
as race conditions and infinite loops influence both design and implementation of data activities.
Understanding how and where these conditions can occur is important to members of the development
team. In technical terms, what develops is known as a race condition, or from the attack point of view, a
system is vulnerable to a time of check/time of use (TOC/TOU) attack.

EXAM TIP A time of check/time of use attack is one that takes advantage of a separation between the
time a program checks a value and when it uses the value, allowing an unauthorized manipulation that
can affect the outcome of a process.
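A classic illustration of the check/use separation, sketched in Python; the file-access pattern here is a commonly cited TOC/TOU example, not something prescribed by this chapter:

```python
# TOC/TOU sketch: checking a file and then using it as two separate steps
# leaves a race window an attacker can exploit by swapping the file
# (e.g., for a symlink) between the check and the use.

import os

def vulnerable_read(path):
    if os.access(path, os.R_OK):      # time of check
        with open(path) as f:         # time of use -- path may have changed!
            return f.read()
    raise PermissionError(path)

def safer_read(path):
    # Prefer a single operation: just open and handle the failure, so there
    # is no gap between the check and the use.
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        raise PermissionError(path) from exc
```

Both functions behave identically on a benign file; the difference is that `safer_read` has no window between the check and the use.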

Race conditions are software flaws that arise from different threads or processes with a dependence on
an object or resource that affects another thread or process. A classic race condition is when one thread
depends on a value (A) from another function that is actively being changed by a separate process. The
first process cannot complete its work until the second process changes the value of A. If the second
function is waiting for the first function to finish, a lock is created by the two processes and their
interdependence. These conditions can be difficult to predict and find. Multiple unsynchronized threads,
sometimes across multiple systems, create complex logic loops for seemingly simple atomic functions.
Understanding and managing record locks is an essential element in a modern, diverse object
programming environment.

Race conditions are defined by race windows, a period of opportunity when concurrent threads can
compete in attempting to alter the same object. The first step to avoid race conditions is to identify the
race windows. Then, once the windows are identified, the system can be designed so that they are not
called concurrently, a process known as mutual exclusion.
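The mutual-exclusion idea can be sketched with a lock that closes the race window around a shared counter's read-modify-write sequence:

```python
# Sketch of mutual exclusion: two threads increment a shared counter, and
# the lock ensures the read-modify-write race window is never interleaved.

import threading

counter = 0
counter_lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with counter_lock:   # mutual exclusion over the race window
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- without the lock, lost updates could make it lower
```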

Another timing issue is the infinite loop. When program logic becomes complex—for instance, date
processing for leap years—care should be taken to ensure that all conditions are covered and that error
and other loop-breaking mechanisms do not allow the program to enter a state where the loop controls
will fail. Failure to manage this exact property resulted in Microsoft Zune devices failing when they were
turned on across the New Year following a leap year. The control logic entered a sequence where a loop
condition would not be satisfied, resulting in the device crashing by entering an infinite loop and becoming unresponsive.

EXAM TIP Complex conditional logic with unhandled states, even if rare or unexpected, can result in
infinite loops. It is imperative that all conditions in each nested loop be handled in a positive fashion.
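A simplified sketch of this class of defect (an illustration of the pattern, not the actual Zune code): a day-count-to-year conversion where one unhandled boundary state loops forever, alongside a version that handles every state:

```python
# Sketch: converting a day count to a year. In the buggy version, the last
# day of a leap year (days == 366) matches no branch that reduces the
# remainder, so the loop never terminates. The fixed version handles the
# boundary state explicitly. Starting year is an illustrative assumption.

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def year_from_days_buggy(days, year=1980):
    while days > 365:
        if is_leap(year):
            if days > 366:          # days == 366 falls through: infinite loop
                days -= 366
                year += 1
        else:
            days -= 365
            year += 1
    return year

def year_from_days_fixed(days, year=1980):
    while days > 365:
        length = 366 if is_leap(year) else 365
        if days <= length:          # handle the boundary state positively
            break
        days -= length
        year += 1
    return year

print(year_from_days_fixed(366))   # 1980: the leap-year boundary terminates
```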

Secure Coding Standards
Secure coding standards are language-specific rules and recommended practices that provide for secure
programming. It is one thing to describe sources of vulnerabilities and errors in programs; it is another
matter to prescribe forms that, when implemented, will preclude the specific sets of vulnerabilities and
exploitable conditions found in typical code.

Application programming can be considered a form of manufacturing. Requirements are turned into
value-added product at the end of a series of business processes. Controlling these processes and
making them repeatable is one of the objectives of a secure development lifecycle. One of the tools an
organization can use to achieve this objective is the adoption of an enterprise-specific set of secure
coding standards.

Organizations should adopt the use of a secure application development framework as part of their
secure development lifecycle process. Because secure coding guidelines have been published for most
common languages, adoption of these practices is an important part of secure coding standards in an
enterprise. Adapting and adopting industry best practices are also important elements in the secure
development lifecycle.

One common problem in many programs results from poor error trapping and handling. This is a
problem that can benefit from an enterprise rule where all exceptions and errors are trapped by the
generating function and then handled in such a manner so as not to divulge internal information to
external users.

EXAM TIP To prevent error conditions from cascading or propagating through a system, each function
should practice complete error mitigation, including error trapping and complete handling, before
returning to the calling routine.
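The rule can be sketched as follows; the function names, data, and messages are illustrative assumptions:

```python
# Sketch of the enterprise rule: trap errors at the generating function,
# log the detail internally, and return only a sanitized result to callers,
# so internal information is never divulged to external users.

import logging

logger = logging.getLogger("app")

def lookup_account(account_id, accounts):
    try:
        return {"status": "ok", "balance": accounts[account_id]}
    except KeyError:
        # Full detail goes to the internal log, never to the caller.
        logger.warning("lookup failed for account_id=%r", account_id)
        return {"status": "error", "message": "Unable to process request."}

accounts = {"A-100": 250.0}
print(lookup_account("A-100", accounts))
print(lookup_account("../etc/passwd", accounts))  # sanitized, no internals leaked
```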

Logging is another area that can benefit from secure coding standards. Standards can be deployed
specifying what, where, and when issues should be logged. This serves two primary functions: it ensures
appropriate levels of logging, and it simplifies the management of the logging infrastructure.

Secure Coding Standards

Secure coding standards have been published by the Software Engineering Institute/CERT at Carnegie
Mellon University for C, C++, and Java. Each of these standards includes rules and recommended
practices for secure programming in the specific language.

Operational Requirements
Software is deployed in an enterprise environment where it is rarely completely on its own. Enterprises
will have standards for deployment platforms (such as Linux or Microsoft Windows), specific types and versions
of database servers, web servers, and other infrastructure components.

Software in the enterprise rarely works all by itself without connections to other pieces of software. A
new system may provide new functionality, but would do so touching existing systems, such as
connections to users, parts databases, customer records, etc. One set of operational requirements is
built around the idea that a new or expanded system must interact with the existing systems over
existing channels and protocols. At a high level, this can be easily defined, but it is not until detailed
specifications are published that much utility is derived from the effort.

NOTE A complete SDLC solution ensures systems are secure by design, secure by default, and secure in
deployment. A system that is secure by design but deployed in an insecure configuration or method of
deployment can render the security in the system worthless.

One of the elements of secure software development is that the software is secure in deployment. Ensuring that
systems are secure by design is commonly seen as the focus of an SDLC, but it is equally important to
ensure systems are secure when deployed. This includes being secure by default: the default
configuration should maintain the security of the application when the system defaults are chosen,
because the defaults are a common configuration, should be a functioning configuration, and therefore
should also be a secure one.

Deployment Environment
Software will be deployed in the environment as best suits its maintainability, data access, and access to
needed services. Ultimately, at the finest level of detail, the functional requirements that relate to
system deployment will be detailed for use. An example is the use of a database and web server.
Corporate standards, dictated by personnel and infrastructure services, will drive many of the selections.
Although there are many different database servers and web servers in the marketplace, most
enterprises have already selected an enterprise standard, sometimes by type of data or usage.
Understanding and conforming to all the requisite infrastructure requirements are necessary to allow
seamless interconnectivity between different systems.

Requirements Traceability Matrix
The requirements traceability matrix (RTM) is a grid that assists the development team in tracking and
managing requirements and implementation details. The RTM assists in the documentation of the
relationships between security requirements, controls, and test/verification efforts. A sample RTM is
illustrated in Table 7-1. An RTM allows many requirements to be handled automatically, enabling the team to
gather sets of requirements from centralized systems. The security requirements could be brought in en
masse from a database based on an assessment of what systems and users will be involved in the
software. Software with only internal users will have a different set of requirements from that of a
customer interface across the Web. Having predefined sets of requirements for infrastructure, security,
data sources, and the like and using the RTM to promulgate them to the development teams will go a
long way in ensuring critical requirements are not overlooked.
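Treating the RTM as structured data is what makes such automation possible. A minimal sketch, with assumed field names and sample rows:

```python
# Illustrative requirements traceability matrix as structured data, so
# coverage checks can be automated. Field names and rows are assumptions.

rtm = [
    {"req_id": "SEC-01", "requirement": "Authenticate all users",
     "control": "Login service", "test_case": "TC-101"},
    {"req_id": "SEC-02", "requirement": "Encrypt data in transit",
     "control": "TLS on all endpoints", "test_case": None},
]

def untested_requirements(rows):
    """Flag requirements with no linked test/verification effort."""
    return [r["req_id"] for r in rows if not r["test_case"]]

print(untested_requirements(rtm))  # ['SEC-02']
```

A check like `untested_requirements` gives a project manager exactly the "none are missed" assurance the text attributes to the RTM.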

Table 7-1 Sample Requirements Traceability Matrix

An RTM acts as a management tool and documentation system. By listing all of the requirements and
how they can be validated, it provides project managers the information they need to ensure all
requirements are appropriately managed and that none are missed. The RTM can assist with use-case
construction and ensure that elements are covered in testing.

Connecting the Dots
Requirements are the foundational element used in the development of any project. They come from
many sources. This chapter looked at functional requirements and operational requirements through
the lens of secure software design. In the first section of the book, threat modeling was covered, and
one of the key outputs from the threat modeling process is a set of requirements to mitigate known and
expected threats. The key to understanding requirements is that they represent all of the knowledge
one has with respect to building a project. The easiest set of requirements are those that represent the
features that a customer is asking for, but this is just the tip of the iceberg. Customers will never give
you the requirement “it needs to work” because, of course, that is always implied. The challenge is to
enumerate and document all of the related security, functional, and operational requirements that are
not stated because they are “implied.”

The task of creating a good list of security requirements is challenging at first, as there are so many
details, so many “sources of information,” that the sheer organization of it all is overwhelming. But as a
team goes from project to project, using a core list, refining it and adding to it, a more comprehensive
set can be developed over time. The bottom line to the story is simple: if you want a development team
to do something, it needs to be enumerated in the requirements for the project. Although this is
Chapter 7, a third of the way through the book, this is the true entry point for all work expectations in a
software development project. Every concept throughout this book only becomes a part of the team’s
effort through the requirements process.

Chapter Review
In this chapter, the requirements associated with a system were examined. The chapter
began with a description of functional requirements and how these can come from numerous sources,
including the business, the architecture group to ensure interoperability with the existing enterprise
elements, and the security group to ensure adherence to security and compliance issues. The concepts
of users and roles, together with subjects and objects and the allowed activities or actions, were
presented. This chapter also covered the development of a subject-object-activity matrix defining
permitted activities.

Use cases and misuse cases were presented as a means of communicating business requirements and
security requirements across the SDLC. These tools can be powerful in communicating ambiguous
requirements and in ensuring that specific types of security issues are addressed. Other security
concerns, such as sequence and timing issues, infinite loops, and race conditions, were discussed. The
use of enterprise-wide secure coding standards to enforce conformity across the development
processes was presented. This is the first foundational element in defining an enterprise methodology
that assists in security and maintainability, and assists all members of the development team in
understanding how things work.

Operational and deployment requirements are those that ensure the system functions as designed
when deployed. To complete an examination of the requirements across a system, a requirements
traceability matrix was presented, communicating the relationship between requirements and
programmatic elements.

Quick Tips
• Functional requirements are those that describe how the software is expected to function.

• Business requirements must be translated into functional requirements that can be followed by
designers, coders, testers, and more to ensure they are met as part of the SDLC process.

• Role and user definitions are the statements of who will be using what functionality of the software.

• Objects are items that users (subjects) interact with in the operation of a system. An object can be a
file, a database record, a system, or a program element.

• Activities or actions are the legal events that a subject can perform on an associated object.

• The subject-object-activity matrix is a tool that permits concise communication about allowed system interactions.

• A use case is a specific example of an intended behavior of the system.

• Misuse or abuse cases can be considered a form of use case illustrating specifically prohibited actions.

• Sequence and timing issues, such as race conditions and infinite loops, influence both design and
implementation of data activities.

• Secure coding standards are language-specific rules and recommended practices that provide for
secure programming.

• A complete SDLC solution ensures systems are secure by design, secure by default, and secure in deployment.

• The requirements traceability matrix (RTM) is a grid that allows users to track and manage
requirements and implementation details.

To further help you prepare for the CSSLP exam, and to provide you with a feel for your level of
preparedness, answer the following questions and then check your answers against the list of correct
answers found at the end of the chapter.

1. An activity designed to clarify requirements through the modeling of expected behaviors of a system
is called what?

A. Functional requirement decomposition

B. Requirement traceability matrix

C. Threat modeling

D. Use-case modeling

2. Business requirements are translated into _____ for the development team to act upon.

A. Programming rules

B. Data lifecycle elements

C. Functional requirements

D. Data flow diagrams

3. The “who” associated with programmatic functionality is referred to as what?

A. Role or user

B. Object

C. Activity or action

D. Program manager

4. Subjects interact with ______ in the operation of a system.

A. Users

B. Objects

C. Data

D. Actions

5. Presenting a known attack methodology to the development team to ensure appropriate mitigation
can be done via what?

A. Use case

B. Misuse case

C. Security requirement

D. Business requirement

6. Race conditions can be determined and controlled via what?

A. Multithreading

B. Mutual exclusion

C. Race windows

D. Atomic actions

7. Enterprise secure coding standards ensure what?

A. Certain types of vulnerabilities are precluded

B. Code is error free

C. Code is efficient

D. Security functionality is complete

8. A grid to assist the development team in tracking and managing requirements and implementation
details is known as a:

A. Functional requirements matrix

B. Subject-object-activity matrix

C. Use case

D. Requirements traceability matrix

9. Functional requirements include all of the following except:

A. Determining specific architecture details

B. Deployment platform considerations

C. DR/BCP requirements

D. Security requirements

10. Access control lists are assigned to _____ as part of a security scheme.

A. Users

B. Roles

C. Objects

D. Activities

11. To prevent error conditions from propagating through a system, each function should:

A. Log all abnormal conditions

B. Include error trapping and handling

C. Clear all global variables upon completion

D. Notify users of errors before continuing

12. Corporate standards, driven by defined infrastructure services, will drive:

A. Deployment environment requirements

B. Database requirements

C. Web server requirements

D. Data storage requirements

13. Complex conditional logic can result in _______ for unhandled states.

A. Infinite loops

B. Race conditions

C. Memory leaks

D. Input vulnerabilities

14. Use cases should be constructed for:

A. All requirements

B. All requirements that have security concerns

C. Business requirements that are poorly defined

D. Implementation features that need testing

15. To assist designers and developers in correctly defining the relationships between users and the
desired functions on objects, a ______ can be employed.

A. Functional requirements matrix

B. Requirements traceability matrix

C. Use case

D. Subject-object-activity matrix

1. D. Defining use cases provides a mechanism by which the intended behavior (functional
requirement) of a system can be defined for both developers and testers.

2. C. Functional requirements begin as business requirements and can come from several different places.

3. A. Role and user definitions are the statements of who will be using what functionality of the software.

4. B. Subjects interact with objects as defined in the subject-object-activity matrix. Although data could
be considered an object, object is the more complete answer.

5. B. Misuse cases can present commonly known attack scenarios and are designed to facilitate
communication among designers, developers, and testers to ensure that potential security holes are
managed in a proactive manner.

6. C. Race conditions are defined by race windows, a period of opportunity when concurrent threads
can compete in attempting to alter the same object. They are caused by multithreading and are resolved
through atomic actions under mutual exclusion conditions. The key is in detecting when they occur.

7. A. Secure coding standards prescribe forms that, when implemented, preclude specific sets of
vulnerabilities and exploitable conditions found in typical code.

8. D. The requirements traceability matrix (RTM) is a grid that allows users to track and manage
requirements and implementation details.

9. A. The specific architecture details come from requirements, but are not specified directly as
functional requirements.

10. C. Access control lists are associated with users, objects, and activities, but are assigned to objects.

11. B. To prevent error conditions from cascading or propagating through a system, each function
should practice complete error mitigation, including error trapping and complete handling, before
returning to the calling routine.

12. A. Deployment environment requirements include issues such as corporate standards for
databases, web services, data storage, and more.

13. A. Complex conditional logic with unhandled states, even if rare or unexpected, can result in infinite loops.

14. C. Use cases are specifically well suited for business requirements that are not well defined.

15. D. To assist designers and developers in correctly defining the relationships between users
(subjects), objects, and activities, a matrix referred to as the subject-object-activity matrix is employed.
Security Policies and Regulations
In this chapter you will
• Explore the different types of regulations associated with secure software
• Learn how security policies impact secure development practices
• Explore legal issues associated with intellectual property protection
• Examine the role of privacy and secure software
• Explore the standards associated with secure software development
• Examine security frameworks that impact secure development
• Learn the role of securing the acquisition lifecycle and its impact on secure software
Regulations and Compliance
Regulations and compliance drive many activities in an enterprise. The primary
reason behind this is the simple fact that failure to comply with rules and
regulations can lead to direct, and in some cases substantial, financial penalties.
Compliance failures can carry additional costs, as in increased scrutiny, greater
regulation in the future, and bad publicity. Since software is a major driver of
many business processes, a CSSLP needs to understand the basis behind various
rules and regulations and how they affect the enterprise in the context of their
own development efforts. This enables decision making as part of the software
development process that is in concert with these issues and enables the
enterprise to remain compliant.
Much has been said about how compliance is not the same as security. In a
sense, this is true, for one can be compliant and still be insecure. When viewed
from a risk management point of view, security is an exercise in risk
management, and so are compliance and other hazards. Add it all together, and
you get an “all hazards” approach, which is popular in many industries, as senior
management is responsible for all hazards and the residual risk from all risk sources.
Regulations can come from several sources, including industry and trade
groups and government agencies. The penalties for noncompliance can vary as
well, sometimes based on the severity of the violation and other times based on
political factors. The factors determining which systems are included in
regulation and the level of regulation also vary based on situational factors.
Typically, these factors and rules are published significantly in advance of taking
effect to allow firms time to plan enterprise controls and optimize risk
management options. Although not all firms will be affected by all sets of
regulations, it is also not uncommon for a firm to have multiple sets of
regulations across different aspects of an enterprise, even overlapping on some
elements. This can add to the difficulty of managing compliance, as different
regulations can have different levels of protection requirements.
Many development efforts may have multiple regulatory impacts, and
mapping the different requirements to the individual data flows that they each
affect is important. For instance, if an application involves medical information
and payment information, different elements may be subject to regulations such
as PCI DSS and HIPAA. These and other common regulatory requirements are
covered later in this chapter.

NOTE For a CSSLP, it is important to understand the various sources of security
requirements, as they need to be taken into account when executing software
development. It is also important to not mistake security functionality for the
objective of secure software development. Security functions driven by
requirements are important, but the objective of a secure development lifecycle
process is to reduce the number and severity of vulnerabilities in software.
FISMA
The Federal Information Security Management Act of 2002 (FISMA) is a federal
law that requires each federal agency to implement an agency-wide information
security program. The National Institute of Standards and Technology (NIST) was
designated the agency to develop implementation guidelines, and did so through
the publication of a risk management framework (RMF) for compliance. The
initial compliance framework included the following set of objectives, which
were scored on an annual basis by the Inspector General’s office:
• Inventory of systems
• Categorize information and systems according to risk level
• Security controls
• Certification and accreditation of systems (including risk assessment and system
security plans)
• Training
As the FISMA program has matured over the past decade, NIST added the
Information Security Automation Program and the Security Content Automation
Protocol (SCAP). Currently, all accredited systems are supposed to have a set of
monitored security controls to provide a level of continuous monitoring. FISMA is
mandated for federal agencies and, by extension, contractors that implement and
operate federal information systems. Like all security programs, the effectiveness
of FISMA is directly related to the level of seriousness placed on it by senior
management. When viewed as a checklist that is for compliance purposes, its
effectiveness is significantly lower than in agencies that embrace the power of
controls and continuous monitoring as a means to reduce system-wide risk.
Currently, NIST has responded with a series of publications detailing a security
lifecycle built around a risk management framework. Detailed in NIST SP 800-37,
a six-step process to create an RMF is designed to produce a structured, yet
flexible, methodology of managing the risk associated with information systems.
The six steps are
• Categorize information systems
• Select security controls
• Implement security controls
• Assess security controls
• Authorize information systems
• Monitor security controls
Each of these steps has a separate NIST Special Publication to detail the
specifics. This is a process-based methodology of achieving desired security levels
in an enterprise. CSSLPs will need to integrate their development work into this
framework in organizations that operate under an RMF.
Sarbanes-Oxley
The Sarbanes-Oxley Act of 2002 was a reaction to several major accounting and
corporate scandals, costing investors billions and shaking public confidence in
the stock markets. Although composed of many parts, the primary element
concerned with information security is Section 404, which mandates a specific
level of internal control measures. In simple terms, the information systems used
for financial accounting must have some form of security control over integrity
so that all may have confidence in the numbers being reported by the system.
Criticized by many for its costs, it is nonetheless the current law, and financial
reporting systems must comply.
Gramm-Leach-Bliley Act
The Financial Modernization Act of 1999, also known as the Gramm-Leach-Bliley
Act (GLBA), contains elements designed to protect consumers’ personal financial
information (PFI). From a software perspective, it is important to understand that
the act specifies rules as to the collection, processing, storage, and disposal of PFI.
The three primary rules worth noting are
1. The Financial Privacy Rule, which governs the collection and disclosure of PFI,
including companies that are nonfinancial in nature.
2. The Safeguards Rule, which applies to financial institutions and covers the
design, implementation, and maintenance of safeguards deployed to protect PFI.
3. The Pretexting Protections, which addresses the use of pretexting (falsely
pretending) to obtain PFI.
HIPAA
While GLBA deals with PFI, the Health Insurance Portability and
Accountability Act (HIPAA) deals with personal health information (PHI). PHI
contains information that can have significant value to criminal organizations.
Enacted in 1996, the privacy provisions of HIPAA were not prepared for the
industry movement to electronic records. The Health Information Technology for
Economic and Clinical Health Act (HITECH Act) is part of the American Recovery
and Reinvestment Act of 2009 (ARRA), and is designed to enhance privacy
provisions of electronic personal health information records.
Payment Card Industry Data Security Standard (PCI DSS)
PCI stands for Payment Card Industry, an industry group established to create,
manage, and enforce regulations associated with the securing of cardholder data.
There are three main standards: the Data Security Standard (PCI DSS), the
Payment Application Data Security Standard (PA DSS), and the PIN Transaction
Security (PTS) standard. Each of these is designed to provide a basic level of protection for
cardholder data.
The PCI DSS is the governing document that details the contractual
requirements for members that accept and process bank cards. This standard
includes requirements for security management, policies and procedures,
network architecture, software design, and other critical protective measures for
all systems associated with the processing and storing of cardholder data.
Arranged in six groups of control objectives, 12 high-level requirements are
detailed. Under each of these requirements are a significant number of
subrequirements and testing procedures that are used to determine a baseline
security foundation.
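One concrete software-design consequence of these requirements is that a primary account number (PAN) must never be displayed in full where full display is not needed. The sketch below is a hypothetical illustration of that idea rather than a compliant implementation; it masks all but the last four digits while preserving the original grouping:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN), keeping only the last four
    digits visible, in the spirit of PCI DSS display rules."""
    digits = [c for c in pan if c.isdigit()]
    masked = ["*"] * (len(digits) - 4) + digits[-4:]
    # Re-insert the original separators (spaces, dashes) for readability
    out, i = [], 0
    for c in pan:
        if c.isdigit():
            out.append(masked[i])
            i += 1
        else:
            out.append(c)
    return "".join(out)
```

For example, `mask_pan("4111 1111 1111 1111")` yields `"**** **** **** 1111"`. A real system would mask at the point of retrieval rather than display, and would never store the full PAN unencrypted in the first place.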
The PA DSS standard is a set of requirements used by software vendors to
validate that a payment application is compliant with the requirements
associated with PCI DSS. This document describes requirements in a manner
consistent with software activity, not the firms. This is relevant, as software
vendors do not necessarily have to comply with PCI DSS, but when creating
applications designed to handle cardholder data, compliance with PA DSS signals
that the software is properly designed. Use of PA DSS alone is not sufficient, as
there are nonsoftware-associated requirements associated with cardholder data
requirements in PCI DSS that are still necessary to be compliant.
One of the most important elements of the cardholder data is the PIN, and
security aspects associated with the PIN are governed by the PTS standard. The
majority of this standard applies to hardware devices known as PIN entry devices (PEDs).
PCI standards are contractual requirements and can carry very severe
financial penalties for failing to comply. If a firm accepts payment cards, stores
payment card data, or makes products associated with payment cards, then there
are PCI standards to follow. These are not optional, nor are they without
significant detail, making compliance a substantial effort. And because of
the financial penalties, their importance tends to be near the head of the line in
the risk management arena.
Other Regulations
There are myriad lesser-known, but equally important, regulations.
Authentication for banking over the Internet is governed by rules from the
Federal Financial Institutions Examination Council (FFIEC). Current FFIEC
regulations state that authentication must be multifactor in nature at a minimum.
Any systems designed for use in this environment must include this as a requirement.
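Multifactor requirements like these are commonly met by pairing a password with a time-based one-time password (TOTP) as defined in RFC 6238. The following standard-library sketch shows only the core code computation; a production system would add secret provisioning, rate limiting, and a clock-drift acceptance window:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    # 8-byte big-endian counter: number of time steps since the Unix epoch
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Checked against the RFC 6238 test vectors, `totp(b"12345678901234567890", for_time=59, digits=8)` produces `"94287082"`.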
Legal Issues
Legal issues frame a wide range of behaviors and work environments. This
comes from the concept that when disputes between parties arise, the legal
system is a method of resolving these disputes. Over time, a body of laws and
regulations has been created to govern activities, providing a roadmap for
behavior between parties.
Intellectual Property
Intellectual property is a legal term that recognizes that creations from the mind
can be and are property to which exclusive control can be granted to the creator.
A variety of different legal mechanisms can be used to protect the exclusive
control rights. The association of legal mechanism to the property is typically
determined by the type of property. The common forms of legal protection are
patents, copyrights, trademarks, and trade secrets.
Software Patents
There is intense debate over the extent to which software patents should be
granted, if at all. In the United States, patent law excludes issuing patents to
abstract ideas, and this has been used to deny some patents involving software.
In Europe, computer programs as such are typically excluded from patentability.
There is some overlapping protection for software in the form of copyrights,
which are covered in an upcoming section. Patents can cover the underlying
algorithms and methods embodied in the software. They can also protect the
function that the software is intended to serve. These protections are
independent of the particular language or specific coding.
A patent is an exclusive right granted by a government to the inventor for a
specified period of time. Patents are used to protect the inventor’s rights in
exchange for a disclosure of the invention. Patent law can differ between
countries. In the United States, the requirement is that an invention
represent something new, useful, and nonobvious. It can be a process, a machine,
an article of manufacture, or a composition of matter. Patents for software and
designs have drawn considerable attention in recent years as to whether the
ideas are nonobvious and “new.” Patents allow an inventor time to recoup their
investment in the creation of an invention. They give their owners the right to
prevent others from using a claimed invention, even if the other party claims
they independently developed a similar item and there was no copying involved.
Patent applications are highly specialized legal documents requiring significant
resources to achieve success. For patent protection to occur, patents must be
applied for prior to disclosure of the invention, with the specifics differing by jurisdiction.
Copyrights
A copyright is a form of intellectual property protection applied to any
expressible form of an idea or information that is substantive and discrete.
Copyrights are designed to give the creator of an original work exclusive rights to
it, usually for a limited time. Copyrights apply to a wide range of creative,
intellectual, or artistic items. The rights given include the right to be credited for
the work, to determine who may adapt the work to other forms, who may
perform the work, who may financially benefit from it, and other related rights.
Copyrights are governed internationally through the Berne Convention, which
requires its signatories to recognize the copyright of works of authors from other
signatory countries in the same manner as it recognizes the copyright of its own
authors. For copyright to be enforceable, an application must be made to the
copyright office detailing what is being submitted as original work and desiring
protection. Unlike patents, this filing is relatively straightforward and affordable
even by individuals.
Software Copyrights
Patent protection and copyright protection constitute two different means of legal
protection that may cover the same subject matter, such as computer programs,
since each of these two means of protection serves its own purpose. Using
copyright, software is protected as works of literature under the Berne
Convention. Copyright protection allows the creator of a program to prevent
another entity from copying it.
Copyright law prohibits the direct copying of some or all of a particular version
of a given piece of software, but it does not prevent other developers from
independently writing their own versions. A common practice in the industry is
to publish interface specifications so that programs can correctly interface with
specified functions; this places specific limitations on input and output
specifications and would not result in copyright violations.
Trademarks
A trademark is a recognizable quality associated with a product or firm. The
nature of the trademark is to build a brand association, and hence, copying by
others is prohibited. Trademarks can be either common law–based or registered.
Registering a trademark with the government provides significantly more legal
protection and recovery options. Internationally, trademarks are managed
through the World Intellectual Property Organization, using protocols developed
in Madrid, referred to as the Madrid System.
Names are commonly trademarked to protect a brand image. In this vein, a
company’s name is trademarked, as it is used to project the image of the firm.
Common terms or simply descriptive terms are not eligible for trademark
protection. In fact, trademark holders must protect their trademarks from
general generic use not aligned with their products, as they can lose a trademark
that becomes a generic term.
Trade Secrets
Trade secrets offer the ultimate in time-based protection for intellectual property.
A trade secret is just that—a secret. Trade secrets are protected by a variety of
laws, with the requirement that a firm keep a secret a secret, or at least make a
reasonable attempt to do so. The most famous trade secrets typically revolve
around food and taste, such as Coca-Cola’s recipe or Kentucky Fried Chicken’s
recipe. Should someone manage to steal the recipes, they could then attempt to
sell them to a competitor, but such attempts fail, as no respectable corporation
would subject itself to the legal ramifications of attempting to circumvent legal
protections for intellectual property. One issue with trade secrets is that should
someone independently discover the same formula, then the original trade secret
holder has no recourse.
Trade secrets are difficult to use in software, as the distribution of software,
even compiled, provides the end user with access to much information. There are
limited cases where cryptographic algorithms or seeds may be considered trade
secrets, as they are not passed to clients and can be protected. There is a limited
amount of related protection under the reverse-engineering provisions of the U.S.
Digital Millennium Copyright Act, where reverse-engineering of security
safeguards is prohibited.
Warranties
Warranties represent an implied or contractually specified promise that a
product will perform as expected. When you buy computer hardware, the
warranty will specify that for some given period of time the hardware will
perform to a level of technical specification, and should it fail to do so, will
outline the vendor’s responsibilities. The warranty typically does not guarantee
that the hardware will perform the tasks the user bought it for—merely that it
will work at some specified technical level. Warranty is necessary for fitness for
use, but is not sufficient.
With respect to software, the technical specification, i.e., the program performs
as expected, is typically considered by the end user to be fitness for use on the
end user’s problem. This is not what a vendor will guarantee; in fact, most
software licenses specifically dismiss this measure, claiming the software is
licensed using terms such as “as-is” and “no warranty as to use” or “no vendor
responsibility with respect to any failures resulting from use.”
Privacy
Privacy is the principle of controlling information about one’s self: who it is
shared with, for what purpose, and how it is used and transferred to other
parties. Control over one’s information is an issue that frequently involves
making a choice. To buy something over the Internet, you need to enter a credit
card or other payment method into a website. If you want the item delivered to
your house, you need to provide an address, typically your home address. While
it may seem that the answer to many privacy issues is simple anonymization, and
with the proper technology it could be done, the practical reality requires a
certain level of traceable sharing. To obtain certain goods, a user must consent to
share their information. The issues with privacy then become one of data
disposition—what happens to the data after it is used as needed for the
immediate transaction.
Privacy and Software Development
Privacy may seem like an abstract issue for a CSSLP, but the ramifications
associated with software development and privacy are significant. Gone are the
days of collecting and storing any data, in any form, and in any way. There are a
myriad of privacy rules and regulations, and development teams need to be
aware of the general issues so that they can properly apply their skills to meeting
the specific requirements of a project. If a project collects personal data or stores
it and there are no specific requirements with respect to privacy, then the team
should know to raise the question of which rules and regulations are likely to apply.
If the data is stored for future orders, safeguards are needed. In the case of
credit card information, regulations such as PCI DSS dictate the requirements for
safeguarding such data. Data can also be used to test systems. However, the use of
customer data for system testing can place the customer data at risk. In this
instance, anonymization can work. Proper test data management includes an
anonymization step to erase connection to meaningful customer information
before use in a test environment.
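As a rough illustration of such an anonymization step, the sketch below replaces direct identifiers with keyed hashes so that test data keeps its shape but loses its link to real customers. The field names are hypothetical, and a real pipeline would also need to handle quasi-identifiers and free-text fields:

```python
import hashlib
import hmac

# Hypothetical set of direct-identifier fields in a customer record
PII_FIELDS = {"name", "email", "ssn", "address"}

def anonymize(record: dict, key: bytes) -> dict:
    """Return a copy of a record with PII fields replaced by keyed
    hashes before the record is loaded into a test environment."""
    scrubbed = dict(record)
    for field in PII_FIELDS & scrubbed.keys():
        digest = hmac.new(key, str(scrubbed[field]).encode(), hashlib.sha256)
        scrubbed[field] = digest.hexdigest()[:12]  # short, stable pseudonym
    return scrubbed
```

Because the substitution is deterministic for a given key, referential integrity across tables is preserved; rotating or destroying the key severs the link back to the original customers.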
Privacy Policy
The privacy policy is the high-level document that describes the principles
associated with the collection, storage, use, and transfer of personal information
within the scope of business. A privacy policy will detail the firm’s responsibility
to safeguard information. A business needs to collect certain amounts of personal
information in the course of regular business. A business still has a responsibility
to properly secure the information from disclosure to unauthorized parties. A
business may have partners with which it needs to share elements of personal
information in the course of business. A firm may also choose to share the
information with other parties as a revenue stream. The privacy policy acts as a
guide to the employees as to their responsibilities associated with customer data.
A customer-facing privacy policy, commonly referred to as a privacy disclosure
statement, is provided to customers to inform them of how data is protected,
used, and disposed of in the course of business. In the financial sector, the
Gramm-Leach-Bliley Act mandates that firms provide clear and accurate
information as to how customer information is shared.
Personally Identifiable Information
Information that can be used to specifically identify an individual is referred to
as personally identifiable information (PII). PII is viewed as a technical term, but
it has its roots in legal terms. One of the primary challenges associated with PII is
the effect of data aggregation. By obtaining several pieces of data from different
sources, one can construct a record that permits the identification of a specific individual.
Recognizing this, the U.S. government defines PII using the following from an
Office of Management and Budget (OMB) Memorandum:
Information which can be used to distinguish or trace an individual’s identity, such
as their name, social security number, biometric records, etc., alone, or when
combined with other personal or identifying information which is linked or linkable
to a specific individual, such as date and place of birth, mother’s maiden name, etc.
Common PII Elements
The following items are commonly used to identify a specific individual and are,
hence, considered PII:
• Full name (if not common)
• National identification number (i.e., SSN)
• IP address (in some cases)
• Home address
• Motor vehicle registration plate number
• Driver’s license or state ID number
• Face, fingerprints, or handwriting
• Credit card and bank account numbers
• Date of birth
• Birthplace
• Genetic information
To identify an individual, only a small subset may be needed. A study by
Carnegie Mellon University found that 87 percent of the U.S. population
could be uniquely identified with only gender, date of birth, and ZIP code.
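The aggregation risk described above can be estimated directly. The sketch below, with hypothetical field names, computes the fraction of records in a dataset that are uniquely identified by a combination of quasi-identifiers:

```python
from collections import Counter

def unique_fraction(records, quasi_ids=("gender", "birth_date", "zip")):
    """Fraction of records uniquely identified by their quasi-identifier
    combination -- a rough measure of re-identification risk."""
    combo = lambda r: tuple(r[k] for k in quasi_ids)
    counts = Counter(combo(r) for r in records)
    unique = sum(1 for r in records if counts[combo(r)] == 1)
    return unique / len(records)
```

A result near 1.0 means almost every record stands alone on those fields, so a dataset that omits names but keeps these columns is still effectively identifiable; this is the intuition behind k-anonymity-style protections.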
Personal Health Information
Personal health information (PHI), also sometimes called protected health
information, is the set of data elements associated with an individual’s health
care that can also be used to identify a specific individual. These elements can
include, but are not limited to, PII elements, demographic data, medical test data,
biometric measurements, and medical history information. This data can have
significant risk factors to an individual should it fall into the possession of
unauthorized personnel. For this reason, as well as general privacy concerns, PHI
is protected by a series of statutes, including HIPAA and the HITECH Act.

NOTE PHI and associated medical data are sought after by cybercriminals
because they contain both insurance information and financial responsibility
information, including credit cards, both of which can be used in fraud. In
addition, there is sufficient PII for an identity to be stolen, making health records
a highly valued source of information for cybercriminals.
Breach Notifications
When security fails to secure information and information is lost to parties
outside of authorized users, a breach is said to have occurred. Data breaches
trigger a series of events. First is the internal incident response issue—what
happened, how it happened, what systems/data were lost, and other questions
that need to be answered. In a separate vein, customers whose data was lost
deserve to be informed. The state of California was the first to address this issue
with SB 1386, a data disclosure law that requires
a state agency, or a person or business that conducts business in California, that
owns or licenses computerized data that includes personal information, as defined,
to disclose in specified ways, any breach of the security of the data, as defined, to
any resident of California whose unencrypted personal information was, or is
reasonably believed to have been, acquired by an unauthorized person.
Two key elements of the law are “unencrypted personal information” and
“reasonably believed to have been acquired by an unauthorized party.”
Encrypting data can alleviate many issues associated with breaches. “Reasonably
believed” means that certainty as to loss is not necessary, thus increasing the
span of reportable issues. Since its start in July 2003, other states have followed
with similar measures. Although no federal measure exists, virtually every state
and U.S. territory is covered by a state disclosure law.
Data Protection Principles
The term data protection is typically associated with the European Union (EU).
The EU has a long history of taking a person-centric view of privacy, beginning
with their Data Protection Directive (EUDPD). The current set of data protection
regulations is the General Data Protection Regulation, which is discussed in detail
later in the chapter.
The EUDPD treated personal data protection as a basic human right and
placed strict rules on firms using personal data. Personal data could be collected
and used for specifically approved purposes, but then it must be destroyed or
altered in such a way that it is no longer personally identifiable. In the United
States, consumers must opt out of data sharing and extended data use as
proposed by firms. In the EU, it is the opposite: consumers must opt in to sharing.
This has significant implications for the collection, use, and disposal of data in an enterprise.
In Europe, privacy law is much more advanced than in the United States. In the
European Union, personal data should not be processed, except when three
conditions are met: transparency, legitimate purpose, and proportionality.
Transparency means that the user must give consent for the data to be processed.
To give consent, the customer must be informed of the purpose of processing the
data, the recipients of the data, and any other information required to
understand how the data will be used. The purpose of processing the data shall
be a legitimate purpose, and the level of data should be commensurate with its
use, or proportional.
To manage differences between U.S. and EU data protection schemes during
the EUDPD era, a set of Safe Harbor principles was established. Data was
allowed to be transferred out of the European Union under these provisions,
which were designed to provide a level of protection against disclosure or loss.
Safe Harbor Principles
The Safe Harbor principles allowed non-EU firms to deal with the EUDPD by
following these seven elements:
• Notice: Customers must be informed that their data is being collected and how it
will be used.
• Choice: Customers must have the ability to opt out of the collection and forward
transfer of the data to third parties.
• Onward Transfer: Transfers of data to third parties may only occur to other
organizations that follow adequate data protection principles.
• Security: Reasonable efforts must be made to prevent loss of collected information.
• Data Integrity: Data must be relevant and reliable for the purpose it was
collected for.
• Access: Customers must be able to access information held about them and
correct or delete it if it is inaccurate.
• Enforcement: There must be effective means of enforcing these rules.
Although the Safe Harbor principles have been replaced by new EU privacy
regulations, the guiding elements are still useful to understand, for they are still
the foundational principles behind privacy.
In May 2018, the EU’s sweeping new privacy regulation, the General Data
Protection Regulation (GDPR), went into effect, and organizations with business
ties to the EU, including EU-based customers, need to comply with GDPR standards.
GDPR is a complex, comprehensive set of regulations covering all enterprises and
all of the EU. The aim of GDPR is to give people back control of their personal
data while imposing strict rules on those hosting and processing this data,
regardless of where they are in the world. Several elements of GDPR are
sweeping in nature. First is the definition of what comprises personal data:
there is no distinction between personal data about an individual in their
private, public, or work roles; all are covered by this regulation.
GDPR Personal Data Elements
Under GDPR, personal data is defined as any information relating to an identified
or identifiable natural person.
This includes online identifiers such as IP addresses and cookies if they are
capable of being linked back to the data subject. It also includes indirect
information, including physical, physiological, genetic, mental, economic,
cultural, or social identities that can be traced back to a specific individual.
GDPR demands that individuals must have full access to information on how
their data is processed, and this information should be available in a clear and
understandable way. When companies obtain data from an individual, some of
the issues that must be made clear to the individual whose data is being collected
include the following:
• The identity and contact details of the organization behind the data request (who
is asking)
• The purpose of acquiring the data and how it will be used (why they are asking)
• The period for which the data will be stored (how long they will keep it)
• Whether the data will be transferred internationally (where else it can go)
• The individual’s right to access, rectify, or erase the data (right to be forgotten)
• The individual’s right to withdraw consent at any time (even after collection)
• The individual’s right to lodge a complaint
Several of these elements can be difficult to operationally implement unless
they are planned into the software itself. One of the common elements in the
press is the right to be forgotten. GDPR mandates that individuals must be able to
withdraw consent at any time and have a right to be forgotten. Included in this
right is that if data is no longer required for the reasons for which it was
collected, it must be erased.
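Operationalizing consent withdrawal and erasure is far easier when the data model is designed for it from the start. The following in-memory sketch is purely illustrative (all names are hypothetical, and it assumes no other legal basis requires retaining the data):

```python
class PersonalDataStore:
    """Minimal in-memory sketch of a store supporting GDPR-style
    access, consent withdrawal, and erasure."""

    def __init__(self):
        self._records = {}   # subject_id -> personal data
        self._consent = {}   # subject_id -> consent flag

    def collect(self, subject_id: str, data: dict) -> None:
        """Record personal data along with the consent that permits it."""
        self._records[subject_id] = data
        self._consent[subject_id] = True

    def access(self, subject_id: str) -> dict:
        """Right of access: return everything held about the subject."""
        return dict(self._records.get(subject_id, {}))

    def withdraw_consent(self, subject_id: str) -> None:
        """Withdrawing consent triggers erasure, since no other legal
        basis for retention is assumed in this sketch."""
        self._consent[subject_id] = False
        self.erase(subject_id)

    def erase(self, subject_id: str) -> None:
        """Right to be forgotten: remove the personal data entirely."""
        self._records.pop(subject_id, None)
```

In a real system, erasure would also have to propagate to backups, logs, analytics copies, and any third parties the data was transferred to, which is why retrofitting this capability after the fact is so difficult.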
California Consumer Privacy Act 2018 (AB 375)
In June 2018, California passed a sweeping privacy bill that holds many
similarities to GDPR. Passed in response to the threat of a ballot initiative, AB 375
mandates several key elements.
While individuals must opt out of sharing, they
• Have a right to know how personal data is being used
• Have a right to disclosure and objection relating to who data is being sold to
• Have a right to know who data has been provided to
• Experience no discrimination if they object to data being sold
• Have a right to access the data being held
The full ramifications of AB 375 are still speculative, as the bill does not go into
effect until 2020, but the intent is clear. Privacy laws are going to multiply in the years ahead.
Security Standards
Standards are a defined level of activity that can be measured and monitored for
compliance by a third party. Standards serve a function by defining a level of
activity that allows different organizations to interact in a known and meaningful
way. Standards also facilitate comparisons between organizations. The process of
security in an enterprise is enhanced through the use of standards that enable
activities associated with best practices. There are a wide range of sources of
standards, including standards bodies, both international and national, and
industry and trade groups.
Security standards serve a role in promoting interoperability. In software
design and development, there will be many cases where modules from different
sources will be interconnected. In the case of web services, the WS-Security
standard provides a means of secure communication between web services.
ISO
ISO is the International Organization for Standardization, a group that develops
and publishes international standards. The United States has an active
relationship to ISO through the activities of the U.S. National Committee, the
International Electrotechnical Commission (IEC), and the American National
Standards Institute (ANSI). ISO has published a variety of standards covering the
information security arena. To ensure that these standards remain relevant with
respect to ever-changing technology and threats, ISO standards are on a five-year
review cycle.
The relevant areas of the standards catalog are under JTC 1 – Information
Technology, specifically subcommittees 7 (Software and Systems Engineering)
and 27 (IT Security Techniques). Depending upon the specific topic, other
subcommittees may also have useful standards.
Prominent ISO Standards
The list of ISO standards is long, covering many topics, but some of the more
important ones for CSSLPs to understand are as follows:

ISO 2700X Series
The ISO 2700X series of standards does for information security what the ISO
900X series does for quality management. This series defines the relevant
vocabulary, a code of practice, management system implementation guidance,
metrics, and risk management principles. The ISO/IEC 27000 series of information
security management standards is a growing family with over 20 standards
currently in place. Broad in scope, covering more than just privacy,
confidentiality, or technical security issues, this family of standards is designed to
be applicable to all shapes and sizes of organizations.
ISO/IEC 15408 (Common Criteria) Evaluation Assurance Levels (EALs)
The Common Criteria define seven Evaluation Assurance Levels, each
representing a greater depth of evaluation:
• EAL1: Functionally Tested
• EAL2: Structurally Tested
• EAL3: Methodically Tested and Checked
• EAL4: Methodically Designed, Tested, and Reviewed
• EAL5: Semiformally Designed and Tested
• EAL6: Semiformally Verified Design and Tested
• EAL7: Formally Verified Design and Tested
ISO 15408 Common Criteria
The Common Criteria is a framework where security functional and assurance
requirements can be specified in precise terms, allowing vendors to implement
and/or make claims about the security attributes of their products. Testing
laboratories can evaluate the products to determine if they actually meet the
claims stated using the Common Criteria framework. The Common Criteria
provide a measure of assurance that specific objectives are present in a given product.
The Common Criteria use specific terminology to describe activity associated
with the framework. The Target of Evaluation (TOE) is the product or system that
is being evaluated. The Security Target (ST) is the security properties associated
with a TOE. The Protection Profile (PP) is a set of security requirements associated
with a class of products, i.e., firewalls have PPs and operating systems have PPs,
but these may differ. PPs help streamline the comparison of products within
product classes.
The output of the Common Criteria process is an Evaluation Assurance Level
(EAL), a set of seven levels, from 1, the most basic, through 7, the most
comprehensive. The higher the EAL value, the higher the degree of assurance
that a TOE meets the claims. Higher EAL does not indicate greater security.
ISO/IEC 9126 Software Engineering Product Quality
Product quality is an international standard for the evaluation of software
quality. This four-part standard addresses some of the critical issues that
adversely affect the outcome of a software development project. The standard
provides a framework that defines a quality model for the software product. The
standard addresses internal metrics that measure the quality of the software and
external metrics that measure the software results during operation. Quality-of-
use metrics are included to examine the software in particular scenarios.
ISO/IEC 9126 Quality Characteristics
ISO 9126 defines six quality characteristics that can be used to measure the
quality of software:
• Functionality
• Reliability
• Usability
• Efficiency
• Maintainability
• Portability
ISO/IEC 12207 Systems and Software Engineering—Software Life Cycle
This international standard establishes a set of processes covering the lifecycle of
the software. Each process has a defined set of activities, tasks, and outcomes
associated with it. The standard acts to provide a common structure so all parties
associated with the software development effort can communicate through a
common vocabulary.
ISO/IEC 15504 Information Technology—Process Assessment
Process assessment is also known as SPICE. SPICE originally stood for Software
Process Improvement and Capability Evaluation, but international concerns over
the term Evaluation have resulted in the substitution of the term Determination
(SPICD). ISO 15504 is a set of technical standards documents for the computer
software development process. The standard was derived from ISO/IEC 12207, the
process lifecycle standard, and from maturity models like the CMM. ISO 15504 is
used for process capability determination and process improvement efforts
related to software development.
ISO 15504 defines process capability on the following scale:
• Level 0: Incomplete process
• Level 1: Performed process
• Level 2: Managed process
• Level 3: Established process
• Level 4: Predictable process
• Level 5: Optimizing process
The ISO 15504 series consists of multiple documents, six of which are in final
approved form, with two additional in draft stages. The series contains a
reference model and sets of process attributes and capability levels.
NIST
The National Institute of Standards and Technology is a federal agency that is
charged with working with industry to develop technology, measurements, and
standards that align with the interests of the U.S. economy. The Computer
Security Division is the element of NIST that is charged with computer security
issues, including those necessary for compliance with the Federal Information
Security Management Act of 2002 (FISMA) and its successors. NIST develops and
publishes several relevant document types associated with information security.
The two main types of documents are Federal Information Processing Standards
and the Special Publication 800 series from the NIST Information Technology
Laboratory (ITL). The ITL’s Computer Security Division also publishes security
bulletins, which appear on average six times a year,
presenting an in-depth discussion of a single topic of significant interest to the
information systems community. NIST also publishes Interagency or Internal
Reports (NISTIRs) that describe research of a technical nature.
Federal Information Processing Standards (FIPS)
The Federal Information Processing Standards (FIPS) are mandatory sets of
requirements on federal agencies and specific contractors. Although limited in
number, they are wide sweeping in authority and scope. Older FIPS had sections
describing a waiver process, but since the passage of FISMA, all aspects of FIPS
are now mandatory and the waiver process is no longer applicable.
NIST SP 800 Series
The more common set of NIST publications utilized by industry is the 800 series
of Special Publications. These documents are designed to communicate the
results of relevant research and guidelines associated with securing information
systems. The 800 series has documents ranging from describing cryptographic
protocols, to security requirements associated with a wide range of system
elements, to risk management framework elements associated with information
security governance.
SAFECode
SAFECode is an industry-backed organization that is committed to increasing
communication between firms on the topic of software assurance. This group
was formed by members who voluntarily share their practices, which together
form a best practice solution. SAFECode is dedicated to communicating best
practices that have been used successfully by member firms. A sampling of
SAFECode’s publications includes
• Software Assurance: An Overview of Current Industry Best Practices
• Fundamental Practices for Secure Software Development
• Fundamental Practices for Secure Software Development, 2nd Edition
• Overview of Software Integrity Controls
• Security Engineering Training
• List of security-focused stories and security tasks for agile-based development
Prominent NIST Publications
The list of NIST security publications is long, covering many topics, but some of
the more important ones are as follows:

NOTE The users’ stories for Agile can be a valuable resource for CSSLP agile
developers to explore. See “SAFECode Releases Software Security Guidance for
Agile Practitioners.” This paper provides practical software security guidance to
Agile practitioners in the form of security-focused stories and security tasks they
can easily integrate into their Agile-based development environments. It is
available from the SAFECode website.
One of the strengths of SAFECode’s publications is that they are not geared just
for large firms, but are applicable across a wide array of corporate sizes, from
very large to very small.
Secure Software Architecture
Secure software does not just happen—it must be designed in. This begins with
the architecture of the process that creates the software and the architecture of
the software itself. There are a wide variety of frameworks covering both process
and product security that can be employed in the development effort.
Security Frameworks
Numerous security frameworks are used by management to align processes and
objectives. Knowledge of the various frameworks is essential for CSSLPs to
understand the business environment in which development both takes place
and is meant to serve.
Control Objectives for Information and Related Technology (COBIT) is a
framework designed to assist management in bridging the gap between control
requirements, technical issues, and business risks. Published by ISACA, the
current edition is COBIT 5, which builds upon COBIT 4.1’s four domains and 34
processes by consolidating and integrating the Val IT 2.0 and Risk IT frameworks,
and also draws significantly from the Business Model for Information Security
(BMIS) and Information Technology Assurance Framework (ITAF).
COBIT 5 is based on five key principles for governance and management of
enterprise IT:
• Principle 1: Meeting Stakeholder Needs
• Principle 2: Covering the Enterprise End to End
• Principle 3: Applying a Single, Integrated Framework
• Principle 4: Enabling a Holistic Approach
• Principle 5: Separating Governance from Management
The Committee of Sponsoring Organizations of the Treadway Commission (COSO)
is a joint initiative of five private-sector organizations, established in the United
States in response to the Treadway Commission’s report on fraudulent financial
reporting. COSO has established an Enterprise Risk Management – Integrated
Framework against which companies and organizations may assess their control
systems. The COSO model describes internal control as a process consisting of five
interrelated components:
• Control environment
• Risk assessment
• Control activities
• Information and communication
• Monitoring
Updated in 2004 to include enterprise risk management, the list has been
expanded by adding objective setting, event identification, and risk response. The
model was subsequently updated in 2013, but retained the five components, now
labeling them as principles.
The Information Technology Infrastructure Library (ITIL) has been around for
over two decades and is now gaining in acceptance as a means for service
management. Developed in the United Kingdom, ITIL describes a set of practices
focused on aligning IT services with business needs. It was updated in 2011 and
has changed the naming convention from ITIL V3 (2007) to ITIL 2011. ITIL 2011
has five volumes consisting of 26 processes and functions. The five volumes are
• ITIL Service Strategy
• ITIL Service Design
• ITIL Service Transition
• ITIL Service Operation
• ITIL Continual Service Improvement
The Zachman Framework is a highly structured and formal method of defining
an enterprise. Arranged as a two-dimensional matrix, the rows represent
distinct views, while the columns represent descriptors. Table 3-1 illustrates
the relationships of the basic rows and columns.

Table 3-1 Basic Zachman Framework Elements
The Zachman Framework has been extended and adapted for many different
uses. A highly flexible, 36-cell relationship diagram, it can be used in a wide
variety of instances. As a simple graphical communication tool, this framework
can document a substantial amount of relationships in a single page.
The Sherwood Applied Business Security Architecture (SABSA) is a framework
and methodology for developing risk-driven enterprise information security
architectures and for delivering security infrastructure solutions that support
critical business initiatives. It was developed independently from the Zachman
Framework, but has a similar structure. The focus of SABSA is that all
requirements, including security requirements, can be derived from business
requirements. SABSA works well with the SDLC, as you can directly map the
views from SABSA (rows in Zachman) to the Security Architecture Levels from
SDLC, as shown in Table 3-2.

Table 3-2 Comparing SABSA Layers to SDLC
Software development lifecycle (SDLC) is a generic term describing a process
imposed on the development of software. There are numerous models for
software development, from the traditional waterfall and spiral models, to the
more recent agile models. Although each model for development has its
advantages and disadvantages, software developed under a process-based
lifecycle system has a greater opportunity to be secure. This is partly due to the
models themselves and partly due to the ability of an organization to perform
process improvement on the development model itself. Chapter 4 will examine
specific models and relevant outcomes.
Developed by the Software Engineering Institute (SEI) at Carnegie Mellon
University, the Capability Maturity Model Integration (CMMI) is a process metric
model that rates the process maturity of an organization on a 1 to 5 scale (Table 3-
3). As it is currently formulated, CMMI addresses three primary areas: product
and service development, service establishment and management, and product
and service acquisition.

• Level 1: Initial
• Level 2: Managed
• Level 3: Defined
• Level 4: Quantitatively Managed
• Level 5: Optimizing
Table 3-3 CMMI Levels
CMMI began in the software engineering field, and its predecessor was the
software CMM. The “integration” in the name signifies the integration of earlier
CMMs into this final form. CMMI is not a development standard or
development lifecycle model. It is a framework for business process
improvement. CMMI improves performance through the improvement of
operational processes.
The Open Web Application Security Project (OWASP) is a community-driven,
open-source activity that is focused on web application security. The OWASP
community is worldwide and actively pursues best practices for web application
security in a vendor-neutral fashion. The OWASP community undertakes its work
through a series of projects that provide valuable information to all members of a
web application development environment. The most notable of these is the
OWASP Top Ten, a list of the most common web security vulnerabilities found in
software. This list has been revised periodically, with the latest version released
in May 2013.
OWASP has sponsored a large number of projects aimed at increasing
developer awareness of known issues in an effort to reduce vulnerabilities in
systems. The OWASP Development Guide, Code Review Guide, and OWASP
Testing Guide form a comprehensive review of best-practice frameworks for web
application security.
OWASP can be considered mainstream in that the current version of PCI DSS
requires web applications to be developed under an SDLC and refers to OWASP
documents as a secure coding guideline that can be employed.
Developed by Carnegie Mellon University in 2001, Operationally Critical Threat,
Asset, and Vulnerability Evaluation (OCTAVE) is a suite of tools, techniques, and
methods for risk-based information security assessment. OCTAVE is designed
around three phases: build asset-based threat profiles, identify infrastructure
vulnerabilities, and develop a security strategy.
Build Security In (BSI) is a U.S. Department of Homeland Security–backed project
communicating information on research, best practices, and generic principles
for software security. Web-based and open to the public, BSI acts as a collaborative
effort to provide practices, tools, guidelines, rules, principles, and other resources
that software developers, architects, and security practitioners can use to build
security into software in every phase of its development.
Trusted Computing
Trusted computing (TC) is a term used to describe technology developed and
promoted by the Trusted Computing Group. This technology is designed to ensure
that the computer behaves in a consistent and expected manner. One of the key
elements of the TC effort is the Trusted Platform Module (TPM). The TPM can hold
an encryption key that is not accessible to the system except through the TPM
chip. This assists in securing the system, but has also drawn controversy from
some quarters concerned that the methodology could be used to secure the
machine from its owner.
The Trusted Computing Group has delineated a series of principles that they
believe are essential for the effective, useful, and acceptable design,
implementation, and use of TCG technologies. These principles are
• Security
• Privacy
• Interoperability
• Portability of data
• Controllability
• Ease of use
These principles are not designed to stand in isolation, but to work together to
achieve a secure system. Although there is potential for conflict between some of
these, when taken in a properly defined context, the conflict should naturally
resolve itself.
Ring Model
The ring model was devised to provide a system-based method of protecting data
and functionality from errors and malicious behavior. The ring model is
composed of a series of hierarchical levels based on given security levels
(see Figure 3-1). The lowest ring, Ring 0, represents items that can directly
address the hardware. For instance, the BIOS and the OS kernel are both
instantiated as members of Ring 0. Each ring is a protection domain, structured in
a hierarchical manner to provide a separation between specific activities and
objects. Rings can only interact with themselves or with an adjacent ring.
Applications (Ring 3) should not read or write directly from hardware (Ring 0). By
forcing activity requests through intervening rings, this provides an opportunity
to enforce the security policy on activities being sent to objects.

FIGURE 3-1 Ring model
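As a rough illustration (a toy model with invented names, not any real OS interface), the ring rules can be sketched in a few lines of Python: a request may touch only an adjacent ring directly, so reaching an inner ring means crossing one boundary at a time, and the policy is consulted at every crossing.

```python
# Toy model of the ring hierarchy. Requests cross one boundary at a time,
# and the security policy is checked at each crossing rather than only at
# the endpoints. All names here are illustrative.

def direct_call_allowed(source_ring: int, target_ring: int) -> bool:
    """Rings may interact only with themselves or an adjacent ring."""
    return abs(source_ring - target_ring) <= 1

def gated_call(source_ring: int, target_ring: int, operation: str, policy) -> str:
    """Route a request inward (or outward) hop by hop; `policy` is a callable
    (ring, next_ring, operation) -> bool enforced at every ring boundary."""
    ring = source_ring
    while ring != target_ring:
        next_ring = ring - 1 if target_ring < ring else ring + 1
        if not policy(ring, next_ring, operation):
            return "denied"
        ring = next_ring
    return "executed"
```

Forcing the request through intervening rings is what creates the enforcement points: an application in Ring 3 cannot call Ring 0 directly (`direct_call_allowed(3, 0)` is false), but a hop-by-hop request can be permitted or stopped at exactly the boundary that guards the sensitive operation.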
Reference Monitor
A reference monitor is an access control methodology where a reference
validation mechanism mediates the interaction of subjects, objects, and
operations. In a computer system architecture, a subject is either a process or a
user, and an object is an item on the system, typically in the form of a file or
socket. Subjects interact with objects via a set of operations. The reference
monitor is designed to mediate this interaction per a defined security policy. For
a reference validation mechanism to be a reference monitor, it must possess
three qualities:
• It must always be invoked, with no path around it. This is called complete mediation.
• It must be tamper-proof.
• It must be small enough to be verifiable.
Complete mediation is required or an attacker may simply bypass the
mechanism and avoid the security policy. Without tamper-proof characteristics,
an attacker can undermine the mechanism, forcing it to fail to act properly.
Reference monitors need to be verifiable, for this is where the trust is created.
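These three qualities can be made concrete with a short sketch (hypothetical code, not a real kernel mechanism). The object store is private to the monitor, so its single `access()` method approximates complete mediation, and the class is small enough to read in one sitting, which is where verifiability comes from.

```python
# Reference-monitor sketch: every subject/object/operation triple passes
# through access(), the only path to the object store. Illustrative only.

class ReferenceMonitor:
    def __init__(self, policy):
        self._policy = policy   # policy(subject, obj, operation) -> bool
        self._objects = {}      # reachable only through access()

    def add_object(self, name, value):
        self._objects[name] = value

    def access(self, subject, obj, operation):
        # Complete mediation: the dictionary is private, so there is no
        # path to an object that bypasses this policy check.
        if not self._policy(subject, obj, operation):
            raise PermissionError(f"{subject} may not {operation} {obj}")
        if operation == "read":
            return self._objects[obj]
        raise NotImplementedError(operation)
```

A policy such as `lambda s, o, op: s == "alice" and op == "read"` then lets alice read an object while any other subject or operation is refused at the single mediation point.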
Protected Objects
A protected object is one whose existence may be known but cannot be directly
interacted with. Specifically, any interaction must be done through a protected
subsystem. The protected subsystem is designed so that only specific procedures
may be called, and these are done in a manner that facilitates verification per
security policy. Much of security is managed by control, and protected objects are
controlled-access entities that permit the enforcement of specific rules. This
foundational concept, introduced in the mid-1970s, has become one of the
predominant computer security models.
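As a small, invented illustration, a protected object can be modeled as state that is reachable only through a protected subsystem's published procedure, which enforces the rule before acting:

```python
class ProtectedCounter:
    """Protected-object sketch: callers may know the counter exists, but may
    interact with it only through the subsystem's published procedures."""

    def __init__(self):
        self.__value = 0   # name-mangled; not part of the published interface

    def increment(self, amount: int) -> int:
        # The sanctioned procedure enforces the rule before touching state.
        if amount <= 0:
            raise ValueError("policy: counter may only move forward")
        self.__value += amount
        return self.__value

    def report(self) -> int:
        return self.__value
```

Python's name mangling is only a convention, so this models the idea rather than enforcing it; a real protected subsystem relies on hardware or operating system isolation to make the private state genuinely unreachable.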
Trusted Computing Base
The term trusted computing base (TCB) is used to describe the combination of
hardware and software components that are employed to ensure security. The
Orange Book, which is part of the U.S. government’s Trusted Computer System
Evaluation Criteria (TCSEC), provides a formal definition for TCB:
The totality of protection mechanisms within it, including hardware, firmware, and
software, the combination of which is responsible for enforcing a computer security
policy.
The trusted computing base should not be confused with trustworthy
computing or trusted computing. Trusted computing base is an idea that predates
both of these other terms and goes to the core of how a computer functions. The
kernel and reference monitor functions are part of the TCB, as these elements are
the instantiation of the security policy at the device level. Functions that operate
above the TCB level—applications, for instance—are not part of the TCB and can
become compromised without affecting the TCB itself.
Trusted Platform Module
The Trusted Platform Module (TPM) is an implementation of specifications
detailing secure cryptostorage on a chip. The current version is TPM 1.2, rev. 116,
and it is detailed in ISO/IEC 11889. The purpose of the device is to provide for
secure storage of cryptographic keys and platform authentication. Bound to the
BIOS and available to the OS, the objective is to enable a secure storage method of
keys for virtually any encryption technology. Although recent attacks have
demonstrated that the keys can be obtained from the TPM chip, these are
specialized attacks that require physical access and large capital investments in
equipment. Even then, as the attack involves physically destroying the chip, it is
not a guarantee that the protected data can be compromised.
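The sealing concept can be simulated in miniature. The code below is emphatically not the TPM API; it is a toy model in which a secret is bound to a platform measurement (a stand-in for a PCR value) and released only when the current measurement matches the one it was sealed against.

```python
# Toy simulation of TPM-style sealing. In a real TPM the chip secret never
# leaves the hardware; here it is an ordinary variable for illustration.
import hashlib
import hmac

def measure(boot_components):
    """Stand-in for a PCR value: a running hash over the measured boot chain."""
    digest = hashlib.sha256()
    for component in boot_components:
        digest.update(component.encode())
    return digest.digest()

def seal(secret, measurement, chip_secret):
    """Bind release of `secret` to a specific platform state."""
    tag = hmac.new(chip_secret, measurement, hashlib.sha256).digest()
    return secret, tag

def unseal(sealed, current_measurement, chip_secret):
    """Release the secret only if the platform state is unchanged."""
    secret, tag = sealed
    expected = hmac.new(chip_secret, current_measurement, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("platform state changed; refusing to unseal")
    return secret
```

If any measured boot component changes (a tampered loader, for example), the measurement no longer matches and the unseal is refused, which is the behavior that makes TPM-backed disk encryption keys resistant to offline tampering.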
Microsoft Trustworthy Computing Initiative
The Microsoft Trustworthy Computing Initiative is a company-wide effort to
address concerns over security and privacy. From a white paper in 2002,
Microsoft CTO Craig Mundie established four pillars of the company’s
Trustworthy Computing Initiative. In the years since, Microsoft has internalized
these objectives into all of their processes and products. The four key pillars are
security, privacy, reliability, and business integrity. Security was labeled as the
first pillar, signifying its importance going forward. But this pillar did not just
view security as a technical item, but included the social dimension as well.
Including privacy as a pillar signified to the customer base that privacy is
important to the entire computing ecosystem. Reliability was defined broadly to
include not just whether a system was functioning or not, but whether it could
function in hostile or nonoptimal situations. The pillar of business integrity was
designed to tie it all together to show responsiveness and transparency. Without
this last pillar, the previous pillars could be covered over or ignored.
Acquisition
Software is not always created as a series of greenfield exercises, but rather, it is
typically created by combining existing elements, building systems by connecting
separate modules. Not all software elements will be created by the development
team. Acquisition of software components has security implications, and those
are covered in detail in Chapter 20. But acquisition is an important component
that has connections throughout the lifecycle, so what follows is a brief overview
of how this topic fits into the CSSLP discussion.
Definitions and Terminology
Acquisition has its own set of terms used throughout this technical/legal
discipline, but a couple of them stand out in the secure software environment.
The first and most prevalent is commercial off-the-shelf (COTS) software. This
term describes an element that is readily available for purchase and integration
into a system. A counterpart to this is government off-the-shelf (GOTS) software.
This term refers to software that is specifically developed for government use.
GOTS tends to be more specialized and have higher costs per unit, as the user
base is significantly smaller.
Build vs. Buy Decision
Software acquisition can be accomplished in two manners, either by building it
or buying it. This results in a build vs. buy decision. In today’s modular world of
software, the line between build and buy is blurred, as some elements may be
built and some purchased. Many of today’s applications involve integrating
elements such as databases, business logic, communication elements, and user
interfaces. Some elements, such as database software, are best purchased,
whereas mission-critical core activities involving proprietary business
information are generally best developed in-house. One of the key elements in
successful integration is the degree of fit between software and the existing
business processes, ensuring requirements include both the business process
perspective and generic features and functions.
Outsourcing
Software development is an expensive undertaking. The process to develop good
software is complex, the skill levels needed can be high, and every aspect seems
to lead to higher costs. These cost structures, plus the easily transported nature of
software, make outsourcing of development a real possibility. Wages for
developers vary across the globe, and highly skilled programmers in other
countries can be used for a fraction of the cost of local talent. In the late 1990s,
there was a widespread movement to offshore development efforts. A lot was
learned in the early days of outsourcing. Much of the total cost of development
was in elements other than the coders, and much of these costs could not be
lowered by shipping development to a cheaper group of coders based on wages alone.
The geographic separation leads to greater management challenges and costs.
Having developers separate from the business team adds to the complexity,
learning curves, and cost. The level of tacit knowledge and emergent
understanding that is common on development teams becomes more challenging
when part of the team is separated by geography. So, in the end, outsourcing can
make sense, but just like build vs. buy decisions, the devil is in understanding the
details—their costs and benefits.
Contractual Terms and Service Level Agreements
Contractual terms and service level agreements are used to establish expectations
with respect to future performance. Contractual terms when purchasing software
should include references to security controls or standards that are expected to
be implemented. Specific ISO standards or NIST standards that are desired by a
supplier should be included in these mechanisms to ensure clear communication
of expectations. Service level agreements can include acceptance criteria that
software is expected to pass prior to integration.
Chapter Review
This chapter began with an examination of the different types of regulations
associated with secure software development. These regulations drive many
facets of the development process, from requiring specific requirements up front
in the process to reporting requirements once software is in operation.
Controlling the activities of an organization are the policies and procedures used
to drive and guide daily activities. The chapter explored how security policies
impact secure development practices in the organization. Many of these policies
address issues to manage the legal impacts of intellectual property development
and the legal ramifications associated with operation of systems associated with
protected data items. A discussion of protected data items and the roles of
security and privacy associated with the development process was presented.
Standards act as guiding elements, providing coordinating information
associated with complex interlocking systems. The role that various security
standards play in secure development was presented. All of these
elements exist in a framework that enables process improvement and
management, and the secure lifecycle process is presented in the next chapter. To
prepare the reader for the specific framework associated with secure
development, a series of supporting frameworks was presented in this chapter.
The chapter ended with a discussion of the role that acquisition plays in the
process of secure software development. Examining the build vs. buy decision,
coupled with outsourcing and contractual elements, provides information on
securing elements not built in-house.
Quick Tips
• Regulations and compliance form the basis of many security efforts.
• FISMA is the federal law governing information security for government systems
in the United States.
• Sarbanes-Oxley dictates internal controls for public firms in the United States.
• HIPAA and the HITECH Act govern information security with respect to medical
records in the United States.
• PCI DSS is a set of standards that applies to the credit card industry, including
merchants and processors that handle cardholder data.
trade secrets.
• Privacy is the principle of controlling information about one’s self.
• Personally identifiable information (PII) should be protected in systems at all times.
• There are numerous standards from NIST and ISO applicable to software security.
• There are a wide variety of frameworks covering both process and product
security that can be employed in the development effort.
• Common process frameworks include COBIT, ITIL, CMMI, and SDLC.
• Trusted computing is a set of technologies designed to improve computer security.
• Computer security models such as the ring model, reference monitor, and
protected objects provide concepts to implement security.
• Software acquisition can have an effect on system security, with procurement
and contractual implications.
Questions
To further help you prepare for the CSSLP exam, and to provide you with a feel
for your level of preparedness, answer the following questions and then check
your answers against the list of correct answers found at the end of the chapter.
1. The primary governing law for federal computer systems is:
B. Sarbanes-Oxley
D. Gramm-Leach-Bliley
2. Which of the following is a security standard associated with the collection,
processing, and storing of credit card data?
A. Gramm-Leach-Bliley
3. To protect a novel or nonobvious tangible item that will be sold to the public, one
can use which of the following?
A. Patent
B. Trademark
C. Trade secret
D. Licensing
4. The organization responsible for the Top Ten list of web application
vulnerabilities is:
C. Microsoft
5. When using customer data as test data for production testing, what process is
used to ensure privacy?
A. Data anonymization
B. Delinking
C. Safe Harbor principles
D. Data disambiguation
6. Which of the following is not a common PII element?
A. Full name
B. Order number
C. IP address
D. Date of birth
7. Which of the following is an important element in preventing a data breach
when backup tapes are lost in transit?
A. Service level agreements with a backup storage company
B. Use of split tapes to separate records
C. Proprietary backup systems
D. Data encryption
8. To interface data sharing between U.S. and European firms, one would invoke:
A. GDPR principles
B. Safe Harbor principles
C. Onward transfer protocol
D. Data protection regulation
9. Which standard is characterized by Target of Evaluation and Security Targets?
A. ISO 9126 Software Quality Assurance
B. ISO 15288 Systems and Software Engineering
C. ISO 2700X series
D. ISO 15408 Common Criteria
10. Which of the following are mandatory for use in federal systems?
A. NIST SP 800 series
D. ITL security bulletins
11. Which of the following is not a framework to improve IT operations?
12. The third level of the CMMI model is called:
A. Quantified
B. Managed
C. Defined
D. Optimizing
13. Reference monitors must possess all of the following properties except:
A. Efficient
B. Complete mediation
C. Tamper-proof
D. Verifiable
14. HIPAA and HITECH specify protection of which of the following?
15. The foundations of privacy include the following items:
A. Notice, choice, security
B. Nonrepudiation, notice, integrity
C. Enforcement, onward transfer, verifiable
D. Impact factor, security, access
Answers
1. C. The Federal Information Security Management Act of 2002 (FISMA) is a federal
law that requires each federal agency to implement an agency-wide information
security program.
2. B. The PCI DSS is the governing document that details the contractual
requirements for members that accept and process bank cards.
3. A. Patents are used to protect intellectual property that is disclosed in use.
4. D. One of OWASP’s products is the Top Ten list of the most critical web
application security risks.
5. A. Anonymizing the data, stripping it of customer PII, is part of the test data
management process.
6. B. Order numbers cannot be correlated to other PII elements, making them non-PII.
7. D. Encrypted data is no longer useful data, but simply ones and zeros.
8. A. GDPR principles apply to all EU data. The Safe Harbor principles that allowed
the harmonization of U.S. and EU privacy rules no longer are sufficient.
9. D. The Common Criteria has TOE, ST, and PP as elements.
10. B. Federal Information Processing Standards (FIPS) are mandatory requirements
for federal systems.
11. D. OWASP is an organization dedicated to improving web application security.
12. C. The levels for CMMI are 1 – Initial, 2 – Managed, 3 – Defined, 4 – Quantitatively
Managed, and 5 – Optimizing.
13. A. Reference monitors need to exhibit complete mediation, be tamper-proof, and
be verifiable.
14. A. HIPAA and HITECH are both concerned with personal health information (PHI).
15. A. Foundational privacy elements are notice, choice, onward transfer, security,
data integrity, access, and enforcement.
