Posted: August 1st, 2022

PLEASE DO NOT SUBMIT A BID IF YOU DO NOT HAVE EXPERIENCE WITH GRADUATE-LEVEL WRITING. ALL INSTRUCTIONS MUST BE FOLLOWED, AND NO PLAGIARISM. USE THE SOURCES INCLUDED, AND ANSWER ALL QUESTIONS IN THE DISCUSSION OR ASSIGNMENT.

HUM5210 Week 6 Assignment (attachments: Chapter 15, Chapter 16, Chapter 20)


Week 6 – Assignment

Proposal for a Volunteer Program

Imagine that you have recently accepted a position as the new Volunteer Administrator at Difference Today Nonprofit (hypothetical). Because of your expertise, you have been hired to establish the volunteer program.

Develop a proposal describing your volunteer program and how you see yourself working as the Volunteer Administrator for Difference Today Nonprofit. This is an organization struggling with recruiting, retaining, and coaching volunteers. The organization is well-funded by a for-profit company and has experienced great success, but has never established a strong volunteer program. You are the first Volunteer Administrator that the organization has hired.

Your role will be to assess how to:

1. Create and manage the volunteer program.

2. Prepare the organization prior to launching the program.

3. Develop a strategy for recruiting and retaining volunteers.

4. Train and develop volunteers.

5. Develop policies and procedures for the volunteer program.

6. Evaluate the effectiveness of the volunteer program.

Your volunteer program proposal should include a section corresponding to each of the above six issues. Support your recommendations in each section, where appropriate, with research or examples from scholarly and credible professional sources.

In addition, since you are responsible for developing the comprehensive program to be rolled out in the next six months, a timeline for important functions and milestones should be included in its own section of the proposal. It will help to think in terms of what is in the best interest of Difference Today Nonprofit, the volunteers, and the support staff. 

Your paper should be 2,800-3,500 words (8-10 pages) in length, not including the title page, abstract, and reference sections. You should use and cite a minimum of 10 scholarly and credible professional sources in support of your proposal. All sections of your paper, including references, must follow APA guidelines.

Resources

Required References

Connors, T. D. (2011). Wiley nonprofit law, finance and management series: Volunteer management handbook: Leadership strategies for success (2nd ed.). Hoboken, NJ: John Wiley & Sons. ISBN-13: 9780470604533.

Chapter 15: Evaluating the Volunteer Program
Chapter 16: Evaluating Impact of Volunteer Programs

Rosenthal, R. J., & Baldwin, G. (2015). Volunteer engagement 2.0: Ideas and insights changing the world. Somerset, NJ: John Wiley & Sons. ISBN-13: 9781118931882. Found in the University of Arizona Global Campus ebrary.

Chapter 20: Measuring the Volunteer Program

Recommended References

Buote, D. (2013, November 13). What is program evaluation?: A brief introduction [Video file].

Centers for Disease Control and Prevention. (2016, November 17). A framework for program evaluation [Web page]. Retrieved from https://www.cdc.gov/eval/framework/index.htm

Required References

Lee, Y., Won, D., & Bang, H. (2014). Why do event volunteers return? Theory of planned behavior. International Review on Public and Non-Profit Marketing, 11(3), 229-241. doi:10.1007/s12208-014-0117-0

NCVO Knowhow Nonprofit. (2015, December 15). Volunteer policies [Web page]. Retrieved from https://knowhownonprofit.org/people/volunteers/keeping/policy

Pitney, N. (2013). Safeguarding volunteers with effective risk management. Retrieved from https://nonprofitquarterly.org/safeguarding-volunteers-with-effective-risk-management/

United States Department of Labor. (n.d.). Fair Labor Standards Act Advisor. Retrieved from http://webapps.dol.gov/elaws/whd/flsa/docs/volunteers.asp

Volunteer Protection Act of 1997, 42 U.S.C. § 14501 (1997, May 19). Retrieved from https://www.gpo.gov/fdsys/pkg/CRPT-105hrpt101/pdf/CRPT-105hrpt101-pt1

Recommended References

Agovino, T. (2016). The giving generation. HR Magazine, 61(7), 36-38, 40, 42, 44. Found in the University of Arizona Global Campus ebrary.

Alfes, K., Shantz, A., & Saksida, T. (2015). Committed to whom? Unraveling how relational job design influences volunteers’ turnover intentions and time spent volunteering. Voluntas: International Journal of Voluntary & Nonprofit Organizations, 26(6), 2479-2499. doi:10.1007/s11266-014-9526-2

Dunn, J., Chambers, S. K., & Hyde, M. K. (2016). Systematic review of motives for episodic volunteering. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 27(1), 425-464. doi:10.1007/s11266-015-9548-4

Elias, J. K., Paulomi, S., & Seema, M. (2016). Long-term engagement in formal volunteering and well-being: An exploratory Indian study. Behavioral Sciences, 6(4), 20. doi:10.3390/bs6040020

Groble, P., & Brudney, J. L. (2015). When good intentions go wrong: Immunity under the Volunteer Protection Act. Nonprofit Policy Forum, 6(1), 3-24. doi:10.1515/npf-2014-0001

Kolar, D., Skilton, S., & Judge, L. W. (2016). Human resource management with a volunteer workforce. Journal of Facility Planning, Design, and Management, 4(1). doi:10.18666/JFPDM-2016-V4-I1-7300

Manetti, G., Bellucci, M., Como, E., & Bagnoli, L. (2015). Investing in volunteering: Measuring social returns of volunteer recruitment, training and management. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 26(5), 2104-2129. doi:10.1007/s11266-014-9497-3

Mind Tools. (n.d.). SWOT analysis: Discover new opportunities, manage and eliminate threats [Web page]. Retrieved from http://www.mindtools.com/pages/article/newTMC_05.htm

Nesbit, R., Rimes, H., Christensen, R. K., & Brudney, J. L. (2016). Inadvertent volunteer managers: Exploring perceptions of volunteer managers’ and volunteers’ roles in the public workplace. Review of Public Personnel Administration, 36(2), 164-187. doi:10.1177/0734371X15576409

Pynes, J. E. (2013). Training and career development. In Human resources management for public and nonprofit organizations: A strategic approach (4th ed., pp. 275-302). Somerset, NJ: Jossey-Bass. ISBN-13: 9781118398623. Found in the University of Arizona Global Campus ebrary.

Riddle, R. (2016, November 14). 5 deadly sins of recruiting volunteers [Blog post]. Retrieved from http://blogs.volunteermatch.org/engagingvolunteers/2016/11/14/5-deadly-sins-of-recruiting-volunteers/

Scott, C. L. (2016). 7 reasons nonprofit organizations have trouble recruiting volunteers [Video file].

Sellon, A. (2014). Recruiting and retaining older adults in volunteer programs: Best practices and next steps. Ageing International, 39(4), 421-437.

Studer, S. (2016). Volunteer management: Responding to the uniqueness of volunteers. Nonprofit and Voluntary Sector Quarterly, 45(4), 688-714. doi:10.1177/0899764015597786

Stukas, A. A., Snyder, M., & Clary, E. G. (2016). Understanding and encouraging volunteerism and community involvement. The Journal of Social Psychology, 156(3), 243-255. doi:10.1080/00224545.2016.1153328

CHAPTER 16

Evaluating Impact of Volunteer Programs

R. Dale Safrit, EdD
North Carolina State University

This chapter introduces and defines the closely related concepts of evaluation, impact, and accountability, especially as applied to volunteer programs. The author discusses four fundamental questions that guide the development and implementation of an impact evaluation and subsequent accountability of a volunteer program.

Evaluation in Volunteer Programs

The concept of evaluation as applied to volunteer programs is not new. As early as 1968, Creech suggested a set of criteria for evaluating a volunteer program and concluded, “Evaluation, then, includes listening to our critics, to the people around us, to experts, to scientists, to volunteers so that we may get the whole truth [about our programs]” (p. 2). This approach to evaluation was well ahead of its time: until the past decade, authors within our profession either addressed the evaluation of holistic volunteer programs only superficially (e.g., Brudney, 1999; Naylor, 1976; O’Connell, 1976; Stenzel & Feeney, 1968; Wilson, 1979) or not at all (e.g., Naylor, 1973; Wilson, 1981). Even in the first edition of this text, fewer than four total pages were dedicated to the topic of evaluation, within chapters dedicated to other traditional volunteer program management topics, including recruiting and retaining volunteers (Bradner, 1995), training volunteers (Lulewicz, 1995), supervising volunteers (Brudney, 1995; Stepputat, 1995), improving paid staff and volunteer relations (Macduff, 1995), monitoring the operations of employee volunteer programs (Seel, 1995), involving board members (Graff, 1995), and determining a volunteer program’s success (Stepputat, 1995).

However, for volunteer programs operating in contemporary society, evaluation is a critical, if not the most critical, component of managing an overall volunteer program and subsequently documenting the impacts and ultimate value of the program to the target clientele it is designed to serve, as well as the larger society in which it operates. As early as 1982, Austin et al. concluded that “Only through evaluation can [nonprofit] agencies make their programs credible to funding agencies and government authorities” (p. 10). In 1994, Korngold and Voudouris suggested the evaluation of impact on the larger community as one phase of evaluating an employee volunteer program.

Connors, T. D. (Ed.). (2011). The volunteer management handbook: Leadership strategies for success. John Wiley & Sons, Incorporated. Created from ashford-ebooks on 2022-05-20 10:20:43. Copyright © 2011 John Wiley & Sons, Incorporated. All rights reserved.

The critical role of volunteer program impact evaluation in holistic volunteer management became very apparent during the final decade of the twentieth century, and continues today (Council for Certification in Volunteer Administration, 2008; Merrill & Safrit, 2000; Safrit & Schmiesing, 2005; Safrit, Schmiesing, Gliem, & Gliem, 2005). While most volunteer managers understand and believe in evaluation, they most often have focused their efforts on evaluating the performance of individual volunteers and their contributions to the total program and/or organization. In this sense, evaluation has served an important managerial function in human resource development, the results of which are usually known only to the volunteer and volunteer manager. As Morley, Vinson, and Hatry (2001) noted:

Nonprofit organizations are more often familiar with monitoring and reporting such information as: the number of clients served; the quantity of services, programs, or activities provided; the number of volunteers or volunteer hours contributed; and the amount of donations received. These are important data, but they do not help nonprofit managers or constituents understand how well they are helping their clients. (p. 5)

However, as nonprofit organizations began to face simultaneous situations of stagnant or decreasing public funding and increasing demand for stronger accountability of how limited funds were being used, volunteer program impact evaluation moved from a human resource management context to an organizational development and survival context. The volunteer administration profession began to recognize the shifting attitudes toward evaluation, and in the early 1980s the former Association for Volunteer Administration (AVA) defined a new competency fundamental to the profession as “the ability to monitor and evaluate total program results . . . [and] demonstrate the ability to document program results” (as cited in Fisher & Cole, 1993, pp. 187, 188). Administrators and managers of volunteer-based programs were increasingly called on to measure, document, and dollarize the impact of their programs on clientele served, and not just the performance of individual volunteers and the activities they contribute (Safrit & Schmiesing, 2002; Safrit, Schmiesing, King, Villard, & Wells, 2003; Schmiesing & Safrit, 2007). This intensive demand for greater accountability initially arose from program funders (public and private) but quickly escalated to include government, the taxpaying public, and even the volunteers themselves. As early as 1993, Taylor and Sumariwalla noted:

Increasing competition for tax as well as contributed dollars and scarce resources prompt donors and funders to ask once again: What good did the donation produce? What difference did the foundation grant or United Way allocation make in the lives of those affected by the service funded? (p. 95)

According to Safrit (2010, p. 316), “The pressure on nonprofit organizations to evaluate the impact of volunteer-based programs has not abated during the first decade of the new [21st] century, and if anything has grown stronger.” With regard to overall volunteer management, evaluation continues to play an important role in the human resource management of individual volunteers; most volunteer managers are very familiar and comfortable with this aspect of evaluation in volunteer programs. However, today’s volunteer managers are less knowledgeable, skilled, and comfortable with the concept of impact evaluation as only the first (if important) step in measuring, documenting, and communicating the effects of a volunteer program immediately on the target clientele served by the organization’s volunteers, and ultimately on the surrounding community.

A Symbiotic Relationship: Evaluation, Impact, and Accountability

In the overwhelming majority of both nonformal workshops and formal courses I have taught, participants will inevitably use three terms almost interchangeably in our discussions of evaluating volunteer programs. The three concepts are symbiotically linked and synergistically critical to contemporary volunteer programs, yet they are not synonymous. The three terms are evaluation, impact, and accountability.

Evaluation

Very simply stated, evaluation means measurement. We “evaluate” in all aspects of our daily lives, whether it involves measuring (evaluating) the outside temperature to determine if we need to wear a coat to work, measuring (evaluating) the current balance in our checking account to see if we can afford to buy a new piece of technology, or measuring (evaluating) the fiscal climate in our workplace to decide if it is a good time to ask our supervisor for a salary increase. However, for volunteer programs, “evaluation involves measuring a targeted program’s inputs, processes, and outcomes so as to assess the program’s efficiency of operations and/or effectiveness in impacting the program’s targeted clientele group” (Safrit, 2010, p. 318).

The dual focus of this definition on a volunteer program’s efficiency and effectiveness is supported by contemporary evaluation literature. Daponte (2008) defined evaluation as being “done to examine whether a program or policy causes a change; assists with continuous programmatic improvement and introspection” (p. 157). Royse, Thyer, and Padgett (2010) focused on evaluation as “a form of appraisal . . . that examines the processes or outcomes of an organization that exists to fulfill some social need” (p. 12). These definitions each recognize the important role of evaluation in monitoring the operational aspects of a volunteer program (i.e., inputs and processes), yet ultimately emphasize the program’s ultimate purpose of engaging volunteers to help bring about positive changes in the lives of the program’s targeted audience (i.e., outcomes). These positive changes are called impacts.

Impact

Contrary to popular belief, volunteer programs do not exist for the primary purpose of engaging volunteers merely to give the volunteers something to do, or for supplying an organization with unpaid staff to help expand its mission and purpose. Rather, volunteer programs ultimately seek to bring about positive impacts in the lives of the targeted clientele the volunteers are trained to support, either directly (through direct service to individual clients) or indirectly (through direct service to the service-providing organization). The latter statement does nothing to discount or demean the critical involvement of volunteers, but instead challenges a volunteer manager to continually focus and refocus the engagement of volunteers on the ultimate mission of the sponsoring organization and the outcomes it seeks to bring about. In other words, it forces volunteer managers to identify and focus on the volunteer program’s desired impacts.

According to Safrit (2010):

Impact may be considered the ultimate effects and changes that a volunteer-based program has brought about upon those involved with the program (i.e., its stakeholders), including the program’s targeted clientele and their surrounding neighborhoods and communities, as well as the volunteer organization itself and its paid and volunteer staff. (p. 319)

This inclusionary definition of impact focuses primarily on the organization’s raison d’être, and secondarily on the organization itself and its volunteers. Thus, it parallels and complements nicely the earlier definition of evaluation as being targeted first toward the volunteer program’s targeted clientele, and second on internal processes and operations. Subsequently, volunteer managers must constantly measure the ultimate outcomes of volunteer programs or, stated more formally, evaluate the volunteer program’s impacts. However, merely evaluating a volunteer program’s impacts is not in itself a guarantee of the program’s continued success and/or survival; however positive, the knowledge gained by evaluating a volunteer program’s impacts is practically meaningless unless it is strategically communicated to key leaders and decision makers connected to the sponsoring organization.

Accountability

Accountability within volunteer programs involves the strategic communication of the most important impacts of a volunteer program, identified through an evaluation process, to targeted program stakeholders both internal and external to the organization. Internal stakeholders would include paid staff, organizational administrators, board members, volunteers, and the clientele served; external stakeholders include funders and donors, professional peers, government agencies and other legitimizers, and the larger community in which the organization operates.

Boone (1985) was the first author to describe the critical role of accountability in educational programs and organizations, and the previous definition is based largely on that of Boone, Safrit, and Jones (2002). Unfortunately, volunteer managers are sometimes hesitant to share program impacts even when they have identified them through an effective evaluation; they often consider such strong accountability as being boastful or too aggressive. However, accountability is the third and final concept critically linking the previous concepts of evaluation and impact to a volunteer program’s or organization’s continued survival. Volunteer managers must accept the professional responsibility in our contemporary impact-focused society to proactively plan for targeted accountability, identifying specific key stakeholders and deciding what specific program impacts each stakeholder type wants to know. This targeted approach to volunteer program accountability will be discussed in more detail later in this chapter.

Four Fundamental Questions in Any Volunteer Program Impact Evaluation

Evaluation is a relatively young concept within the educational world; Ralph Tyler (1949) is often credited with coining the term itself, evaluation, to refer to the alignment of measurement and testing with educational objectives. And there is no dearth in the literature of various approaches and models for program evaluation. Some models are more conceptual and focus on the various processes involved in evaluation (e.g., Fetterman, 1996; Kirkpatrick, 1959; Rossi & Freeman, 1993; Stufflebeam, 1987), while others are more pragmatic in their focus (e.g., Combs & Faletta, 2000; Holden & Zimmerman, 2009; Patton, 2008). However, for volunteer managers with myriad professional responsibilities in addition to but including volunteer program evaluation, I suggest the following four fundamental questions that should guide any planned evaluation of a volunteer-based program.

Question 1: Why Do I Need to Evaluate the Volunteer Program?

Not every volunteer program needs to be evaluated. This may at first appear to be a heretical statement coming from the author of a chapter about volunteer program evaluation, and theoretically it is. Pragmatically, however, it is not. Many volunteer programs are short term by design, or are planned to be implemented one time only. In contrast, some volunteer programs are inherent in the daily operations of a volunteer organization, or are so embedded within the organization’s mission that they are invisible to all but organizational staff and administrators. Within these contexts, a volunteer manager must decide whether the evaluation of such a program warrants the required expenditure of time and human and material resources. Furthermore, one cannot (notice that I did not say, may not) evaluate any volunteer program for which there are no measurable program objectives. This aspect of Question 1 brings us again to the previous discussion of volunteer program impacts: What is it that the volunteer program seeks to accomplish within its targeted clientele? What ultimate impact is the volunteers’ engagement designed to facilitate or accomplish?

Any and all volunteer program impact evaluations must be based on the measurable program objectives targeted to the program’s clientele (Safrit, 2010). Such measurable program objectives are much more detailed than the program’s mere goal, and define key aspects of the program’s design, operations, and ultimate outcomes. A measurable program objective must include each of the following five critical elements:

1. What is the specific group, or who are the specific individuals, that the volunteer program is targeted to serve (i.e., the program’s clientele)?

2. What specific program activities will be used to interact with the targeted clientele group (i.e., the intervention that involves volunteers)?


3. What specific change is the intervention designed to bring about within the targeted clientele group (i.e., program outcome or impact)?

4. What level of change or success does the program seek to achieve?

5. How will the intervention’s success be evaluated?

As an example, too often I encounter the following types of volunteer program objectives:

• “We will recruit at least 50 new teen volunteers to help with the new Prevent Youth Obesity Now Program.”

• “At least 100 individuals will participate in the volunteer-delivered Career Fundamentals Program.”

• “Organizational volunteers will contribute a minimum of 1,000 total volunteer hours mentoring adults who cannot read and/or write.”

Now consider the same objectives correctly written as measurable program objectives, with their components:

• “As a result of the teen-volunteer-staffed Prevent Youth Obesity Now summer day camp, at least 50% of the participating 200 overweight youth will adopt and maintain at least one new proper nutrition practice, as reported by their parents in a six-month follow-up mailed questionnaire.” (Target audience: 200 overweight youth. Planned intervention: teen-volunteer-staffed summer day camp. Desired change among target audience: adoption of at least one new proper nutrition practice. Desired level of success: 50% of participating youth. How success will be evaluated: six-month post-camp questionnaire mailed to participants’ parents.)

• “At least 50% of currently unemployed participants in the six-week Career Fundamentals Program taught by volunteers will be able to describe one new workplace skill they learned as a result of the program, as measured by a volunteer-delivered interview during the final Program session.” (Target audience: unemployed individuals. Planned intervention: volunteer-taught workshop sessions. Desired change among target audience: learning new workplace skills. Desired level of success: 50% of participants. How success will be evaluated: exit interview conducted by a volunteer.)

• “At least 30% of the adults participating in the six-week literacy volunteer mentoring program will improve their reading skills by ten percentile points, as measured by a standardized reading test administered at the first and final sessions.” (Target audience: illiterate adults. Planned intervention: volunteer mentoring program. Desired change among target audience: improved reading skills. Desired level of success: 30% of participants. How success will be evaluated: standardized reading tests.)
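For volunteer managers who track objectives in software, the five critical elements map naturally onto a simple record type. The sketch below is purely illustrative and not part of the chapter; the class name, field names, and `is_measurable` check are my own, encoding the literacy-program example above:

```python
from dataclasses import dataclass, fields

@dataclass
class MeasurableObjective:
    """One record per program objective; fields mirror the five critical elements."""
    target_audience: str     # element 1: who the program serves
    intervention: str        # element 2: the volunteer-delivered activity
    desired_change: str      # element 3: the outcome/impact sought
    success_level: str       # element 4: the level of change targeted
    evaluation_method: str   # element 5: how success will be measured

    def is_measurable(self) -> bool:
        # An objective is measurable only when all five elements are stated.
        return all(getattr(self, f.name).strip() for f in fields(self))

literacy = MeasurableObjective(
    target_audience="adults in the six-week literacy mentoring program",
    intervention="volunteer mentoring program",
    desired_change="reading skills improved by ten percentile points",
    success_level="at least 30% of participants",
    evaluation_method="standardized reading test at the first and final sessions",
)
print(literacy.is_measurable())  # → True
```

A record that leaves any element blank (for instance, no evaluation method) fails the check, mirroring the chapter's point that an objective without all five elements cannot be evaluated.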

A final aspect of Question 1 involves the use of “logic” models in evaluating volunteer programs, so called because they seek to outline and follow the logical development and implementation of a program or intervention from its conception through to its targeted long-term impact. Logic models are not new to volunteer programs (Honer, 1982; Safrit & Merrill, 1998, 2005) and apply four standard components to program development and impact evaluation (Bennett & Rockwell, 1994; Frechtling, 2007; W.K. Kellogg Foundation, 2000):

1. Inputs. Actual and in-kind resources and contributions devoted to the project.

2. Activities. All activities and events conducted or undertaken so as to achieve the program’s identified goal.

3. Outputs. Immediate, short-term services, events, and products that document the implementation of the project.

4. Outcomes. The desired long-term changes achieved as a result of the project.

Unfortunately, space does not allow for an in-depth discussion of the use of logic models in evaluating volunteer program impacts. However, Exhibit 16.1 illustrates the application of logic modeling in a volunteer-delivered program designed to decrease overweight and/or obesity among teens. Note the strong correlation between the program’s measurable program objectives and the Outcomes component for the volunteer program.

EXHIBIT 16.1 Sample Logic Model for a Volunteer Program Focused on Decreasing Teen Obesity

Inputs:
• $350 in nutrition curricula purchased
• $750 for use of the day camp facility (in-kind)
• 10 members of the program advisory committee
• 12 adult volunteers working with the program
• Program coordinator devoted 3 workweeks (120 hours) to planning and implementing the program

Activities:
• Three 2-hour meetings conducted of the program advisory committee
• Three 3-hour volunteer training sessions conducted

Outputs:
• At least 30 teens who are clinically obese will participate in the 3-day, 21-hour program
• At least 10 adult volunteers will serve during the actual day camp
• Program advisory committee members will volunteer to teach program topics to participants during the day camp

Outcomes:
• At least 80% of teen participants will increase their knowledge of proper nutrition and/or the importance of exercise along with diet, as evaluated using a pre/post-test survey
• At least 70% of teen participants will demonstrate new skills in preparing healthy snacks and meals, as evaluated by direct observation by program volunteers
• At least 50% of teen participants will aspire to eat more nutritious meals and to exercise daily, as indicated by a post-test survey

Source: © 2009 R. Dale Safrit. All Rights Reserved.
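For readers who maintain logic models electronically, the exhibit's four columns can also be captured as plain data and checked for completeness before an evaluation is planned. This is an illustrative sketch only; the key names and the `is_complete` helper are my own, and the entries abbreviate the exhibit's:

```python
# The four standard logic-model components as plain data; entries are
# abbreviated from the chapter's teen-obesity example (Exhibit 16.1).
logic_model = {
    "inputs": [
        "$350 in nutrition curricula purchased",
        "$750 day camp facility (in-kind)",
        "12 adult volunteers",
    ],
    "activities": [
        "three 2-hour advisory committee meetings",
        "three 3-hour volunteer training sessions",
    ],
    "outputs": [
        "at least 30 clinically obese teens participate in the 3-day program",
    ],
    "outcomes": [
        "at least 80% of teens increase nutrition knowledge (pre/post-test)",
    ],
}

REQUIRED_COMPONENTS = ("inputs", "activities", "outputs", "outcomes")

def is_complete(model: dict) -> bool:
    # A usable logic model names all four components, each with at least one entry.
    return all(model.get(component) for component in REQUIRED_COMPONENTS)

print(is_complete(logic_model))  # → True
```

Keeping the model as data makes it easy to spot a program that has listed inputs and activities but never articulated outcomes, which is exactly the gap Question 1 is meant to expose.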


Question 2: How Will I Collect the Required Impact Evaluation Data?

Once targeted impacts have been identified for a volunteer program, thus answering
Question 1 as to why the program is to be evaluated, a volunteer manager must next
decide on the actual methods to be used to collect the evaluation data. If measurable
program objectives have been developed, then Question 2 is easily answered. However,
the evaluation component of a measurable program objective is often the final one to
be decided, simply because the other four components tend naturally to preempt it
during the conceptual development of a volunteer program evaluation. Furthermore,
data collection methods may largely be defined and/or constrained by the type of
program intervention and/or the numbers and type of target audience (i.e., data
collection methods will naturally differ between one-on-one and mass-audience-delivered
volunteer programs, adult and youth audiences, etc.).

Basically, two types of data (and thus data collection methods) exist: qualitative
and quantitative. Thomas (2003) provides a very fundamental description of both:

The simplest way to distinguish between qualitative and quantitative may be to
say that qualitative methods involve a researcher describing kinds of character-
istics of people and events without comparing events in terms of measurements or
amounts. Quantitative methods, on the other hand, focus attention on measure-
ments and amounts (more and less, larger and smaller, often and seldom, simi-
lar and different) of the characteristics displayed by the people and events that
the researcher studies. (p. 1)

Both types of data are important in documenting the impact of volunteer
programs. According to Safrit (2010):

Within non-academic contexts (including volunteer programs), quantitative
methods are most commonly used in program evaluations. Quantitative methods
allow the evaluator to describe and compare phenomena and observations in
numeric terms. Their predominance may largely be due to the increasing de-
mand for “number-based evidence” as accountability within nonprofit programs
and organizations. However, qualitative methods may also be used very effec-
tively in volunteer program impact evaluations. Qualitative methods focus upon
using words to describe evaluation participants’ reactions, beliefs, attitudes, and
feelings and are often used to put a “human touch” on impersonal number
scores and statistics. (p. 333)

The discussion is not necessarily qualitative-versus-quantitative; rather, a volun-
teer manager needs to once again consider critical factors affecting the program’s im-
pact evaluation such as the purpose of the evaluation; possible time constraints;
human and material (including financial) resources available; to whom the evaluation
is targeted; etc.

There is a wide array of qualitative methods available for a volunteer manager to
utilize in evaluating impacts of a volunteer program (Bamberger, Rugh, & Mabry,
2006; Dean, 1994; Krueger & Casey, 2000; Miles & Huberman, 1994; Thomas, 2003;
Wells, Safrit, Schmiesing, & Villard, 2000), including (but not limited to) case studies,

396 Evaluating Impact of Volunteer Programs


ethnographies, content analysis, participant observation, and experienced narratives. Of
these, however, Spaulding (2008) suggested that the case study approach using par-
ticipant interviews and focus groups to collect data is by far the most common qualita-
tive method used with volunteer programs. Again, space limitations do not allow for
an in-depth discussion of these methods. (For a more in-depth discussion of using
case studies with volunteer programs, see Safrit, 2010.) However, the author suggests
that qualitative methods are most appropriate in evaluating volunteer programs that
are targeted to a relatively small group of clientele, for whom a few, focused practice
or behavioral skills and/or changes are the desired program impact. Qualitative evaluation
methods require considerably more time and human resources to conduct
properly, and data should be collected by well-trained individuals who conduct indi-
vidual interviews and/or focus groups. Qualitative methods are most effective when
the desired targeted accountability is focused on personal/human interest and affec-
tive/emotional impacts of the volunteer program.

However, when volunteer programs are designed to reach large numbers of tar-
geted clientele and seek to impact their knowledge and/or attitudes, quantitative
methods are probably more appropriate for the volunteer program impact evaluation.
Unfortunately, in today’s society demanding increased accountability, volunteer orga-
nizations are called on all too often to reach ever-increasing numbers of targeted cli-
ents with stagnant or decreasing resources, and then to dollarize the program’s
impacts on clients. Quantitative methods are also easier to analyze and summarize,
and are best when it is important or necessary to translate measured program
impacts into dollar amounts that are required by funders and legitimizers.

Consequently, quantitative evaluation methods are overwhelmingly the most
prevalent approach to collecting volunteer program impact data, and the most common
quantitative methods used are survey designs using questionnaires to collect
data. According to Safrit (2010):

Translated into volunteer program terms…conducting a survey to evaluate [vol-
unteer] program impact involves: identifying the volunteer program of interest;
identifying all program clientele who have participated in the program and se-
lecting participants for the evaluation; developing a survey instrument (question-
naire) to collect data; collecting the data; and analyzing the data so as to reach
conclusions about program impact. (pp. 336–337)

When using surveys to evaluate volunteer program impacts, there are important
considerations to be made by the volunteer manager regarding participant selection,
instrumentation, and data collection and analysis procedures (Dillman, Smyth, &
Christian, 2008). Safrit (2010) provides an in-depth discussion of each consideration
that space limitations prohibit in this chapter. However, the prevalence today of
personal computers, data analysis software designed for non-statisticians,
“survey-design-for-dummies” texts, and very affordable do-it-yourself web-based
questionnaire services makes it much easier for a volunteer manager with only a
fundamental background in quantitative evaluation methods to plan, design, and
conduct a valid and reliable survey-design quantitative evaluation of a volunteer
program, using a face-to-face, mailed, e-mailed, or web-based questionnaire to
collect impact data from targeted clientele.
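Behind a measurable objective such as “at least 80% of participants will increase their knowledge,” the pre/post questionnaire data reduce to a simple comparison of paired scores. A minimal sketch (the score lists below are invented for illustration):

```python
# Hypothetical paired pre/post knowledge scores for ten program participants.
pre_scores  = [12, 15, 9, 14, 11, 16, 10, 13, 12, 15]
post_scores = [18, 17, 14, 14, 16, 19, 15, 17, 13, 20]

# Share of participants whose knowledge score increased after the program.
increased = sum(1 for pre, post in zip(pre_scores, post_scores) if post > pre)
pct_increased = 100 * increased / len(pre_scores)

print(f"{pct_increased:.0f}% of participants increased their knowledge")  # 90%

# Compare the measured impact against the objective's 80% target.
met_objective = pct_increased >= 80
```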


The author must point out that in some situations, neither qualitative nor quantita-
tive methods alone are adequate to collect the type of data necessary to document
impacts of large volunteer programs with multiple measurable program objectives tar-
geted to a diverse program clientele. In such situations, the volunteer manager may
best decide to use some qualitative approaches together with some quantitative
approaches, or rather, a mixed methods approach (Creswell, 1994). According to
Safrit (2010):

The most common type of mixed method approach to impact evaluations of vol-
unteer programs involves a “two-phase design” in which the evaluator first uses
qualitative methods in a first phase to identify and describe key themes describing
a volunteer program’s impacts upon clientele, and subsequently quantitative
methods to be able to quantify and compare the intensity and pervasiveness of
the impacts among the clientele. (p. 340)
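The two-phase design described above can be sketched as: qualitative interviews yield candidate impact themes, which then become quantitative survey items whose intensity and pervasiveness are measured across the clientele. A schematic example (the themes and ratings are invented for illustration):

```python
# Phase 1 (qualitative): impact themes identified from interview and
# focus group transcripts.
themes = ["eats healthier snacks", "exercises with family", "reads nutrition labels"]

# Phase 2 (quantitative): each client rates how strongly each theme
# applies to them on a 1-5 scale.
ratings = {
    "eats healthier snacks":  [5, 4, 4, 3, 5],
    "exercises with family":  [2, 3, 2, 4, 3],
    "reads nutrition labels": [4, 4, 5, 4, 3],
}

# Quantify each theme's intensity (mean rating) and pervasiveness
# (share of clients rating it 4 or higher).
for theme in themes:
    scores = ratings[theme]
    intensity = sum(scores) / len(scores)
    pervasiveness = 100 * sum(1 for s in scores if s >= 4) / len(scores)
    print(f"{theme}: mean {intensity:.1f}, {pervasiveness:.0f}% rate it 4+")
```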

Question 3: Who Wants (or Needs) to Know What About the Evaluation Findings?

In response to Question 3, most volunteer managers would quickly answer, “I want
everyone to know everything about all of our volunteer programs.” However, the
stark reality is that different stakeholders will want (and need) to know different
aspects of any volunteer program evaluation. Some stakeholders are extremely
busy, and only have minimal time to review a volunteer program evaluation report.
Others will have a very focused interest in the program, especially if they have con-
tributed materials and/or resources and are therefore most interested in the bottom
line. Volunteers themselves, on the other hand, may be less concerned about the
financial aspects of the program in which they volunteer, and may be more con-
cerned about exactly what difference they have made in the lives of the clientele
they have served.

To help volunteer managers answer this question in an objective and realistic
manner, Safrit (2010) developed a program accountability matrix (Exhibit 16.2). In the
matrix, specific types of internal and external volunteer program stakeholders are
listed in the far left column, and the standard types of evaluation data based on logic
modeling (i.e., inputs, activities, outputs, and outcomes) are listed across the top of
the matrix. According to the author:

To use the Matrix, the [volunteer manager] simply answers the following question
for each type of evaluation information, for each type of stakeholder: “If time and
resources are limited, does this stakeholder really want to know this type of evalu-
ation information?” If the answer is “yes,” then the VRM simply places a mark [X]
in the cell where the specific stakeholder and evaluation information intercept; if
the answer is “no,” then the cell is left empty. The caveat for developing an effec-
tive Accountability Matrix is that the [volunteer manager] must be brutally honest
and frank in responding to the question for each stakeholder group and each type
of evaluation information; s/he must recognize and manage the previously de-
scribed bias that everyone wants to know everything about a specific volunteer
program. (p. 342)


Once the matrix is completed for the specified volunteer program, the volunteer
manager simply looks at the column totals for each potential type of evaluation data.
Those columns (i.e., data types) with the highest totals should therefore be the priorit-
ies for the volunteer manager to focus on in the impact evaluation, especially when
time and resources are limited. This targeted approach to volunteer program account-
ability serves as a very useful tool in not only deciding to whom to communicate im-
pacts at the end of a volunteer-based program, but in also deciding what needs to be
evaluated about the volunteer program even before it is initiated.
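The column-total logic of the accountability matrix is mechanical enough to sketch in code. A minimal illustration, assuming the completed matrix is stored as a mapping from stakeholder to the data types marked [X] (the marks below are invented for illustration, not the completed matrix of Exhibit 16.2):

```python
# Accountability matrix: which logic-model data types each stakeholder wants.
# (Stakeholders and marks here are illustrative placeholders.)
matrix = {
    "Volunteer Manager":  {"Inputs", "Activities", "Outputs", "Outcomes"},
    "Program Director":   {"Inputs", "Activities", "Outputs", "Outcomes"},
    "Board of Directors": {"Outcomes"},
    "Program Funders":    {"Inputs", "Outcomes"},
    "Program Clientele":  {"Outputs", "Outcomes"},
}

# Column totals identify which data types to prioritize when time and
# resources are limited.
columns = ["Inputs", "Activities", "Outputs", "Outcomes"]
totals = {col: sum(col in wants for wants in matrix.values()) for col in columns}
priorities = sorted(columns, key=totals.get, reverse=True)

print(totals)         # {'Inputs': 3, 'Activities': 2, 'Outputs': 3, 'Outcomes': 5}
print(priorities[0])  # the highest-priority data type to evaluate
```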

MONETIZING IMPACTS Once targeted volunteer program impacts have been identified
and evaluated, volunteer managers are often faced with a seemingly impossible chal-
lenge—converting measured volunteer program impacts into monetary values. Again,
this challenge is a fiscal reality encountered ever more frequently today as a result
of increased pressure (or demands) from funders, government agencies, and other
legitimizers. However, it is made much easier by (1) having measurable program
objectives identified for the volunteer program from its inception, (2) the use

EXHIBIT 16.2 Sample Completed Program Accountability Matrix for a Volunteer Program
Focused on Decreasing Teen Obesity

Type of Volunteer Program Stakeholder: If time and resources are limited, does this
stakeholder really want to know specifics about the volunteer program’s…
(Inputs / Activities / Outputs / Outcomes)

Internal Stakeholders
- Volunteer Manager: X X X X
- Program Director: X X X X
- Organization’s Executive Director: X X
- Organization’s Board of Directors: X
- Program Volunteers: X X
- Other Stakeholder? (Advisory Committee Members): X X

External Stakeholders
- Program Clientele: X X
- Program Funders: X X
- Program Collaborators: X
- Community Leaders: X
- Government Leaders (County Commissioners): X X
- Other Stakeholders? (County Health Department): X X

TOTALS: Inputs 4, Activities 3, Outputs 7, Outcomes 11

Source: © 2009 R. Dale Safrit. All Rights Reserved.


of logic modeling, and (3) having completed a realistic accountability matrix. And as
Key (1994) noted, there may be existing sources of volunteer program benefits other
than the impact evaluation’s findings:

There are numerous potential sources of data for analysis of program benefits:
1. Existing records and statistics kept by the agency, legislative committees, or
agency watchdogs . . .; 2. Feedback from the program’s clients…obtained
through a questionnaire or focus group; 3. Ratings by trained observers; 4. Expe-
rience of other governments or private or nonprofit organizations; and 5. Special
data gathering. (p. 470)

Furthermore, the idea that volunteer program outputs and outcomes (impacts)
can be converted into monetary values is not new, since Karn proposed a system to
do so as early as 1982. Most recently, Anderson and Zimmerman (2003) identified five
methods that could be used to place a dollar estimate on volunteers’ time:

1. Using an average wage of all wage earners in the geographical area served by the
volunteer program

2. Using a wage earned by a professional who does paid work comparable to the
service contributed by the volunteer

3. Using the standard average dollar value of an hour of a volunteer’s time that is
calculated and published annually by the Independent Sector (2010)

4. Using the living wage that the United States federal government calculates as that
needed for an individual to maintain a standard of living above the current poverty
level

5. Using the local or state minimum wage that any employer must pay an employee
as dictated by law
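Each of the five methods reduces to the same arithmetic: contributed hours multiplied by a different hourly rate, so the choice of method can change the reported value substantially. A short sketch (all rates below are invented placeholders, not published figures):

```python
# Hypothetical hourly rates under the five valuation methods (placeholder figures).
rates = {
    "area average wage":        18.50,
    "comparable professional":  27.00,
    "Independent Sector value": 21.00,
    "federal living wage":      15.00,
    "state minimum wage":       7.25,
}

# Hours contributed by volunteers, e.g. total adult-volunteer hours at a day camp.
volunteer_hours = 357

# Dollar value of the same contribution under each method.
for method, rate in rates.items():
    print(f"{method}: ${volunteer_hours * rate:,.2f}")
```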

Exhibit 16.3 illustrates how the volunteer manager for a volunteer-based teen
obesity program calculated estimated dollar amounts for the logic model components
of the program.

COMPARING COSTS AND BENEFITS IN VOLUNTEER PROGRAMS Once program inputs, ac-
tivities, outputs, and outcomes (impacts) have all been translated into dollar amounts,
a volunteer manager may take the program’s impact evaluation to a final and highest
level of accountability by borrowing one or more of three powerful statistics from the
field of applied economics: cost savings analysis (CSA), benefit-cost analysis (BCA),
and/or return on investment (ROI).

CSA is the estimated dollar value of any potential volunteer program costs that
were not required to be spent as a direct result of volunteer involvement in the pro-
gram. In the teen obesity program example (Exhibit 16.3), volunteers saved the pro-
gram and sponsoring organization an estimated $8,602. BCA is the calculated
estimated ratio comparing the net benefits of the volunteer program to the total costs
of conducting the program (Key, 1994; Moore, 1978; Royse et al., 2010). A BCA of 1
(written as 1:1) indicates that the value of the volunteer program’s benefits equaled
the value of the program’s total costs, whereas a BCA of 2 (written as 2:1) indicates
that for each $1.00 in program costs, $2.00 were realized in program benefits. For the


EXHIBIT 16.3 Examples of Converting Metrics in a Volunteer Program Focused on
Decreasing Teen Obesity into Dollar Amounts

Inputs:
- $350 (nutrition curricula purchased)
- $750 (use of the day camp facility; in-kind)
- $3,600 (total costs of day camp supplies, meals, snacks, equipment, etc.)

Activities:
- $1,800 (volunteer manager’s salary devoted to program; 3 workweeks = 120 hours for
  planning and implementing the program @ $15.00/hr. salary rate)
- $414 (volunteer manager’s work benefits calculated as 23% of salary)
- $1,170 (program advisory committee members’ time; three 2-hour meetings conducted
  for 10 members @ $6.50/hr. minimum wage¹)
- $4,095 (cost of volunteers’ time for training; three 3-hour volunteer training
  sessions for 21 volunteers @ $6.50/hr. minimum wage¹)
- $1,404 (costs of 36 participating obese teens’ parents’ time to transport
  participants to day camp; 2 hrs./day @ 3 days @ 36 parents @ $6.50/hr. minimum
  wage¹)

Outputs:
- $2,320.50 (21 adult volunteers contributed 357 total hours during the actual day
  camp @ $6.50/hr. minimum wage¹)
- $266.50 (eight program advisory committee members volunteered 41 total hours to
  teach program topics during the day camp @ $6.50/hr. minimum wage¹)

Outcomes:
- 84% (n = 30) of teen participants increased their knowledge of proper nutrition
  and/or the importance of exercise along with diet, as evaluated using a
  pre-/posttest survey
- 75% (n = 27) of teen participants demonstrated new skills in preparing healthy
  snacks and meals, as evaluated by direct observation by program volunteers
- 54% (n = 19) of teen participants aspired to eat more nutritious meals and to
  exercise daily, as indicated by a posttest survey
- If, as a result of the program, a mere 10% of obese teens who demonstrated new
  skills in preparing healthy snacks and meals (n = 3) had to make one fewer visit to
  a doctor each year for the next 5 years, at an average doctor’s visit cost of $120,
  then the program would have saved a minimum of $1,800 in medical costs.
- If, as a result of the program, a mere 10% of obese teens who aspired to eat more
  nutritious meals and exercise regularly (n = 2) did not develop Type II diabetes,
  at a cost to society of $2.5 million as estimated by the State Dept. of Health
  Services, then the program would have saved a minimum of $5.0 million.

Total Estimated Program Costs: $13,583 (Inputs $4,700 + Activities $8,883)
Total Estimated Program Benefits: $5,002,587 (Outputs $2,587 + Outcomes)

Estimated CSA: $8,602
Estimated BCA: 368:1
Estimated ROI: 3,673%

¹Note: This $6.50 figure is used by the author for illustrative purposes only. As of
July 24, 2009, the official U.S. federal hourly minimum wage was increased to $7.25.

Source: © 2009 R. Dale Safrit. All Rights Reserved.

teen obesity program, the total costs were estimated at $13,583 and total program ben-
efits were estimated at $5,002,587, resulting in an astounding BCA of 368:1; that is, for
every $1.00 spent on the obesity program, an estimated $368 was generated in pro-
gram benefits.

The ultimate impact accountability statistic that a volunteer manager may calcu-
late is ROI for a volunteer program. According to Key (1994), ROI “is the discount rate
that would make the present value of the [volunteer] project equal to zero” (p. 461).
ROI is the percentage resulting from subtracting a program’s total costs from its total
benefits, dividing that figure by the total costs, and multiplying that figure by 100 (J. J.
Phillips, 2003; P. P. Phillips, 2002; P. P. Phillips & J. J. Phillips, 2005). Thus, in the obe-
sity program example, the ROI is $4,989,004 (i.e., $5,002,587 in total benefits minus
$13,583 in total costs) divided by the total costs of $13,583, and finally multiplied by
100, resulting in 3,673%. Thus, for every $1.00 invested in the volunteer-delivered
program, a net monetary value of $3,673 was generated that benefited the total
community.
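The three statistics follow directly from a program’s total costs and total measured benefits. A minimal sketch of the formulas (the cost and benefit figures below are invented round numbers, not the teen obesity program’s totals):

```python
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """BCA: dollars of benefit generated per dollar of program cost."""
    return benefits / costs

def return_on_investment(benefits: float, costs: float) -> float:
    """ROI: net benefits (benefits minus costs) as a percentage of costs."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $10,000 total costs, $30,000 total measured benefits.
costs, benefits = 10_000, 30_000
print(f"BCA: {benefit_cost_ratio(benefits, costs):.0f}:1")   # 3:1
print(f"ROI: {return_on_investment(benefits, costs):.0f}%")  # 200%
```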

Question 4: How Do I Communicate the Evaluation Findings?

This fourth and final question may appear to be the easiest to answer, but still requires
thoughtful consideration by a volunteer manager. Going back to the discussion of the
program accountability matrix, specific volunteer program stakeholders may require
very specific types of evaluation findings reports, and unfortunately, one size may not
fit all! Some may desire a thorough and comprehensive final report describing the vol-
unteer program in detail and all program impacts measured; others may simply wish
to see an executive summary of key program impacts related directly to their program
involvement. Hendricks (1994) concluded:

If a tree falls in the forest and no one hears it, does it make a sound? If an evalua-
tion report falls on someone’s desk and no one reads it, does it make a splash?
None whatsoever, yet we evaluators still rely too often on long tomes filled with
jargon to “report” our evaluation results. (p. 549)

Safrit (2010) identified three important aspects regarding the accountability function
as related directly to Question 4 for a volunteer manager to consider in deciding how to
communicate the findings of a volunteer program impact evaluation to targeted stake-
holders. First, the volunteer manager must identify the specific recipient of the commu-
nication. This has, of course, been addressed by Question 3. Secondly, the volunteer
manager must identify the specific message to be communicated. Again, this has been
decided by answering Question 1. Together, both of these aspects have been identified
more specifically in the completed program accountability matrix.

The third aspect of accountability, however, is for the volunteer manager to identify
the specific format for the evaluation impact report, and the specific medium to be
used to communicate the report. The most common format and medium used to com-
municate the findings of a volunteer program impact evaluation for accountability pur-
poses is a written final report. Typical components of such a final report include an
introduction to the volunteer program, a description of the methods used to evaluate
the program’s impacts, the evaluation findings, and a thorough discussion of the


findings’ implications for the targeted clientele served, the sponsoring organization,
and the larger community (Royse et al., 2010).

However, other written report formats may better serve some targeted stake-
holders. Executive summaries are short (i.e., 2 to 4 pages) annotated compilations
of information contained in the larger final report, highlighting only the most impor-
tant aspects of the volunteer program and its evaluation findings of particular inter-
est to the executive summary’s targeted audience. Even more concise are impact
statements or fact sheets that present impact evaluation data and findings in tabular
or visual formats, much like Exhibit 16.3 for the teen obesity program. And finally, in
today’s multimedia, 24/7 world, the volunteer manager should not overlook oppor-
tunities to communicate the impact evaluation findings of a volunteer program in
any combination of written and visual formats that may be posted to the organiza-
tion’s web page or streamed onto the Internet. Whatever the reporting format,
Patton’s (2008, p. 509) recommendations for making the evaluation’s accountability
report as user-focused and user-friendly as possible should be considered carefully
by a volunteer manager:

1. “Be intentional about reporting…know the purpose of the report and stay true to
that purpose.”

2. “Stay user-focused: Focus the report on the priorities of primary intended users.”
   These first two recommendations have been addressed previously in this chapter,
   but they emphasize the importance of a volunteer manager:
   - Identifying specific target stakeholders for a volunteer program evaluation
   - Identifying which aspect of the logic model each stakeholder type will
     specifically want to know
   - Collecting appropriate data to support that aspect
   - Reporting the findings in a format desired by the stakeholder group

3. “Organize and present the findings so as to facilitate understanding and
interpretation.”

This recommendation points to the prior discussion of the need for a volun-
teer manager to customize the actual report into a format preferred by a specific
stakeholder group. Again, one style (and one format) does not fit all stakeholders.
Most stakeholders will prefer a written report, but today’s technological advances
make multimedia options readily available as well.

4. “Avoid surprising primary stakeholders.”
No one likes a surprise, but of course, a positive surprise is more readily accepted
than a negative surprise. Begin the accountability report of any volunteer
program with the most positive and important impact evaluation findings, and
then address “areas for improvement” or “findings of some concern.” However,
as an evaluator, a volunteer manager has an ethical responsibility to communicate
all appropriate evaluation findings, and not to exclude any findings that may
make the volunteer manager or program administrator uncomfortable.

5. “Prepare users to engage with and learn from ‘negative’ findings.”
If necessary, present any negative findings one-on-one with key stakeholders,
asking them for their reactions, insights, and/or opinions, before surprising
them in a large-group formal session or meeting. Then incorporate this input into
the final version of the impact evaluation report.


6. “Distinguish dissemination from use.”
Sharing the findings of a volunteer program impact evaluation with key stakeholders
is a critical component of the accountability responsibility of a volunteer
manager. However, the goal should be to move stakeholders beyond a mere dis-
cussion of what went well and what went wrong, to a higher level of discussion,
one that focuses the impact evaluation findings on strengthening the volunteer pro-
gram in ways and areas that better serve the clientele the program is designed to
target—and that better fulfill the organization’s mission and purpose.

References

Anderson, P. A., & Zimmerman, M. E. (2003). Dollar value of volunteer time: A review
of five estimation methods. Journal of Volunteer Administration, 21 (2), 39–44.

Austin, M. J., Cox, G., Gottlieb, N., Hawkins, J. D., Kruzich, J. M., & Rauch, R. (1982).
Evaluating your agency’s programs. Newbury Park, CA: Sage.

Bamberger, M., Rugh, J., & Mabry, L. (2006). Real world evaluation: Working under
budget, time, data, and political constraints. Thousand Oaks, CA: Sage.

Bennett, C., & Rockwell, K. (1994, December). Targeting outcomes of programs
(TOP): An integrated approach to planning and evaluation. Retrieved from
http://citnews.unl.edu/TOP/english/

Boone, E. J. (1985). Developing programs in adult education. Englewood Cliffs, NJ:
Prentice-Hall.

Boone, E. J., Safrit, R. D., & Jones, J. M. (2002). Developing programs in adult educa-
tion (2nd ed.). Prospect Heights, IL: Waveland Press.

Bradner, J. H. (1995). Recruitment, orientation, and training. In T. D. Connors (Ed.), The
volunteer management handbook (pp. 61–81). New York, NY: John Wiley & Sons.

Brudney, J. L. (1995). Preparing the organization for volunteers. In T. D. Connors
(Ed.), The volunteer management handbook (pp. 36–60). New York, NY: John
Wiley & Sons.

Brudney, J. L. (1999, Autumn). The effective use of volunteers: Best practices for the
public sector. Law and Contemporary Problems, 219, 219–253.

Combs, W. L., & Falletta, S. V. (2000). The targeted evaluation process. Alexandria, VA:
American Society for Training & Development.

Council for Certification in Volunteer Administration. (2008). Body of knowledge in
volunteer administration. Retrieved from www.cvacert.org/certification.htm

Creech, R. B. (1968). Let’s measure up! A set of criteria for evaluating a volunteer
program. Volunteer Administration, 2(4), 1–18.

Creswell, J. W. (1994). Research design: Qualitative & quantitative approaches. Thou-
sand Oaks, CA: Sage.

Daponte, B. O. (2008). Evaluation essentials: Methods for conducting sound research.
San Francisco, CA: Jossey-Bass.

Dean, D. L. (1994). How to use focus groups. In J. S. Wholey, H. P. Hatry, & K. E.
Newcomer (Eds.), Handbook of practical program evaluation (pp. 338–349). San
Francisco, CA: Jossey-Bass.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode
surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons.

References 405

Connors, T. D. (Ed.). (2011). The volunteer management handbook : Leadership strategies for success. John Wiley & Sons, Incorporated.
Created from ashford-ebooks on 2022-05-20 10:20:43.
Copyright © 2011. John Wiley & Sons, Incorporated. All rights reserved.


Fetterman, D. M. (1996). Empowerment evaluation. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment & accountability (pp. 3–46). Thousand Oaks, CA: Sage.

Fisher, J. C., & Cole, K. M. (1993). Leadership and management of volunteer programs. San Francisco, CA: Jossey-Bass.

Frechtling, J. A. (2007). Logic modeling methods in program evaluation. San Francisco, CA: John Wiley & Sons.

Graff, L. L. (1995). Policies for volunteer programs. In T. D. Connors (Ed.), The volunteer management handbook (pp. 125–155). New York, NY: John Wiley & Sons.

Hendricks, M. (1994). Making a splash: Reporting evaluation results effectively. In J. S.
Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program
evaluation (pp. 549–575). San Francisco, CA: Jossey-Bass.

Holden, D. J., & Zimmerman, M. A. (2009). A practical guide to program evaluation:
Theory and case examples. Los Angeles, CA: Sage.

Honer, A. S. (1982). Manage your measurements, don’t let them manage you! Volun-
teer Administration, 14(4), 25–29.

Independent Sector. (2010). Value of volunteer time. Retrieved from www.independentsector.org/volunteer_time

Karn, G. N. (1982). Money talks: A guide to establishing the true dollar value of
volunteer time. Journal of Volunteer Administration, 1(2), 1–17.

Key, J. E. (1994). Benefit-cost analysis in program evaluation. In J. S. Wholey, H. P.
Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation
(pp. 456–488). San Francisco, CA: Jossey-Bass.

Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society for Training and Development, 13(11–12), 23–32.

Korngold, A., & Voudouris, E. (1994). Business volunteerism: Designing your program
for impact. Cleveland, OH: Business Volunteerism Council.

Krueger, R., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.

Lulewicz, S. J. (1995). Training and development of volunteers. In T. D. Connors (Ed.),
The volunteer management handbook (pp. 82–102). New York, NY: John Wiley
& Sons.

Macduff, N. (1995). Volunteer and staff relations. In T. D. Connors (Ed.), The volunteer
management handbook (pp. 206–221). New York, NY: John Wiley & Sons.

Merrill, M., & Safrit, R. D. (2000, October). Bridging program development and impact evaluation? Proceedings of the 2000 International Conference on Volunteer Administration (p. 63). Phoenix, AZ: Association for Volunteer Administration.

Miles, M. B., & Huberman, A. B. (1994). Qualitative data analysis: A sourcebook of
new methods. Beverly Hills, CA: Sage.

Moore, N. A. (1978). The application of cost-benefits analysis to volunteer programs.
Volunteer Administration, 11(1), 13–22.

Morley, E., Vinson, E., & Hatry, H. P. (2001). Outcome measurement in nonprofit
organizations: Current practices and recommendations. Washington, DC:
Independent Sector.

Naylor, H. H. (1973). Volunteers today: Finding, training and working with them.
Dryden, NY: Dryden Associates.

Naylor, H. H. (1976). Leadership for volunteering. Dryden, NY: Dryden Associates.

406 Evaluating Impact of Volunteer Programs


O’Connell, B. (1976). Effective leadership in voluntary organizations. Chicago,
IL: Follett.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Los Angeles, CA: Sage.

Phillips, J. J. (2003). Return on investment in training and performance improvement programs (2nd ed.). Amsterdam, Netherlands: Butterworth Heinemann.

Phillips, P. P. (2002). The bottom line on ROI: Basics, benefits, & barriers to measuring training & performance improvement. Atlanta, GA: CEP Press.

Phillips, P. P., & Phillips, J. J. (2005). Return on investment: ROI basics. Alexandria, VA: American Society for Training & Development.

Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach. Newbury Park, CA: Sage.

Royse, D., Thyer, B. A., & Padgett, D. K. (2010). Program evaluation: An introduction (5th ed.). Belmont, CA: Wadsworth.

Safrit, R. D. (2010). Evaluation and outcome measurement. In K. Seel (Ed.), Volunteer administration: Professional practice (pp. 313–361). Markham, ON: LexisNexis Canada.

Safrit, R. D., & Merrill, M. (1998). Assessing the impact of volunteer programs. Journal
of Volunteer Administration, 16(4), 5–10.

Safrit, R. D., & Merrill, M. (2005, Nov.). The seven habits of highly effective managers
of volunteers. Proceedings of the 10th International Association of Volunteer
Efforts (IAVE) Asia-Pacific Regional Volunteer Conference (p. 67). Hong Kong,
China: IAVE.

Safrit, R. D., & Schmiesing, R. J. (2002, October). Measuring the impact of a stipended
volunteer program: The Ohio 4-H B.R.I.D.G.E.S. experience. Proceedings of the
2002 International Conference on Volunteer Administration (p. 16). Denver,
CO: Association for Volunteer Administration.

Safrit, R. D., & Schmiesing, R. J. (2005). Volunteer administrators’ perceptions of the importance of and their current levels of competence with selected volunteer management competencies. Journal of Volunteer Administration, 23(2), 4–10.

Safrit, R. D., Schmiesing, R. J., Gliem, J. A., & Gliem, R. R. (2005). Core competencies for volunteer administration: An empirical model bridging theory with professional best practice. Journal of Volunteer Administration, 23(3), 5–15.

Safrit, R. D., Schmiesing, R., King, J. E., Villard, J., & Wells, B. (2003). Assessing the
impact of the three-year old Ohio Teen B.R.I.D.G.E.S. program. Journal of
Volunteer Administration, 21(2), 12–16.

Schmiesing, R., & Safrit, R. D. (2007). 4-H Youth Development professionals’ perceptions of the importance of and their current level of competence with selected volunteer management competencies. Journal of Extension, 45(3). Retrieved from www.joe.org/joe/2007June/rb1p.shtml

Seel, K. (1995). Managing corporate and employee volunteer programs. In T. D.
Connors (Ed.), The volunteer management handbook (pp. 259–289). New York,
NY: John Wiley & Sons.

Spaulding, D. T. (2008). Program evaluation in practice: Core concepts and examples
for discussion and analysis. San Francisco, CA: Jossey-Bass.

Stenzel, A. K., & Feeney, H. M. (1968). Volunteer training and development: A manual for community groups. New York, NY: Seabury Press.


Stepputat, A. (1995). Administration of volunteer programs. In T. D. Connors (Ed.),
The volunteer management handbook (pp. 156–186). New York, NY: John Wiley
& Sons.

Stufflebeam, D. L. (1987). The CIPP model for program evaluation. In G. F. Madaus, M. S. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: Views on educational and human services evaluation (pp. 117–141). Boston, MA: Kluwer-Nijhoff.

Taylor, M. E., & Sumariwalla, R. D. (1993). Evaluating nonprofit effectiveness: Overcoming the barriers. In D. R. Young, R. M. Hollister, & V. A. Hodgkinson (Eds.), Governing, leading, and managing nonprofit organizations (pp. 43–62). San Francisco, CA: Jossey-Bass.

Thomas, R. M. (2003). Blending qualitative & quantitative research methods in theses
and dissertations. Thousand Oaks, CA: Corwin Press.

Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago, IL:
University of Chicago Press.

Wells, B., Safrit, R. D., Schmiesing, R., & Villard, J. (2000, October). The power is in the people! Effectively using focus groups to document impact of volunteer programs. Proceedings of the 2000 International Conference on Volunteer Administration (p. 59). Phoenix, AZ: Association for Volunteer Administration.

Wilson, M. (1979). The effective management of volunteer programs. Boulder, CO:
Volunteer Management Associates.

Wilson, M. (1981). Survival skills for managers. Boulder, CO: Volunteer Management
Associates.

W.K. Kellogg Foundation. (2000). Logic model development guide. Battle Creek, MI:
Author.


CHAPTER 15

Evaluating the Volunteer Program
Contexts and Models

Jeffrey L. Brudney, PhD
Cleveland State University

Tamara G. Nezhina, PhD
DePaul University

The importance of volunteers and the effects of their donated services on the prospects of clients, organizations, society, and the volunteers themselves have become important matters of discussion and measurement. With resources in short supply and funders continually stressing organizational accountability for grants, contracts, and other financial support and the results achieved from these initiatives, the nature of evaluation of volunteer programs has changed over recent decades. Whereas, previously, counting the number of volunteers and the number of hours they contribute to an agency over a given period (such as a year) may have been considered sufficient “evaluation,” organizations have turned to more elaborate methods to assess and demonstrate the contributions of their volunteers. The implications for the field of volunteer administration are evident: Safrit (2010) shows that conducting evaluations has become an accepted (and expected) competency of volunteer resource managers (also known as volunteer administrators).

Volunteer administrators and their host organizations need to be concerned about evaluating volunteer programs to satisfy the information needs of various constituencies. These constituencies or so-called stakeholders are persons or groups who have a stake in, or a claim on, the program, whether perceived or actual. For example, as one of the most prominent stakeholders, funders are no longer content merely with an organization having volunteers onboard but wish to know the results or outcomes or even the long-term impact of their involvement. Another important set of stakeholders, board members, are interested in whether all organizational resources, including volunteers, have been put to good, if not best, use. Similarly, a third stakeholder group, organizational leadership, is eager to derive the most benefit from the volunteer program. At a more operational level, managers would like to make sure that volunteers are helping their departments, and the organization, achieve programmatic goals. For their part, volunteers may derive motivation from learning about the value of their efforts and the results they help to bring about for organizations and their clients.

Satisfying all of these stakeholders through the same evaluation of the volunteer
program is not easy, and perhaps not even feasible. Accordingly, in this chapter we
present an evaluation framework for assisting the volunteer resource manager with
understanding and conducting different types of evaluation based on stakeholder
involvement. Based on the evaluation literature, we then describe how volunteering
might be valued by host organizations, volunteers themselves, and agency clients.
We then present a logic model framework to guide the evaluation of a volunteer
program. We begin by considering the meaning and purpose of evaluation.

Defining Evaluation

Evaluation entails an assessment or judgment of the value or worth of an endeavor or
initiative (Carman & Fredericks, 2008; McDavid & Hawthorne, 2006; Wholey, Hatry, &
Newcomer, 2004). Fitzpatrick, Sanders, and Worthen (2004) argue that the purpose of
evaluation is to “render judgments about the value of whatever is being evaluated.”
These assessments or judgments of value can be put to different uses, but the central
purpose is to determine “the significance, the merit or worth” of something of interest
(Scriven, 1991). We adopt Scriven’s definition for our goal of providing a framework
for the evaluation of volunteer programs. By conducting an evaluation of volunteer
participation and contribution, the evaluator attempts to assess the worth or value of
volunteer efforts for various stakeholders.

Measuring volunteer value can be undertaken to meet the evaluation needs of various stakeholders. As other researchers have noted, clients, organizations, volunteers themselves, and the community are the recognized beneficiaries of volunteer contribution (Brown, 1999; Handy & Srinivasan, 2004; Quarter, Mook, & Richmond, 2003). We argue that the needs of stakeholders must dictate to evaluators the purpose(s) of evaluation, and the kinds of methods to be used for measuring volunteer value. Volunteer resource managers charged with conducting, or assisting, an evaluation of the volunteer program should be aware of the influence of these various constituencies in the evaluation.

Role of Stakeholders in Evaluation

Some evaluation scholars attribute great importance to stakeholders in defining the purpose of the evaluation and setting evaluation goals (Berk & Rossi, 1990; Fitzpatrick et al., 2004; Patton, 1997; Rossi, Lipsey, & Freeman, 2004; Rutman, 1984). Indeed, stakeholders may hold varying degrees of interest in knowing the value or effectiveness of volunteering for society, a particular organization, or a specific program. Therefore, before the evaluation begins, the volunteer resource manager (as evaluator) should determine the stakeholders most concerned about the evaluation and establish communication with them to identify their goals for this endeavor (Rossi et al., 2004). Of course, ready communication with stakeholders will almost certainly prove useful to the volunteer program for other purposes as well, such as promotion, support, and outreach. Involving interested stakeholders in the evaluation builds support for the evaluation process and commitment to the results. Patton (1997) contends that individuals, rather than organizations, use evaluation information. Thus, evaluation information should be targeted to specific persons or groups of identifiable persons or stakeholders, rather than to what was traditionally identified as the general “audience” for evaluation; he considers audiences rather amorphous and anonymous (Patton, 1997).

Rossi et al. (2004) provide a comprehensive listing of potential stakeholders in the evaluation process, including policymakers and decision makers, program sponsors, evaluation sponsors, target participants, program managers, program staff, program competitors, contextual stakeholders, and the evaluation and research community. They argue that these groups and individuals most often pay attention to evaluation. We refine this listing of stakeholders for the volunteer resource manager in the evaluation framework we present below.

Patton (1997) is similarly concerned about the utility of evaluation for the stakeholders. His focus is on the practical use of evaluation results. In one study, he found that among stakeholders, 78% of responding decision makers and 90% of responding evaluators felt that the evaluation had an impact on the program (Patton, 1997). Through consultation prior to undertaking the evaluation, the volunteer resource manager can facilitate this positive practical effect by learning about the purpose of the evaluation from the primary intended user(s) of the findings (Patton, 1997; Bingham & Felbinger, 2002). Thus, identifying specific stakeholder groups and understanding their goals for the evaluation are important to the evaluator for increasing potential application of the findings.

Unfortunately, though, when evaluators attempt to serve too many audiences (stakeholders), they rarely manage to serve all of them well (Horst, Nay, Scanlon, & Wholey, 1974). Horst et al. (1974) encourage program evaluators to identify those officials or managers who have a direct influence on program decisions and to design the evaluation goals based on their points of view. This approach is supported by Patton’s (1997) recommendations to identify the group of primary users and to focus on their intended use of the evaluation. Such an approach improves the prospects for the evaluation results to be utilized by program managers. When involved early in the process of designing the evaluation study, program managers feel ownership of the evaluation process and findings and are more likely to use them for program improvement (Patton, 1997; Posavac & Carey, 1992). Even so, the evaluation might, alternatively, be tailored to meet the information needs of other stakeholders of the volunteer program, such as funders, organizational leadership, or the broader society. Accordingly, the model we propose below for volunteer resource managers calls for the involvement of primary stakeholders early in the evaluation process.

A further advantage of involving stakeholders in the evaluation is that their participation can help to determine whether a volunteer program has reached a stage of maturity where it is ready to be evaluated; such an effort is sometimes termed an “evaluability” assessment (McDavid & Hawthorne, 2006; Rutman, 1984; Wholey et al., 2004). Undertaking a full-blown impact evaluation of a program that is too new, unstable, or resource-poor to achieve results is a waste of precious organizational time and energy; effective implementation of the program must occur first. Patton (1997) argues that the evaluator can facilitate the program’s readiness for evaluation by involving intended users in generating meaningful evaluation questions. This initial scrutiny and discussion among stakeholders and the evaluator can be very useful in reinforcing the need to support implementation and ongoing administration of the volunteer program, so that it would then be more capable of achieving its end goals. At that point, an evaluation of impact would be more appropriate.

Purposes of Evaluation

Following Rutman (1984), we want to understand the role of purpose in designing an evaluation of a volunteer program. Evaluation purpose shapes the evaluation design and helps to focus the results. The purpose of the evaluation is defined in consultation with program stakeholders, such as funders, board members, organizational leadership, managers, program evaluators, clients, and the volunteers themselves, each of whom may have different information needs. The volunteer resource manager as evaluator needs to work with relevant stakeholders to clarify the purpose, to design the evaluation process, and to help the stakeholders utilize the findings to their benefit (Patton, 1997; Patton & Patrizi, 2005; Posavac & Carey, 1992; Rossi et al., 2004). The evaluation literature suggests that evaluators need to pay attention to stakeholders’ perceptions and beliefs about the program so that they understand the stakeholder’s purpose and formulate specific evaluation questions aimed at the consumer’s need for certain information.

Identification of stakeholders, then, is the first step to elucidate goals and define the purpose of the evaluation. However, the determination of relevant stakeholders is often out of the hands of the volunteer resource manager. Funders may demand an evaluation of program impact as a condition of their financial allocation; board members may request an audit of volunteer activities; organizational leadership may solicit an analysis of the costs and benefits of volunteer involvement; operational managers may want to learn how to deploy volunteers more productively. More diffuse stakeholders may exert their preferences for the evaluation as well. For example, policymakers, the media, or the public (through individual citizen inquiries or through other stakeholders acting on their behalf) may inquire as to the benefit to the larger community of the activities of volunteers or the agency as a whole. In many nonprofit organizations, the volunteer administrator is charged with preparing a periodic evaluation (e.g., annually) of the volunteer program. This effort should take into account the information needs of the central stakeholders. Rehnborg, Barker, and Pocock (2006) encourage organizations first to clarify their purpose(s) for placing a value on volunteer efforts, and then to choose the appropriate evaluation method.

Evaluation literature suggests that evaluation of volunteers, like evaluation of other phenomena, should be conducted for the purpose defined by users of evaluation findings, and with approaches and techniques shaped by this purpose. Talmage (1982) defines three major purposes of evaluation information:

1. Meeting a political function
2. Assisting decision makers responsible for a policy or a program
3. Making judgments about the worth of the program


Her function-based approach shows considerable overlap among the functions. Rutman (1984) develops the purpose-based approach to program evaluation further. He also defines three purposes for evaluation: accountability, management, and knowledge. These three purposes are distinct and help to identify evaluation information users such as policymakers and organizations, managers, and the research community. Drawing on Chelimsky’s (1997) definition of evaluation purposes, which follows Rutman’s classification with some variation,1 Rossi et al. (2004) elaborate the list of users of the three types of evaluation. All of these approaches are very similar conceptually, suggesting political, knowledge, and organizational purposes for evaluation. We elaborate these purposes next.

Political Purpose

Following the evaluation purposes described by Talmage (1982), Rutman (1984), Chelimsky (1997), and Rossi et al. (2004), the first purpose for undertaking an evaluation is political. From this perspective, the external audience for evaluation of volunteer effort is society at large and political decision makers. These stakeholders want to know, for example, how to create and refine policies toward volunteering, how volunteer programs benefit society, and whether such policy instruments as a tax subsidy to the nonprofit sector are justified by the results of its activities.

Society at large evaluates volunteer involvement based on value judgments concerning volunteering as a social phenomenon. If volunteering is recognized as an inherent end goal in the society, the conclusion follows that “the more volunteers the better.” Yet, the value produced by volunteers for the benefit of the society is hard to measure in economic terms because it is neither bought nor sold in the marketplace. To the contrary, it is given, which renders its price beyond economic or monetary value. Goods produced by volunteers surpass market price for comparable goods because these goods are infused with value added, such as good intentions; they are given wholeheartedly, which makes them “priceless.” They are in sharp contrast to goods and services sold in the market for the purpose of gaining profit. At the societal level, then, learning about the aggregate volume and value of volunteering for society, and how to stimulate them, are important evaluation purposes.

Knowledge Purpose

According to Rutman (1984), a second purpose for conducting an evaluation of volunteers is the generation of basic knowledge about the value of volunteer contributions. This general knowledge is produced by those who study the phenomenon and seek to understand, explain, and predict its future development. The knowledge produced may not be of immediate use, but it contributes to better understanding of volunteering and its role in personal life, the community, society at large, and the economy.

1 Chelimsky (1997) and Rossi et al. (2004) identify the same three major purposes for evaluation as Rutman (1984), but they use different terminology to describe them. For example, Rutman defines one purpose for evaluation as managerial, and Chelimsky and Rossi et al. define the same purpose as program improvement. In Rutman, the last purpose of evaluation that is not mentioned in our discussion is defined as the covert purpose. Rossi et al. also identify the same purpose as hidden agendas.

Economic approaches to valuing volunteers can be considered knowledge-related because they add to our knowledge of volunteering from a novel viewpoint. Yet, Graff (2005) and Smith and Ellis (2003) advise researchers to be cautious about relying exclusively on economic measurement, such as dollar valuation, to assess volunteer contributions. Graff recommends the use of a more complex and complete approach that involves the identification of outcomes of volunteer work, calculation of the full costs of achieving those outcomes, and a consideration of whether the investment in volunteers is a productive use of resources. She maintains that if members of the community are informed about the true social value that volunteers create, they are more willing to support the mission of an organization.
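Graff’s three steps (identify outcomes, tally the full costs, and judge whether the investment is productive) can be sketched as a back-of-the-envelope calculation. The function name, cost categories, hourly rate, and all figures below are hypothetical illustrations, not values drawn from this chapter or from Independent Sector:

```python
# Illustrative only: weigh a dollar estimate of volunteer contributions
# against the full cost of supporting those volunteers. All numbers and
# cost categories are hypothetical placeholders.

def volunteer_program_net_value(hours, hourly_value, support_costs):
    """Return (gross value, net value, value returned per dollar spent)."""
    gross = hours * hourly_value              # wage-replacement estimate of value
    total_cost = sum(support_costs.values())  # full cost of supporting volunteers
    net = gross - total_cost
    roi = gross / total_cost if total_cost else float("inf")
    return gross, net, roi

# Hypothetical program: 4,000 volunteer hours valued at $25 per hour.
costs = {"coordinator_salary": 18_000, "training": 2_500,
         "recognition": 1_200, "supplies": 800}
gross, net, roi = volunteer_program_net_value(4_000, 25.0, costs)
print(f"Gross ${gross:,.0f}, net ${net:,.0f}, ${roi:.2f} returned per $1 spent")
```

As Graff cautions, a figure like this captures only the economic slice of volunteer value; it says nothing about outcomes that resist dollar valuation.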

Organizational and Managerial Purpose

Talmage (1982) identifies assisting policy and program decision makers, guiding changes and innovations in a program, and informing managers about the worth of a program as possible functions of evaluation. This purpose converges with the managerial purpose for evaluation defined by Rutman (1984) and Rossi et al. (2004), which serves the needs of the organization. For example, Hotchkiss, Fottler, and Unruh (2009), in a study of the value of volunteers in hospitals, find that more hours of volunteering correlate positively with significant cost savings and higher levels of patient satisfaction at the hospitals. This type of evaluation is helpful to agency insiders, including chief executives or directors, volunteer managers, program managers, and other staff, who may be interested in improving volunteer involvement and deployment, and the return on organizational investment in these important human resources. Rutman describes the management perspective on program evaluation as serving as a tool for making improved decisions about the design and delivery of programs and about the type and amount of resources that should be devoted to them.

From the organizational or management perspective, the measuring instruments for evaluating volunteer programs need to be adjusted or attuned to a set of managerial purposes. For example, it is not very useful for a manager to know the value of the aggregate hours contributed by volunteers, or the full-time equivalent labor force that volunteers constitute, to make informed decisions about how to manage the program. Instead, managers need to know about such matters as how many volunteers are available, how many are needed for achieving departmental or organizational goals, how they are deployed, and the results that the volunteers help the organization achieve. This information helps them to plan strategically, and to set recruitment goals and estimate expenses. Knowing the tasks performed by volunteers allows volunteer resource managers to develop their recruitment tactics based on information about the available and potential pool of volunteers; make decisions about deployment of volunteers to maximize their effectiveness; and balance involvement of paid staff with a volunteer workforce. Program managers are interested in identifying where and how volunteers can extend the capacity of paid staff and augment paid labor. Useful to this group is evaluation information pertaining to the effectiveness of volunteers in performing assigned tasks and in assisting departments and the organization toward goal achievement.
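The operational questions above (how many volunteers are active, where their hours go, and the full-time-equivalent labor they represent) lend themselves to a simple tally. The sketch below is a hypothetical illustration; the 1,700-hour annual FTE baseline and the sample records are assumptions, not figures from the chapter:

```python
# Hedged sketch of manager-level volunteer metrics: headcount, hours by
# department, and full-time-equivalent (FTE) staffing. The FTE divisor
# is an assumed baseline; organizations choose their own.

from collections import defaultdict

FTE_HOURS_PER_YEAR = 1_700  # assumption: annual hours of one full-time role

def deployment_summary(assignments):
    """assignments: iterable of (volunteer_name, department, hours) records."""
    hours_by_dept = defaultdict(float)
    volunteers = set()
    for name, dept, hours in assignments:
        volunteers.add(name)
        hours_by_dept[dept] += hours
    total = sum(hours_by_dept.values())
    return {
        "active_volunteers": len(volunteers),
        "total_hours": total,
        "fte_equivalent": round(total / FTE_HOURS_PER_YEAR, 2),
        "hours_by_department": dict(hours_by_dept),
    }

summary = deployment_summary([
    ("Ana", "intake", 120.0), ("Ben", "intake", 80.0),
    ("Ana", "outreach", 40.0), ("Cy", "tutoring", 200.0),
])
print(summary)
```

Even this small summary answers managerial rather than societal questions: it shows where hours are concentrated and how much paid-staff-equivalent capacity volunteers add, not what that labor is "worth" in the aggregate.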


Purpose-Based Evaluation Framework for Valuing Volunteers

The evaluation literature provides useful insights into the purposes for conducting an evaluation. Building on this foundation, we have developed a framework for analyzing the purposes of evaluation of volunteer programs that incorporates the information needs of important stakeholder groups.

The purpose approach that we suggest helps to clarify evaluation needs by grouping multiple stakeholders into three major categories: society, organizations, and managers. At the societal level we define the following interests: society at large, policymakers, and researchers. At the general organizational level we define as stakeholders the following groups: the local community, agency board of directors, organizational decision makers, and funders. Finally, the third, managerial level is represented by program managers, volunteer coordinators, paid staff, and volunteers. We derived these categories of stakeholders by modifying earlier approaches of Talmage (1982), Rutman (1984), Chelimsky (1997), and Rossi et al. (2004) to defining the major evaluation purposes for various groups of stakeholders.

Exhibit 15.1 presents the continuum of evaluation purposes captured in the new
framework from the most general and abstract, to the more applicable, and finally to
the most practical. The figure suggests questions to guide the evaluation of volunteer
value at the different levels. At the most general societal level, the relevant questions
pertain to the value of volunteering for society and the attendant methodologies. At
the organizational level, the questions are similar but more circumscribed. They con-
cern, for example, the relative costs and benefits of volunteering to the organization.
Finally, at the managerial level, the questions address the actual deployment of vol-
unteers, their interactions with paid staff and clients, and issues of operational effec-
tiveness. At this level, the interests of stakeholders guide evaluators to gather

EXHIBIT 15.1 Continuum of Evaluation Purposes

Societal (Society/Knowledge): How much is volunteering worth to society? How
should volunteering be valued? What approaches are available for evaluation?

Organizational (Organization/Board/Funders): What is the value of volunteers to the
organization? What are the costs of volunteering to the organization? Are volunteers
used responsibly? Does the community support the organization through
volunteering?

Managerial (Manager/Staff/Volunteers): How effective is volunteer involvement in
each program? Do volunteers achieve the expected results in each area? How can
volunteers extend and enhance the work of paid staff? What kind of training do
volunteers need?

program-specific, detailed information for the practical use of organizational
managers.

Exhibit 15.2 elaborates the purpose-based evaluation framework. It depicts the
different levels of evaluation, primary stakeholders, focus of analysis, and major value
or purpose of the evaluation. At the societal level, the focus is on the nonprofit/volun-
tary sector or field, and the central purpose is an assessment of aggregate worth or the
contribution to general knowledge. At the organizational level, the focus is centered
on the host agency, with the purpose of accountability and stewardship for the volun-
teer program. Finally, at the managerial level, the focus is the volunteer program, and
the purpose encompasses operational effectiveness. We describe the different levels
more fully in the sections below.

Societal Interests

The first level of evaluation includes important general interests. We define this level
as societal and knowledge-based. At this level such stakeholders as society at large,
legislators, political decision makers, and researchers need evaluation information to
assess public and economic usefulness of volunteering, and to understand the aggre-
gate worth of volunteering as a social phenomenon. Researchers endeavor to produce
knowledge about volunteering and its value, and to propose various methodologies
and measures to advance this inquiry. Policymakers can potentially use such general
information about volunteering to guide public policy. Answers to these questions
help to assess the aggregate worth of volunteering and improve our understanding of
this phenomenon and the nonprofit sector more generally.

Organizational Interests

The second level of evaluation is more specific. At this level we find organizations
and boards of directors as primary stakeholders, as well as funders. Their interests
include the assessment of the value of volunteering to an organization, the cost of
agency investment in volunteers, and the contribution to the organizational mission
yielded through volunteer labor. Volunteer value evaluation at this level is focused

EXHIBIT 15.2 Evaluation Framework

Societal/Knowledge level. Primary stakeholders: society, legislators, political decision
makers, researchers. Focus: sector/field. Value/purpose: worth (aggregate), learning
(knowledge).

Organizational level. Primary stakeholders: organizational decision makers, board of
directors. Focus: organization. Value/purpose: accountability, stewardship.

Managerial/Staff level. Primary stakeholders: managers, volunteer resource managers,
paid staff. Focus: program. Value/purpose: deployment, effectiveness.

on the organization and its volunteers. Major concerns of primary stakeholders at this
level are securing accountability, providing governance, controlling finances, and pur-
suing the organizational mission.

Managerial Interests

The third level of evaluation is tailored to the practical interests of program managers,
volunteer coordinators, paid staff, and volunteers. Their ultimate purpose is to im-
prove program implementation through the involvement of volunteers and the expe-
rience of these participants. Information about the current use of volunteers can assist
these stakeholders in making decisions about deployment and effective involvement
of lay citizens, the types of skills and knowledge required of volunteers to perform
their tasks, the number of volunteers needed, the recruitment and retention rates of
volunteers, etc.

The framework shown in Exhibit 15.2 is useful to define the evaluation purpose
with regard to the various stakeholders. Although program goals tend to be vague and
conflicting (Wholey, 1983), Poister (2003) argues that every program can be eval-
uated. When the purpose of the evaluation and the end-user have been defined, the
evaluator can select and design the kind of measures to be used, the level of detail
required, and the frequency of measuring and reporting to enhance the decision-mak-
ing process (Poister, 2003). The proposed framework assists volunteer
resource managers as evaluators in making these determinations.
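The three levels of the framework can be represented as a small lookup structure. This is our own sketch, not part of the framework as published; the field names and the helper function are invented for illustration:

```python
# Purpose-based evaluation framework (Exhibit 15.2) as a lookup structure.
EVALUATION_FRAMEWORK = {
    "societal/knowledge": {
        "stakeholders": ["society", "legislators",
                         "political decision makers", "researchers"],
        "focus": "sector/field",
        "purpose": ["worth (aggregate)", "learning (knowledge)"],
    },
    "organizational": {
        "stakeholders": ["organizational decision makers", "board of directors"],
        "focus": "organization",
        "purpose": ["accountability", "stewardship"],
    },
    "managerial/staff": {
        "stakeholders": ["managers", "volunteer resource managers", "paid staff"],
        "focus": "program",
        "purpose": ["deployment", "effectiveness"],
    },
}

def purposes_for(stakeholder: str) -> list[str]:
    """Collect the evaluation purposes relevant to a given stakeholder group."""
    return [p for level in EVALUATION_FRAMEWORK.values()
            if stakeholder in level["stakeholders"]
            for p in level["purpose"]]
```

Given a stakeholder, the helper returns the purposes the framework associates with that group, which is the determination the evaluator makes before selecting measures.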

Valuing Volunteering for Organizations, Volunteers, and Clients

As we have shown, the evaluation literature presents a persuasive case for the importance of identifying the purpose and goals for conducting an evaluation. Given these
different purposes, the volunteer resource manager as evaluator may need to assess
the value of volunteering for the organization, volunteers, and clients. We summarize
relevant studies and methods below.

Value of Volunteering to the Host Organization

Measuring volunteer value is important to organizations for managerial, financial, and
fundraising purposes. The most common method for valuing volunteer contributions
is economic—an assessment of the dollar value of volunteer hours given to an organi-
zation. The Independent Sector (IS) organization suggested attributing the average
hourly wage for civilian, nonagricultural, nonsupervisory workers to account for vol-
unteer hours, increased by 12% to account for benefits. The IS average hourly volun-
teer compensation was calculated at $20.85 (in 2009). The total dollar value of
volunteer time for 2009 is estimated at $169 billion to organizations, excluding in-
formal volunteering (Independent Sector, 2011). The advantage of the IS approach is
that it uses a widely cited and available statistic (the average hourly wage for civilian,
nonagricultural, nonsupervisory workers in the United States). The disadvantage is
that it does not distinguish among volunteers’ variable tasks and their respective mon-
etary values.
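The IS calculation reduces to simple multiplication. A minimal Python sketch, where the 12% benefits loading and the 2009 rate of $20.85 come from the text, and the 5,000-hour program is a hypothetical example:

```python
def is_hourly_value(avg_hourly_wage: float, benefits_rate: float = 0.12) -> float:
    """Independent Sector method: the average hourly wage for civilian,
    nonagricultural, nonsupervisory workers, increased by 12% for benefits."""
    return avg_hourly_wage * (1 + benefits_rate)

def volunteer_dollar_value(total_hours: float, hourly_value: float) -> float:
    """Imputed dollar value of all volunteer hours given to an organization."""
    return total_hours * hourly_value

# Hypothetical program: 5,000 volunteer hours at the 2009 IS rate of $20.85/hour.
program_value = volunteer_dollar_value(5_000, 20.85)  # $104,250
```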

Several knowledgeable observers make a compelling argument for correcting the
IS valuation method to take into account the variety of tasks performed by volunteers
in the attribution of hourly dollar values to their labor (e.g., Anderson and Zimmerer, 2003; Brudney, 1990; Ellis, 1996; Gaskin, 2003). Brown (1999) suggested devoting attention to the nature of the tasks performed by volunteers and the equivalent compensation for the same (paid) work; she proposed basing these economic measures on
the average compensation in the service sector, because volunteers were assumed to
be mostly engaged in producing services rather than goods. By fixing the hourly dol-
lar value at the level of social workers’ wages, this approach lowered the average vol-
unteer contribution value dramatically (from the IS [2011] estimation) to $9.87 per
hour (in the late 1990s).

Handy and Srinivasan (2004) describe four approaches to valuing hours contrib-
uted by volunteers to the organization based on the opportunity cost and the replace-
ment cost of volunteer time. The opportunity cost approach looks at the value of a
volunteer hour from the perspective of a volunteer and what that hour is worth to this
person. The replacement cost method evaluates the value of volunteers from the per-
spective of the organization, as if the agency had to pay market wage rates when pur-
chasing such a service. Handy and Srinivasan describe the opportunity cost evaluation
method based on the average wages foregone for volunteers who are employed
($16.42) and hypothetical wages for those who are unemployed ($12.58) donating
their time in hospitals, the organizations investigated in their study. This approach,
however, does not consider the fact that volunteers often perform very different tasks
from those on their regular jobs.

The second opportunity cost approach is based on the perceived value of leisure time as assessed by the volunteers themselves. Handy and Srinivasan (2004) call this approach the L (leisure) opportunity cost. The L opportunity cost produces a lower
estimation of hourly value contributed to the hospitals by volunteers, in line with the
volunteers’ own assessment of their leisure time. The inconsistency of the L opportu-
nity cost method lies in its reliance on the subjective evaluation of leisure time by
volunteers.

The replacement cost approach assumes that the value of a volunteer hour is
equal to that of a staff hour when volunteers perform the same tasks that paid staff
would have conducted. In the hospital volunteering study Handy and Srinivasan de-
scribe this approach as unrealistic because of the hospitals’ fiscal constraints. They
contend that the staff would not have provided services delivered by volunteers in
the absence of volunteers under conditions of fiscal stringency. Another variation of
the replacement cost method is proposed by Ross (1994), who suggests the industry
wage application method, which takes the hourly wage in that sector (e.g., the average hospital wage of $19.69) and adjusts it for benefits. Following his approach, the adjusted hourly value of volunteering for the hospital is calculated at $23.23.
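The volunteer-side and organization-side views can be sketched as two small functions. The wage figures in the example are those reported in the text; the benefits adjustment rate passed to the replacement-cost function is an illustrative assumption, not a figure from the studies:

```python
def opportunity_cost_value(hours_by_employed: float, wage_employed: float,
                           hours_by_unemployed: float, wage_unemployed: float) -> float:
    """Volunteer-side view: wages forgone by employed volunteers plus
    hypothetical wages attributed to unemployed volunteers."""
    return (hours_by_employed * wage_employed
            + hours_by_unemployed * wage_unemployed)

def replacement_cost_value(hours: float, industry_wage: float,
                           benefits_rate: float) -> float:
    """Organization-side view (industry wage application): the market wage
    the agency would pay for the same hours, adjusted for benefits."""
    return hours * industry_wage * (1 + benefits_rate)

# Hospital wages reported in the text: $16.42 (employed), $12.58 (unemployed).
opp = opportunity_cost_value(100, 16.42, 50, 12.58)  # 2,271.0
# The 0.18 benefits rate below is an assumed, illustrative value.
rep = replacement_cost_value(150, 19.69, 0.18)
```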

Several studies discuss whether organizations would choose to pay employees to
provide the same services if volunteers were not available and/or treat the general
topic of the possible interchangeability of paid staff and volunteers (Bowman, 2009;
Handy & Mook, 2010; Handy, Mook, & Quarter, 2008; Simmons & Emanuele, 2010).
Examining volunteers as fair substitutes for paid labor in Canadian organizations,
Handy et al. (2008) find that the organizations are more likely to use volunteers and

staff interchangeably under conditions that both volunteers and employees have com-
parable skills and knowledge, and when volunteers are numerous due to such factors
as fewer jobs in the (paid) labor market. Their study suggests that organizations may
improve their efficiency by relying on volunteer labor.

Quarter et al. (2003) choose to use the replacement cost method in their estima-
tion of volunteering value. They argue that this approach is appropriate because the
replacement-cost framework allows calculating volunteer services at the value of
similar services in the market. However, they admit, the debate continues whether
volunteers substitute for paid labor or supplement paid labor (Brudney, 1990; Ferris,
1984). The replacement-cost approach also fits well with the labor division among
volunteers and staff that is typical for some organizations (Handy & Srinivasan,
2004).

Gaskin (1999) also uses a replacement cost method to construct the Volunteer
Investment and Value Audit (VIVA). VIVA takes into account the variety of tasks that
volunteers perform by analyzing and measuring actual activities and matching them to
paid work in the market, an approach introduced by Karn (1982–1983, 1983) in
the early 1980s. Simultaneously, VIVA addresses the issues of benefit-cost and cost-
effectiveness by examining the organizational inputs, defined as resources used to
support volunteers, in relation to the outputs, defined as the pound (monetary) value
of volunteer time. Dividing the output by the input allows calculation of the VIVA
ratio, which states that for every pound invested in support of volunteers, X pounds
in the value of their services are returned to the organization (Gaskin, 1999). The VIVA
method also bases its assessment of volunteers’ value to the organization by imputing
the value of contributed hours.
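The VIVA ratio itself is a single division; a minimal sketch with invented figures:

```python
def viva_ratio(volunteer_time_value: float, support_investment: float) -> float:
    """VIVA ratio: monetary value of volunteer time (output) divided by the
    resources spent supporting volunteers (input)."""
    return volunteer_time_value / support_investment

# Hypothetical: 80,000 pounds of volunteer time for 10,000 pounds of support.
ratio = viva_ratio(80_000, 10_000)  # 8.0: each pound invested returns 8 in services
```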

Anderson and Zimmerer (2003) discuss five methods for valuing the hours
contributed by volunteers using the average wage, comparable worth (similar to the
replacement-cost method), the Independent Sector (2011) approach (described previ-
ously in this chapter), the living wage, and the minimum wage. Most of these methods
can be classified either as opportunity-cost approaches or as replacement-cost
approaches. Anderson and Zimmerer (2003) criticize the replacement-cost method
for the implicit assumption that volunteers and paid employees are perfect substitutes.
In addition, this method does not take into account the level of compensation for “volunteer substitutes”: it can be at either the entry level or an advanced level of compensation (Hopkins, 2000). The minimum wage method is easy to use, but it does not
value volunteer activity respectfully (i.e., according to the work performed). Ellis
(1996, 1999) contends that most volunteer assignments are above the minimum wage
level, perhaps even higher than the median wage. In her view, this method does not
reflect the value of volunteer expertise.

The living wage method is based on dollars required to subsist; it is closer to the
cost of living approach (Anderson & Zimmerer, 2003). Much like the minimum wage
approach, the cost of living method lacks any relationship to the nature of the particu-
lar tasks performed by volunteers. It inherently places a low value on volunteer ser-
vice, as if the service is performed by an unskilled worker. Having recognized these
many disadvantages of the minimum wage and the living wage methods, most practi-
tioners prefer the average wage or similar methods adjusted to include benefits. For
more specific assessments, local average wages can be used, a procedure that allows
contextualizing the value of volunteering (Anderson & Zimmerer, 2003).

Ross (1994) proposes the computation of “person-years” to account for volunteer
hours, a method that differs conceptually from the dollar-based approaches. This
method suggests valuing volunteer time in terms of full-time, year-round positions or
person-years, equivalent to dividing the total hours contributed by volunteers by the
average annual hours worked by a full-time employee (Ross, 1994). This approach is
well-known as the full-time-equivalent (FTE) method (the maximum number of hours
for an FTE might be set at 52 weeks × 40 hours = 2,080 hours per year). This method
makes no effort to differentiate the type of work performed by volunteers, or to esti-
mate the dollar value of the volunteer time. It implies that volunteers provide for the
extension of paid staff in an organization.
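The FTE conversion can be sketched directly from the definition above (the 10,400-hour example is invented):

```python
FTE_HOURS_PER_YEAR = 52 * 40  # 2,080 hours, the maximum cited in the text

def person_years(total_volunteer_hours: float,
                 annual_fte_hours: float = FTE_HOURS_PER_YEAR) -> float:
    """Ross's full-time-equivalent method: total volunteer hours expressed
    as full-time, year-round positions (person-years)."""
    return total_volunteer_hours / annual_fte_hours

# Hypothetical: 10,400 contributed hours equal 5 person-years.
fte = person_years(10_400)  # 5.0
```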

Goulbourne and Embuldeniya (2002) have developed several measures of volun-
teer contributions that can be used for presenting the effect of volunteers on enhanced
organizational revenues and decreased expenses. They propose looking at the ratio of
volunteers to paid staff; the ratio can inform funders, managers, the general public,
and other stakeholders regarding how much of the organizational success can be
attributed to volunteers’ effort. Goulbourne and Embuldeniya (2002) also consider
community involvement in the activities of a nonprofit organization as an important
measure of success. This approach treats participation of volunteers as an inherently
valued goal in itself. They measure the “community investment ratio” (CIR) by assess-
ing the volunteer contribution, expressed as the dollar value of donated time, relative
to the total dollar amount a particular funder has contributed or may contribute to the
volunteer program or a specific event. This ratio allows comparing community in-
volvement in various localities. Another economic measure they present is community
support, calculated as “volunteer capital contribution” (VCC), which considers the
amount of resources brought by volunteers—donations added to nonreimbursed out-
of-pocket expenses of volunteers.
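A minimal sketch of these three measures, with invented numbers (the function names are ours, not the authors'):

```python
def volunteer_to_staff_ratio(n_volunteers: int, n_paid_staff: int) -> float:
    """Ratio of volunteers to paid staff: a rough signal of how much
    organizational effort is attributable to volunteers."""
    return n_volunteers / n_paid_staff

def community_investment_ratio(value_of_donated_time: float,
                               funder_contribution: float) -> float:
    """CIR: dollar value of donated volunteer time relative to what a
    particular funder contributes to the program or event."""
    return value_of_donated_time / funder_contribution

def volunteer_capital_contribution(donations: float,
                                   unreimbursed_expenses: float) -> float:
    """VCC: resources volunteers bring in, i.e., donations plus
    nonreimbursed out-of-pocket expenses."""
    return donations + unreimbursed_expenses

cir = community_investment_ratio(30_000, 10_000)   # 3.0
vcc = volunteer_capital_contribution(2_500, 750)   # 3,250
```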

In their study of hospitals, for example, Hotchkiss et al. (2009) present empirical
evidence that regardless of the cost of training and management, volunteers provided
financial benefits exceeding costs. Irrespective of hospital size, many of the programs
“reported that they had savings of more than a million dollars when they used
volunteer service.” With respect to generalization of results, the researchers acknowl-
edge that hospitals often use medical students as volunteers, whose skills allow the
administration to save on paid labor for particular services (Hotchkiss et al., 2009,
p. 125).

Handy and Srinivasan (2004) present a calculation of volunteer contribution
based on the benefit-cost approach. All benefits accruing to hospitals (the unit of anal-
ysis in their study) and to volunteers are summed, and the costs of volunteering to the
hospital and to volunteers themselves are subtracted. Thus, the authors arrive at an
estimate of the net benefits of volunteering to the hospital, which they find to be very
significant, and net benefits to the volunteers themselves, which they find to be nega-
tive. In their study the volunteer contributions are estimated on the basis of the hourly
value of volunteer work.

Another study by Handy and Mook (2010) evaluates direct and indirect benefits
and costs of volunteers to an organization. With respect to benefits, they argue that in
times of crisis many organizations undergoing budget cuts would rely on volunteers
more heavily. In addition, volunteers may help organizations connect with communi-
ties and receive recognition for the services they provide. Involving volunteers may

also pose problems, however, such as conflict with labor unions, liability issues, or
tension with the paid staff (Handy & Mook, 2010). Thus, from the organizational per-
spective, valuing volunteers entails an assessment of benefits and costs.

Using a noneconomic methodology, Hager and Brudney (2005) introduce a
summary measure of “net benefits” of a volunteer program to an organization. This
approach combines benefits and challenges realized by a volunteer program into a
single barometer of (net) benefits to the organization. The method asks
volunteer program managers to rate the benefits received by an organization from
volunteer participation and the challenges encountered. “Net benefits” are calcu-
lated as the difference between the benefits and the challenges. This approach has
certain advantages: It allows identifying the benefits and problems emanating from
volunteer participation, balancing the benefits against the problems in a single mea-
sure, and calculating and comparing results across volunteer programs and host
organizations.
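A sketch of the net-benefits barometer follows. The original relies on program managers' ratings; aggregating those ratings by their mean is our assumption, and the rating values are invented:

```python
from statistics import fmean

def net_benefits(benefit_ratings: list[float],
                 challenge_ratings: list[float]) -> float:
    """Hager and Brudney's summary barometer: rated benefits minus rated
    challenges of a volunteer program (aggregation by mean is assumed)."""
    return fmean(benefit_ratings) - fmean(challenge_ratings)

# Hypothetical manager ratings on a 1-5 scale:
score = net_benefits([4, 5, 3], [2, 1, 3])  # 4.0 - 2.0 = 2.0
```

Because the result is a single number, it can be compared across programs and host organizations, which is the method's stated advantage.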

In many studies, volunteers are viewed as producers of outputs and outcomes as
well as beneficiaries. Multiple studies suggest that volunteers consider participation in
nonprofit organization activity as personally beneficial to them (Brudney, 1990; Ellis,
1999; Handy & Mook, 2010; Handy & Srinivasan, 2004; Quarter et al., 2003). In the
next section, we consider suggested methods for calculating the benefits that accrue
to volunteers as a result of participation.

Value of Volunteering to the Volunteers

Many researchers and practitioners conceive of volunteers as providers of services as
well as beneficiaries of participation. To measure the value accruing to volunteers
themselves, Brown (1999) suggests two alternative methods. One method proposes
calculating benefits in terms of opportunity cost. This approach assumes that volunteers
benefited by gaining satisfaction measured in monetary terms in the amount of wages
that they agree to forgo minus taxes (25%) and fringe benefits (i.e., the material gains
that volunteers willingly give up to volunteer). The second way is to measure out-of-
work volunteer time according to valuation given to this time by volunteers themselves.
Based on other research, the value of free time is measured at the rate of half the
employment wage. In addition, Brown argues that volunteers’ hourly value depends
on the motivation to volunteer. When volunteers agree to endure as much stress while
producing donated services as they would experience on their jobs, the level of motiva-
tion is higher. Hence, the volunteer hour must be measured closer to the amount of the
hourly wage (six-sevenths of a regular hourly wage). In such cases, the portion of the
wage, properly adjusted for fringe benefits and taxes, is the measure of the volunteers’
cost of volunteering. Pro bono volunteering is an example of such services. When the
volunteering environment is less stressful and more pleasant, the level of motivation is
lower. In this instance, the economic value (or cost to the volunteer) can be measured
at half the rate of the regular hourly wage.
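Brown's two motivation tiers can be sketched as a single function; the 6/7 and 1/2 factors come from the text, while the wage and the boolean parameterization are invented for illustration:

```python
def volunteer_hour_cost(hourly_wage: float, job_like_stress: bool) -> float:
    """Brown's valuation of one volunteered hour: six-sevenths of the regular
    wage when the work is as stressful as a job (e.g., pro bono services),
    half the wage in a more pleasant, less stressful setting."""
    if job_like_stress:
        return hourly_wage * 6 / 7
    return hourly_wage / 2

pro_bono = volunteer_hour_cost(14.0, job_like_stress=True)    # 12.0
pleasant = volunteer_hour_cost(14.0, job_like_stress=False)   # 7.0
```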

Handy and Srinivasan (2004) measure benefits to volunteers by means of a sur-
vey. Survey questions include attitude scales asking for ratings from 1 to 5 for specific
benefits gained through volunteering, such as new skills, social contacts, references
for employment, and job and career opportunities. Most volunteers rated these

benefits as high—4 or 5 on the respective scales. Handy and Srinivasan use willing-
ness to pay to acquire such benefits as a measure of the monetary value of the volun-
teering experience in the hospital: In their study, the average volunteer professed a
willingness to pay $179.24 (2004) to attain these benefits. As Handy and Srinivasan
recognize, this measurement requests a very subjective assessment and, hence, has
limited reliability and generalizability. Handy and Mook (2010) discuss other non-
economic benefits to volunteers, such as increasing social status and knowledge
(especially from service on agency boards of directors) and the “warm glow” emanat-
ing from volunteering.

In order to evaluate the benefits that volunteers receive from participation,
Quarter et al. (2003) use a different approach based on identifying and pricing a
“surrogate” (i.e., proxy or comparison) in the private marketplace. For example, gains
to volunteers include such non-material benefits as the development of personal lead-
ership skills. Quarter et al. propose as a surrogate measure the cost for a student to
learn these same skills in a university course. (They estimated the cost of an appropri-
ate course at $500 based on college courses that taught similar skills.) The surrogate
methodology allows approximating the value of the benefit received by volunteers
from their participation. Total benefits to volunteers are assessed by multiplying the
percentage of volunteers who report receiving the benefit by the economic value of
the benefit calculated according to the surrogate method. Quarter et al. suggest that
the surrogate methodology can be extended to other benefits that might be gained by
volunteers; depending on the program, volunteers might obtain skills in counseling,
first aid, and so on. To arrive at net benefit estimation for volunteers, the cost of any
training provided by the organization would be subtracted from the total amount of
benefits received by these participants.
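The surrogate arithmetic can be sketched briefly. The $500 course figure comes from the text; the percentages, the $100 first-aid surrogate, and the training cost are invented:

```python
def surrogate_benefit_value(share_reporting: float,
                            surrogate_price: float) -> float:
    """Value of one benefit: share of volunteers reporting it times the
    market price of a comparable service (e.g., a university course)."""
    return share_reporting * surrogate_price

def net_benefits_to_volunteers(benefit_values: list[float],
                               training_cost: float) -> float:
    """Total surrogate-valued benefits minus the cost of training provided."""
    return sum(benefit_values) - training_cost

# Hypothetical: 60% gain leadership skills (a $500 course), 40% gain first-aid
# skills (a $100 course); the organization spent $150 per volunteer on training.
leadership = surrogate_benefit_value(0.60, 500)
first_aid = surrogate_benefit_value(0.40, 100)
net = net_benefits_to_volunteers([leadership, first_aid], 150)  # about 190
```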

Although these methods are useful, the organization and the volunteers are
not usually considered the main beneficiaries of volunteer activity by those who
donate funds, and by those who assess the effectiveness of volunteer programs and
nonprofit organizations. The primary beneficiaries are most often considered the cli-
ents of the organization. The next section discusses the issue of measuring benefits
to clients.

Value of Volunteering to Clients

Because clients typically receive services provided by volunteers at no cost to them,
it can be especially difficult to evaluate benefits to this group. Instead of asking how
much money clients would be willing to pay for the service that they receive for
free, Murray (1994) asked how needy clients would trade off the in-kind services
provided to them against a hypothetical gift of “cold hard cash” (equivalent valuation
method). Murray used this approach to measure the in-kind goods value to recipi-
ents of social security services such as Food Stamps and Temporary Assistance for
Needy Families (TANF) benefits against the offered cash. Murray argued that welfare recipients were ready to trade such in-kind benefits for a smaller amount of cash, valuing the in-kind transfer at roughly 73% of its cash value. Economists
call the difference between in-kind goods value and the amount of cash preferred
by the recipients “deadweight loss,” because this difference produces no utility and
satisfies no preferences. Economists maintain that this loss accounts for the desire of

policymakers to change recipients’ behavior. Following Murray’s findings, Brown
(1999) suggests that the value of services to clients should be approximated through
wage-based estimates of the market value of volunteer-produced services adjusted
for the inefficiency of the in-kind resource transfer (i.e., at approximately 73% of the
in-kind service market value).
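Brown's client-side adjustment can be sketched as follows; the 73% efficiency figure comes from Murray's finding as reported in the text, and the $1,000 service value is invented:

```python
IN_KIND_EFFICIENCY = 0.73  # Murray: in-kind transfers valued at ~73% of cash

def client_value(market_value_of_service: float,
                 efficiency: float = IN_KIND_EFFICIENCY) -> float:
    """Brown's approximation: wage-based market value of volunteer-produced
    services, discounted for the inefficiency of in-kind transfer."""
    return market_value_of_service * efficiency

def deadweight_loss(market_value_of_service: float,
                    efficiency: float = IN_KIND_EFFICIENCY) -> float:
    """The portion of in-kind value producing no utility for recipients."""
    return market_value_of_service * (1 - efficiency)

# Hypothetical service with a $1,000 market value:
value_to_client = client_value(1_000)  # 730.0
loss = deadweight_loss(1_000)          # the remaining $270 of deadweight loss
```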

In their model, Quarter et al. (2003) define direct client benefits as a primary out-
put. They maintain that since recipients of the services do not pay for them, the mar-
ket lacks a signal to help identify the value of the output. Therefore, they advise
attributing a surrogate value to the output by finding and pricing a comparable service
in the market; the method is analogous to the procedure described above to estimate
the value of benefits received by volunteers. Quarter et al. conclude that the value of a
volunteer service to a client is defined by the price for the comparable service or good
in the market. For example, if the intended outcome for clients is independent living
for the elderly, a surrogate measure is the cost of a nursing home.

In their study, Handy and Srinivasan (2004) define the value of volunteer ser-
vices to enhance the quality of care to hospital patients (clients) as nonmaterial.
They argue that volunteers are able to provide many soft, or intangible, services
that are essential to the comfort of patients. Volunteers also reduce the workload of
paid staff by taking on certain tasks and leaving staff members more free to concen-
trate on other tasks (Brudney, 1990). By providing help to patients and supporting
paid staff, volunteers enhance the quality of care, which is an important component
of health care, although difficult to measure in monetary terms (Handy & Srinivasan,
2004). Handy and Srinivasan offer a nonmonetary measurement of quality of care
by volunteers, which we refer to as impact rather than output. We discuss this inno-
vative measurement below.

Other benefits that clients of health care organizations receive from volunteer
services have been documented by Hotchkiss et al. (2009) in their study of Florida
hospitals. They find that higher volunteer hours in patient care areas strongly correlate with patient satisfaction, indicating that “hospitals with a significant volunteer component are likely to provide positive patient experiences” (Hotchkiss et al., 2009, p. 126).

As we have seen, economic evaluation of volunteer contributions continues to be
a major interest of researchers. The various economic approaches yield great insight
into the value of volunteer effort, yet they do not capture the gamut of volunteers’
contributions and the value they generate, or the ultimate impacts of their activity.
Graff (2005) calls on researchers to be cautious about relying solely on economic
measurement to assess volunteer value. She maintains that the dollar valuation
method underestimates the actual value of volunteer work (Graff, 2005).

A panel discussion on the e-Volunteerism Web site (Fryar, Mook, Brummel, &
Jalandoni, 2003) demonstrates a range of views on the value of economic measure-
ment of volunteer contributions. Fryar begins by stating that “the most enduring and
controversial question within the field of volunteerism is the one that relates to the
‘value’ of volunteers and the hours they contribute.” Brummel suggests that focusing
on the monetary value of volunteering is harmful in the long run because, in his view,
a narrow “economic focus” distracts from social valuation of volunteering and trivial-
izes volunteer impact. Other participants in this discussion find it useful to conduct
and present an economic valuation in combination with alternative indicators of

volunteer value (Mook, Jalandoni). Jalandoni takes a middle position that “[b]oth the
quantitative as well as qualitative, anecdotal, and societal value of volunteering are
important depending on what you need to use them for.” As Graff (2005) advises, a
more complex and complete approach would involve the identification of outcomes
or results of volunteer work, calculation of the full costs of achieving those outcomes,
and finally evaluation of whether it is a worthwhile investment of resources.

As in other areas of social life, evaluation of the impact of volunteering is a chal-
lenging task that is rarely measurable by using quantitative tools alone. Impact refers
to the effects or results of an activity, program, or initiative in the larger community or
society. As Safrit (2010) writes, “Thus, impact may be considered the ultimate effects
and changes that a volunteer-based program has brought about upon those involved
with the program (i.e., its stakeholders), including the program’s targeted clientele and
their surrounding neighborhoods and communities, as well as the volunteer organiza-
tion itself and its paid and volunteer staff” (p. 321, emphasis in original). The following
section discusses attempts to measure the impact of volunteer activity.

Impact Measurement

Here we return to an innovative noneconomic approach that Handy and Srinivasan
(2004) suggest for measuring the impact of volunteers in their study of the hospital
services provided by volunteers. To do so, they identify the lack of overlap in roles
between paid professionals and volunteers in the hospital setting, and document the
division of labor between paid professionals and volunteers. Interviews with volun-
teer managers identified enhanced quality of patient care as the most important contri-
bution of volunteers. In order to identify volunteers’ impact on this dimension, Handy
and Srinivasan asked the managers to rank 26 quality programs offered by hospitals
that were described in the literature. To measure the quality of care, the survey asked
managers, paid staff, and volunteers to rate the impact of volunteer services in each of
these domains on a scale of 1 to 10. On average, managers rated volunteers’ impact on
quality of care as 9.0, staff members rated it as 8.43, and volunteers as 8.7. The con-
verging values supported the validity of measurement. Because this evaluation
method is based on stakeholders’ assessment of the impact of volunteer services, it is
often referred to as a “stakeholder approach.” It allows measuring the impact of volun-
teer effort directed to achieve a specific inherently valued goal such as quality of pa-
tient care.
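The converging-ratings logic behind this stakeholder approach can be sketched in a few lines. The individual ratings below are invented for illustration (only the group-mean idea mirrors Handy and Srinivasan's survey), and the convergence threshold is likewise an assumption:

```python
from statistics import mean

# Hypothetical 1-10 impact ratings from three stakeholder groups
# (illustrative data only; the method, not the numbers, follows
# Handy & Srinivasan's stakeholder approach).
ratings = {
    "managers":   [9, 10, 8, 9, 9],
    "paid_staff": [8, 9, 8, 9, 8],
    "volunteers": [9, 8, 9, 9, 8],
}

group_means = {group: mean(vals) for group, vals in ratings.items()}

# If the group means fall within a narrow band, the stakeholder
# assessments converge, supporting the validity of the measure.
spread = max(group_means.values()) - min(group_means.values())
converges = spread <= 1.0  # assumed tolerance

for group, m in group_means.items():
    print(f"{group}: {m:.2f}")
print("converging:", converges)
```

If, instead, managers and volunteers disagreed sharply (a spread of several points), the validity of the single impact measure would be in doubt and further probing of stakeholder perceptions would be warranted.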

Quarter et al. (2003) present a new approach to impact measurement for volun-
teers and nonprofit organizations: social accounting. They suggest giving recognition to
volunteers by measuring the social impact of volunteer services, and including this
value in the accounting statement of an organization. Quarter et al. propose using the
Expanded Value Added Statement (EVAS) to include the contribution of volunteers to
the total value added produced by the organization. Three types of outcomes identified
by the Community Return on Investment model are incorporated in the EVAS. For the
EVAS statement, Quarter et al. include and measure three types of outputs:

1. Primary outputs. The value of direct services of an organization to clients
2. Secondary outputs. The value of indirect outputs that accrue to the organization’s members (staff and volunteers) and customers (e.g., skills development)

3. Tertiary outputs. The value of indirect outputs that accrue to those other than the
organization’s members and clients (e.g., consultations provided to other
cooperatives)

Primary outputs to clients can be evaluated according to the surrogate valuation
methods described above. Benefits that accrue to volunteers are classified as second-
ary outputs. Quarter et al. (2003) suggest including these outputs in the EVAS as hours
contributed plus out-of-pocket expenses not reimbursed. In order to value the hours
of volunteers, Quarter et al. again use the replacement-cost approach based on the
price of equivalent paid labor. Tertiary benefits to third parties are calculated by attrib-
uting the market value to the services the third party received for free from the organi-
zation, again using the surrogate method. The summation of all three outputs yields
the value added by volunteering to the organization.
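The EVAS summation just described reduces to simple arithmetic. In this sketch all dollar figures and the flat replacement wage are hypothetical; a real EVAS would price each volunteer role against its specific market equivalent:

```python
# Hypothetical EVAS-style tally of volunteer value added, in the spirit
# of Quarter et al. (2003). All dollar figures are invented.

REPLACEMENT_WAGE = 20.0  # assumed market wage for equivalent paid labor ($/hour)

primary_outputs = 50_000.0   # surrogate market value of direct client services
volunteer_hours = 2_000
unreimbursed_expenses = 1_500.0
# Secondary outputs: volunteer hours at replacement cost, plus
# out-of-pocket expenses that were never reimbursed.
secondary_outputs = volunteer_hours * REPLACEMENT_WAGE + unreimbursed_expenses

tertiary_outputs = 5_000.0   # surrogate value of free services to third parties

# The sum of all three outputs is the value added by volunteering.
value_added = primary_outputs + secondary_outputs + tertiary_outputs
print(f"Value added by volunteering: ${value_added:,.2f}")
```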

An important part of the EVAS is the idea of distribution of the value added
among stakeholders. Primary services go to clients, secondary benefits to participants
(staff and volunteers), and tertiary to the community. Calculation of the value to the
first two of these groups of stakeholders is more straightforward; estimation of volun-
teer value to the community requires greater ingenuity. Quarter et al. (2003) suggest
using several approaches to measure the impact of volunteers:

1. Surrogate valuation. The comparable service can be found in the market, and the
price for this service will be a surrogate value of the unpriced volunteer service.

2. Survey technique. Worth to clients is measured by compiling a list of either prices or comparable consumer items and asking respondents to situate the service in relation to others on the list.

3. Avoidance cost. Calculated as the cost of undoing the damage. For example, the
loss of or damage to outdoor recreational facilities that are publicly available has
been assessed by the fees needed to replace the facilities (Crutchfield, 1962).

4. Attribution. Assigning a weight to various factors that influence results. This is
achieved by means of using comparison groups, and longitudinal studies that
help to collect information about change. Often it is impossible to determine
precisely the causal effects, but information increases understanding and
knowledge.

5. Stakeholder input. Stakeholders are defined and systematically asked in open
meetings, interviews, confidential focus groups, and surveys about their views on
the desirability of the service and its impact on clients.
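The first of these approaches, surrogate valuation, amounts to pricing an unpriced volunteer service at the market rate for a comparable paid service. A minimal sketch, with invented service categories and rates:

```python
# Hypothetical market rates for paid services comparable to the
# volunteer services being valued ($/hour). Illustrative only.
MARKET_RATES = {
    "tutoring": 30.0,
    "meal_delivery": 15.0,
    "grounds_maintenance": 18.0,
}

def surrogate_value(service, hours):
    """Price unpaid volunteer hours at the market rate for the
    comparable paid service (surrogate valuation)."""
    return MARKET_RATES[service] * hours

# Example: 100 hours of volunteer tutoring plus 250 hours of meal delivery.
total = surrogate_value("tutoring", 100) + surrogate_value("meal_delivery", 250)
print(f"surrogate value of volunteer services: ${total:,.2f}")
```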

Cahn and Rowe (1992) offer a conceptually different view on valuing volunteer
time for the larger community. They propose the time dollar value as an instrument
for valuing volunteers’ contribution, based on the idea of reciprocity. The time
dollar method holds that when a person volunteers an hour of his or her time to help
another person, his or her time-dollar bank is credited one hour. The time-dollar ac-
count may accumulate multiple hours, which indicates that the owner is entitled to
receive an equivalent amount (in hours) of services or goods from other people, or to
receive in-kind compensation, such as reduced tuition cost in college. The value of
a contributed hour is not differentiated by the type of services provided; all hours
count the same regardless of the qualifications of the service providers and the content of the service itself. The philosophical foundation for this method is equality of
good intentions, and not the differentiated effort or service value. Accumulated credit
hours would pay for provision of reciprocal assistance, but would not allow measuring
the impact, even though the impact may be socially and economically significant (e.g.,
increased literacy). Cahn and Rowe’s (1992) time-dollar method requires very accurate
recording and crediting of volunteer hours contributed and a well-defined group of
people or “members” who participate, such as residents in a community, neighbor-
hood, co-op, or college dormitory. For these authors, the “impact” of volunteer activity
is community participation fostered through reciprocity.
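Because every hour counts the same, a time-dollar bank reduces to simple bookkeeping: an hour given credits the giver's account and debits the receiver's. A minimal ledger sketch (the member names and the overdraft behavior are illustrative assumptions):

```python
from collections import defaultdict

class TimeDollarBank:
    """Minimal time-dollar ledger in the spirit of Cahn and Rowe (1992):
    one hour contributed equals one hour of credit, for any service."""

    def __init__(self):
        self.accounts = defaultdict(float)  # member -> hours of credit

    def record_service(self, giver, receiver, hours):
        # The giver earns credit; the receiver spends it. Balances may go
        # negative here; a real bank might cap or disallow that.
        self.accounts[giver] += hours
        self.accounts[receiver] -= hours

    def balance(self, member):
        return self.accounts[member]

bank = TimeDollarBank()
bank.record_service("Ana", "Ben", 3)    # Ana tutors Ben for 3 hours
bank.record_service("Ben", "Carla", 1)  # Ben helps Carla move for 1 hour
print(bank.balance("Ana"))   # 3.0
print(bank.balance("Ben"))   # -2.0
```

Note that the ledger records reciprocity, not impact: nothing in it distinguishes an hour of literacy tutoring from an hour of yard work, which is exactly the method's philosophical premise and its limitation.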

Poister (2003) offers practical tips for public and nonprofit organizations on how to
conduct evaluations that measure outputs and outcomes. He elaborates the “logic
model” approach, which helps to specify the resources, activities, and outputs of a program or organization, and the outcomes (initial, intermediate, and long-term) it is meant
to achieve. The logic model begins with a careful preliminary analysis of the goals of the
organization or program and the conceptual framework for their attainment; in this ap-
proach, outcomes are the expected results conforming to the goals. When the goals are
clear, and the expected results in the short-term and in the long-term periods are speci-
fied, the task of a manager is to define the performance measures. Performance mea-
sures typically include measures of output, efficiency, productivity, service quality,
effectiveness, cost-effectiveness, and customer satisfaction. Application of this compre-
hensive approach would assist the volunteer resource manager as evaluator to capture
the multidimensional nature of the outcomes and the long-term impact created with the
active help of volunteers. We turn to that topic now.

Logic Models

Logic models are often used as a framework for evaluation. A logic model depicts
graphically a causal interpretation of how a program operates and is thought
to achieve its intended results. The logic model is usually presented as a figure
or diagram that shows connections between the important elements of a program.
Organizational or program goals or objectives drive the logic model so that it is first
necessary to agree on what the program is attempting to achieve—a useful process
that can require significant discussion and consultation among stakeholders and
program and organizational staff. As discussed above, the views of important stake-
holders must be taken into account in this determination. However, the more specific
the goal(s) of a program or initiative, the more straightforward will be the ensuing
evaluation task.

The first of the elements in the logic model is the inputs or “resources” invested in
the program, including money, personnel, equipment, technology, and so forth.
Program managers and leaders are charged with converting these resources into pro-
gram “activities,” such as services provided by the organization to clients or outreach
efforts. The first tangible, measurable results of program activities are labeled
“outputs,” which indicate the work the program has performed. Depending on the
goal(s) of the program, outputs may include the number of sessions conducted with
clients, the number of visits to clients, the number of promotional activities carried out
by the program, and so forth.

Connors, T. D. (Ed.). (2011). The volunteer management handbook : Leadership strategies for success. John Wiley & Sons, Incorporated.

Some evaluations go no farther than examination of the interconnections between
program resources, activities, and outputs. Because these elements are largely under
the control of the organization, it is of considerable interest to some stakeholders (al-
though not all) to focus the evaluation on them. As we described briefly above, an evaluability assessment concentrates on the resources allocated to the program to determine
whether they are sufficient to achieve the results intended. Such a perspective is critical
not only to the organization—so that decision makers may understand any limitations
of resources uncovered and hopefully address the situation—but also to the type of
evaluation that may ultimately be conducted. Unless resources are in place to make
the achievement of program effects for clients conceivable, undertaking more ambi-
tious forms of evaluation (e.g., assessment of impact) makes little sense. Other forms
of evaluation center on the conversion process of program inputs, to activities, to outputs. Because these forms look primarily at the operation of the program rather than
its results, they are typically termed “process evaluations.”

By way of example, consider a program that enlists volunteers and paid staff in
mentoring middle school students, with the goal of improving school attendance and
academic performance. The inputs or resources allocated to the program might con-
sist of a single, full-time paid staff person and a large number of part-time volunteers,
and some amount of budgetary resources. In addition to the key mentoring activities
provided by volunteers, other activities undertaken by the program may consist of
recruitment, orientation, placement, learning, supervision, and assessment of the vol-
unteers. Outputs may encompass the number of matches of volunteer mentors and
student mentees, the number of meetings between the mentors and mentees, the
types of activities that they pursue in these relationships, and the amount of time de-
voted to them.
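This mentoring example can be laid out as a simple logic model data structure, which also makes a convenient worksheet when stakeholders negotiate what the program is attempting to achieve. The specific entries below are illustrative:

```python
# A minimal logic model for a hypothetical volunteer mentoring program.
# Each stage feeds the next: inputs -> activities -> outputs -> outcomes.
logic_model = {
    "inputs": ["1 full-time paid coordinator", "part-time volunteers", "program budget"],
    "activities": ["recruitment", "orientation", "placement",
                   "mentoring sessions", "supervision", "assessment"],
    "outputs": ["mentor-mentee matches", "meetings held", "time devoted"],
    "outcomes": {
        "initial": ["mentee values the mentoring relationship"],
        "intermediate": ["mentee joins school activities", "more study time"],
        "long_term": ["improved attendance", "improved grade reports"],
    },
}

# Walking the model in order makes the assumed causal chain explicit.
for stage in ["inputs", "activities", "outputs", "outcomes"]:
    print(stage, "->", logic_model[stage])
```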

An evaluability assessment would concentrate on the question of whether the re-
sources allocated to the program—paid, volunteer, budgetary, and otherwise—were
sufficient to operate the program. It would consider the relationship between the
inputs available to the program in light of the extent of interest in the program
(as reflected, for example, in the number of students seeking a mentor) as well as the
program features that must be in place to offer mentees a quality experience. Such an
assessment would scrutinize the balance between program resources on the one hand
and program demands on the other in terms of the volume of prospective mentees
and the activities necessary for their welfare. A process evaluation would consider the
inter-relationship among resources, activities, and outputs. It might ask, for example,
whether: the resources allocated to the program were used most efficiently; the train-
ing offered to mentors was of high quality; the operations of the program were suffi-
cient to generate, orient, and place qualified volunteers with mentors; and paid staff
and volunteers were organized and coordinated in a way to achieve the most outputs (e.g., mentor-mentee matches). Although these elements do not pertain to program outcomes—the changes or results realized by clients (here, middle school students)—the
questions (and answers) are of interest to certain stakeholders, particularly program
managers, organizational decision makers, and the volunteer mentors.

Outcomes can be understood as the effects of a program on its targeted clientele,
especially changes anticipated and desired by the sponsoring organization. As
opposed to the other elements of the logic model considered thus far, outcomes
are not under the control of the program or organization. Rather, they represent changes or effects to be realized in the external environment or audience of clients
and communities. They are more ambitious both to achieve and to measure. Out-
comes are related to the overall long-term impact of a program, but they have a
more specific, short-term focus on immediate changes or results for clients
(Safrit, 2010). Even so, as Poister (2003), McDavid and Hawthorne (2006), Wholey
et al. (2004), and others point out, outcomes are normally categorized as initial, inter-
mediate, and long-term. If the evaluator has done her or his job correctly, the out-
comes will correspond to the objectives of the program, i.e., they will measure
the achievement of the goals that are sought for the program in the client population
or community.

Returning to the example of a volunteer mentor program guided by goals of
improving school attendance and performance of middle-school students, we might
conceive of initial outcomes centering on the mentor-mentee relationship. For exam-
ple, does the client (student) appreciate the mentor and attend to the suggestions of
the mentor? With respect to intermediate range outcomes, we might look for more
behavioral indicators (measures) of the positive effects of mentoring: Do the student
mentees participate in school activities, such as recognized groups or associations? Do
they devote more time to their studies? Finally, in the longer term, the outcome mea-
sures might comprise attendance and grades: Do the students show improvement in
school attendance and grade reports? By contrast, the intended impact of the program
might be a change in school culture to incorporate greater emphasis on student learn-
ing, participation, and achievement. Although the initial, intermediate, and longer-
term program outcomes would contribute to this impact, the impact is much more
broad, far-reaching, and difficult both to attain and measure.

Summative Evaluations

Evaluations that attempt to link program resources and activities to outcomes (or im-
pact) are usually termed “summative” evaluations, perhaps because they attempt to
sum up the end results or achievements of a program or initiative. To prepare a sum-
mative evaluation, the volunteer resource manager as evaluator must obtain data from
or about the intended client group regarding outcomes (as well as background factors
concerning the clients). The data collection effort should not be minimized: Develop-
ing the necessary measurement strategy and instruments can be quite time-consuming
and require significant background in measurement. Direct measurements from clients are frequently sought in evaluations, through methods such as interviews, focus groups, and surveys; supplementing such information with other data sources is highly desirable.
Access to organizational records, for example, can be very valuable.

In the example of the volunteer mentoring program, the relevant data might ema-
nate from independent surveys of the middle school students and the volunteer men-
tors; it might also prove possible to conduct some in-depth interviews. In addition,
official school records pertaining to student attendance and grades would be highly
valuable additions to the store of data to evaluate the effects on the target population.
Preparation of the data collection instruments will be guided by the need to obtain
outcome information.

Summative evaluations incorporate a research design component. The topic of
research design is both vast and technical, and we shall provide only a brief introduction here. A research design specifies the methodology, or set of steps or procedures utilized, to test anticipated causal relationships or hypotheses so that the results can be accepted as valid (Meier, Brudney, & Bohte, 2012). In our now-familiar
example of the volunteer mentoring program, the hypotheses are that the program
will improve student attendance and grade reports. The research design must detail
how we can arrive at valid or credible conclusions concerning these hypotheses. To
do so, in general, the research design will call for a comparison or benchmark to eval-
uate whether or not the program has succeeded in making an improvement in the
outcome measures. Typically, the bases for comparison are either in reference to data
collected at a point earlier in time for the same group of subjects, or with respect to
information collected from a control or comparison group chosen to be as similar as
possible to the program group, or both.

Thus, in our example, the research design might call for a collection of outcome
data on student attendance and grade reports for those in the mentor program over
time—ideally, data collected before the students were involved in the program and
then again after their involvement. Comparison of these two sets of measures can be
used to detect improvement (over time) in the outcome measures of attendance and
grade reports. Alternatively, the research design might specify collecting and compar-
ing data on the outcomes for the group of students in the mentor program (experi-
mental or treatment group) versus a comparable group of students who do not have
mentors but who are otherwise similar to the mentored students (control or compari-
son group). This comparison, too, would allow the evaluator to identify improvement,
in this case between the two groups of students (i.e., mentor program versus control
group). Either form of comparison, over-time measurement or access to a control
group, allows a test of the hypotheses that the mentor program led to improved student
performance in the outcome variables of school attendance and grade reports. Testing
these comparisons based on data will require some familiarity with basic statistics
(Meier et al., 2012).
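For the over-time design, the comparison amounts to a paired test on before-and-after measures for the same students. A minimal sketch with invented attendance data, computing the paired t-statistic directly from its definition:

```python
from statistics import mean, stdev
from math import sqrt

# Invented attendance rates (% of school days) for the same eight students,
# measured before and after participating in the mentor program.
before = [80, 75, 90, 70, 85, 78, 82, 74]
after  = [85, 80, 92, 78, 88, 84, 85, 80]

# Per-student improvement.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)

# Paired t-statistic: mean improvement relative to its standard error.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))

print(f"mean improvement: {mean(diffs):.2f} percentage points")
print(f"paired t-statistic: {t_stat:.2f}")
```

A t-statistic well above the conventional critical value (roughly 2.0 for small samples) would support the hypothesis that attendance improved; the same template applies to grade reports, and a two-sample version of the test serves the control-group design.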

Note that the logic model approach allows the volunteer resource manager as
evaluator to assess the achievement of outcomes in a volunteer program or nonprofit
organization without reference to their economic cost or valuation. This observation is
important, especially in light of the differing opinions presented above in regard to the
value of economic information for an evaluation of a program that includes a signifi-
cant volunteer component. Based on the logic model framework, the volunteer re-
source manager can use the findings from the outcome evaluation to demonstrate
that the program helps the organization to achieve worthwhile objectives. In addition
to program managers, clients, and volunteers, other stakeholders, such as the public,
funders, and organizational leaders and board members, would likely be interested in
evaluation results that showed, for example, that a volunteer mentor program had
achieved desirable outcomes, such as improvements in the attendance of middle
school students and their grade reports. Accordingly, the volunteer resource manager
can use this information to press for greater recognition and support of the volunteer
program.

It is quite possible, moreover, to supplement the results of the outcome evaluation
with economic valuation of the volunteer program and its results. Such an enhance-
ment can prove even more persuasive for the volunteer resource manager in making
the case for the program to relevant stakeholders. If some form of cost/benefit analysis of the volunteer program were necessary or desired, for example, the methods
elaborated on economic valuation in the sections above can be put to good use. On the
cost side, through maintaining records and files, organizations can usually estimate the
expenses incurred by a department, program, or initiative. On the benefit side, al-
though the positive effects of program or organizational activity are much more elusive
to comprehend and measure, this chapter has presented methodologies for economic
valuation of volunteer participation for organizations, clients, and the volunteers them-
selves, as well as some approaches to impact assessment. Based on these methods the
volunteer resource manager as evaluator can venture considerably beyond the ques-
tion of whether improvements have been achieved in the client population to incorpo-
rate a sophisticated analysis of the associated economic costs to the organization and
benefits to various stakeholder groups.

Conclusion

Effective evaluation combines the rigor of the best methodological techniques with
the realities of organizational context. It represents a rich blend of science and politics.
The volunteer resource manager as evaluator requires some fluency in both domains.
The evaluator seeks to provide methodologically valid answers to the questions that
she or he believes the relevant stakeholders have posed for the program or the
organization.

The earlier portions of the chapter are devoted to the organizational context. They
point out that no single evaluation is likely to meet the information needs of the myr-
iad stakeholders to a volunteer program, including funders, agency leadership, board
of directors, program managers and other staff, volunteers, clients, and even public
decision makers, citizens, and the media. Thus, choices will have to be made concern-
ing evaluation purposes and stakeholders.

The volunteer resource manager as evaluator need not, and should not, guess or
make assumptions about the stakeholders crucial to the evaluation process, their in-
formation needs, and overall purposes. Instead, she or he should be much more pro-
active in inquiring about the purposes of the evaluation and the stakeholders most
invested in it. The volunteer resource manager should involve these groups in formu-
lating the purposes for the evaluation, the questions to be answered by it, and the
corresponding evaluation methodologies. This consultation should continue to
encompass issues of measurement. If, as is frequently the case, the evaluation is to
assess program results, then relevant stakeholders should participate in identifying
the crucial measures of outcomes and impact to be used in the study. For the volun-
teer resource manager as evaluator, turning their (stakeholders’) measures into our
measures for an evaluation team helps to secure buy-in to the study and its results.

The consultation process is useful, too, to ferret out hidden agendas (i.e., situa-
tions in which certain stakeholders hold pre-determined views of the worth of a pro-
gram, initiative, or organization that they seek only to reinforce, and not disturb,
through evaluation). Such ritualistic evaluations are enervating to organizational
actors and resources and fail to advance knowledge. Bringing the relevant stakeholders on board prior to the evaluation can be effectual in identifying such covert purposes and, it is hoped, committing those involved to a genuine openness to the evaluation questions and confidence in the results. These efforts require sensitivity to
the organizational environment and central actors on the part of the volunteer re-
source manager as well as political savvy.

Also needed for conducting evaluations are skills in methodological techniques,
and the later sections of the chapter have provided an introduction to them. Most of
these procedures turn on the ability of the volunteer resource manager as evaluator to
incorporate into the analysis economic valuation of volunteer-based services for the
organization, clients, and the volunteers themselves. Not all approaches are oriented
to economic valuation, though. The logic model framework depicts the volunteer pro-
gram as a causal sequence linking inputs to the program, the activities it carries out,
and the outputs it generates, with program outcomes and impact (i.e., measures of the goals sought by the program). The logic model need not use economic measures, but it does require some knowledge of research design and statistical analysis. Thus,
the volunteer resource manager as evaluator requires some background in the rele-
vant techniques.

This chapter has focused on the role of stakeholders in the evaluation process.
Yet meeting the needs and demands of stakeholders for information is only part of
the evaluation task. We encourage volunteer resource managers to conceive of evalu-
ation more generally for their own purposes as bringing data to bear on questions
about volunteer involvement that can increase its value to the organization and grab
the attention of key decision makers. For example, where are the organizational op-
portunities to extend volunteer efforts? How can volunteers be used more produc-
tively? In what new initiatives might volunteers make a valuable contribution? Astute
volunteer resource managers likely think about such questions. When they begin to
use the tools of evaluation to address them more systematically, they will find that
their answers gain more force and acceptance.

References

Andersen, P., & Zimmerer, M. (2003). Dollar value of volunteer time: A review of five
estimation methods. Journal of Volunteer Administration, 21(2), 39–44.

Berk, R. A., & Rossi, P. H. (1990). Thinking about program evaluation. Newbury Park,
CA: Sage.

Bingham, R., & Felbinger, C. (2002). Evaluation in practice: A methodological approach (2nd ed.). Chatham, NJ: Chatham House Publishers.

Bowman, W. (2009). The economic value of volunteers to nonprofit organizations.
Nonprofit Management and Leadership, 19(4), 491–506.

Brown, E. (1999). Assessing the value of volunteer activity. Nonprofit and Voluntary
Sector Quarterly, 28(1), 3–17.

Brudney, J. L. (1990). Fostering volunteer programs in the public sector: Planning,
initiating, and managing voluntary activities. San Francisco, CA: Jossey-Bass.

Cahn, E., & Rowe, J. (1992). Time dollars: The new currency that enables Americans
to turn their hidden resource—time—into personal security and community
renewal. Emmaus, PA: Rodale Press.

Carman, J. G., & Fredericks, K. A. (Eds.). (2008). Nonprofits and evaluation. San
Francisco, CA: Jossey-Bass.

Chelimsky, E. (1997). The coming transformations in evaluation. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 1–24). Thousand Oaks, CA: Sage.

Crutchfield, J. (1962, May). Valuation of fishing resources. Land Economics, pp. 145–154.

Ellis, S. J. (1996). From the top down: The executive role in volunteer program success (revised ed.). Philadelphia, PA: Energize, Inc.

Ellis, S. J. (1999). The dollar value of volunteer time. Focus on volunteering (Kopykit, 2nd ed.). Philadelphia, PA: Energize, Inc.

Ferris, J. (1984). Coprovision: Citizens time and money donation in public service provision. Public Administration Review, 44(4), 324–333.

Fitzpatrick, J., Sanders, J., & Worthen, B. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson Education.

Fryar, A., Mook, L., Brummel, A., & Jalandoni, N. (2003). Is assessing a financial value to volunteering a good idea? E-Volunteerism, 3(2). Retrieved from www.e-volunteerism.com/quarterly/03jan/03jan-keyboard

Gaskin, K. (1999). Valuing volunteers in Europe: A comparative study of the volunteer investment and value audit. Voluntary Action, 2(1), 35–49.

Gaskin, K. (2003). VIVA in Europe: A comparative study of the volunteer investment and value audit. Journal of Volunteer Administration, 21(2), 45–48.

Goulborne, M., & Embuldeniya, D. (2002). Assigning economic value to volunteer activity: Eight tools for efficient program management. Toronto, Ontario: Canadian Centre for Philanthropy.

Graff, L. (2005). Best of all: The quick reference guide to effective volunteer involvement. Linda Graff and Associates Inc.

Hager, M. A., & Brudney, J. L. (2005). Net benefits: Weighing the challenges and benefits of volunteers. Journal of Volunteer Administration, 23(1), 26–31.

Handy, F., & Mook, L. (2010, November 8). Volunteering and volunteers: Benefit-cost analysis. Research on Social Work Practice. Retrieved from http://rsw.sagepub.com/content/early/2010/11/01/1049731510386625.full

Handy, F., Mook, L., & Quarter, J. (2008). The interchangeability of paid staff and volunteers in nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 37(1), 76–92.

Handy, F., & Srinivasan, N. (2004). Valuing volunteers: An economic evaluation of the net-benefit of hospital volunteers. Nonprofit and Voluntary Sector Quarterly, 33(1), 28–54.

Hopkins, S. (2000). Researching VET and the voluntary sector: Dealing with ambiguities. Retrieved from www.eric.ed.gov/PDFS/ED470938

Horst, P., Nay, J. N., Scanlon, J. W., & Wholey, J. S. (1974). Program management and the federal evaluator. Public Administration Review, 34(4), 300–308.

Hotchkiss, R. B., Fottler, M. D., & Unruh, L. (2009). Valuing volunteers: The impact of volunteerism on hospital performance. Journal of Healthcare Management, 34(2), 119–128.

Independent Sector. (2011). Value of volunteer time. Retrieved from www.independentsector.org/volunteer_time

Karn, G. N. (1982–1983). Money talks: A guide to establishing the true dollar value of volunteer time, part I. Journal of Volunteer Administration, 1(Winter), 1–17.

386 Evaluating the Volunteer Program

Connors, T. D. (Ed.). (2011). The volunteer management handbook : Leadership strategies for success. John Wiley & Sons, Incorporated.
Created from ashford-ebooks on 2022-05-20 10:19:35.
C
o
p
yr
ig
h
t
©
2
0
1
1
.
Jo
h
n
W
ile
y
&
S
o
n
s,
I
n
co
rp
o
ra
te
d
.
A
ll
ri
g
h
ts
r
e
se
rv
e
d
.

http://www.e-volunteerism.com/quarterly/03jan/03jan-keyboard

http://rsw.sagepub.com/content/early/2010/11/01/1049731510386625.full +html

http://www.eric.ed.gov/PDFS/ED470938

http://www.independentsector.org/volunteer_time

Karn, G. N. (1983). Money talks: A guide to establishing the true dollar value of volun-
teer time, part II. Journal of Volunteer Administration, 1 (Spring), 1–19.

McDavid, J. C., & Hawthorn, L. R. L. (2006). Program evaluation and performance
measurement: An introduction to practice. Thousand Oaks, CA: Sage.

Meier, K. J., Brudney, J. L., & Bohte, J. (2012). Applied statistics for public and non-
profit administration (8th ed.). Boston, MA: Wadsworth Cengage Learning.

Murray, M. (1994). How efficient are multiple in-kind transfers? Economic Inquiry, 32,
209–227.

Patton, M. Q. (1986) Utilization Focused Evaluation. (2nd Ed.) Beverley Hills,
CA: Sage

Patton M. Q. (1997). Utilization-focused evaluation (3rd ed.). Beverly Hills, CA: Sage.
Patton, Q. P., & Patrizi, P. (2005). Case teaching and evaluation. New Directions for

Evaluation 2005 (105–Special Issue: Teaching evaluation using the case
method), 5–14.

Poister, T. (2003). Measuring performance in public and nonprofit organizations.
San Francisco, CA: Jossey-Bass.

Posavac, E., & Carey, R. (1992). Program evaluation (4th ed.). Englewood Cliffs, NJ:
Prentice Hall.

Quarter, J., Mook, L., & Richmond, B. (2003). What counts: Social accounting for
nonprofits and cooperatives. Upper Saddle River, NJ: Prentice Hall.

Rehnborg, S. J., Barker, C., & Pocock, M. (2006, January–March). How much is an
hour of volunteer time worth? Various methods to monetize the contributions of a
volunteer’s time. E-Volunteerism, Electronic Journal of the Volunteer Commu-
nity. 6(2), 14–19. Retrieved from www.e-volunteerism.com/quarterly/06jan/
06jan-rehnborg

Ross, D. (1994). How to estimate the economic contribution of volunteer work.
Ottawa, Ontario: Department of Canadian Heritage. Retrieved from www.nald.
ca/fulltext/heritage/ComPartnE/estvole.htm

Rossi, P. H., Lipsey, M.W., & Freeman, H. E. (2004). Evaluation: A systematic ap-
proach (7th ed.). Thousand Oaks, CA: Sage.

Rutman, L. (1984). Evaluation research methods: A basic guide (2nd ed.). Beverly
Hills, CA: Sage.

Safrit, R. D. (2010). Evaluation and outcome measurement. In K. Seel (Ed.),
Volunteer administration: Professional practice 313–317 Markham, Ontario:
LexisNexis Canada.

Scriven, M. (1991). Key evaluation checklist. In M. Scriven (Ed.), Evaluation
thesaurus, 204–210. Thousand Oaks, CA: Sage.

Simmons, W. O., & Emanuele, R. (2010). Are volunteers substitute for paid labor in
nonprofit organizations? Journal of Economics and Business, 62(1), 65–77.

Smith, D., & Ellis, A. (2003). Valuing volunteering. Journal of Volunteer Administra-
tion, 21(2), 49–52.

Talmage, H. (1982). Evaluation of programs. In H. E. Mitzel (Ed.), Encyclopedia of
educational research (5th ed.) 592–611 New York, NY: Free Press.

Wholey, J. S. (1983). Evaluation and effective public management. Boston, MA:
Little, Brown.

Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.). (2004). Handbook of practical
program evaluation (2nd ed.). San Francisco, CA: Jossey-Bass.

References 387

Connors, T. D. (Ed.). (2011). The volunteer management handbook : Leadership strategies for success. John Wiley & Sons, Incorporated.
Created from ashford-ebooks on 2022-05-20 10:19:35.
C
o
p
yr
ig
h
t
©
2
0
1
1
.
Jo
h
n
W
ile
y
&
S
o
n
s,
I
n
co
rp
o
ra
te
d
.
A
ll
ri
g
h
ts
r
e
se
rv
e
d
.

http://www.e-volunteerism.com/quarterly/06jan/06jan-rehnborg

http://www.nald.ca/fulltext/heritage/ComPartnE/estvole.htm

Connors, T. D. (Ed.). (2011). The volunteer management handbook : Leadership strategies for success. John Wiley & Sons, Incorporated.
Created from ashford-ebooks on 2022-05-20 10:19:35.
C
o
p
yr
ig
h
t
©
2
0
1
1
.
Jo
h
n
W
ile
y
&
S
o
n
s,
I
n
co
rp
o
ra
te
d
.
A
ll
ri
g
h
ts
r
e
se
rv
e
d
.


Chapter 20

Measuring the Volunteer Program

Beth Kanter
Author and Master Trainer

When my first book, The Networked Nonprofit, came out in 2010, a lot of people
in the nonprofit world wondered how its ideas could ever be applied to volunteer
engagement. Facebook and Twitter may make sense for fundraising campaigns
and awareness building, I heard, but they don’t quite fit with the needs of
volunteer programs. Now, four years later, many of those naysayers use those
exact tools to strengthen volunteer relationships and leverage connections for
greater impact.

My second book, Measuring The Networked Nonprofit, which was awarded the
Terry McAdam Nonprofit Book Award in 2013, focused on what nonprofits can do
to measure and assess their social networking. Again, I heard from many nonprofit
professionals in volunteering that these ideas didn’t really apply to them. Yes, they
were tracking and measuring key metrics such as volunteer hours worked, but those
metrics were far removed from the insightful, social-media-oriented data points I
was talking about.

In this case, although I see some changes happening, more remains to be done
to help the volunteer engagement field adopt a culture of data-informed decision-
making. Collecting data to measure success is essential—but often, we only see


Rosenthal, R. J. (Ed.). (2015). Volunteer engagement 2.0: Ideas and insights changing the world. John Wiley & Sons.


part of the equation done well. Sometimes they’re not even collecting the right
data!

Why Measure Your Volunteer Program?

According to the 2014 Volunteer Impact Report from Software Advice and
VolunteerMatch, a bit more than half (55 percent) of all nonprofits[1] collect
volunteer engagement data with the intention of measuring it. Many respondents
reported that they didn’t really have a formal process for collecting volunteering
data. They might, for example, collect numbers such as how many people showed
up for a volunteer event, but there was no way to get a sense of the specific social
impact of an individual volunteer or group of volunteers.

But even that is better than not measuring at all. Which raises the question:
why aren't more nonprofits focused on measurement? Here's why:

• Lack of resources and tools.

• Lack of skills or knowledge.

• Lack of time.

• And some just don’t see the value of gathering data.

Before this chapter is over, I plan to combat each of the preceding items to
show you what they actually are: limiting beliefs that have no place in an
innovative volunteer program. In fact, insight always improves outcomes—and
those improvements are directly proportional to the care taken when planning
for, gathering, and evaluating volunteer program data. This chapter will help you
understand what kind of information is persuasive based on the story you’re trying
to tell. And it will help you better understand and communicate impact.

More importantly, I’ll also do my best to break through the misconception
that data-informed decision-making may be “right” for some functions at a
nonprofit . . . just not volunteering.

Measurement Helps You Understand Impact

No matter your role, understanding how to pull meaningful insight from your
efforts helps your organization make smarter investments, achieve its mission


with fewer resources, and become just a little bit better at saving the
world. And then there's this:

Measurement Helps You Understand Volunteer Impact
At a basic level, the right data can home in on volunteer hours donated and the dollar value of that work, and help you make sense of the impact of volunteers' work, the cost per volunteer to run your program, and when it makes sense to change headcount.

Measurement Improves Your Volunteer Relations
Struggling to listen and engage with your volunteer community? Measurement helps you understand how your community perceives you, what they do with the information you send out to them, and where to direct your volunteers' efforts.

Measurement Helps You Exceed Expectations
Boards and senior management increasingly expect results expressed in the language of measurement—and funders require data to evaluate impact (and not just any data; they want to see standardized measurement criteria, because data without insight is just trivia). Communicating the actual value of volunteer engagement is one of the more difficult challenges that nonprofits face. Hours donated is an important metric, but other important metrics, like the number of trees planted, meals served, or young minds opened, need to be quantified in a way that demonstrates the value of the work and its impact on the community. Measurement offers that.

Measurement Recognizes Incremental Success
Some ideas simply don't work, others see dramatic results—but most find success with baby steps. Being able to measure even small changes will put your volunteer program on a steady climb.

Measurement Helps Tell Your Story
Insight breeds insight. Understanding how the big picture of your volunteer program breaks out into smaller, successful (and unsuccessful) chunks can result in additional funding and more staff to support your efforts. How? Because you'll be able to provide a data-informed story detailing your efforts.

Gathering the right data and then making sense of it and applying it requires a balance of "left brain" (number crunching) and "right brain" (creative thinking). And it's the first step toward becoming a data-informed organization.


Becoming a Data-Informed Organization

So what is a data-informed volunteer organization, specifically—and what are the
skills required to become one?

First, a distinction: Being "data-informed" is very different from being "data-
driven." A data-driven organization relies on cold, hard data alone to make
decisions. A data-informed organization combines that cold, hard data with
information from multiple sources to make informed decisions.

Data-informed cultures assess, revise, and learn as they go. Every aspect of
their work is tied tightly to the concept of continuous improvement. And their
KPIs (Key Performance Indicators) reflect this commitment, offering mileposts
that don’t simply reflect activity (as many organizations’ KPIs do), but measure
progress toward a goal. Data-informed cultures design measurement into their
projects from the start rather than bolting it on afterward just to have
measurable outcomes; that measurement provides the data necessary to improve
those projects over time.

And this brings us to the biggest challenge for organizations, and it is not
collecting or organizing data (though that takes planning, too): it is making
measurement a part of your organization's DNA and encouraging data literacy
skills across your staff.

For example, one of my favorite data-informed nonprofits, DoSomething.org
(whose millennial engagement ideas you can read about in Chapter 5), was only
able to create such a data-friendly culture because it understood the need to get
buy-in from the top first. If the CEO isn't on board and supportive of making
data-informed decisions, it won't happen.

Other key elements of DoSomething’s success include:

Being transparent. Nancy Lublin, DoSomething's CEO, recognizes the value
of being "transparent about sharing our dashboards, [as] it generates
feedback and discussion from our stakeholders that leads to improvement."

Listening to the data and experimenting. When things aren’t working, Lublin
isn’t afraid to take action to change it. Her team will also frequently
“state a specific hypothesis with a number and measure against that,”
relying on various methods like A/B testing to figure out what’s working
and where they can improve.

Embracing failure. DoSomething is fearless about failing. We’ll speak to this
more at the end of the chapter, but know that failure isn’t the end of the
world—and it can actually be inspirational.


How to Build Capacity and Gain Skills

You can’t be a data-informed organization if your staff doesn’t have skills to
collect, organize, and make sense of data! Most nonprofits can’t afford to hire
their own data scientist, so the goal is to build this capability in-house. Being
data-informed and data-literate must become part of your organization’s DNA.

How does this look? First, it’s an attitude of intense curiosity where leadership
is always asking, “What does this data mean?” and using data to dig beneath the
surface of a problem, hypothesize, formulate questions, and learn. But it’s also
shifting from ad-hoc analysis and simple (though detailed) recordkeeping to a
systematic approach to improvement. And although this competency can’t
(shouldn’t) be outsourced to consultants, you can seek additional help:

• Find experts through existing connections.

• Check out LinkedIn’s Board Connect.

• Read blogs that cover data. Try Lucy Bernholz’s Philanthropy 2173 Blog,
MarketsForGood.org, and NTEN’s Change Journal.

• Get online training. NTEN, Leap of Reason, and Ann K. Emery’s free
video tutorials are good places to start.

• Attend a data or measurement panel at your next conference.

• Explore free help options. The Analysis Exchange (web analytics),
DataKind (pro bono data scientists), and the SumAll Foundation (data
analysts) may be worth checking out.

• Engage a student volunteer. Nearby colleges may require capstone projects
where students demonstrate skills in data and measurement.

Now you have some ways to master your data, so let’s get a little granular and
explore how to define outcomes.

The First Step: Defining and Getting Buy-In on Outcomes

Typically, volunteer programs share results that consist solely of numbers. These
results would be much more telling if someone had asked this question: “So what?”

For example, if 10 volunteers put in 500 hours this quarter, and that’s an
increase of 5 percent over last year—so what? Why is this significant? What
change did they accomplish?


When asking “So what?” you’ll come up with answers that can be evaluated,
measured, and used to build organizational capacity. These answers will speak to
your volunteer program’s vision, resources, actions, short-term results, and the
sustained outcome and impact your efforts accomplished. And these answers will
elevate the conversation by demonstrating powerful, data-informed, and results-
oriented volunteer engagement that both inspires and informs ongoing strategy.

But how does asking “So what?” help you measure the impact of your efforts,
specifically? You need to define a framework to follow, a set of outcomes and
metrics designed to comprehensively measure this impact.

Expressing Your Results to Speak to Organizational Goals

You need to express your results clearly and powerfully, showing how your
programs helped your organization achieve its mission. And it’s important to
share both successes and failures.

And you need to keep specific outcomes top of mind. Why? Because activities
help accomplish outcomes, but they are not end results to be measured on their
own. Outcomes speak to long-term impact, with activities working to accomplish
measurable milestones along the way. These outcomes should fall into two major
buckets—volunteer outcomes and nonprofit outcomes:

1. Volunteer Outcomes. How are your volunteers benefiting personally and
professionally? This will help encourage future participation—track it!

2. Nonprofit Outcomes. How efficient is your operation? Are you continuously
improving your capabilities? And how is your program perceived? Map out
separate outcomes and metrics for each.

Before you get to those outcomes, though, you need to develop a mindset of
thinking strategically about your program from the start.

How to Think Strategically About Your Program

Have a clear vision of both short- and long-term results and how you’ll measure
success along the way. And don’t just think in terms of tactics—know how your
plan will create value for your organization. (One way to do this is listing your
objectives and having your team brainstorm the ultimate value and work
backward.)


Not everything will have a direct causal relationship to tangible or “hard
results.” Tangibles are pretty straightforward when it comes to measurement—
they’re objective, easy to quantify, and easy to assign money or time values to.
Soft results (intangibles) can be difficult to measure, yet they are just as important
when it comes to understanding impact. They're often measured with
transformational metrics, like building awareness, increasing trust, generating new ideas,
and deepening relationships. As you can imagine, these intangibles can be viewed
as less credible—unless you can demonstrate a logical path of progression from
intangible to tangible.

With those basics in mind, let’s talk about how to express your results in a way
that makes sense for your organization.

Theory of Change (Demonstrating the Value of Soft Results)

A theory of change is a conceptual map, often laid out visually, that identifies the
steps toward a long-term goal with “soft” results. You can have a theory of change
for a specific initiative, or for your organization overall. Either way, it forms the
basis for ongoing decision-making, measurement, and learning.

The goal is to create an objective that answers "So what?" with clear "So
that . . ." statements ("We will do this so that we will achieve that end"),
and then to define the steps along the way toward results.
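One lightweight way to draft such a "so that" chain is as an ordered list of steps, each answering "So what?" with the next link. The example steps below are invented for illustration:

```python
# A theory-of-change chain: each step happens "so that" the next can.
# The steps themselves are hypothetical examples.
chain = [
    "recruit 20 skilled volunteers",
    "tutor 100 students weekly",
    "students' reading scores improve",
    "graduation rates rise in the community",
]

def so_that(steps):
    """Render the chain as a single 'so that' statement."""
    return "We will " + " so that ".join(steps) + "."

print(so_that(chain))
```

Laying the chain out this explicitly makes gaps obvious: any link you cannot defend with a measurement is a link worth rethinking.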

And whether tangible or intangible, the actual data you decide to collect in
your organization makes all the difference in the world. It can mean the difference
between stakeholder buy-in and volunteers merely going through the motions,
unheard.

When stakeholders buy in they’re also tuned in—and when they’re not,
you’ve got trouble. It is often easier for nonprofit staff to describe results and
match a key performance indicator (KPI), but it can be much harder to get
consensus, because your board, fundraising directors, and staff all may have
different interpretations of the data based on what they’d like to see. This is
dangerous because it can completely bottleneck a process.

You can avoid organizational politics by using consensus-building techniques
to come to an agreement around defining these outcomes ahead of time. When
facilitating this discussion, be sure to make use of tips from Sam Kaner’s book, The
Facilitator’s Guide to Participatory Decision-Making, about facilitated listening. In
some cases, it may be worth hiring an outside facilitator to run the
meetings.


Now that you’ve made some decisions about what to measure, let’s talk about
how to measure.

A Simple Formula for Measuring Your Program

Proper measurement requires sticking to seven basic steps if you want valid and
actionable results (and, of course, you do, or why bother?). These are the steps to
measuring anything, whether it is your social media strategy or the outcomes of
your volunteer program:

Step 1: Define your goals.
Ever heard of the "Fire, Ready, Aim" approach? It's very common, unfortunately—and ultimately fatal to your efforts. Remember to always ask "To what end, and why?" If your planning doesn't include clearly defined time frames, audiences, and outcomes, you cannot objectively recognize success.

Once you've identified your outcomes or intent, translate them into SMART objectives (Specific, Measurable, Attainable, Realistic, and Timely). SMART thinking answers questions such as "How many?" and "By when?"

Step 2: Define your audience.
You will never be able to measure everything you want to measure, so you need to be selective and set priorities—and defining your audience is chief among them. Who are you trying to reach, and how will connecting with them help to achieve your goals?

List all the various groups that influence the success or failure of your volunteer program and ways in which having a good relationship with each contributes to that success or failure. Consider that list prioritized!

Step 3: Define your benchmarks.
Who or what will you compare your results to? Measurement is a comparative tool, so understanding whether a new number is bigger or smaller than the previous quarter's number (or, say, your opposition's number) is crucial. Decide who or what you are going to compare yourself to.

Peer organization comparisons are telling—or you can refer to your organization's past performance. Either can prove difficult initially,


particularly if you don’t have any stats (from your organization to start
with as a baseline). Making a best guess works initially, if that’s the case.
You’ll have more accurate results to compare against the next time
around. And when comparing yourself to peer organizations, looking at
share of volunteer hours or share of wallet, for example, is helpful with
time—and becomes more informative with time! The most important
benchmark is what matters to your organization—and your executive
director.

Step 4: Define your metrics.
What are the Key Performance Indicators that you will use to judge your progress toward your goal(s)? (Remember, KPIs are meaningful, actionable, and relevant metrics used to chart progress toward your SMART objectives—and there are thousands you could potentially collect.)

After completing the first three steps, though, your KPIs should be apparent. You just need to translate your priorities and goals into a number you can calculate, such as:

• Percent increase in donations.

• Percent increase in new donors or members.

• Percent increase in number of conversations expressing support for the
cause.

• Percent increase in conversations that contain your key messages.
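Each of the KPIs listed above reduces to a percent change against a prior period. A minimal helper, with made-up sample figures:

```python
def pct_change(current, previous):
    """Percent increase (or decrease) of a metric versus a prior period."""
    if previous == 0:
        raise ValueError("Need a nonzero baseline to compute percent change.")
    return (current - previous) / previous * 100

# e.g., new donors: 126 this quarter vs. 120 last quarter (hypothetical)
print(f"{pct_change(126, 120):.1f}% increase in new donors")
```

The guard against a zero baseline matters in practice: a brand-new program has no prior period, which is exactly the "best guess as a baseline" situation described under Step 3.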

Step 5: Define your time and costs.
What is your investment? It's important to identify the true costs involved in your programs. Most of the cost is going to be in staff time, so you'll need to find out how much time your volunteer program or specific campaign requires, and determine how much time you're going to invest. And then the kicker—are your expected results reasonable for the time investment?

Sometimes you'll have to manage either the time commitment or the expectations, if not both. And be sure to consider opportunity cost and whether potentially shifting resources to accommodate a promising endeavor makes sense. There may also be alternative ways to achieve your goals. You're much better off being honest with yourself and sorting this out now rather than later!
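A back-of-the-envelope version of the time-versus-results check above might look like this. The staff rate, hours, and expected value are all invented figures:

```python
# Estimate the staff-time cost of a campaign and sanity-check it against
# the expected result. All figures are hypothetical.

STAFF_HOURLY_RATE = 30.00  # assumed loaded cost of one staff hour

def campaign_cost(staff_hours):
    """Dollar cost of the staff time a campaign requires."""
    return staff_hours * STAFF_HOURLY_RATE

cost = campaign_cost(staff_hours=40)  # e.g., 40 staff hours of setup
expected_value = 1500.00              # e.g., expected value of the results

print(f"Staff-time cost: ${cost:,.2f}")
print(f"Expected value:  ${expected_value:,.2f}")
print("Reasonable?     ", expected_value >= cost)
```

If the comparison comes out the wrong way, that is the moment to manage the time commitment, the expectations, or both.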


Step 6: Select your data collection tool(s).
Tools are useless if they aren't helping you connect your activities, their impact on your audience, your progress toward objectives, and, ultimately, your goals. There are three general types of measurement tools:

• Content analysis of social or traditional media.

• Primary audience surveys via online, mail, or phone.

• Web and social media analytics.

Will you use Google or other web analytics, surveys, or perhaps content analysis? To sort out which tool you need, consider your goals and your KPIs.

Step 7: Collect your data, analyze it, turn it into action, and repeat.
Continuous improvement can only happen when results are consistently assessed and changes are made based on those results. It's a never-ending process. Establish a regular reporting schedule and stick to it.

And do not give in to the temptation of focusing on the best results. Being proud of successes is one thing (and expected), but do not let it blind you to the big picture. Get rid of things that aren't working—even if they provided one flash-in-the-pan moment of awesome.

Measurement can be a tough sell. And the planning process can seem a bit overwhelming to organizations new to the process. So what can you do to ease your volunteer program into the land of hard and soft data?

Simple Tools

If you aren’t fortunate enough to have a central database that can handle
everything you need to track, you’ll want to explore some auxiliary options
like spreadsheets and custom databases:

Spreadsheets
The spreadsheet is your most powerful tool because it can capture strategy, outcomes, tactics, KPIs, and other metrics—and it is relatively easy to organize. Collecting data from free or paid measurement tools is the easy part—the tough stuff starts when you're sorting out how to work with the data. It just takes a little elbow grease!

But be careful: Making sure you're using your spreadsheet correctly is really important.


My colleague, David Geilhufe, points out that 85 percent of
spreadsheets contain errors, so be sure to crosscheck totals and formulas.[2]
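One simple way to crosscheck a spreadsheet's totals is to recompute them outside the sheet, for example from a CSV export. The column names and figures below are assumptions for illustration:

```python
import csv
import io

# Recompute a column total from a CSV export and compare it to the total
# the spreadsheet itself reports. Column names and values are hypothetical.
export = io.StringIO(
    "volunteer,hours\n"
    "Ana,12\n"
    "Ben,8\n"
    "Chris,15\n"
)

recomputed = sum(float(row["hours"]) for row in csv.DictReader(export))
spreadsheet_total = 35.0  # the total cell as reported by the sheet

assert recomputed == spreadsheet_total, (
    f"Mismatch: recomputed {recomputed}, sheet says {spreadsheet_total}"
)
print("Totals match:", recomputed)
```

An independent recomputation like this catches the most common spreadsheet error: a SUM range that silently stops short of the last row.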

Custom Databases
Depending on your technological savvy, there are also custom database options that are relatively inexpensive and flexible, like Microsoft Access and FileMaker Pro—but you'll need to invest time to make sure everyone knows how to use these tools effectively.

There are also proactive data-gathering options, where you ask your community for feedback by way of surveys, apps, and texts.

Online Surveys
Surveys are certainly cost effective, but they can be hit or miss, as survey participation can be a tough sell unless your community is particularly engaged. To encourage higher levels of participation, make sure surveys are timely (sent immediately after an event, for example). There are lots of free or low-cost survey tools available, like SurveyMonkey and PollDaddy.

Apps
Mobile apps can encourage community members to "check in" at your event or location (with the best offering a social share option to help spread the word) and provide data about who is participating. They can also be used to take attendance at events. Event Check-in from Constant Contact is one example of a popular attendance app.

Texts
SMS (text messaging) is a fantastic option for programs seeking to reach constituents immediately, and it works very well for demographics that are text-friendly. Unfortunately, most nonprofits don't maintain robust lists of phone numbers anymore (but there's no time like the present to start, right?).

Data visualization tools are probably the most exciting of the pack. Data
visualization is a fantastic way to make sense of data. I spend 30 percent of my
measurement time collecting and organizing data and 70 percent thinking about
what it means. Seeing it helps me—a lot.

Consolidating your insights with a DIY infographic requires some inspiration
and a little bit of perspiration. For inspiration, check out Pinterest and search for


infographics. Or learn some basic design skills! PiktoChart and Infigr.am both
offer popular infographic-making tools for free. Microsoft PowerPoint and
Microsoft Publisher are also both great data visualization options, because
each has layout tools that make infographic or chart projects a snap.

The goal, ultimately, is to create an executive dashboard that pulls your metrics together to create a visually appealing, easy-to-understand snapshot of your efforts. Deciding which metrics to present is key—and it isn’t as time-consuming as you’d think.
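Even before you reach for a charting tool, a dashboard can start as a handful of chosen metrics rendered side by side. A rough text-only sketch—the metric names and values are illustrative assumptions:

```python
# A text-only "executive dashboard" sketch: a few chosen metrics
# rendered as simple proportional bars. Values are invented examples.
metrics = {
    "Volunteers recruited": 42,
    "Events held": 8,
    "Survey response rate (%)": 63,
}

def render(metrics, width=30):
    """Return one line per metric, with a bar scaled to the largest value."""
    top = max(metrics.values())
    lines = []
    for name, value in metrics.items():
        bar = "#" * round(value / top * width)
        lines.append(f"{name:<26} {bar} {value}")
    return "\n".join(lines)

print(render(metrics))
```

Picking the three to five numbers that go on this snapshot is exactly the “deciding which metrics to present” step—the rendering itself is the easy part.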

Keep Calm and Document

Capturing what actually happens as a volunteer event or initiative unfolds is
important because it offers ways to reflect and debrief meaningfully afterwards. It
can be as simple as keeping a journal and taking quick notes during the event.
Dana Nelson from GiveMN, one of the most successful giving days, tells me her
team is always “writing it down as they go.” Here’s how to capture relevant info
for an “After Action Review:”

• Capture the lessons learned (big or small).

• Use a collaborative social site where all members of your team can add and access notes (a Google document works great for this).

• Ask team members to reflect on their lessons learned and to share stories
from the event that speak to best practices as well as things they’d do
differently.

• Review it together in a meeting and summarize it into a series of “do, improve (say how), don’t do.”
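The final “do, improve, don’t do” summary is just a grouping of the lessons your team wrote down as they went. A small sketch of that step—the lessons themselves are invented examples, and in practice they would come from your shared notes:

```python
# Group After Action Review lessons into the "do, improve, don't do"
# buckets described above. The lessons are made-up examples.
lessons = [
    ("do", "Send the survey the same evening as the event"),
    ("improve", "Check-in line was slow; add a second tablet"),
    ("dont", "Don't schedule setup and volunteer orientation at the same time"),
    ("do", "Keep a shared notes doc open during the event"),
]

summary = {"do": [], "improve": [], "dont": []}
for category, note in lessons:
    summary[category].append(note)

print(len(summary["do"]))  # → 2
```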

With reflection comes the realization that you’d do some things differently, of
course—and that is valuable info that should be celebrated, not feared. When you
identify opportunities for improvement, it’s time to take a failure bow!

Failing Forward

No one likes to make mistakes and placing blame is counterproductive, so smart
organizations are finding ways to make failure productive and fun—one of my
favorites being the “failure bow.”


It was developed by Seattle-based improvisation teacher Matt Smith, and is
transformative because it alters our physiological response to failure by removing
the demons of self-doubt and self-judgment.3

You raise your hand, share your failure, take a bow, and move on. Trapeze
artists, acrobats, and other athletes are trained to take a failure bow after a
stumble because it releases them from the fear of making a mistake.

MomsRising, a grassroots organization that runs online campaigns to promote
family-friendly policies, holds “joyful funerals” where they give unsuccessful
initiatives a formal burial and eulogy during which they surface new ideas to
improve future campaigns. Executive Director Kristin Rowe-Finkbeiner says
removing the stigma from failed campaigns encourages people to take risks
and try new things.4

People won’t try out new ideas or approaches if failure is seen as a career-killer.
But when it’s treated like what it is—an opportunity to learn—it can be a fun and
rewarding process.

Summary

As I said at the start of the chapter, many volunteer engagement teams say
they don’t measure because—like a lot of nonprofit staff—they don’t believe
they have time, resources, or training to measure. But if you’ve read this
far, you can probably now see that the real barrier to effective measurement
is simply not seeing the value of gathering data on what’s working and
what isn’t.

Fortunately, every year I meet fewer and fewer nonprofit people who believe this. But while it’s true that those who work with volunteers have tended to be some of the last to embrace a culture of collecting data, assessing programs, and failing forward, I firmly believe volunteer programs could see some of the biggest gains from shifting in the direction of measurement.

Just remember that measurement is not a one-time add-on to your planning process. Much like ideas that succeed through small steps, measurement succeeds incrementally—and it’s an effort that builds social proof (and enhances your organization’s credibility) over time.

And be sure to set aside time to reflect and do something meaningful with what you discover. In fact, make measurement your first measurable goal—and get ready to chart your success!


Beth Kanter is an international leader in nonprofits’ use of social media. Her first book, The Networked Nonprofit, introduced a new way of thinking and operating in a connected world, and her follow-up, Measuring the Networked Nonprofit, is a practical guide for using measurement to achieve impact. She is the author of Beth’s Blog, the go-to source for using networks and social media for social change. Beth has 30 years of experience in nonprofit technology, training, and capacity building, and has facilitated trainings on every continent in the world (except Antarctica). Named one of the most influential women in technology by Fast Company and one of BusinessWeek’s Voices of Innovation for Social Media, Beth was a visiting scholar at the David and Lucile Packard Foundation from 2009 to 2013.

Notes

1. Software Advice and VolunteerMatch, “Volunteer Impact Report,” 2014, www.softwareadvice.com/nonprofit/industryview/volunteer-impact-report-2014.
2. For spreadsheet examples, see: www.bethkanter.org/spreadsheet-sm_re.
3. Matt Smith has a great video on how to take a failure bow: http://tedxtalks.ted.com/video/The-Failure-Bow-Matt-Smith-at-T.
4. Beth Kanter, “Likes on Facebook Are Not a Victory: Results Are!,” August 9, 2011, www.bethkanter.org/momsrising-key-results/.

Rosenthal, R. J. (Ed.). (2015). Volunteer engagement 2.0: Ideas and insights changing the world. John Wiley & Sons.
